\section{Introduction}

Information retrieval (IR) systems have become integral to the daily activities of millions of people and will retain their prominence in the years to come. One reason for the importance of a good IR system is the amount of data available on the web and the pace at which it is increasing. The number of websites reportedly grew from one in 1991 to more than one billion by September 2014\footnote{http://www.internetlivestats.com/total-number-of-websites/ on 27/10/2014}. Simultaneously, the number of users availing themselves of the hosted services kept growing. This growth in web usage is more than an issue of load, which was met by computationally powerful servers. The bigger challenge was to organize the huge amount of information and make it available in a readily consumable manner. This required a third entity: the retrieval system. What was essentially a two-way transaction between the host and the client has become three-way, with an IR system in the middle. Clients are served by hosts, a relation facilitated by IR systems. However, present-day IR systems are more than just organizers of web links. They model user choices and preferences to serve users better. We argue that the three-entity unit of the client, the IR system and the host is greater than the sum of its parts. The relation between these three entities is ignored by the current web-service architecture. We present here a proposal that exploits this relationship to improve some aspects of web service usage.

Web designers write content on the pages based on the information provided by the owner of the site. Content in a website is primarily organized based on the categorization of the information and arranged appropriately by the designer. In this entire process of the current design paradigm, the {\it query} has no role to play during the design or presentation phases of a website. But when a search query is given to an IR system, it retrieves links to pages that were prepared without taking the query into consideration {\it on the host side}. This is because retrieval and content management are considered mutually exclusive: the content management system does not know about the retrieval system, and the retrieval system does not know how the content provider may respond to the query. Due to this shortcoming, both the content provider and the IR system are underperforming. In this paper, we try to address this issue by proposing an architecture that enables the server hosting the website to present content based on the query posed by the user.

\begin{figure*}[htb]
\centering
\epsfig{file=content-retrieval-architecture.eps, width=6.8in}
\caption{The proposed architecture for more responsive IR and CMS systems}
\end{figure*}

\section{The Proposed Architecture}

The outline of the architecture we propose is presented in Figure 1. The scenario is that the user starts a retrieval system and gives a query. The retrieval system presents the search results to the user. Out of them, the user selects one and is taken to the destination website. When the user is taken to that website, the retrieval system also shares some information about the user and the query (subject to privacy requirements: see Section 5) with the server hosting the website. The host server uses this information to present the content such that the user might have a better search experience (see Section 3). This presentation might, for example, make it easier for the user to find certain things.
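As a minimal illustration, and not a proposed standard, the information shared at this step might look like the following sketch in Python; every field name here is hypothetical, and the exact set of attributes would be governed by the privacy requirements of Section 5 and the protocol of Section 6.

\begin{verbatim}
# A minimal sketch (not a standard) of the anonymized information a
# retrieval system might share with the host server on result selection.
shared_payload = {
    "session_id": "a81f3c9e",   # random per-session token, not a user ID
    "query": "popular movies in action genre + old",
    "user_attributes": {        # only attributes the user opted to share
        "location": "IN",
        "language": "en",
    },
}
\end{verbatim}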
The host server will then provide feedback to the retrieval system (again subject to privacy requirements) based on the user's stay in the website and the user's activity during the stay. Since the information shared with the host server is anonymized, so will be the feedback given to the retrieval system. The retrieval system will now use this feedback to give better results in the future (see Section 4). The overall result will be better synchronization between the retrieval system and the host server for the purpose of presenting better results to the user. Anonymization, an opt-out option and customization will be the central requirements, enforced through a protocol (see Section 6), to prevent any abuse that can result from sharing the information.

\section{Query-Aware Content Presentation}

Current state-of-the-art web servers do not take the query into consideration while presenting the content to the user. A lot of work has been reported on improving the architecture of web servers for various applications \cite{1,2,4}. Many models are available to compare the architectures of the servers \cite{3}. \cite{5} discusses improving the performance of websites by using edge servers in a Fog Computing architecture. To the best of our knowledge, there is no attempt to use the query on the host server side to present the content to the user. If a host server can present the content to the user based on the query, then it will be beneficial to both the user and the host organization.

Suppose that user $A$ gives the query ``popular movies in action genre + old'' and that $B$ gives ``popular movies in action genre + latest''. Let us assume that both users get $Link1$ as their first link. We propose that the host server of $Link1$ should present different contents to each of them based on their query. In this case, the host server may present a list of old action movies (possibly drawn from other pages of the host server) to $A$ and a list of new action movies to $B$, in both cases in addition to the content at $Link1$.

For getting maximum benefit from this kind of architecture, current Content Management Systems (CMS) like Drupal, Joomla, Django etc.\ may have to be redesigned to take user queries into account for presenting the final web page to be shown to the user. This will allow the host server (and the CMS) to play an active role in the process of content retrieval. Since the content provider knows much more about the content than the retrieval system, all that knowledge could be used to present dynamic query-aware content to the user.

\section{Feedback-Aware Retrieval}

A classical or bare-bones retrieval system \cite{7} only takes the query into account for retrieval. Some modern retrieval systems go further and use the information available about the user for personalizing the results. However, they do not take into account the user's activity once the user has selected and visited one of the web pages. In the proposed architecture, anonymized information about the user's activity will be made available to the retrieval system. It will, thus, be possible to design algorithms that take this activity into account. Some work in this direction was proposed by \cite{6}. The details about this activity might include information such as the other links on the website that the user clicked on and the total time that the user spent on the website and on various pages. A retrieval system made aware of the feedback from the host server should, intuitively, perform better.
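To make the feedback concrete, the following is a minimal sketch, under the same hypothetical field names as before, of the anonymized activity record a host server might return at the end of a visit.

\begin{verbatim}
# Sketch of the anonymized feedback a host server might return;
# it is keyed by the same session token, never by user identity.
feedback = {
    "session_id": "a81f3c9e",
    "dwell_time_seconds": 142,   # total time spent on the website
    "pages_visited": 3,          # internal pages viewed during the stay
    "clicked_links": ["/movies/old-action", "/movies/top-rated"],
}
\end{verbatim}

A retrieval system could, for instance, treat a long stay with several internal clicks as an implicit relevance signal for the selected result, and a quick bounce as a negative one.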
Some modern retrieval systems also provide additional links as part of the summary `snippet' while presenting the results of retrieval to the user. Such snippets can be better prepared with the suggested feedback from the host server. Additionally, and importantly, the host server can provide extra information about its content as part of the feedback. This extra information will be based on the query and the knowledge of the content that is available to the host server. This will allow the content provider to have a say in the presentation of the snippet for the concerned website. The retrieval system may or may not use this information, depending on the retrieval and snippet preparation algorithm.

\section{Privacy and Customization}

Our proposal requires the retrieval system to share some information about the user and the query with the host server. It also requires the host server to provide feedback to the retrieval system based on the user's stay in the website after the user selected the website from the retrieval results. This extra sharing of information immediately raises questions of privacy. If our proposal is implemented, its detailed version will need to include stringent requirements to address all the possible privacy concerns. We list below some of these requirements:

\begin{itemize}
\item The first such requirement is that the user's identity, even if known to the retrieval system, will not be revealed to the host server. Whatever information is shared with the host server will have to be strictly anonymized so as not to reveal the user's identity.
\item The second requirement is that only the relevant information will be shared. If we view this information as a list of attribute-value pairs, then only that subset of attribute-value pairs will be shared with the host server that the host server needs to know in order to better present its content.
\item The third requirement is that an opt-out option will be available to both the user and the host server. The user will be made aware of the sharing of information and the user will decide whether this sharing is to be allowed or not. The information will be shared only if the user explicitly agrees to it. In the default case, there will be no sharing. Similarly, the host server will decide whether to provide feedback to the retrieval system or not, and the default will be the latter.
\item The fourth requirement is that both the user and the host server must be able to customize the sharing of information. If they decide to share information, they will further be given the option to select the specific attributes that they are willing to share. For example, if the retrieval system knows about the user's location, age, gender and language, then the user may decide to share only location and language (see the sketch after this list).
\item The user will have to be informed that the activity on the visited website may be used for providing feedback to the retrieval system. And the user will then decide whether and what part of the activity on the website can be used to provide feedback to the retrieval system.
\end{itemize}

As this proposal is worked out in more detail in future work, more such requirements might be identified and will also have to be addressed.
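The second and fourth requirements together amount to a whitelist filter applied before anything leaves the retrieval system. A minimal sketch, assuming a per-user consent list (all names are ours, for illustration):

\begin{verbatim}
# Sketch of consent-based attribute filtering before sharing: only
# the attribute-value pairs the user explicitly opted in to are kept.
def filter_shared_attributes(user_attributes, consented_keys):
    return {k: v for k, v in user_attributes.items()
            if k in consented_keys}

known = {"location": "IN", "age": 34, "gender": "F", "language": "en"}
consent = {"location", "language"}  # selected by the user
print(filter_shared_attributes(known, consent))
# -> {'location': 'IN', 'language': 'en'}
\end{verbatim}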
Even after addressing these issues, one concern still remains regarding the proposed architecture. Even if the shared information is anonymized and the host server does not know the identity of the user, the retrieval system may still know the identity and be able to connect the activity of the user on the visited website with the user's identity. This raises the question whether the retrieval system will come to acquire more knowledge about the user than is warranted. This may be a problematic ethical issue and requires further investigation.

\section{Retrieval Response Protocol}

There are many different kinds of retrieval systems. Similarly, there are many different kinds of host servers and content management systems. If there is to be a flow of information between them as suggested in the preceding sections, then it will have to be precisely regulated so that it is possible to implement systems without any conflict. This will require a well-defined and well-designed protocol. We call this the Retrieval Response Protocol (RRP).

The Retrieval Response Protocol will regulate the flow of information between the retrieval system and the host server. The protocol will be used to initiate, maintain and close a {\it retrieval session}. As soon as the user selects one result from the results provided by the retrieval system in response to the user query, a retrieval session will be initiated. The ending of the session will perhaps have to be timeout-based, as there is no other way to know when the user has left the website. During the time the session is alive, the retrieval system will first share the information about the user and the query with the host server. After that, based on the user's activity, the host server will provide the feedback to the retrieval system. All the activity during this session will be subject to the privacy and customization requirements, and the protocol design will have to take this into account. The protocol will have to be designed to regulate this retrieval session. We leave the design of this protocol for future work, but sketch the expected session flow below.
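As an illustration only, a retrieval session under RRP might have the life cycle sketched below; the class, message and timeout details are our assumptions rather than a specification.

\begin{verbatim}
# Illustrative sketch of an RRP retrieval session life cycle;
# names and the timeout value are assumptions, not a specification.
import time
import uuid

SESSION_TIMEOUT = 600  # seconds of inactivity before closing

class RetrievalSession:
    def __init__(self, query, shared_attributes):
        self.session_id = uuid.uuid4().hex  # anonymous session token
        self.query = query
        self.shared_attributes = shared_attributes
        self.last_activity = time.time()
        self.feedback = []

    def record_activity(self, event):
        # Host server reports an activity event (subject to consent).
        self.feedback.append(event)
        self.last_activity = time.time()

    def expired(self):
        # Timeout-based closing: there is no signal that the user left.
        return time.time() - self.last_activity > SESSION_TIMEOUT

# Initiated when the user selects one retrieval result:
session = RetrievalSession("popular movies in action genre + old",
                           {"location": "IN", "language": "en"})
session.record_activity({"click": "/movies/old-action"})
if session.expired():
    pass  # close the session; feedback goes to the retrieval system
\end{verbatim}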
\section{Conclusion}

In the current information retrieval paradigm, the host does not use the query information for content presentation. The retrieval system does not know what happens after the user selects a retrieval result. And the host also does not have access to the information which is available to the retrieval system. We presented the outline of an architecture that addresses these issues. The aim is to provide a better search experience to the user through better presentation of the content based on the query, and better retrieval results based on the feedback to the retrieval system from the host server. The retrieval system will share some information with the host server, and the host server in turn will provide relevant feedback to the retrieval system based on the user's stay in the website. The host uses all the query-related information for dynamic content presentation. This revised paradigm for information retrieval also introduces issues of privacy, which will have to be addressed stringently. It also needs a new protocol for the content retrieval response, which we briefly described. This protocol will regulate the flow of information between the retrieval system and the host server subject to the privacy and customization requirements.

\bibliographystyle{plain}
\section{Introduction}

The study of muon properties is practically the oldest subject of particle physics, but it remains at the forefront of current research. The MEG bound~\cite{TheMEG:2016wtm} on the muon flavor violating ($\mu$FV) $\mu \to e\gamma$ decay rate at 90\% C.L. is
\begin{align}
{\cal B}(\mu \to e\gamma) < 4.2 \times 10^{-13}, \ \ ({\rm MEG}, 2016),
\label{eq:MEG16}
\end{align}
while a rather dated result of SINDRUM gives~\cite{Bellgardt:1987du}
\begin{align}
{\cal B}(\mu \to 3e) < 1.0 \times 10^{-12}, \ \ ({\rm SINDRUM, 1988}),
\label{eq:SINDRUM88}
\end{align}
for the $\mu^+ \to e^+e^-e^+$ search. A third type of $\mu$FV search studies $\mu \to e$ conversion on nuclei. Normalized to the muon capture rate, SINDRUM~II finds~\cite{Bertl:2006up}
\begin{align}
R_{\mu e} < 7 \times 10^{-13}, \ \ ({\rm SINDRUM~II, 2006}),
\label{eq:SINDRUM06}
\end{align}
for $\mu \to e$ conversion on gold.

With schedules delayed by the current world pandemic, MEG~II~\cite{Baldini:2018nnn} will push the $\mu \to e\gamma$ bound down to $\sim 6 \times 10^{-14}$ with three years of data taking. A new experiment to search for $\mu^+ \to e^+e^-e^+$, Mu3e~\cite{Blondel:2013ia}, plans to reach down to $5 \times 10^{-15}$ with three years of running and is limited mostly by the muon beam intensity. Projected intensity improvements~\cite{Baldini:2018uhj} by up to 2 orders of magnitude seem feasible; hence, Mu3e can eventually reach down to $10^{-16}$ in sensitivity. In contrast, to improve $\mu \to e\gamma$ sensitivity beyond MEG~II, innovations are needed for background suppression.

In terms of projected improvements, $\mu \to e$ conversion, i.e., $\mu N \to eN$, is perhaps the most promising. SINDRUM~II operated at the limits of power consumption, so new developments~\cite{DeeMe} are based on the idea~\cite{Dzhilkibaev:1989zb} of using special solenoids for pion capture, muon transport, as well as detection, which significantly improves muon intensity. Phase I of COMET~\cite{Adamov:2018vin} aims for $R_{\mu e} < 7 \times 10^{-15}$, eventually reaching down to $10^{-17}$ for phase II. Similar to COMET phase II in design, Mu2e~\cite{Bartoszek:2014mya} aims at $2.6 \times 10^{-17}$ sensitivity. Both experiments can be improved further. For example, ongoing~\cite{Baldini:2018uhj} PRISM/PRIME~\cite{Kuno:2005mm} developments aim at bringing the limit eventually down to a staggering $10^{-19}$. Although the primary objective for $\mu N \to eN$ is contact interactions, it also probes~\cite{deGouvea:2013zba} the dipole interaction and can serve as a probe of $\mu \to e\gamma$ if the associated backgrounds of the latter cannot be brought under control at high muon intensity.

\begin{table*}[t!]
\begin{center}
\begin{tabular}{|c|l|l|}
\hline
\ $\mu$FV process \ & \quad\quad\quad \ Current bound & \quad\quad \ \ Future sensitivity \\
\hline \hline
$\mu\to e \gamma$ & \ $4.2\times 10^{-13}$ (MEG~\cite{TheMEG:2016wtm}) & \ $6\times 10^{-14}$ (MEG II~\cite{Baldini:2018nnn}) \\
$\mu \to 3 e$ & \ $1.0\times 10^{-12}$ (SINDRUM~\cite{Bellgardt:1987du}) \ & \ $\sim 10^{-15}{\rm -}10^{-16}$ (Mu3e~\cite{Blondel:2013ia}) \\
$\mu N \to eN$ & \ \ $7 \times 10^{-13}$ (SINDRUM~II~\cite{Bertl:2006up}) \ & \ $\sim 10^{-15}{\rm -}10^{-17}$ (COMET~\cite{Adamov:2018vin}) \\
 & & \ $\sim 3 \times 10^{-17}$ (Mu2e~\cite{Bartoszek:2014mya}) \\
 & & \ $\sim 10^{-18}{\rm -}10^{-19}$ (PRISM~\cite{Kuno:2005mm}) \ \\
\hline \hline
$\tau \to \mu \gamma$ & \ $4.4\times 10^{-8}$ (BaBar~\cite{Aubert:2009ag}) & \ ${\sim 10^{-9}}$ (Belle~II~\cite{Kou:2018nap}) \\
$\tau \to 3 \mu$ & \ $2.1\times 10^{-8}$ (Belle~\cite{Hayasaka:2010np}) & \ $3.3 \times 10^{-10}$ (Belle~II~\cite{Kou:2018nap}) \\
\hline \hline
\end{tabular}
\caption{Summary of current experimental bounds and future sensitivities of $\mu$FV processes.}
\label{tab:cLFV}
\end{center}
\end{table*}

The current bounds and projected sensitivities on $\mu$FV processes are summarized in Table~\ref{tab:cLFV}. The impressive bounds for the muon reflect seven decades of studies. We also list the corresponding processes for the $\tau$, i.e., $\tau \to \mu\gamma$ and $\tau \to 3\mu$, where the current bounds are from the B factories~\cite{Aubert:2009ag,Hayasaka:2010np}, and expectations~\cite{Kou:2018nap} are for Belle~II with 50~ab$^{-1}$ in the coming decade. LHCb can~\cite{Bediaga:2018lhg} cross-check the Belle~II result on $\tau \to 3\mu$ after upgrade~II, i.e., at the High Luminosity LHC (HL-LHC). The heaviness of the $\tau$, hence its later discovery, together with its smaller production cross section and the difficulty of detection, underlies the weaker search limits. However, its heavy mass and third generation nature offer a different window on new physics, or equivalently, physics beyond the Standard Model (BSM).

We studied~\cite{Hou:2020tgl} the $\tau \to \mu\gamma$ decay previously in conjunction with $h \to \tau\mu$, where $h$ is the 125 GeV boson discovered in 2012~\cite{PDG}. The context was the two Higgs doublet model (2HDM) with extra Yukawa couplings, called the general 2HDM (g2HDM). The $h$ boson picks up the extra $\rho_{\tau\mu}$ Yukawa coupling from the $CP$-even exotic Higgs boson $H$ via $h$-$H$ mixing. Given that this mixing angle, $c_\gamma$, is known to be small (the alignment phenomenon~\cite{Hou:2017hiw}, or the fact that $h$ so closely resembles the SM Higgs boson~\cite{PDG}), only a weak constraint is placed on $\rho_{\tau\mu}$. Together with the extra top Yukawa coupling $\rho_{tt}$, the $\rho_{\tau\mu}$ coupling induces $\tau \to \mu\gamma$ decay via the two-loop mechanism~\cite{Chang:1993kw}. Taking $\rho_{tt} \sim \lambda_{t} \simeq 1$, the strength of the top Yukawa coupling of SM, it was shown that Belle~II can probe the $\rho_{\tau\mu} \lesssim \lambda_{\tau} \simeq 0.010$ parameter space. Taking $\rho_{tt}$ at ${\cal O}(\lambda_t)$ and $\rho_{\tau\mu} \lesssim \lambda_{\tau}$ together, they correspond to~\cite{Hou:2020tgl}
\begin{align}
\rho_{3j}^f \lesssim \lambda_3^f, \ \ \ (j \neq 1),
\label{eq:rho3j}
\end{align}
with $ \rho_{31}^f \ll \lambda_3^f$ expected.
As we will see, this relation does not hold for down-type quarks because of tight constraints from ($K$ and) $B$ meson physics.

The probe of $\rho_{tt}$ by $\tau \to \mu\gamma$ via the two-loop mechanism is quite significant, as $\rho_{tt}$ can drive~\cite{Fuyuto:2017ewj} electroweak baryogenesis (EWBG), i.e., account for the disappearance of antimatter in the very early Universe. A backup mechanism~\cite{Fuyuto:2017ewj} is through $|\rho_{tc}| \sim \lambda_t$ [i.e., saturating Eq.~(\ref{eq:rho3j})] in case $\rho_{tt}$ accidentally vanishes. In this paper, we show that the MEG~II search for $\mu \to e\gamma$ would continue to probe
\begin{align}
\rho_{\mu e} \lesssim \lambda_e,
\label{eq:rhomue}
\end{align}
which echoes $|\rho_{ee}| \sim \lambda_e \cong 0.0000029$, as suggested~\cite{Fuyuto:2019svr} by the recent ACME result~\cite{Andreev:2018ayy} on the electron electric dipole moment (eEDM), where a correlation of $|\rho_{ee}/\rho_{tt}| \propto \lambda_e/\lambda_t$ is implied. That is, the tiniest $CP$ violation on Earth seems linked with the baryon asymmetry of the Universe (BAU)! The $\rho_{\mu e}$, $\rho_{ee}$ behavior suggests
\begin{align}
\rho_{i1}^f \lesssim \lambda_1^f, \quad\;
\label{eq:rhoi1}
\end{align}
which likely holds also for $i = 3$, and seems plausible for $f = u, d$. Thus, the affinity of the 1-2 sector of extra Yukawa couplings may be with the first generation, while the affinity of the 3-2 sector may be with the third generation, which echoes the mass-mixing hierarchy. That the $\rho^d$ matrix is close to diagonal is a mystery.

If the ``septuagenarian'' (``octogenarian'' if counting from the date of discovery) muon appears ``sanitized'', i.e., very much SM-like, as reflected in the weak strength of the extra Yukawa couplings mentioned, one cannot but think of the ``$B$ anomalies'' that have been in vogue for almost the past decade. For a brief summary---and {\it critique}---of these $B$ anomalies, see, e.g., the ``{\it HEP perspective and outlook}'' given by one of us in the summer of 2018~\cite{Hou:2019dgh}; the situation regarding the $B$ anomalies has not changed by much since then. Some of the suggested remedies of the $B$ anomalies, especially the leptoquark (LQ) variant, rely on tree level effects, hence make a large impact in general. In contrast, though also entering at tree level, the extra Yukawa couplings have hidden themselves so well for decades, via relations such as Eqs.~(\ref{eq:rho3j}) and (\ref{eq:rhoi1}), the near-diagonal $\rho^d$ matrix, {\it plus alignment}~\cite{Hou:2017hiw}. A second purpose of the present paper is therefore to contrast the predictions of g2HDM with those of the ``bold'', UV-complete models such as PS$^3$~\cite{Bordone:2017bld,Bordone:2018nbg,Cornella:2019hct}. For this reason, we will extend the list of $\mu$FV processes beyond Table~I to include various rare (semi)leptonic $B$ decays.

\begin{figure*}[t]
\center
\includegraphics[width=0.275 \textwidth]{1loop.pdf} \hskip0.35cm
\includegraphics[width=0.275 \textwidth]{2Loop-top.pdf} \hskip0.35cm
\includegraphics[width=0.275 \textwidth]{2Loop-W.pdf}
\caption{One-loop, two-loop fermion, and two-loop $W$ diagrams for $\mu\to e\gamma$.}
\label{fig:feyndiag}
\end{figure*}

The paper is organized as follows. In the next section, we discuss $\mu \to e\gamma$ in g2HDM, which largely parallels our earlier treatment of $\tau\to \mu\gamma$~\cite{Hou:2020tgl}.
We show that the $\mu \to e\gamma$ process probes the $\rho_{\mu e}\rho_{tt}$ product in g2HDM, as well as $c_\gamma \rho_{\mu e}$, where $c_\gamma$ is the $h$-$H$ mixing angle. In Sec.~III, we cover the $\mu \to 3e$ and $\mu N \to eN$ processes, as well as $\tau \to 3\mu$. We show that the g2HDM effects are very suppressed at tree level and that all these processes eventually pick up the $\mu e\gamma$ or $\tau \mu \gamma$ dipole couplings. In Sec.~IV, we contrast the projections of g2HDM with the PS$^3$ model~\cite{Cornella:2019hct} motivated by the $B$ anomalies, covering rare $B$ decays such as $B_q \to \tau\tau$, $\tau\mu$, $B \to K^{(*)}\tau\tau$, $K^{(*)}\tau\mu$, and $\tau\to \mu\gamma$ as well. We also mention $B \to \mu\nu,\; \tau\nu$ decays, where g2HDM could actually reveal~\cite{Hou:2019uxa} itself. We briefly touch upon the muon EDM and $g-2$, before offering our conclusion in Sec.~V.

\section{\boldmath The $\mu \to e\gamma$ Process}

MEG~II~\cite{Baldini:2018nnn} has genuine discovery potential in g2HDM with extra Yukawa couplings. We have studied~\cite{Hou:2020tgl} $\tau \to \mu\gamma$ decay previously and showed that $\rho_{\tau\mu} \lesssim \lambda_\tau \simeq 0.010$ [part of Eq.~(\ref{eq:rho3j})] can be probed by Belle~II as it pushes down to ${\cal O}(10^{-9})$~\cite{Kou:2018nap}. The $\mu \to e\gamma$ process is the template for $\tau \to \mu\gamma$ decay, for which the two loop mechanism (see Fig.~\ref{fig:feyndiag}) of Ref.~\cite{Chang:1993kw} was originally written in g2HDM (called model III~\cite{Hou:1991un} at that time), which possesses extra Yukawa couplings. Our emphasis is on phenomenological discussion, so we take Ref.~\cite{Hou:2020tgl} as a template and do not recount details of the g2HDM here. The formulas used in Ref.~\cite{Hou:2020tgl}, besides originating from Ref.~\cite{Chang:1993kw}, have also been checked against those of Ref.~\cite{Omura:2015xcg}, although one should use caution with this reference, as it was written at a time when there was a hint of $h \to \tau\mu$ from CMS, which has subsequently disappeared~\cite{PDG}.

What should be emphasized is that, in g2HDM, the exotic Higgs bosons $H$, $A$ ($CP$-odd), and $H^+$ would {\it naturally} populate the 300--600\;GeV range, of which we have surprisingly little knowledge. For example, $H, A$ could be searched for in $t\bar{c}\, (\bar t c)$ \cite{Altunkaynak:2015twa} and $\tau\mu$ \cite{Hou:2019grj, Primulando:2016eod, Primulando:2019ydt} final states.

\begin{figure}[b]
\centering
\includegraphics[angle=0,width=8.65cm]{comparison_plot_main}
\caption{Comparison of benchmark scenarios for $\mu\to e\gamma$ as a function of scalar masses. For the one loop benchmark (red dashed curves), the lower (upper) curve is for $m_A = m_H + 100\; (200)$~GeV, and flipping $H \leftrightarrow A$ makes little difference. For the two-loop BSM benchmark, the black curve is for degenerate $m_H = m_A$, while the red (blue) curves show the variation in $m_H\, (m_A)$ with $m_A\, (m_H)$ heavier by 100, 200~GeV, where satisfying the MEG bound~\cite{TheMEG:2016wtm} at the low end of 200~GeV fixes $\rho_{\mu e} = \rho_{e\mu} \simeq 0.3 \lambda_e$. Holding this value fixed, the two-loop $h$ benchmark is the green dashed horizontal line, which lies below the MEG~II~\cite{Baldini:2018nnn} sensitivity. See text for further discussion.
}
\label{benchmarks}
\end{figure}

In g2HDM, flavor changing neutral Higgs (FCNH) couplings are controlled~\cite{Hou:1991un} by the mass-mixing hierarchy; hence, the one loop diagram, Fig.~\ref{fig:feyndiag}(left), is expected to be highly suppressed~\cite{Chang:1993kw} by multiple chirality flips. Using the one loop formula of Ref.~\cite{Hou:2020tgl} with a simple change of indices, we assume $\rho_{\mu\mu}\rho_{\mu e}$ from an intermediate muon in the loop is negligible compared with $\rho_{\tau\mu}^*\rho_{\tau e}$ from an intermediate $\tau$, which is even more so the case for an intermediate $e$. We illustrate this ``one loop benchmark'' in Fig.~\ref{benchmarks} for $\rho_{\tau\mu} = \rho_{\mu\tau} = \lambda_\tau$ and $\rho_{\tau e} = \lambda_e$, and for $m_A = m_H + 200$\,GeV (or with $H \leftrightarrow A$ interchanged). The effect by itself is out of reach for any time to come, unless $A$, $H$ are very light. In fact, for $m_A = m_H \in (300,\, 500)$\,GeV, due to a cancellation mechanism, the MEG or the future MEG~II bounds would allow $\rho_{\tau\mu}\rho_{\tau e}$ at ${\cal O}(10^4)$ times larger than $\lambda_e\lambda_\tau$, which is very accommodating. For nondegenerate $m_H = 300$\,GeV, $m_A = 500$\,GeV, we find that $\rho_{\tau\mu}\rho_{\tau e}/ \lambda_e\lambda_\tau \lesssim 17$ from MEG can be improved to 6.6 with MEG~II, with similar results for flipping $H \leftrightarrow A$.

It is the two loop mechanism~\cite{Chang:1993kw} that is of interest for g2HDM, where the $\rho_{\mu e}$ coupling induces $\mu \to e\gamma$ decay by inserting the $\phi \to \gamma V^*$ vertex [$\phi = h, H, A$; see Fig.~\ref{fig:feyndiag}(center) and (right)] related to the $h \to \gamma\gamma$ process, with $V = Z$ subdominant. Following Ref.~\cite{Hou:2020tgl} for $\tau\to\mu\gamma$, we define two BSM benchmarks for illustrating two loop effects. Taking the extra top Yukawa coupling $\rho_{tt} \simeq 1$ while setting $c_\gamma = 0$, one maximizes the $H$, $A$ effect but decouples the $h$ boson. This ``BSM benchmark'' is illustrated in Fig.~\ref{benchmarks}, where $\rho_{\mu e} = \rho_{e\mu} \simeq 0.3\lambda_e$ is taken to satisfy the current MEG bound of Eq.~(\ref{eq:MEG16}) at $m_H$ or $m_A = 200$\,GeV. The MEG~II experiment will continue to probe $\rho_{\mu e}$ down to lower values.

A second benchmark illustrates the effect of the SM-like $h$ boson, where we take $\rho_{tt} = 0$ to decouple the exotic $H$, $A$ scalars, but take $c_\gamma = 0.2$ as a large value that may still be allowed. This ``$h$ benchmark'' is also plotted in Fig.~\ref{benchmarks}, giving ${\cal B}(\mu \to e\gamma) \simeq 10^{-14}$ for $\rho_{\mu e} = \rho_{e\mu} \simeq 0.3\lambda_e$, which appears out of reach for MEG~II. Depending on whether $c_\gamma$ is smaller or larger than 0.2, the rate would drop further or become larger, although a $c_\gamma$ value larger than 0.2 may not be plausible. But the rate scales only with the product $c_\gamma^2\rho_{\mu e}^2$, and if $\rho_{tt}$ truly vanishes, a $\rho_{\mu e}$ value larger than $0.3 \lambda_e$ is allowed. We note that, unlike the $\tau \to \mu\gamma$ case, where $h \to \tau\mu$~\cite{PDG} provides a constraint~\cite{Hou:2020tgl} on $c_\gamma\rho_{\tau\mu}$, no realistic constraint on $c_\gamma\rho_{\mu e}$ can be extracted from the $h \to \mu e$ search~\cite{PDG} for our purpose, as $\mu \to e\gamma$ already constrains $\rho_{\mu e}$ to be so small.
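This scaling can be made explicit. Using only the benchmark numbers just quoted, the $h$ benchmark rate may be rescaled as
\begin{align}
{\cal B}(\mu \to e\gamma)\big|_{h} \simeq 10^{-14} \left(\frac{c_\gamma}{0.2}\right)^{2} \left(\frac{\rho_{\mu e}}{0.3\,\lambda_e}\right)^{2}, \nonumber
\end{align}
so that, for example, at $c_\gamma = 0.2$ the MEG bound of Eq.~(\ref{eq:MEG16}) translates into $\rho_{\mu e} \lesssim 0.3\,\lambda_e\sqrt{42} \simeq 1.9\,\lambda_e$, anticipating the numbers used below.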
On the other hand, the value of $\rho_{tt}$ is not known at present, except that any {\it finite} value may suffice~\cite{Fuyuto:2017ewj} for EWBG. For instance, in trying to account for the strong bound on the electron EDM by ACME~\cite{Andreev:2018ayy}, the smaller $|\rho_{tt}| \simeq 0.1$ was chosen in Ref.~\cite{Fuyuto:2019svr} to ease the tension. While $\rho_{tt}$ at ${\cal O}(1)$ is not strictly ruled out, we stress that $\mu \to e\gamma$ probes the $\rho_{\mu e}\rho_{tt}$ product; hence, we do not really know whether we are yet probing $\rho_{\mu e}$ below the strength of $\lambda_e$ for the BSM benchmark. Thus, for example, if $\rho_{tt} = 0$ and EWBG proceeds through the $\rho_{tc}$ mechanism~\cite{Fuyuto:2017ewj}, then the MEG bound of Eq.~(\ref{eq:MEG16}) only requires $\rho_{\mu e} = \rho_{e\mu} \lesssim 1.9 \lambda_e$ for our $h$ benchmark, and MEG~II could probe down to $0.7 \lambda_e$. Both values are still in accord with Eq.~(\ref{eq:rhoi1}), but we note that if $c_\gamma$ is lower than the value of 0.2 used, which seems likely, then the allowed $\rho_{\mu e}$ range would rise. As a passing remark, we expect $\tau \to e\gamma$ to be much suppressed compared with $\tau \to \mu\gamma$ in g2HDM, as $\rho_{\tau e}$ is expected to be much smaller than $\rho_{\tau\mu}$.

\section{\boldmath Other $\mu$FV Processes}

\subsection{\boldmath $\mu \to 3e$ and $\tau\to \mu\gamma,\,3\mu$}

As Mu3e will start soon to finally probe below the old SINDRUM bound of $10^{-12}$, Eq.~(\ref{eq:SINDRUM88}), we estimate the $\mu \to 3e$ rate. We find, consistent with Ref.~\cite{Crivellin:2013wna}, the simple tree level formula for $\mu \to 3e$,
\begin{align}
{\cal B}(\mu\to 3e) & = \frac{1}{32} \biggl[2\Bigl|\sum\frac{y_{\phi\mu e}^\ast y_{\phi ee}}{ \hat m_\phi^2}\Bigr|^2 + 2\Bigl|\sum\frac{y_{\phi e\mu}^\ast y_{\phi ee}}{ \hat m_\phi^2}\Bigr|^2 \biggr. \nonumber\\
& \ \ \ \ \biggl. +\, \Bigl|\sum\frac{y_{\phi\mu e}y_{\phi ee}}{\hat m_\phi^2}\Bigr|^2 + \Bigl|\sum\frac{y_{\phi e\mu}y_{\phi ee}}{\hat m_\phi^2} \Bigr|^2 \biggr],
\end{align}
where we ignore extra Yukawa coupling corrections to the muon decay rate $\Gamma_\mu$~\cite{Hou:2019uxa}, $y_{\phi ij}$ are Yukawa couplings for $\phi = h,\, H,\, A$ that can be read off from Eq.~(3) of Ref.~\cite{Hou:2020tgl}, and $\hat m_\phi$ are scalar masses normalized to $v$.

In view that 200~GeV may be too aggressive for the lowest possible exotic scalar mass, we take for illustration the relatively conservative $m_H = m_A = 300$ GeV. We define our benchmark further as follows: we take, somewhat arbitrarily, $c_\gamma = 0.05$ for the effect from $h$; we take $\rho_{\mu e} (= \rho_{e\mu})$, $\rho_{ee}$ and $\rho_{\tau e} (= \rho_{e\tau}) = \lambda_e$ [Eq.~(\ref{eq:rhoi1})], and take $\rho_{\tau\tau}$ and $\rho_{\tau\mu} (= \rho_{\mu\tau}) = \lambda_\tau$ [Eq.~(\ref{eq:rho3j})]. We then find that $\rho_{tt} \simeq 0.4$ saturates the MEG bound on $\mu \to e\gamma$, and ${\cal B}(\mu\to3e)|^{\rm contact} \sim 5 \times 10^{-24}$ at tree level, which is far out of experimental reach. But the $\mu e\gamma$ dipole coupling can generate $\mu \to 3e$~\cite{Kuno:1999jp},
\begin{align}
{\cal B}(\mu \to 3e) \simeq \frac{\alpha}{3\pi} \left[{\rm log}\left(\frac{m_\mu^2}{m_{e}^2}\right) - \frac{11}{4} \right] {\cal B}(\mu \to e\gamma),
\label{eq:mu3e}
\end{align}
and we find ${\cal B}(\mu \to 3e)|^{\rm dipole} \simeq 2.6 \times 10^{-15}$ for our benchmark.
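As a quick numerical cross-check of this dipole relation, the prefactor evaluates to
\begin{align}
\frac{\alpha}{3\pi} \left[\ln\frac{m_\mu^2}{m_e^2} - \frac{11}{4}\right] \simeq 7.7 \times 10^{-4}\, (10.66 - 2.75) \simeq 6.1 \times 10^{-3}, \nonumber
\end{align}
which, multiplied by the MEG-saturating ${\cal B}(\mu \to e\gamma) = 4.2\times 10^{-13}$ of our benchmark, indeed reproduces $\simeq 2.6 \times 10^{-15}$.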
Though out of reach of Mu3e in its early phase, this should be detectable with muon intensity upgrades, where the experiment should be able to confirm the $\mu \to e\gamma^* \to 3e$ nature. For the $\tau$, our benchmark gives ${\cal B}(\tau \to \mu\gamma) \simeq 3.1 \times 10^{-9}$, which is an order of magnitude below the current B factory bound, but reachable by Belle~II. Using formulas analogous to the above, we find ${\cal B}(\tau \to 3\mu)|^{\rm contact} \simeq 4.9 \times 10^{-13}$, and the larger ${\cal B}(\tau\to3\mu)|^{\rm dipole} \simeq 7.0 \times 10^{-12}$, which is still out of Belle~II reach. However, if Belle~II discovers $\tau \to \mu\gamma$ in early data, i.e., above $10^{-8}$, which is certainly possible~\cite{Hou:2020tgl} in g2HDM, it would imply $\tau\to3\mu$ at $10^{-10}$ or above, which can be probed by the fixed-target experiment TauFV~\cite{TauFV} that is being planned. Also arising from the $\tau\mu\gamma$ dipole, $\tau^- \to \mu^-e^+e^-$ would be slightly higher. But, suppressed by $\rho_{e\mu}$, the $\tau^- \to \mu^-e^+\mu^-$ process is expected to be far below the $\tau \to 3\mu$ contact process in g2HDM, while $\tau \to e^-\mu^+\mu^-$ would be suppressed by the $\tau \to e\gamma$ dipole transition.

\subsection{\boldmath $\mu N \to eN$ conversion}

With two competing experiments, COMET and Mu2e, the prospects for pushing $\mu \to e$ conversion during the next decade or more are exceptionally bright, as the current limit~\cite{Bertl:2006up} of $R_{\mu e} < 7 \times 10^{-13}$, Eq.~(\ref{eq:SINDRUM06}), is expected to improve by $\sim$ 3--4 orders of magnitude~\cite{Adamov:2018vin,Bartoszek:2014mya}. The relevant effective Lagrangian is given by~\cite{Cirigliano:2009bz,Crivellin:2014cta}
\begin{equation}
\begin{aligned}
\mathcal{L}_{\mathrm{eff}} =\, & m_{\mu} \bigl(C_{T}^{R}\, \bar{e} \sigma_{\alpha\beta} L \mu + C_{T}^{L}\, \bar{e} \sigma_{\alpha\beta} R \mu\bigr) F^{\alpha\beta} \\
& + \, \bigl(C_{q q}^{S R}\, \bar{e} L \mu + C_{q q}^{S L}\, \bar{e} R \mu\bigr)\, m_{\mu} m_{q} \bar{q} q,
\label{eq:Leff_mue}
\end{aligned}
\end{equation}
where $C_{T}^{L,R}$ correspond to the $\mu e\gamma$ dipole, while $C_{qq}^{SL(R)}$ are coefficients of contact terms generated by scalar exchange. There are no current-current interactions at tree level in g2HDM. One computes the conversion rate $\Gamma_{\mu \to e}$ and normalizes it to the muon capture rate to get $R_{\mu e}$. The conversion rate is given by
\begin{equation}
\begin{aligned}
\Gamma_{\mu \rightarrow e} = {m_{\mu}^{5}} & \left| {1\over 2} C_{T}^{L(R)} D + 2\Bigl[m_{\mu} m_{p}\, \tilde{C}_{p}^{SL(R)} S^{p} + p \rightarrow n\Bigr]\right|^{2},
\label{eq:Gam_mue}
\end{aligned}
\end{equation}
where the $L$ and $R$ effects add in quadrature, and $S^{p,n}$ accounts for the lepton-nucleus overlap. For gold, we use~\cite{Kitano:2002mt} $D = 0.189$, $S^p = 0.0614$, and $S^n=0.0918$. In Eq.~(\ref{eq:Gam_mue}),
\begin{equation}
\begin{aligned}
\tilde{C}_{p}^{SL(R)} &= \sum C_{q q}^{SL(R)} f_{q}^{p},
\label{eq:Ceff_mue}
\end{aligned}
\end{equation}
relates to the nucleon matrix elements, $f_{q}^{p,n}$, that account for the quark content of the proton, where we use $f_u^p = f_d^n = 0.024$, $f_d^p = f_u^n = 0.033$~\cite{Harnik:2012pb}, and $f_s^p = f_s^n = 0.043$~\cite{Junnarkar:2013ac}. For heavy quarks, we follow Ref.~\cite{Harnik:2012pb} and use $f_Q^{p,n}= (2/27)(1-f_u^{p,n}-f_d^{p,n}-f_s^{p,n})$~\cite{Shifman:1978zn} for $Q = c,\,b,\,t$.
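Numerically, with the light-quark values just quoted (whose sum is the same for proton and neutron), each heavy-quark matrix element evaluates to
\begin{align}
f_Q^{p,n} = \frac{2}{27}\,\bigl(1 - 0.024 - 0.033 - 0.043\bigr) \simeq 0.067. \nonumber
\end{align}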
In g2HDM, the tree level contribution can be written in terms of Wilson coefficients~\cite{Crivellin:2014cta} for the contact terms induced by scalar $\phi = h,\, H,\, A$ boson exchange,
\begin{equation}
\begin{aligned}
C_{qq}^{SL} &= ({2}/v^4) \sum \hat y_{\phi e\mu}{\rm Re}\,\hat y_{\phi qq}/{\hat m_\phi^2},
\end{aligned}
\end{equation}
where $\hat y_{\phi e\mu}$ ($\hat y_{\phi qq}$) is normalized to $\lambda_\mu$ ($\lambda_q$), and one flips $y_{\phi e\mu} \to y_{\phi \mu e}^\ast$ to get $C_{qq}^{SR}$. The dipole $C_T^{L,R}$ contributions are related to $\mu\to e \gamma$, i.e., $C_T^{R,L} = \sqrt{\alpha_e \pi } \,A_{L, R}$, where $A_{L, R}$ contribute to ${\cal B}(\mu\to e \gamma)$ [see Ref.~\cite{Hou:2020tgl} for the ${\cal B}(\tau \to \mu\gamma)$ formulas].

The $\mu e\gamma$ dipole again dominates $\mu N \to eN$ conversion, with contact terms subdominant. For our benchmark, we obtain the conversion ratio $R_{\mu e}|^{\rm contact} \simeq 2.4 \times 10^{-16}$ for gold as an example, while $R_{\mu e}|^{\rm dipole} \simeq 1.6 \times 10^{-15}$. Here, we have used $\rho_{qq} = \lambda_q$ for all quarks, except $\rho_{tt} \simeq 0.4$ as inferred from the MEG bound with our benchmark. We note that contact terms are relatively more important in $\mu \to e$ conversion than in the $\mu\to 3e$ process. These values can be probed at COMET and Mu2e. In fact, these experiments are poised to overtake MEG~II in probing $\mu \to e\gamma$ in g2HDM. Furthermore, if a signal is observed, then together with the knowledge of nuclear matrix elements, one can use several different nuclei to probe and extract the effect of the contact term(s) in Eq.~(\ref{eq:Leff_mue}).

We see that the extra $\rho_{\mu e}$ and $\rho_{ee}$ couplings of g2HDM hide very well so far from muon probes. It is with the help of the extra $\rho_{tt}$ coupling via the two loop mechanism~\cite{Chang:1993kw} for $\mu \to e\gamma$ decay that MEG constrains $\rho_{\mu e} \lesssim \lambda_e$ [see Eq.~(\ref{eq:rhoi1})]. MEG~II would continue this program, but the $\mu N \to eN$ experiments, COMET and Mu2e, would become competitive when $10^{-15}$ sensitivity is reached. Mu3e can confirm the dipole nature once $\mu \to 3e$ is also observed with high muon intensity upgrades. Likewise, $\tau \to \mu\gamma$ would probe $\rho_{\tau\mu}$ modulo $\rho_{tt}$, but the $\tau \to 3\mu$ process seems out of reach for Belle~II (hence LHCb) if g2HDM holds, even if Belle~II quickly observes $\tau \to \mu\gamma$. Thus, while there remains hope for discovery, $\mu$FV physics looks ``sanitized'' within g2HDM with its extra $\rho_{\ell\ell'}$ (and $\rho_{tt}$) Yukawa couplings, which bears witness to the long history of muon research.
\begin{table*}[t]
\begin{center}
\begin{tabular}{|c|l|l|}
\hline
Decay mode & \quad\quad Current bound & \quad\quad Future sensitivity \\
\hline \hline
$B_s \to \tau\tau$ & \ $5.2\times 10^{-3}$ (LHCb~\cite{Aaij:2017xqt}) & \ $\sim 8\times 10^{-4}$ (Belle~II,\,$5\,{\rm ab}^{-1}$\,\cite{Kou:2018nap}) \\
 & & \ $\sim 5\times 10^{-4}$ (LHCb~phase~II~\cite{Bediaga:2018lhg}) \\
$B_d\to\tau\tau$ & \ $1.6\times 10^{-3}$ (LHCb~\cite{Aaij:2017xqt}) & \ $\sim 1\times 10^{-4}$ (Belle~II~\cite{Kou:2018nap}) \\
$B \to K\tau\tau$ & \ $2.3 \times 10^{-3}$ (BaBar~\cite{TheBaBar:2016xwe}) & \ $\sim 2\times 10^{-5}$ (Belle~II~\cite{Kou:2018nap}) \ \\
\hline \hline
$B_s \to \tau\mu$ & \ $3.4\times 10^{-5}$ (LHCb~\cite{Aaij:2019okb}) & \ [Not yet publicized] \\
$B_d \to \tau\mu$ & \ $1.2\times 10^{-5}$ (LHCb~\cite{Aaij:2019okb}) & \ $1.3 \times 10^{-6}$ (Belle~II~\cite{Kou:2018nap}) \\
 & & \ $3 \times 10^{-6}$ (LHCb~phase~II~\cite{Bediaga:2018lhg}) \\
$B \to K\tau\mu$ & \ $2.8\times 10^{-5}$ (BaBar~\cite{Lees:2012zz}) & \ $\sim 3 \times 10^{-6}$ (Belle~II~\cite{Kou:2018nap}) \\
 & \ $3.9 \times 10^{-5}$ (LHCb~\cite{Aaij:2020mqb}) & \ [LHCb competitive] \\
\hline \hline
$B_s \to \mu e$ & \ $5.4 \times 10^{-9}$ (LHCb~\cite{Aaij:2017cza}) & \ $3 \times 10^{-10}$ (LHCb~phase~II~\cite{Bediaga:2018lhg}) \\
$B_d \to \mu e$ & \ $1.0 \times 10^{-9}$ (LHCb~\cite{Aaij:2017cza}) & \ $9 \times 10^{-11}$ (LHCb~phase~II~\cite{Bediaga:2018lhg}) \\
$B \to K \mu e$ & \ $6.4\times 10^{-9}$ (LHCb~\cite{Aaij:2019nmj}) & \ $\sim 6 \times 10^{-10}$ (LHCb~phase~II~\cite{Bediaga:2018lhg}) \ \\
\hline \hline
$B_s \to \mu\mu$ & \ $(3.0\pm 0.4)\times 10^{-9}$ (PDG~\cite{PDG})\; & \ $\sim 4.4\%$ (LHCb, $300~{\rm fb}^{-1}$~\cite{Cerri:2018ypt}) \\
$B_d \to \mu\mu$ & \ $(1.1^{+1.4}_{-1.3})\times 10^{-10}$\,\, (PDG~\cite{PDG}) & \ $\sim 9.4\%$ (LHCb, $300~{\rm fb}^{-1}$~\cite{Cerri:2018ypt}) \\
\hline \hline
$B \to \tau\nu$ & \ $(1.1 \pm 0.2)\times 10^{-4}$ (PDG~\cite{PDG}) & \ $\sim 5\%$ (Belle~II~\cite{Kou:2018nap}) \\
$B \to \mu\nu$ & \ $(5.3\pm 2.2)\times 10^{-7}$ (Belle~\cite{Prim:2019gtj}) & \ $\sim 7\%~({\rm stat})$ (Belle~II~\cite{Kou:2018nap}) \\
\hline \hline
\end{tabular}
\caption{Summary of current experimental data on the $B$ decays considered in our analysis. All upper bounds are at 90\% C.L., and ``phase~II'' for LHCb stands for HL-LHC running after upgrade~II.}
\label{tab:B-decays}
\end{center}
\end{table*}

\section{\boldmath Contrast: Muon, or Bold}

In this section, we contrast the ``sanitized'' muon front of the previous sections with what we dub the ``bold'' BSM front inspired by the $B$ anomalies. We refer to Ref.~\cite{Hou:2019dgh} for a discussion of all the current $B$ anomalies, including cautionary notes on the experimental results. Extending from $\mu$FV, we discuss BSM effects in (semi)leptonic $B$ decays, be it the BSM enhancement of $B_q \to \tau\tau$, or the purely BSM decays $B_q \to \tau\mu$, $B \to K\tau\mu$. We also touch upon the $B_q \to \mu\mu$ and $B \to \mu\nu,\, \tau\nu$ decays, which already appear SM-like in rate.

\subsection{\boldmath BSM-enhanced: $B_q \to \tau\tau$ modes}

The ``BaBar anomaly'' in $B \to D^{(*)}\tau\nu$~\cite{PDG,Hou:2019dgh} suggests a large tree level BSM effect interfering with the SM $b\to c\tau\nu$ amplitude. Based on general arguments, it was pointed out~\cite{Capdevila:2017iqn} that such a large effect should be accompanied by similar effects in $b \to s\tau\tau$.
Note that, because of the difficult $\tau^+\tau^-$ signature, the experimental bounds~\cite{PDG} are rather poor. Projecting from the BaBar anomaly, Ref.~\cite{Capdevila:2017iqn} suggested that ${\cal B}(B_s \to \tau\tau) \sim 5\times 10^{-4}$ (or larger) is possible, to be compared with $\simeq 7.7 \times 10^{-7}$ in SM~\cite{Bobeth:2013uxa}. Similarly, ${\cal B}(B \to K^{(*)}\tau\tau) \sim 10^{-4}$ is projected. The theory suggestion was in part stimulated by the LHCb search~\cite{Aaij:2017xqt}, based on 3~fb$^{-1}$ run 1 data, setting the 90\% C.L. bound of
\begin{align}
{\cal B}(B_s \to \tau\tau) < 5.2 \times 10^{-3}, \quad ({\rm LHCb},\,2017),
\label{eq:Bstautau}
\end{align}
which is an order of magnitude above the theory suggestion. Likewise, the only limit from a three-body search, ${\cal B}(B^+ \to K^+\tau^+\tau^-) < 2.3 \times 10^{-3}$ from BaBar~\cite{TheBaBar:2016xwe}, is also poor. One suffers from the lack of mass reconstruction capability, and only at the HL-LHC after LHCb upgrade~II~\cite{Bediaga:2018lhg} can the sensitivity reach $\sim 5 \times 10^{-4}$, touching the upper reaches of the projected enhancement~\cite{Capdevila:2017iqn}. Belle~II plans to take some $\Upsilon(5S)$ data early on, and projects a reach of $\sim 8.1 \times 10^{-4}$~\cite{Kou:2018nap}. As the environment is clean, Belle~II would likely take more $\Upsilon(5S)$ data if the BaBar anomaly is confirmed. For $B \to K^{(*)}\tau\tau$, the Belle~II sensitivity of $\sim 2 \times 10^{-5}$~\cite{Kou:2018nap} should be able to probe the range of interest at ${\cal O}(10^{-4})$. We list the current limits and future prospects for the $B_q \to \tau\tau$ and $B \to K^{(*)}\tau\tau$ modes in Table~II.

\subsection{\boldmath Purely BSM: $B_q \to \tau\mu$ and $B \to K\tau\mu$ modes}

The $B$ anomalies suggest lepton universality violation (LUV), such as $B \to D^{(*)}\tau\nu$ vs $B \to D^{(*)}\mu\nu$, or $B \to K^{(*)}\mu\mu$ vs $B \to K^{(*)}ee$. It was suggested~\cite{Glashow:2014iga} on general grounds that such LUV could be accompanied by lepton flavor violation (LFV), giving rise to interesting decays such as $B_q \to \ell\ell'$ and $B \to K\ell\ell'$ for $\ell \neq \ell'$. As the $B$ anomalies persisted, serious model building got underway, and we take the so-called PS$^3$ model~\cite{Bordone:2017bld} as the standard bearer for ambitious UV-complete models (which we term ``bold''). To handle severe low energy constraints and focus on the third generation, the Pati-Salam (PS) model~\cite{Pati:1974yy} comes in {\it three copies}. The presence of leptoquarks (LQ) in the Pati-Salam model induces decays such as $B_q \to \tau\mu$ and $B \to K\tau\mu$, for which detailed phenomenology was given in Ref.~\cite{Bordone:2018nbg}. These are striking signatures! Before long, with 3~fb$^{-1}$ run 1 data, LHCb set~\cite{Aaij:2019okb} the 90\% C.L. limit of
\begin{align}
& {\cal B}(B_s \to \tau\mu) < 3.4 \times 10^{-5}, \quad \ ({\rm LHCb},\,2019),
\label{eq:Bstaumu}
\end{align}
which contrasts with the poor bound of Eq.~(\ref{eq:Bstautau}) for $B_s \to \tau\tau$. This limit practically ruled out the entire ${\cal B}(B_s \to \tau\mu)$ range projected by Ref.~\cite{Bordone:2018nbg}, forcing model builders to introduce~\cite{Cornella:2019hct} right-handed LQ interactions as tuning parameters. In so doing, the $B_s \to \tau\tau$ and $B \to K\tau\tau$ decays get enhanced~\cite{Cornella:2019hct}, in accordance with Ref.~\cite{Capdevila:2017iqn}.
It would be interesting to see the full 9~fb$^{-1}$ run 1\,+\,2 results for the $B_s \to \tau\mu,\, \tau\tau$ modes. Perhaps because the analysis of Ref.~\cite{Aaij:2019okb} was still underway when the LHCb upgrade~II document~\cite{Bediaga:2018lhg} was being prepared, we cannot find sensitivity projections of $B_s \to \tau\mu$ for the full LHCb upgrade~II data (nor for Belle~II); hence, we state this explicitly in Table~II.

BaBar has searched~\cite{Lees:2012zz} for the companion $B \to K\tau\mu$ mode. Using a full hadronic tag to reconstruct the other charged $B$, hence with full kinematic control, one measures the $K^+$ and $\mu^-$ and projects into the $m_\tau$ window without reconstructing the $\tau$. The result at 90\% C.L. is~\cite{Lees:2012zz}
\begin{align}
{\cal B}(B^+ \to K^+\tau^+\mu^-) & < 2.8 \times 10^{-5}, \ \; ({\rm BaBar},\,2012) \label{eq:Kmutau_BB} \\
& < 3.9 \times 10^{-5}, \ \; ({\rm LHCb},\,\, 2020) \label{eq:Kmutau_LHC}
\end{align}
for the better measured charge combination, where Eq.~(\ref{eq:Kmutau_LHC}) is the recent LHCb measurement~\cite{Aaij:2020mqb} with the {\it full}\;9\;fb$^{-1}$\,run 1\,+\,2 data. We first note that Belle has not performed this measurement so far, despite having more data than BaBar. The second point to stress is that, although the LHCb result may not appear competitive at first sight, they exploit the $B_{s2}^{*0} \to B^+K^-$ decay and use the $K^-$ to tag~\cite{Stone:2014mza} the $B^+$ for full kinematic control, putting LHCb in the game for the $B^+ \to K^+\tau^+\mu^-$ pursuit, and making things more interesting for the Belle~II era.

LHCb also places the best bounds~\cite{Aaij:2017cza} of ${\cal B}(B_s \to \mu e) < 5.4 \times 10^{-9}$ and ${\cal B}(B_d \to \mu e) < 1.0 \times 10^{-9}$, as well as ${\cal B}(B^+ \to K^+\mu^+e^-) < 6.4 \times 10^{-9}$~\cite{Aaij:2019nmj}. The current limits and future prospects for the $B_q \to \tau\mu$ and $B \to K^{(*)}\tau\mu$ modes are listed in Table~II. The $\mu e$ counterparts are also listed, but aside from the comment given in Ref.~\cite{Glashow:2014iga}, it is not easy from the model building point of view to make projections that are experimentally accessible.

\subsection{\boldmath SM-like: $B_q \to \mu\mu$ and $B \to \tau\nu,\, \mu\nu$ modes}

It is useful to recall that $B_s \to \mu\mu$ was a front runner~\cite{PDG} in the 2000s as a possibly greatly enhanced mode, but a few years into LHC running, the $B_{s, d} \to \mu\mu$ decays became consistent with SM: the PDG values~\cite{PDG} are ${\cal B}(B_s \to \mu\mu) = (3.0 \pm 0.4) \times 10^{-9}$ and ${\cal B}(B^0 \to \mu\mu) = (1.1^{+1.4}_{-1.3}) \times 10^{-10}$, compared with the SM expectation~\cite{Beneke:2019slt} of ${\cal B}(B_s \to \mu\mu) = (3.66 \pm 0.14) \times 10^{-9}$ and ${\cal B}(B^0 \to \mu\mu) = (1.03 \pm 0.05) \times 10^{-10}$. We note that ATLAS, CMS, and LHCb have recently combined~\cite{Amhis} their 2011--2016 data to give ${\cal B}(B_s \to \mu\mu) = (2.69^{+0.37}_{-0.35}) \times 10^{-9}$ and ${\cal B}(B^0 \to \mu\mu) < 1.6 \times 10^{-10}$ at 90\% C.L. A discrepancy for $B_s \to \mu\mu$ at $\sim 2\sigma$ is suggested, which was already indicated by the PDG average, while the low value for $B_d \to \mu\mu$ is in part due to the negative central value from ATLAS. We will use the PDG results (see Table~II), which should be good enough for our illustrative purpose. In any case, the $B_d$ mode is not yet observed, but should emerge with sufficient data.
The estimated errors for LHCb at 300 fb$^{-1}$~\cite{Cerri:2018ypt} are given in Table~II. Naturally, models such as PS$^3$ do not give large enhancements for $B_q \to \mu\mu$, but $B_s \to \mu\mu$ serves as a reminder of how things might evolve for the $B$ anomalies, inasmuch as these ``anomalies'' are data-driven.

The $B \to \tau\bar\nu$ rate receives a neat correction~\cite{Hou:1992sy} in the type two 2HDM (2HDM-II), while Belle measurements~\cite{PDG} have settled around the SM expectation and, in fact, provide a constraint~\cite{Cornella:2019hct} on PS$^3$. Since the correction factor of Ref.~\cite{Hou:1992sy} does not depend on the flavor of the charged lepton, one has the ratio $R_B^{\mu/\tau} = {\cal B}(B \to \mu\bar\nu)/{\cal B}(B \to \tau\bar\nu) \cong 0.0045$ for both SM and 2HDM-II~\cite{Chang:2017wpl}, reflecting the helicity-suppression factor $m_\mu^2(1 - m_\mu^2/m_B^2)^2/[m_\tau^2(1 - m_\tau^2/m_B^2)^2]$. But some subtleties, such as the $V_{tb}/V_{ub}$ enhancement and the nondetection of the neutrino flavor $\bar \nu_i$ (it could be a $\bar \nu_\tau$ that escapes), as discussed in Ref.~\cite{Hou:2019uxa}, allow $R_B^{\mu/\tau}$ to deviate from the expected value precisely in g2HDM, and one probes the $\rho_{\tau\mu}\rho_{tu}$ product. Note that our actual knowledge~\cite{Hou:2020ciy} of $\rho_{tu}$ is rather poor compared with what is suggested in Eq.~(\ref{eq:rho3j}). The recent Belle update~\cite{Prim:2019gtj} gives
\begin{align}
{\cal B}(B \to \mu\bar\nu) = (5.3 \pm 2.2) \times 10^{-7}, \ \ ({\rm Belle},\,2020)
\label{eq:Bmunu}
\end{align}
where we add the statistical and systematic errors in quadrature, treating them as Gaussian. Equation~(\ref{eq:Bmunu}) is consistent with SM, but it gives a two-sided bound, i.e., ${\cal B}(B \to \mu\bar\nu)$ could be above or below the nominal SM value~\cite{Hou:2019uxa} of $3.9 \times 10^{-7}$, and the $R_B^{\mu/\tau}$ ratio provides a good probe of g2HDM for Belle~II in the next few years.

We reiterate that, though $B_q \to \mu\mu$ are loop processes while $B \to \tau\nu,\, \mu\nu$ proceed at tree level, and the measured values still have to settle, none are in disagreement with SM expectations, which puts constraints on BSM models inspired by the $B$ anomalies, as well as on g2HDM. The current status and future prospects are listed in Table~II.

\begin{figure*}[t]
\centering
\includegraphics[angle=0,width=17.8cm]{LFV-summary}
\caption{Transcription of Table~II, with blue solid circles for current bounds, orange dotted circles for future sensitivities, green shaded bands for the measured ranges of $B_s \to \mu\mu$ and $B \to \tau\nu,\, \mu\nu$, and red $\star$ marking SM predictions. The grey shaded bands illustrate the five leading predictions of the PS$^3$ model, while the red $\Downarrow$ illustrate the g2HDM benchmark projections, where we use $c_\gamma = 0.05$, $m_{H,\, A} = 300$~GeV, $\rho_{\mu e} = \lambda_e$, $\rho_{\tau\mu}=\lambda_\tau$, and $\rho_{ii} = \lambda_i$, except $\rho_{tt} = 0.4$. See the text for further details.}
\label{contrast}
\end{figure*}

\subsection{Contrasting g2HDM with ``boldness''}

Having presented the status of various (semi)leptonic rare $B$ decays, where some striking projections arise from models motivated by the $B$ anomalies, we turn to contrasting them with g2HDM, the projections of which conform better with the more ``sanitized'' tradition of muon physics.

\subsubsection{From $\mu$FV to PS$^3$}

The purely leptonic $\mu$FV processes discussed previously, such as $\mu \to e\gamma$ in Sec.~II, and $\mu \to 3e$, $\tau \to \mu\gamma$, $\tau\to 3\mu$, and $\mu N \to eN$ in Sec.~III, are illustrated in Fig.~\ref{contrast}.
That is, the current bounds and future sensitivities listed in Table~I are plotted as blue solid and orange dotted circles, respectively. None have so far been observed, so the current MEG bound on $\mu \to e\gamma$ is also marked by a downward red {\boldmath $\Downarrow$} for the g2HDM projection, where, for the sake of illustration, we have set up a benchmark consistent with Eqs.~(\ref{eq:rho3j}) and (\ref{eq:rhoi1}) and with small $h$-$H$ mixing. As the scalar-induced contact effect is rather small, the dipole-induced $\mu \to 3e$ transition is also marked by a downward red {\boldmath $\Downarrow$}. However, though subdominant, the scalar-induced contact effect for $\mu N \to eN$ is not negligible, and the downward red {\boldmath $\Downarrow$} shows the combined dipole plus contact effect, which is destructive. The sign of the interference could easily be flipped, so the actual possibilities are considerably broader. The $\tau\to \mu\gamma$ rate with this benchmark is also illustrated, which falls toward the lower range of the Belle~II reach, while we predict that $\tau \to 3\mu$ is out of reach in g2HDM.

Likewise, the current bounds and future sensitivities for the (semi)leptonic rare $B$ decays discussed in Secs.~IV.A and IV.B are also plotted in Fig.~3. Of interest here are some {\it two-sided projections}, as they stand at present, for the striking signatures arising from PS$^3$~\cite{Cornella:2019hct},
\begin{align}
10^{-4} \lesssim {\cal B}(B_s \to \tau\tau) & \lesssim 4.5 \times 10^{-3}, \label{eq:Bstataps3} \\
10^{-6} \lesssim {\cal B}(B_s \to \tau\mu) & \lesssim 6 \times 10^{-5}, \label{eq:Bstamups3} \\
10^{-9} \lesssim {\cal B}(\tau \to \mu\gamma) & \lesssim 8 \times 10^{-8}, \label{eq:tamugamps3}
\end{align}
while ${\cal B}(B \to K\tau\mu)$ scales down from ${\cal B}(B_s \to \tau\mu)$ by a factor of $\sim 9$, and for ${\cal B}(B \to K\tau\tau)$ vs ${\cal B}(B_s \to \tau\tau$) the factor is $\sim 13$. We do not show the ${\cal B}(\tau \to \mu\phi)$ mode~\cite{Cornella:2019hct}, as it seems out of Belle~II reach. These ranges are shown in Fig.~\ref{contrast} as grey shaded bands, where the existing bounds for $B_s \to \tau\mu$ and $\tau \to \mu\gamma$ cut into the upper ranges of the PS$^3$ projections, and are the points of our comparison with g2HDM expectations. As noted, the future sensitivity for $B_s \to \tau\mu$ is not quite known at present.

We note further that, with $\tau \to \mu\gamma$ generated by the LQ in the loop, there is an anticorrelation with ${\cal B}(B_s \to \tau\mu)$ within the PS$^3$ scenario~\cite{Cornella:2019hct}: if the limit on $B_s \to \tau\mu$ is pushed further down with the 9\;fb$^{-1}$ full run 1\,+\,2 data, then ${\cal B}(\tau \to \mu\gamma)$ will move up and become closer to the current limit, which would be a boon to Belle~II in this model scenario. Likewise, pushing down on $\tau \to \mu\gamma$ would imply an increased lower bound for $B_s \to \tau\mu,\,\tau\tau$ in PS$^3$. These bounds and (anti)correlations allow the PS$^3$ model to ``provide a smoking-gun signature for this framework \ldots or could lead us to rule it out~\cite{Cornella:2019hct}.''

The $B_q \to \mu\mu$ and $B \to \mu\nu,\,\tau\nu$ processes discussed in Sec.~IV.C are plotted differently in Fig.~3, as they are now mostly found to be consistent with SM expectations (marked as red {\boldmath $\star$}). The measured $B_s\to\mu\mu$ rate, shown as the narrow green shaded band, covers the SM expectation but appears slightly on the low side.
Likewise, $B \to \tau\nu$ is also measured to be consistent with SM, which Belle~II would continue to probe. For $B_d \to \mu\mu$, we plot the more conservative upper limit from PDG, while the latest Belle update on $B \to \mu\nu$ gives a two-sided bound, which is illustrated by the broad green shaded band that covers the SM expectation. The PS$^3$ model shies away from processes that involve only muons, but $B \to \tau\nu$ does provide~\cite{Cornella:2019hct} some constraint.} \subsubsection{The $bq\ell\ell'$ processes in g2HDM} {The rare $B$ decay processes of interest are of the form of $bq\ell\ell'$ four-fermi interactions (for $B \to \ell\bar\nu$ we only quote results).} Thus, the extra Yukawa couplings that enter on the quark side are $\rho_{bs}$, $\rho_{bd}$ at tree level, and $\rho_{\ell\ell'}$ for $\ell^{(\prime)} = \tau,\, \mu,\, e$ on the charged lepton side. For the latter, we continue to use our benchmark values $\rho_{\tau\tau},\, \rho_{\tau\mu} = \lambda_\tau \simeq 0.010$ [Eq.~(\ref{eq:rho3j})], and $\rho_{\mu e},\, \rho_{ee} = \lambda_e \cong 0.0000029$ [Eq.~(\ref{eq:rhoi1})]. The issue is that, for $\ell = \ell'$, SM loop effects seem affirmed by experiment, while for $\ell \neq \ell'$, there is no SM loop effect, and one would need the leptonic FCNH couplings in g2HDM to act. In the following, we will use a tree level approach to $B_q \to \mu\mu$ to infer $B_q \to \ell\ell'$ for the $\ell \neq \ell'$ case, while using loop corrections to $B_q \to \mu\mu$ to discuss $B_q \to \tau\tau$. {In each case, the corresponding $B_q$ mixing constraints are taken into account.} {It is well known that the measured~\cite{PDG} $B_q$ mixings can be accounted for quite well by SM loop effects}. For example, the operator $O_1 = (\bar s_\alpha \gamma^\mu L b_\alpha)(\bar s_\beta \gamma_\mu L b_\beta)$ for $B_s$ mixing has coefficient $(G_F m_W V_{ts}^{\ast} V_{tb}/2\pi)^2 S_0(x_t)$, with $x_t = m_t^2/m_W^2$ and $S_0(x_t)\simeq 2.35$ from the SM box diagram, and one just replaces $s \to d$ for $B_d$ mixing. {In g2HDM, $\rho_{bq}$ ($q = s,\,d$) enters $B_q$ mixing at {\it tree} level, hence stringent constraints are implied.} {The NP effects in $B_q$ mixings can be parametrized by defining $C_{B_q} e^{2 i \Phi_{B_q}} = \langle \bar B_q\lvert {\cal H}_{\rm eff}^{\rm Full}\rvert B_q\rangle / \langle \bar B_q\lvert {\cal H}_{\rm eff}^{\rm SM}\rvert B_q\rangle$. Using the 2018 NP fit performed by UTfit~\cite{UTfit2018}, one finds \begin{equation} \begin{aligned} C_{B_s} &= 1.110 \pm 0.090, \quad \Phi_{B_s} = (0.42 \pm 0.89)^{\circ}, \\ C_{B_d} &= 1.05 \pm 0.11, \quad ~~~\Phi_{B_d} = (-2.0 \pm 1.8)^{\circ}. \end{aligned}\label{eq:CBsCBd} \end{equation} For the sake of illustration {and to reduce the number of parameters, we will treat the extra Yukawas as real} and assume that, after adding the g2HDM effect, $C_{B_q}$ and $\Phi_{B_q}$ stay within the 2$\sigma$ ranges of Eq.~(\ref{eq:CBsCBd}).} In g2HDM, {the leading effect} comes from the operator $O_4 = (\bar s_\alpha L b_\alpha)(\bar s_\beta R b_\beta)$ at tree level, which constrains the product $\rho_{sb}\rho_{bs}^\ast$, while the operators $O_2 = (\bar s_\alpha L b_\alpha)(\bar s_\beta L b_\beta)$ and $O'_2 = (\bar s_\alpha R b_\alpha)(\bar s_\beta R b_\beta)$ constrain the individual couplings $\rho_{bs}^\ast$ and $\rho_{sb}$, but more weakly. Furthermore, the coefficients of $O_2^{(\prime)}$ suffer cancellation between $H$ and $A$ contributions. 
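As a parenthetical numerical cross-check of the SM box normalization quoted above, the short script below evaluates the standard Inami--Lim function $S_0(x_t)$; the top mass input is our own (assumed) choice, and slightly different inputs reproduce the quoted $S_0(x_t) \simeq 2.35$.
\begin{verbatim}
import math

def S0(x):
    # standard Inami-Lim box function for Delta B = 2 transitions
    return (4*x - 11*x**2 + x**3) / (4*(1 - x)**2) \
        - 3*x**3 * math.log(x) / (2*(1 - x)**3)

m_t, m_W = 163.0, 80.4   # GeV; top mass input is our assumption
x_t = (m_t / m_W)**2
print(f"x_t = {x_t:.2f}, S0(x_t) = {S0(x_t):.2f}")
# -> x_t = 4.11, S0(x_t) = 2.31, cf. the 2.35 quoted above
\end{verbatim}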
{Assuming $O_4$ dominance, one has the coefficient} $C_4= -{y_{\phi b s}^\ast \,y_{\phi s b}}/{m_\phi^2}$, where $\phi$ is summed over $h$, $H$, $A$, and we take {$c_\gamma = 0.05$} and $m_H = m_A = 300$~GeV as before. Taking renormalization group evolution into account~\cite{Becirevic:2001jj}, using bag factors from Ref.~\cite{Carrasco:2013zta} and decay constants from Ref.~\cite{Aoki:2019cca}, we find $|\rho_{sb}\rho_{bs}^\ast| \lesssim (0.021 \,\lambda_b)^2$. In a similar vein, we obtain $|\rho_{db}\,\rho_{bd}^\ast| \lesssim (0.0046\,\lambda_b)^2$, where we take $\lambda_b \simeq 0.016$. Assuming real couplings, {we adopt $\rho_{sb} \simeq \rho_{bs}^* \simeq 0.021\lambda_b \sim 0.00034$, and $\rho_{db} \simeq \rho_{bd}^* \simeq 0.0046\lambda_b \sim 0.000074$,} respectively. {With $\rho_{bs}$, $\rho_{bd}$, and $\rho_{\ell\ell'}$ so small, one may expect the $B_q \to \ell\ell$ modes to be SM-like in g2HDM, which is the case for $B_s \to \mu\mu$, and to some extent $B_d \to \mu\mu$ as well: the measured strengths are indeed SM-like.} At tree level, we find that $B_s \to \mu\mu$ gives stringent constraints on $\rho_{bs(sb)}$, which can be on a par with those from $B_s$ mixing. For example, {for our benchmark of} $c_\gamma =0.05$, $\rho_{\mu\mu}=\lambda_\mu \sim 0.00061$, and $m_H=m_A=300$~GeV, the $2\sigma$ range of ${\cal B}(B_s\to\mu\mu)$ gives the allowed ranges $\rho_{sb}=\rho_{bs} \in [-0.019 \lambda_b, 0.143 \lambda_b] \,\cup\, [1.173 \lambda_b, 1.334 \lambda_b]$, {which is less restrictive than the $B_s$ mixing constraint.} On the other hand, due to the poorer measurement of $B_d \to \mu\mu$ so far, bounds on $\rho_{db(bd)}$ from $B_d\to\mu\mu$ are weaker than those from $B_d$ mixing. Thus, given that the $B_q \to \mu\mu$ rates are already SM-like in g2HDM, {we expect $B_q \to \tau\tau$ not to differ much from SM expectations if tree contributions prevail.} With $\rho_{sb} = \rho_{bs}$ and $\rho_{db} = \rho_{bd}$ so suppressed, one has to take the up-type extra Yukawa couplings into account, which contribute to $B_q$ mixings and $B_q \to \ell\ell$ at one loop order. The leading contributions to $B_q$ mixings come from the same box diagrams as in SM, but with either one $W^+$ or both replaced by $H^+$, which also generates $O_1$. Considering the effect of $\rho_{tt}$ only, we obtain {$\Delta C_1^{WH} = y x_t V_{ts}^{\ast 2} V_{tb}^2\,|\rho_{tt}|^2 g(y, y x_t)/32 \pi^2 v^2$, where $y = m_W^2/m_{H}^2$} for the $WH$ box correction, and $\Delta C_1^{HH}= -V_{ts}^{\ast 2}V_{tb}^2 \,|\rho_{tt}|^4 f(y x_t)/128 \pi^2 m_{H}^2$ for the $HH$ box correction. {Here, $H$ stands as shorthand for $H^+$}, and the loop functions $f$ and $g$ are given in the Appendix. Considering this one loop contribution by itself gives a constraint on the $\rho_{tt}$--$m_{H^+}$ plane. For example, for a $300$\;GeV charged Higgs boson, we find $|\rho_{tt}|\lesssim 0.8$, with a similar bound from $B_d$ mixing. However, we caution that the inclusion of additional up-type {Yukawa couplings} can induce cancellation effects, thereby weakening the constraint. Most notably, with $\rho_{ct}$ as small as ${\cal O}(10^{-2})$, one can relax $\rho_{tt}$ to $\sim 1 $. As stated, we avoid cancellations and discuss tree and loop contributions separately. The same treatment is applied to rare $B$ decays, and we continue to assume $\rho_{qb} = \rho_{bq}$ and take them as real. $B_q\to\mu\mu$ can also receive a significant contribution through one loop diagrams, where the leading effect is from $Z$ penguins with $H^+$ and the top quark in the loop. 
This is a lepton flavor universal contribution and modifies the coefficient of $O_{10} = (\bar s \gamma^\alpha L b) (\bar \ell \gamma_\alpha \gamma_5 \ell)$. We find~\cite{Crivellin:2019dun} the $\rho_{tt}$ correction {$\Delta C_{10}^{H^+} = |\rho_{tt}|^2 h(y x_t)/16 \pi \alpha_e$}, where the loop function $h$ is given in the Appendix. The other loop diagrams {are suppressed in the small $\rho^d_{ij}$ approximation and/or by the extra lepton Yukawa couplings $\rho^\ell$ (such as in box diagrams)}. Similar to $B_q$ mixing, $\Delta C_{10}^{H^+}$ puts a constraint on the $\rho_{tt}$--$m_{H^+}$ plane. For $m_H=m_A=300$ GeV, we obtain $\rho_{tt} \lesssim 0.4$ for the $2\sigma$ range of ${\cal B}(B_s\to\mu\mu)$, which is {more stringent than $B_{s}$ mixing}. However, as already noted, the bound weakens if one includes other extra Yukawa couplings such as $\rho_{ct}$, which receives $|V_{cs}/V_{ts}|$ enhancement. In our numerical analysis, we therefore keep the tree level and one loop discussions separate, and only comment on cancellation effects later. Since LFV decays such as $B_s \to \ell\ell^\prime$ for $\ell\ne \ell^\prime$ arise at tree level in g2HDM, we give {tree level upper reaches with $\rho_{sb}$ and $\rho_{bs}$ satisfying the $2\sigma$ ranges of $B_s$ mixing and $B_s\to\mu\mu$.} The effective Hamiltonian {for the flavor violating $B_s \to \tau\mu$ and $B \to K\tau\mu$ decays} is of the form~\cite{Becirevic:2016zri}, \begin{equation} {\cal H} = - (C_S O_S + C_P O_P + C'_S O'_S + C'_P O'_P), \end{equation} where \begin{align} \mathcal{O}_{S} & = (\bar s R b)(\bar\ell \ell'), \quad\;\ \mathcal{O}_{P} = (\bar s R b)(\bar\ell \gamma_{5} \ell'), \label{eq:op_bsll'} \end{align} and ${\cal O}'_{S,P}$ are obtained by exchanging $L\leftrightarrow R$. Although $C_S$ and $C_P$ vanish in SM even for $\ell = \ell'$, tree level exchange of scalar bosons in g2HDM leads to \begin{eqnarray} C_{S,P}^{\ell\ell'} &=& \sum {y_{\phi sb} (y_{\phi \ell\ell'} \pm y_{\phi \ell' \ell})}/{2m_\phi^2}, \label{eq:CSll'} \end{eqnarray} with $\phi$ summed over $h$, $H$ and $A$, and $C_{S,P}^{\prime\,\ell\ell'}$ is obtained from $C_{S,P}^{\ell\ell'}$ by changing $y_{\phi sb} \to y_{\phi bs}^\ast$. For $B_s\to \ell\ell'$ decay, we use~\cite{Becirevic:2016zri} \begin{align} & \mathcal{B}(B_{s}\rightarrow \ell \ell') \simeq \frac{f_{B_s}^2 m_{B_s} \lambda^{1/2}(m_{B_s}, m_\ell, m_{\ell'})} {32 \pi (m_b + m_s)^2\,\Gamma_{B_s}^{\rm heavy}} \nonumber \\ & \times \left[(m_{B_s}^2 - m_+^2)|\Delta C_S|^2 + (m_{B_s}^2 - m_-^2)|\Delta C_P|^2\right], \end{align} where $\lambda(a, b, c) = [a^2-(b-c)^2][a^2-(b+c)^2]$, {$\Gamma_{B_s}^{\rm heavy}$ is the decay width of the heavy $B_s$ state}, $m_\pm = m_\ell\, \pm\, m_{\ell^\prime}$, and $\Delta C_i = C_i - C_i^\prime$. With our benchmark of $c_\gamma = 0.05$, $m_H = m_A = 300$~GeV and the leptonic couplings given above, and with the allowed range of $\rho_{sb,bs}$ extracted from the flavor conserving $B_q\to \mu\mu$ (in conjunction with bounds from $B_s$ mixing), {the projections} of various LFV $B$ decays in g2HDM are given in Fig.~\ref{contrast} as red {\boldmath $\Downarrow$}. Analogously, for $B\to K \ell\ell'$, we use~\cite{Becirevic:2016zri} \begin{equation} \begin{aligned} & {{d} \mathcal{B}}({B} \rightarrow {K} \ell \ell')/{{d} q^{2}} = \mathcal{N}_{K}^{2} \sum_{i = S, P} \varphi_{i}\, |C_{i}+C'_{i}|^{2}, \end{aligned} \end{equation} where $\varphi_{S,P}$ are functions of the $B\to K$ form factors and $\mathcal{N}_K$ is a normalization factor. 
Both are $q^2$ dependent, and explicit expressions can be found in Ref.~\cite{Becirevic:2016zri}. \subsubsection{Comparing g2HDM with PS$^3$} Let us now make the comparison of {the spectacular PS$^3$ projections with the modesty of g2HDM.} We have taken a simplified approach of treating $B_s\to\mu\mu$ and $B_s$ mixing either at tree level, or at one loop level, but not both simultaneously. Either way, the fact that $B_s \to \mu\mu$ is already consistent with the SM expectation implies that $B_s \to \tau\tau$ in g2HDM should also be SM-like{, more so if the loop contribution is dominant}. This is in contrast with the sizable enhancement projected in PS$^3$ (grey shaded band in Fig.~\ref{contrast}), which can be probed by LHCb upgrade~II, or dedicated runs by Belle~II on $\Upsilon(5S)$. For g2HDM, some enhancement (or suppression) of $B_s \to \tau\tau$ is possible, given that the tree effect is controlled by {$\rho_{\tau\tau}$}, which is ${\cal O}(\lambda_\tau)$, while the tree effect for $B_s \to \mu\mu$ is controlled by {$\rho_{\mu\mu}$}, which is ${\cal O}(\lambda_\mu)$. {But these order of magnitude estimates suggest that bridging the two orders of magnitude gap is unlikely}, and g2HDM should be distinguishable from PS$^3$. In any case, {the measurement of $B_s\to \tau\tau$ is a challenge}, while the prospects for $B_d \to \tau\tau$ at Belle~II remain to be seen. More promising for PS$^3$-type models would be $B_s \to \tau\mu$, which can saturate the current bound, and the discovery, {perhaps even with the run $1\,+\,2$ data of LHCb, would be truly spectacular.} The projections for g2HDM, however, appear quite out of reach, as they lie 3 orders of magnitude below the lower reach of the PS$^3$ projection. But our previous caution applies: an order of magnitude enhancement is not impossible, though it would still be far out of reach. In addition, if one allows cancellation between tree and loop effects in both $B_s \to \mu\mu$ and $B_s$ mixing, it is not impossible that $\rho_{bs(sb)}$ can be larger than our suggested values, {resulting in possible further enhancement of $B_s \to \tau\mu$}. The challenge is with experiment. As we noted in Table~II, the projected sensitivities, be they for LHCb or Belle, are {not publicly known.} At this point, we remind the reader of the ``seesaw'' between $B_s \to \tau\mu$ and $\tau \to \mu \gamma$ within PS$^3$~\cite{Cornella:2019hct}. Depending on analysis prowess and/or data accumulation speed, either measurement could be improved substantially in the next couple of years. {If one limit is pushed down, then the prospect for the other would rise in PS$^3$.} In contrast, for g2HDM, while there is discovery potential for $\tau\to \mu\gamma$, one does not expect $B_s \to \tau\mu$ to be observed any time soon. The situation for the $B \to K\tau\mu$ mode is similar, where the projected sensitivity is again not yet clear, and we have given the number for Belle~II in Table~II, which barely starts to touch the PS$^3$ range. The situation for $B_d \to \tau\mu$ in g2HDM would correlate with the outcome of the $B_d \to \mu\mu$ measurement, while the PS$^3$ model does not provide predictions. Neither model foresees the $B_q \to \mu e$ and $B \to K\mu e$ modes to be observable. {Our projections for g2HDM are given in Fig.~\ref{contrast}.} As we have also listed in Fig.~\ref{contrast}, $B \to \mu\bar\nu$ provides a unique probe~\cite{Hou:2019uxa} of g2HDM, while $B \to \tau\bar\nu$ again appears SM-like already. These are charged $B$ decays, in contrast to the neutral $B$ decays $B_q \to \ell\ell'$. 
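To make the origin of the g2HDM arrows in Fig.~\ref{contrast} more tangible, the following minimal sketch evaluates the $B_s \to \tau\mu$ rate formula of the preceding subsection at our benchmark. The meson inputs are standard illustrative values, and we crudely set $|\Delta C_S| \sim |\Delta C_P| \sim \rho_{sb}\rho_{\tau\mu}/m_H^2$, i.e., we neglect $h$-$H$ mixing, set $\rho_{\mu\tau} = 0$, and drop convention-dependent ${\cal O}(1)$ factors; this is an order-of-magnitude estimate, not a substitute for the full expressions.
\begin{verbatim}
import math

HBAR = 6.582e-25                     # GeV s
v = 246.0                            # GeV
lam_tau = math.sqrt(2) * 1.777 / v   # ~0.010
lam_b = 0.016                        # running b Yukawa used in the text
rho_sb = 0.021 * lam_b               # Bs-mixing-allowed value from above
rho_taumu = lam_tau                  # benchmark leptonic coupling
m_H = 300.0                          # GeV, with m_A = m_H

# crude estimate: |Delta C_S| ~ |Delta C_P| ~ rho_sb*rho_taumu/m_H^2
C = rho_sb * rho_taumu / m_H**2

f_Bs, m_Bs, mb_plus_ms = 0.23, 5.367, 4.3   # GeV, illustrative inputs
m_tau, m_mu = 1.777, 0.1057                 # GeV
Gamma_heavy = HBAR / 1.62e-12               # width of the heavy Bs state

lam = (m_Bs**2 - (m_tau - m_mu)**2) * (m_Bs**2 - (m_tau + m_mu)**2)
pref = f_Bs**2 * m_Bs * math.sqrt(lam) / (
    32 * math.pi * mb_plus_ms**2 * Gamma_heavy)
BR = pref * ((m_Bs**2 - (m_tau + m_mu)**2) * C**2
             + (m_Bs**2 - (m_tau - m_mu)**2) * C**2)
print(f"B(Bs -> tau mu) ~ {BR:.1e}")   # ~7e-10
\end{verbatim}
The result sits roughly 3 orders of magnitude below the PS$^3$ lower reach of Eq.~(\ref{eq:Bstamups3}), consistent with the discussion above.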
As a reminder, among the purely leptonic $\mu$FV processes, the $\mu\to e\gamma$, $\mu N \to eN$ and $\tau \to \mu\gamma$ processes have discovery potential, all basically probing the $\mu e\gamma$ {and $\tau\mu\gamma$} dipoles in g2HDM, {though the $\mu N \to eN$ process can pick up contact effects}. In contrast, $\mu \to 3e$ and $\tau \to 3\mu$ would be higher order effects of the respective dipole transitions. We mention in passing that muon $g-2$ would not be affected in g2HDM, while the muon EDM, $d_\mu$, would likely scale by $m_\mu/m_e \sim 200$, and $|d_\mu| \lesssim 2 \times 10^{-27}\; e\,$cm seems, {unlike the electron EDM $d_e$, far out of experimental reach}. \section{Discussion and Conclusion} There are good reasons to take g2HDM, the general two Higgs doublet model with extra Yukawa couplings, very seriously. By discovering the $h$ boson and finding that it closely resembles the SM Higgs boson, we now have one weak scalar doublet. Whether by Gell-Mann's totalitarian principle~\cite{Gell-Mann:1956iqa} or the principle of plenitude~\cite{Tot}, with the existence of one scalar doublet, there {\it should} be a second doublet, and by the same argument, extra Yukawa couplings. To declare~\cite{Glashow:1976nt} {\it natural} flavor conservation (NFC) and forbid extra Yukawa couplings, or to use a $Z_2$ symmetry to implement it, is not only {\it not natural} but quite {\it ad hoc} or artificial. Had supersymmetry (SUSY) emerged at the LHC, it would have given credence to 2HDM-II, a type of 2HDM with a $Z_2$ symmetry to forbid extra Yukawa couplings. But the lack of evidence for SUSY so far~\cite{PDG} suggests that the SUSY scale is considerably above $v$, the electroweak symmetry breaking scale. With three types of charged fermions, each coming in three generations, and with the extra Yukawa couplings naturally complex, one has {\it 54 new Yukawa couplings}, which may appear excessive. There are also seven new Higgs parameters, which include the $h$-$H$ mixing parameter $c_\gamma$, and the exotic Higgs masses $m_H$, $m_A$, and $m_{H^+}$. But the increment of 54 new flavor parameters is on top of the existing plenitude of 13 within SM, while the {\it structure} built in by nature seems to have helped ``obscure'' the presence of the extra Higgs sector parameters: as we have stated, $m_H$, $m_A$ and $m_{H^+}$ in g2HDM {\it naturally} populate the 300--600~GeV range. The latter follows if one takes~\cite{Hou:2017hiw} the principle that all dimensionless parameters in the Higgs potential are ${\cal O}(1)$ in strength, with $v$ as the only scale parameter. It is curious to note that, with $\rho_{tt}$ naturally ${\cal O}(1)$ because it is a cousin to $\lambda_t \cong 1$, it may help keep $c_\gamma$ small~\cite{Hou:2017vvp}. So the alignment phenomenon may be emergent, while $\rho_{tt}$ could drive EWBG quite effectively. At any rate, and as we have emphasized, the flavor parameter structure seems to have hidden itself rather well from our view, obscuring also the extra Higgs bosons, which we know so little about. The flavor structure was first revealed in the 1970s through the fermion mass hierarchy, although the existence of three generations triggered Ref.~\cite{Glashow:1976nt}. But then the mixing hierarchy of $|V_{ub}|^2 \ll |V_{cb}|^2 \ll |V_{us}|^2$ came as a surprise in the early 1980s, which led to the Cheng-Sher ansatz~\cite{Cheng:1987rs}, suggesting that NFC may be too strong an assumption. 
Unknown back then was nature's further design of alignment, which suppressed the FCNH coupling effects of the light, SM-like $h$ boson. As we stressed in the Introduction, at this point one may find fault with the near diagonal nature of the $\rho^d$ Yukawa matrix: Why would nature turn off the FCNH effects precisely in the sector that we have the best access to? It is a mystery. But nature has her mysterious ways, and as an experimental science we can only probe further. In summary, the extra Yukawa couplings of g2HDM have built-in mass-mixing hierarchy protection, as exemplified by Eqs.~(4) and (6), plus a near diagonal $\rho^d$ Yukawa matrix and alignment. The $\mu \to e\gamma$ and $\tau \to \mu \gamma$ processes probe $\rho_{\mu e}\rho_{tt}$ and $\rho_{\tau \mu}\rho_{tt}$ via the two loop mechanism, and generate $\mu \to 3e$ and $\tau \to 3\mu$ at higher order. The $\mu N \to eN$ process probes the combined effect of dipole plus contact terms, and by the nature of the process and experimental prowess, one might disentangle the two effects. As a second theme, we do not expect LUV or LFV effects to be observed soon in (semi)leptonic rare $B$ decays for g2HDM. This is in contrast with the UV-complete PS$^3$ model that is the epitome of the recent $B$ anomalies, where the modes to watch are $B_s \to \tau\mu$, $B \to K\tau\mu$, and to a lesser extent, $B_s \to \tau\tau$, $B \to K\tau\tau$; discovering only $\tau \to \mu\gamma$ does not distinguish between the two scenarios. For g2HDM, besides the aforementioned $\mu$FV processes, $B \to \mu\nu$ may be the mode to watch, which probes $\rho_{\tau\mu}\rho_{tu}$. \vskip0.2cm \noindent{\bf Acknowledgments} \ We thank Jack Chen, Gino Isidori, Matt Rudolph, and Sheldon Stone for communications. This research is supported by MOST 106-2112-M-002-015-MY3, 108-2811-M-002-626 of Taiwan, and NTU~109L104019.
\section{Introduction} The identification of the particle nature of dark matter~(DM) is one of the most pressing problems facing modern physics, and will be a key focus for high energy physics and cosmology in the coming decade~\cite{CosmicVisions,DarkMatterBRN}. In the absence of evidence for dark matter at the weak scale, interest has grown in direct searches for DM with sub-GeV mass~\cite{Essig:2011nj,Essig:2012yx,Graham:2012su,Essig:2015cda,diamonddetectors,Budnik:2017sbu,Hochberg:2016ntt,Cavoto:2017otc,Hochberg:2015pha,Hochberg:2015fth,Hochberg:2016ajh,Hochberg:2019cyy,Hochberg:2017wce,Knapen:2017ekk,Griffin:2018bjn,Schutz:2016tid,Knapen:2016cue,hertel}. The technical challenge inherent in searching for non-relativistic, sub-GeV, weakly-interacting particles can be seen by considering the case of a classical nuclear recoil. For DM with a mass $m_{\chi}$ much smaller than that of the target nucleus, moving at the escape velocity of the galaxy, the maximum energy transfer for a classical elastic nuclear recoil event is \begin{equation} \Delta E \approx \frac{2~\mathrm{meV}}{A_{T}}\left(\frac{m_{\chi}}{1~\mathrm{MeV}}\right)^2\,, \end{equation} with $A_T$ the atomic mass number of the target (a short numerical sketch of this estimate is given at the end of this section). This motivates using lighter nuclei to increase the energy transfer, as well as new detector technologies sensitive to meV-scale energy deposits. In Ref.~\cite{diamonddetectors}, a subset of the authors explored the potential of diamond (crystalline carbon with $A_{T}=12$) as a detector medium meeting these criteria. The long-lived phonon states with meV energies, coupled with the light carbon nuclei, make diamond an excellent medium with which to search for dark matter. Diamond suffers from two significant drawbacks, however: it is currently difficult to produce single crystals in bulk at masses sufficient to achieve the kg-year exposures required to probe significant DM parameter space, and the non-polar nature of diamond limits the DM candidates to which it can be sensitive. Here we propose for the first time the use of silicon carbide (SiC) as a DM detector, as it overcomes these drawbacks. Large boules, and therefore large wafers, of SiC can be readily obtained at prices comparable to silicon (Si). Importantly, as a polar semiconductor, SiC has optical phonon modes which can be excited by sub-GeV DM with dark photon interactions~\cite{Knapen:2017ekk}. Furthermore, as we demonstrate in this paper, SiC behaves in most ways as a near substitute for diamond, with many relevant properties intermediate between crystalline diamond and silicon. SiC has already seen widespread adoption as a target for radiation detectors~\cite{Nava_2008}, microstrip detectors~\cite{SiCReview} and UV photodiodes~\cite{laine} as a drop-in replacement for Si in environments where greater radiation hardness, improved UV sensitivity or higher temperature operation are required. The latter two considerations are possible due to the wider band gap of SiC ($\sim$3.2~eV for the commonly used 4H polytype), compared to 1.12~eV for Si. It is thus natural to observe the parallels between the development of Si, diamond, and SiC detector technologies, and explore the ability of future SiC detectors to search for sub-GeV DM. Moreover, SiC is an attractive material to explore because of its polymorphism---the large number of stable crystal structures which can be readily synthesized---and the resulting range of properties they possess. 
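To close this section, here is the numerical sketch of the kinematic estimate above; the DM velocity $v \sim 10^{-3}c$ is our assumed input, chosen because it reproduces the quoted $2~\mathrm{meV}/A_T$ prefactor.
\begin{verbatim}
M_U = 931.5e6  # eV, atomic mass unit

def dE_max_eV(m_chi_eV, A_T, v_over_c=1e-3):
    """Maximum elastic energy transfer 2 mu^2 v^2 / m_N."""
    m_N = A_T * M_U
    mu = m_chi_eV * m_N / (m_chi_eV + m_N)   # reduced mass ~ m_chi here
    return 2.0 * mu**2 * v_over_c**2 / m_N

for A_T, name in [(12, "C"), (28, "Si")]:
    print(f"{name}: dE_max = {dE_max_eV(1e6, A_T)*1e3:.2f} meV")
# -> C: 0.18 meV, Si: 0.08 meV for m_chi = 1 MeV
\end{verbatim}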
In fact, SiC exhibits \textit{polytypism}---a special type of polymorphism where the crystal structures are built up from a common unit with varying connectivity between the units (see Fig.~\ref{fig:crystal_structures}). The variety of available polytypes results in a corresponding variety of physical properties relevant to DM detection, such as band gap and phonon mode frequencies. In this paper, we explore six of the most common polytypes (3C, 2H, 4H, 6H, 8H and 15R, described in detail in Section~\ref{sec:polytypes}) which span the range of variation in physical properties, and evaluate their suitability as target materials for a detector, as well as their differences in DM reach for given detector performance goals. In particular, we show that the hexagonal (H) polytypes are expected to exhibit stronger daily modulation, due to a higher degree of anisotropy in their crystal structure. In this work we explore the potential of SiC-based single charge detectors and meV-scale microcalorimeters for DM detection. The paper is organized as follows. In Section~\ref{sec:polytypes}, we discuss the electronic and vibrational properties of the SiC polytypes explored in this work. In Section~\ref{sec:SiCDetector}, we explore the measured and modeled response of SiC crystals to nuclear and electronic energy deposits over a wide energy range, and the expected performance of SiC detectors given realistic readout schemes for charge and phonon operating modes. Sections~\ref{sec:theoreticalFramework} and \ref{sec:results} summarize the DM models considered in this paper, and compare the reach of different SiC polymorphs into DM parameter space for nuclear recoils, direct phonon production, electron recoils and absorption processes, and also compare directional detection prospects. The high-energy theorist interested primarily in the DM reach of SiC polytypes can thus proceed directly to Section~\ref{sec:theoreticalFramework}. We find excellent DM sensitivity, comparable and complementary to other proposals, which places SiC detectors in the limelight for rapid experimental development. \section{Electronic and Phononic Properties of SiC Polytypes}\label{sec:polytypes} Silicon carbide is an indirect-gap semiconductor with a band gap (2.3--3.3 eV) intermediate between those of crystalline silicon (1.1 eV) and diamond (5.5 eV). While there exists a zincblende form of SiC, which has the same structural form as diamond and Si, there are over 200 additional stable crystal polymorphs with a range of band gap energies and physical properties. These polymorphs broadly fall into three groups based on lattice symmetry: cubic (C), hexagonal (H), and rhombohedral (R). To compare the expected performance of these polytypes as particle detectors, we first explore how the differences in band structure between polytypes manifest in charge and phonon dynamics. In all SiC polytypes, the common unit is a sheet of corner-sharing tetrahedra, and the polytypes are distinguished by variations in stacking sequences. The polytype 3C adopts the cubic zincblende structure with no hexagonal close-packing of the layers, whereas 2H has a wurtzite structure with hexagonal close-packing between all the layers. The different polytypes can thus be characterized by their hexagonality fraction $f_{H}$, with 2H (3C) having $f_{H}=1$ ($f_{H}=0$). This single number correlates strongly with the material's band gap, with 3C having the smallest gap, and 2H the largest gap~\cite{physprop}. 
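The hexagonality fraction can be computed directly from a polytype's stacking sequence: a layer is hexagonal if the layers immediately above and below it are of the same type, and cubic otherwise. A minimal sketch follows; the 8H and 15R sequences are omitted since several stacking variants appear in the literature.
\begin{verbatim}
def hexagonality(stacking: str) -> float:
    """f_H for a periodic close-packed stacking sequence in ABC notation:
    a layer is hexagonal (h) if its neighbors above and below match."""
    n = len(stacking)
    h = sum(stacking[(i - 1) % n] == stacking[(i + 1) % n]
            for i in range(n))
    return h / n

for name, seq in [("3C", "ABC"), ("2H", "AB"),
                  ("4H", "ABCB"), ("6H", "ABCACB")]:
    print(f"{name}: f_H = {hexagonality(seq):.2f}")
# -> 3C: 0.00, 2H: 1.00, 4H: 0.50, 6H: 0.33,
#    matching the f_H values in the properties table
\end{verbatim}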
The other polytypes, including those considered in this paper, consist of lattices with different sequences of hexagonal and cubic stacking layers, and can be listed in order of increasing hexagonal close-packing: 3C, 8H, 6H, 4H, 2H. The number refers to the number of layers in the stacking sequence. Rhombohedral structures also occur, and these are characterized by long-range stacking order, as shown in Fig.~\ref{fig:crystal_structures}(f). Crystal structures for the polytypes considered here are shown in Fig.~\ref{fig:crystal_structures}. \begin{figure}[!t] \begin{center} \includegraphics[width=\linewidth]{crystalstructures.jpg} \caption{Crystal structures of the polytypes of SiC considered in this work. Si atoms are blue and C atoms are brown.} \label{fig:crystal_structures} \end{center} \end{figure} The difference in stability between cubic and hexagonal stacking is very small, which can be understood as a balance between the attractive and repulsive interactions between third nearest neighbors stemming from the specific degree of charge asymmetry in the Si--C bond~\cite{Park1994}. This results in a difference in total energy between the polytypes of only a few meV per atom, so many crystal structures of SiC are experimentally accessible. To limit this paper to a reasonable scope, we restrict our analysis to 6 of the most common forms, as shown in Fig.~\ref{fig:crystal_structures} and with properties summarized in Table~\ref{tab:properties}. Despite the relative stability of polytypes with respect to one another, only three of these polytypes (3C, 4H and 6H) are available commercially~\cite{physprop} as of this writing; of these, 6H is the most widely available in the large wafer and crystal sizes typically employed in semiconductor processing. To capture a representative range of SiC polytype behavior and to observe trends in properties relevant for sub-GeV DM detection, we also include 2H, 8H, and 15R in our analysis. \begin{figure*} \centering \includegraphics[width=\textwidth]{dos_bands_isosurfaces.pdf} \caption{{\bf (a) to (f):} Calculated electronic band structures of SiC polytypes, with high-symmetry paths selected using SeeK-path~\cite{HINUMA2017140}, alongside the density of states. Valence band maxima and conduction band minima are highlighted with blue and pink circles respectively. To show the conduction band valleys in momentum space, {\bf (g) to (l)} are isosurfaces of the electronic energy bands at 0.2 eV above the conduction band minima, plotted within the first Brillouin zone boundaries of the polytypes. For the positions of the high-symmetry points, see Fig.~\ref{fig:BZ}, and for details of calculations, see Appendix~\ref{app:first_principles_calcs}.\label{fig:bandStructure} } \end{figure*} Calculations of the interaction of various DM models with SiC require materials-specific information for each polymorph, namely the electron and phonon spectra, to estimate sensitivity to electron and phonon interactions respectively. We calculate these quantities using state-of-the-art Density Functional Theory (DFT) calculations as described in detail in Appendix~\ref{app:first_principles_calcs}. The electronic band structures for the six representative polytypes are shown in Fig.~\ref{fig:bandStructure}, and the phonon band structures are plotted in Fig.~\ref{fig:SiC_phonons}. For reference, the Brillouin zones (BZ) for the same polytypes are shown in Fig.~\ref{fig:BZ}. \begin{figure*}[th!] 
\begin{center} \includegraphics[width=0.98\textwidth]{SiC_phonons.pdf} \caption{ \label{fig:SiC_phonons} First-principles calculations of phonon band structures, with high-symmetry paths selected using SeeK-path~\cite{HINUMA2017140}. For details of calculations, see Appendix~\ref{app:first_principles_calcs}. } \end{center} \end{figure*} The band structure of a material is important for understanding its charge or phonon dynamics, in particular charge mobility and lifetime, and phonon losses during charge propagation. As with Si, Ge, and diamond, the indirect band gap of all SiC polytypes ensures long charge lifetimes, allowing charge to be drifted and collected with a modest electric field. At low temperature, this also produces anisotropic propagation of electrons due to localized minima in the first BZ away from the $\Gamma$ point (as shown in Si and Ge at low temperature~\cite{moffatt,StanfordSi}), which has a significant impact on charge mobility as a function of crystal orientation relative to an applied field. In Si and diamond, for example, these electron valleys lie at the three X symmetry points, along the cardinal directions in momentum space. Depending on the crystal orientation relative to the electric field, spatially separated charge clusters are observed as charges settle into one of these conduction valleys. Due to the range of stable crystal forms of SiC, in contrast to Si, diamond, and Ge, we cannot make a general statement about the location in momentum or position space of the indirect band gap in SiC (see {\it e.g.} Refs.~\cite{StanfordSi,moffatt,Moffatt:2016kok}), but we can locate the BZ minima from the band structures shown in Fig.~\ref{fig:bandStructure}. The 3C polytype, like Si and diamond, has X-valley minima, and therefore three charge valleys in the first BZ~\cite{shur2006sic}, so we can expect that the charge mobility will behave similarly to Si and diamond. The hexagonal forms, as shown in Fig.~\ref{fig:bandStructure}~(b)-(e), generally have minima along the L-M symmetry line, while the 2H polytype has minima at the K points. All of these polytypes have six charge minima in the first BZ; however, charge propagation in 2H will be maximally restricted to the horizontal plane of the BZ (the plane aligned with the [100], [010] Miller indices). As we go to larger unit cells, charge propagation perpendicular to that plane becomes more kinematically accessible, allowing for more isotropic charge propagation. The valence bands are more consistent between polytypes, with a concentration of the valence band near the $\Gamma$ point, which is also the location of the valence band maximum for all polytypes considered. The dominant influence of the carbon-silicon bond (rather than the electronic orbital overlaps) on the phonon properties leads to the phonon dynamics being similar between the polytypes. Since Si and C have near identical bonding environments, the Born effective charges are almost identical in all of the compounds considered, so all polytypes will have similar dipolar magnitudes and hence similar responses to dark-photon-mediated interactions. The phonon band structures for these polytypes are plotted in Fig.~\ref{fig:SiC_phonons}. While we show the entire band structure, the DM-phonon interactions are most sensitive to the phonon properties near the $\Gamma$ point. In particular, the similarities of the properties described above imply that the sensitivity of SiC for DM scattering will be similar for all polytypes. 
Anisotropies in the phonon band structure will give rise to differences in the directional dependence of the DM signal, as will be discussed later in this paper. \begin{table*}[t] \begin{tabular}{| c | c | c | c c c c c c |} \hline Parameter & Diamond (C) & Si & \multicolumn{6}{c|}{SiC} \\ \hline Polymorph & - & - & 3C ($\beta$) & 8H & 6H ($\alpha$) & 4H & 2H & 15R \\ \hline Crystal Structure & \multicolumn{3}{c|}{cubic} & \multicolumn{4}{c|}{hexagonal} & rhombohedral \\ \hline $\rho$ (g cm$^{-3}$) & 3.51 & 2.33 & \multicolumn{6}{c|}{$\sim$3.2~\cite{SiCProperties,bertuccio}} \\ $N$ ($10^{23}$cm$^{-3}$) & 1.76 & 0.5 & \multicolumn{6}{c|}{0.96} \\ $n_e$ ($10^{23}$cm$^{-3}$) & 3.54 & 1 & \multicolumn{6}{c|}{1.95} \\ $\hbar\omega_p$ (eV) & 22 & 16.6 & \multicolumn{6}{c|}{22.1\cite{SiCPlasmon}} \\ \hline a (c) (\AA) & 3.567 & 5.431 & 4.36 & 3.07 (20.15) & 3.08 (15.12) & 3.07 (10.05) & 3.07 (5.04) & 3.07 (37.80) \\ $f_{H}$ & 0.0 & 0.0 & 0.0 & 0.25 & 0.33 & 0.5 & 1.0 & 0.4 \\ $E_{\rm gap}$ (eV) & 5.47 & 1.12 & 2.39 & 2.7 & 3.02 & 3.26 & 3.33 & 3.0 \\ $E_{\rm gap}$ (eV)$^{[calc]}$ & & & 2.24 & 2.66 & 2.92 & 3.15 & 3.17 & 2.86 \\ $E_{eh}$ (eV) & $\sim$13 & 3.6-3.8 & 5.7 -- 7.7$^{\dagger}$ & 6.4 -- 8.7$^{\dagger}$ & 6.7~\cite{6HDet} & 7.7 -- 7.8~\cite{ivanov2005,bertuccio} & 7.8 -- 10.5 $^{\dagger}$ & 7.1 -- 9.6 $^{\dagger}$ \\ $E_{\rm defect}$ (eV) & 38--48 & 11--22 & 19 (C), 38 (Si) & & 22 (C) & 22--35~\cite{Nava_2008} & & 17--30 (C) \\ \hline $\epsilon_{0\perp}$ & \multirow{2}{*}{5.7} & \multirow{2}{*}{11.7} & \multirow{2}{*}{9.7} & & 9.67 & 9.76 & & \\ $\epsilon_{0\parallel}$ & & & & & 10.03 & 10.32 & & \\ $\epsilon_{0\perp}$$^{[calc]}$ & & & \multirow{2}{*}{10.40} & 10.40 & 10.39 & 10.36 & 10.24 & 10.38 \\ $\epsilon_{0\parallel}$$^{[calc]}$ & & & & 10.80 & 10.90 & 11.06 & 11.41 & 10.96 \\ $\epsilon_{\infty\perp}$ & & & \multirow{2}{*}{6.5} & & 6.6 & 6.6 & 6.5 & 6.5 \\ $\epsilon_{\infty\parallel}$ & & & & & 6.7 & 6.8 & 6.8 & 6.7 \\ $\epsilon_{\infty,\perp}$$^{[calc]}$ & & & \multirow{2}{*}{7.07} & 7.10 & 7.11 & 7.10 & 7.03 & 7.11 \\ $\epsilon_{\infty,\parallel}$$^{[calc]}$ & & & & 7.31 & 7.36 & 7.41 & 7.40 & 7.38 \\ $\Theta_{\rm Debye}$ (K) & 2220 & 645 & 1430 & & 1200 & 1200 & & \\ $\hbar\omega_{\rm Debye}$ (meV) & 190 & 56 & 122 & & 103 & 103 & & \\ $\hbar\omega_{\rm TO}$ (meV) & 148 & 59 & 98.7 & & 97.7, 98.8 & 97.0, 98.8 & 95.3, 99.0 & 98.9 \\ $\hbar\omega_{\rm LO}$ (meV) & 163 & 63 & 120.5 & & 119.7, 120.3 & 119.5, 120.0 & 120.0, 120.7 & 119.6 \\ $c_s$ (m/s) & 13360 & 5880 & 12600 & & 13300 & 13730 & & \\ $c_s$ (m/s)$^{[calc]}$ & & & 13200 & 16300 & 14300 & 14300 & 15500 & 11900 \\ $v_{d,{\rm sat}}$, $\mathrm{e^{-}}$ ($10^5$ m/s) & 2.7~\cite{bertuccio} & 1.35 & 2 & & 2 & 2 & & \\ $E_{\textrm{Bd}}$ (MV/cm) & $>$20 & 0.3 & 1.2 & & 2.4 & 2.0 & & \\ \hline \end{tabular} \caption{Bulk material properties of diamond, Si, and the SiC polymorphs considered in this work (measurements taken from Refs.~\cite{Jacoboni,kurinsky,Nava_2008,bertuccio,SiCProperties,pines} unless otherwise stated). All gaps are indirect, as discussed in the text and shown in Fig.~\ref{fig:bandStructure}. $\epsilon_{0,\infty \perp}$ ($\epsilon_{0,\infty \parallel}$) refer to relative permittivity perpendicular (parallel) to the crystal c-axis at low and high frequency, with values from Ref.~\cite{physprop}. Optical phonon energies and high-frequency permittivity are taken from Ref.~\cite{mutschke1999infrared}. $E_{eh}$ values denoted by $\dagger$ have been estimated as described in the text. 
Defect creation energies are from Refs.~\cite{Koike,Lucas,Barry}. Due to the differing commercial availability/utility of different polytypes, more commonly used crystal polytypes are better characterized than less common ones, and thus for the least well-studied polytypes (2H, 8H, 15R) many experimentally determined values are unavailable. Quantities denoted as [calc] were calculated in this work to fill some of the gaps in the literature.} \label{tab:properties} \end{table*} Table~\ref{tab:properties} summarizes the physical properties of the polytypes shown in Figs.~\ref{fig:bandStructure} and \ref{fig:SiC_phonons} compared to Si and C. In addition, some derivative properties of the phonon band structures are summarized in the table; it can be seen that all SiC polytypes have sound speeds, maximum optical phonon energies and permittivities which are roughly the geometric mean of the corresponding Si and diamond values. These characteristics will inform our detector design and results for dark matter reach, as we now detail. \section{Detecting Energy Deposits in SiC}\label{sec:SiCDetector} In this section, we apply the detector performance model of Ref.~\cite{diamonddetectors} to the six representative SiC polytypes described above, and contrast expected device performance between the SiC polytypes as well as with Si, Ge and diamond targets. We begin by reviewing existing measurements and expectations for partitioning event energy into the ionization (charge) and heat (phonon) systems, relevant to reconstructing total event energy for different types of particle interactions. We then discuss expected detector performance in charge and phonon readout modes given available measurements for the polytypes considered in this paper, and comment on expected performance for those polytypes without direct measurements, based on the band structure properties discussed above. A theorist primarily interested in the DM reach of a given SiC crystal for an assumed threshold can proceed directly to Section~\ref{sec:theoreticalFramework}. \subsection{Particle Interactions} \label{sec:particle_interactions} We first turn to the expected yield for an electron recoil or nuclear recoil in SiC. As discussed in {\it e.g.} Ref.~\cite{diamonddetectors}, interactions which probe electrons or nucleons are expected to deposit differing amounts of energy in the ionization and phonon systems in semiconductor detectors. This property was used by the previous generation of DM experiments to reject electron-recoil backgrounds in the search for primarily nucleon-coupled WIMP DM. The resolutions in these channels required for sub-GeV DM are just now being achieved for either heat or charge in current experiments~\cite{CRESSTIII,edelweissHV,EdelweissWIMP,pd2LTD,Abramoff_2019,damicDP,DAMIC_ERDM,nucleus,strauss,Hong}, but none of these experiments can achieve the required resolutions in both channels to employ event discrimination for recoils below 1~keV in energy. For heat readout experiments, this partition is relatively unimportant, as all energy remains in the crystal and is eventually recovered as heat. For charge readout experiments, knowledge of this partition is necessary to reconstruct the initial event energy, and it contributes significant systematic uncertainty to background reconstruction at energies where the energy partitioning is not well-constrained. A convenient shorthand is to refer to the energy in the electron system as $E_e$, which is related to the total recoil energy $E_r$ according to a yield model $y(E_r)$ as $E_e = y(E_r)E_r$. 
As discussed in {\it e.g.} Refs.~\cite{diamonddetectors,kurinsky2020dark}, for electron recoils one has $y(E_r)=1$, while for nuclear recoils the yield is reduced due to charge shielding effects and losses to phonons and crystal defects, referred to as non-ionizing energy losses (NIEL)~\cite{LindhardDiamond}. Additionally, this yield function is actually derived with respect to the charge yield for a high-energy, minimum ionizing particle~\cite{Canali72}. These events produce a number of charge carriers $n_{eh}$ in linear proportion to event energy with the relation $n_{eh} = E_{r}/E_{eh}$, where $E_{eh}$ is taken to be a fixed property of a given material, and is the effective cost to produce a single electron-hole pair. If we define the measured $E_e$ as $E_e=n_{eh}E_{eh}$, we thus see that $y(E_r)=1$ is only true, by definition, for events that obey this linear relationship. For SiC, this factor $E_{eh}$ varies along with the band gap among the different polytypes. The charge yield from minimum ionizing particles ($\gamma$, $\beta$ and $\alpha$) in 4H SiC is explored in Ref.~\cite{Nava_2004}. The response of 3C, 4H, and 6H to lower energy X-rays is subsequently discussed in Ref.~\cite{bertuccio}. The results of both studies are consistent with a highly linear yield for electron recoils down to ${\cal O}(10\, \rm keV)$ energies, but the pair creation energy $E_{eh}$ is only characterized for two of the polytypes, as shown in Table~\ref{tab:properties}. For the polytypes for which the energy per electron-hole pair has not been characterized, we can predict $E_{eh}$ based on other measured properties. The generic expression for $E_{eh}$ is \cite{Klein,rothwarf,Canali72} \begin{equation} E_{eh} = E_{\rm gap} + 2L \cdot \left(E_{i,e}+E_{i,h}\right) + E_{\rm ph}\,, \end{equation} where $L$ is a factor which depends on the dispersion curve of the conduction and valence bands, $E_{i,e}$ and $E_{i,h}$ are the ionization thresholds for electrons and holes, and $E_{\rm ph}$ are phonon losses. Ref.~\cite{Canali72} shows that, for $E_{i,e}\sim E_{i,h} \propto E_{\rm gap}$, one obtains the formula \begin{equation} E_{eh} = A \cdot E_{\rm gap} + E_{\rm ph} \end{equation} where $E_{\rm ph}$ takes on values from 0.25 to 1.2~eV, and $A$ is found to be $\sim$2.2 to 2.9. Ref.~\cite{Klein} finds, using a broader range of materials, the parameters $A\sim2.8$ and $E_{\rm ph}\sim$ 0.5 -- 1.0~eV. These allow us to predict a probable range of $E_{eh}$ values for the polytypes without existing measurements, which we summarize in Table~\ref{tab:properties}. For the detector models in this paper, we assume the values in Table~\ref{tab:properties} apply linearly for all electron-recoil events down to the band gap energy. 
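These predicted ranges can be bracketed with a short numerical sketch; the pairing of extreme parameter values below is our own illustrative choice, and it tracks, without exactly reproducing, the estimated entries of Table~\ref{tab:properties}.
\begin{verbatim}
# E_eh = A*E_gap + E_ph, with A ~ 2.2-2.9 and E_ph ~ 0.25-1.2 eV;
# the pairing of extremes below is an illustrative choice.
gaps = {"3C": 2.39, "8H": 2.7, "6H": 3.02,
        "4H": 3.26, "2H": 3.33, "15R": 3.0}
for poly, Eg in gaps.items():
    lo, hi = 2.2 * Eg + 0.25, 2.9 * Eg + 1.2
    print(f"{poly}: E_eh ~ {lo:.1f} - {hi:.1f} eV")
# e.g. 2H: ~7.6-10.9 eV, cf. the 7.8-10.5 eV estimate in the table
\end{verbatim}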
The response of SiC detectors to neutrons is less characterized than the electronic response. A detailed review can be found in Ref.~\cite{Nava_2008}, which we refer the reader to for more details on existing measurements. In particular, the NIEL for different particles in SiC is computed and compared to measurements for different ion beams in Ref.~\cite{Lee2003}, but this is characterized as a loss per gram, and not as a fractional energy loss compared to that lost to ionization. Ref.~\cite{dulloo2003} explores the thermal neutron response, but only a count rate is reported; the response is linear with respect to fluence, but there is no characterization of ionization yield on an event-by-event basis. We instead appeal to simulations calibrated to silicon measurements, in which the single tunable parameter with the largest effect is the displacement energy threshold for freeing a nucleus from the lattice, $E_{\rm defect}$. Known and estimated values for $E_{\rm defect}$ are summarized in Table~\ref{tab:properties}. The large range is due to the difference in thresholds for the Si and C atoms; comparing the threshold values to Si and diamond, it seems that a Si atom in SiC has a diamond-like displacement threshold, while a C atom in SiC has a Si-like displacement threshold. Ref.~\cite{LindhardDiamond} calculates NIEL for Si and diamond, with the difference parameterized only in terms of defect energy. This suggests that SiC, with a defect energy intermediate between Si and diamond, will behave identically to Si and diamond above $\sim$1~keV, and give a yield below Si and above diamond for lower energy interactions. Finally, we consider the sub-gap excitations. The most prominent features are the optical phonons, with energies of 100--120~meV, as shown in Table~\ref{tab:properties} and Fig.~\ref{fig:SiC_phonons}. An interesting property of the hexagonal polytypes is that the optical phonon energy depends, though weakly, on the bond direction along which the phonons propagate. We can expect, due to the polar nature of SiC, to see strong absorption around the optical phonon energies. We can also expect direct optical and acoustic phonon production by nuclear recoils sourced by DM interactions. In contrast to the large change in electron gap energy and expected pair-creation energy between polytypes, we see very little variation in phonon properties, dielectric constants, and---to some degree---displacement energy. This suggests that different polytypes will be beneficial for enhancing signal-to-noise in the desired DM channels. Nucleon-coupled and phonon excitation channels would prefer higher-gap polytypes with suppressed charge production, while electron-coupled channels favor the smaller gap materials. Optimization of readout will depend on the polytype due to differences in phonon lifetime and charge diffusion length, as discussed in the next subsection, as well as differences in phonon transmission between polytypes and the choice of phonon sensor. Other aspects of the design, such as capacitance and bandwidth, are constant across polytypes, somewhat simplifying the comparison of polytypes. \subsection{Charge Readout} The first readout mode we consider is the direct readout of charge produced in SiC crystals by low noise charge amplifiers. This mode is limited to energy deposits exceeding the gap energy of the relevant polytype, but is of interest due to the ability to run these charge detectors at higher temperatures, without requiring a dilution refrigerator, and due to the simpler readout scheme. We begin by considering charge collection in SiC, and how the band structure of the polytypes will affect charge mobility. We then contrast the expected resolution with diamond and silicon via the resolution model of Ref.~\cite{diamonddetectors}. The resulting expected detector performance for these devices is summarized in Table~\ref{tab:detdesigns}. \subsubsection{Charge Collection} The primary questions for charge readout of SiC are whether complete charge collection is achievable in monolithic, insulating samples, and whether charge collection varies across polytypes. 
While full charge collection for the 4H polytype has been demonstrated~\cite{Vittone_2009}, detailed studies of charge collection efficiency suggest that semi-insulating samples have a fairly limited charge diffusion length at room temperature~\cite{Ruddy2008}. In Ref.~\cite{Ruddy2008} this is attributed to either recombination due to impurities or the inability to separate electron-hole pairs in the initial interaction, which causes rapid carrier recombination. More recent studies of charge collection efficiency~(CCE) in SiC radiation detectors suggest that CCE is improving with substrate quality and fabrication techniques~\cite{mandal}, though single crystal 4H-SiC still has diffusion lengths closer to polycrystalline diamond than to single crystal diamond~\cite{hodgson}. The only studies to demonstrate near full charge collection in SiC are Refs.~\cite{bryant,Nava_2008,bertuccio}, which all study energy deposition in thin films ($\sim$40 $\mu$m). Studies of depositions in a ten times larger detector volume in {\it e.g.} Refs.~\cite{Nava_2008,Ruddy2008} do show much reduced collection efficiency for the same bias voltage and detector readout. These studies suggest that there remain significant bulk dislocations in these commercial wafers, which present trapping or recombination-inducing defect sites. While it is possible that charge collection will improve at lower temperatures or with higher quality substrates, there is not yet sufficient data to show this. Note that most radiation detectors to date have been constructed of 4H and 6H polytypes; it is possible that the 3C polytype, with a more symmetric band structure, could demonstrate better charge collection. A rough analogy would be comparing the charge collection of graphite to diamond, though one would expect 4H and 6H to be much more efficient than graphite. Charge collection is also likely dependent on crystal orientation relative to the BZ minima as discussed in Section~\ref{sec:polytypes}, with more efficient charge collection occurring when the electric field is aligned with an electron valley. For the resolution calculation presented later in this section, we will assume perfect collection efficiency; an incomplete efficiency will not affect resolution in the single-charge limit, but will instead reduce effective exposure. To minimize the effect of limited charge collection on detector performance, we require the drift length (detector thickness) to be equal to or less than the diffusion length of the target charge carrier at the design voltage. To make this more quantitative, one can model the CCE in terms of a few measured parameters. Given a carrier mobility $\mu$ (in principle different for electrons and holes) and saturation velocity $v_{d,{\rm sat}}$, we use an ansatz for carrier velocity as a function of voltage $V$: \begin{equation} v_d(V,d) = \left[\frac{1}{v_{d,{\rm sat}}}+\frac{d}{\mu V}\right]^{-1} \end{equation} where $d$ is the detector thickness. This gives the drift length $D = v_d\tau_{\rm scat} \rightarrow v_{d,{\rm sat}}\tau_{\rm scat}$ in the high-field limit~\cite{Nava_2008}, where $\tau_{\rm scat}$ is the carrier scattering lifetime. Given this drift length, we can model the CCE as~\cite{bryant} \begin{equation} {\rm CCE} = \frac{D}{d}\left[1-\exp\left(-\frac{d}{D}\right)\right] \end{equation} where for long diffusion length ($D \gg d$) we have CCE$\sim1$. 
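Before taking the short-diffusion-length limit below, it is useful to evaluate this CCE model numerically. The drift constant $\mu\tau_{\rm scat}$ and saturation velocity are the 4H values used in this section; the standalone mobility is an assumed illustrative input, since only the $\mu\tau_{\rm scat}$ product is directly quoted.
\begin{verbatim}
import math

MU_TAU = 3e-4      # cm^2/V, measured drift constant in 4H-SiC
MU = 900.0         # cm^2/V/s, assumed electron mobility (illustrative)
TAU = MU_TAU / MU  # s, implied scattering lifetime
V_SAT = 2e7        # cm/s, saturation velocity from the properties table

def cce(V, d):
    """CCE = (D/d)(1 - exp(-d/D)) with D = v_d*tau; d in cm, V in volts."""
    v_d = 1.0 / (1.0 / V_SAT + d / (MU * V))
    D = v_d * TAU
    return (D / d) * (1.0 - math.exp(-d / D))

for V in (50.0, 500.0, 4000.0):   # bias across a 0.5 cm thick crystal
    print(f"V = {V:6.0f} V -> CCE = {cce(V, 0.5):.2f}")
# -> 0.06 at 50 V, 0.47 at 500 V, 0.87 at 8 kV/cm
\end{verbatim}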
For short diffusion length, and in the small-field limit, we find that charge collection goes as \begin{equation} {\rm CCE} \approx \frac{\mu V \tau_{\rm scat}}{d^2} = \frac{\mu \tau_{\rm scat}}{d}E \ll 1 \end{equation} with $E$ the electric field in the bulk. This tells us that when the collected signal grows linearly with voltage, the inferred CCE will be small and the effective diffusion length is much shorter than the crystal thickness. The best measure of the drift constant $\mu\tau_{\rm scat}$ in 4H-SiC (the only polytype for which detailed studies are available) was found to be $\mu\tau_{\rm scat}\sim 3\times 10^{-4}$ cm$^2$/V, and for a saturation drift field of 8~kV/cm, we find a maximum drift length $D\sim 2.4$~cm~\cite{bryant}. While this does imply full charge collection for devices up to 1~cm thick, the very high voltages required are likely to induce some measure of charge breakdown, despite the very high dielectric strength of SiC. The devices studied in Refs.~\cite{bryant,Nava_2008,bertuccio} are all thin films which did not break down at field strengths in this regime; however, for low temperature operation of these devices, voltages of this magnitude are atypical for monolithic, gram-scale detectors. Ref.~\cite{bertuccio} suggests there is a very small difference in mobility between the 3C, 4H, and 6H polytypes, but it is possible that the more isolated valleys of 3C, and its different growth process, may lead to a longer charge lifetime. To better determine the polytype best suited to charge collection, more studies of drift length in high-purity samples are needed. \subsubsection{Charge Resolution} \begin{table*}[t] \centering \begin{tabular}{|l|c|c|c|c|c|c|} \hline Readout & Design & Dimensions & Mass (g) & Temp. (K) & $V_{\rm bias}$ & $\sigma_{q}$ \\ \hline \multirow{4}{*}{Charge} & Single Cell & $1.0~{\rm cm~side~length} \times 0.5~{\rm cm~thick}$ & 1.6 & \multirow{4}{*}{4.2~K} & 4~kV & 1.4$e^{-}$ \\ & Single Cell & $0.5~{\rm cm~side~length} \times 0.5~{\rm cm~thick}$ & 0.4 & & 4~kV & 0.5$e^{-}$ \\ & Single Cell & $1.0~{\rm cm~diameter} \times 1.5~{\rm cm~thick}$ & 4.8 & & 500~V & 0.5$e^{-}$ \\ & Segmented & $0.2~{\rm cm~side~length}\times 0.2~{\rm cm~thick}$ & 0.025 & & 50~V & 0.25$e^{-}$/segment \\ \hline \end{tabular} \caption{Summary of the detector designs discussed for charge readout. Voltage bias for the charge designs should be high enough to ensure full charge collection. For the lower two charge readout designs, improved charge lifetime is assumed, allowing for lower voltage bias and thicker crystals. We note that, due to the relatively high dielectric constant of SiC, the optimal geometry (given current readout constraints) is such that cells have a thickness greater than or equal to the side length in order to minimize capacitance per unit mass. } \label{tab:detdesigns} \end{table*} Recalling the model for charge resolution from Ref.~\cite{diamonddetectors}, the minimum resolution of a charge integrating readout is completely determined by the noise properties of the amplifier, the bias circuit, and the capacitance of the detector ($C_{\rm det}$) and amplifier ($C_{\rm in}$) (see {\it e.g.} Ref.~\cite{shuttThesis}): \begin{equation} \sigma_{q} \ge \frac{N_{v}(C_{\rm det}+C_{\rm in})}{\epsilon_q \sqrt{\tau}}, \end{equation} where $N_{v}$ is assumed to be a flat voltage noise spectral density of the amplifier in $V/\sqrt{\rm Hz}$, $\epsilon_q$ is the CCE and $\tau$ is the response time of the detector and readout. 
For an integrator, the readout time $\tau$ is determined by the rate at which the input is drained by some bias resistor $R_b$, and thus $\tau = R_b (C_{\rm det}+C_{\rm in})$. Following the discussion of Ref.~\cite{diamonddetectors}, the current best cryogenic high electron mobility transistor (HEMT) amplifiers \cite{Phipps} allow for a detector resolution of \begin{equation} \sigma_{q}\approx (28\;\mathrm{e^{-}h^{+}\;pairs})\left(C_{\rm det}/(100~\mathrm{pF})\right)^{3/4}\,, \end{equation} where we have enforced the optimal design condition $C_{\rm in} = C_{\rm det}$, and we assume full CCE can be achieved (this is ensured by limiting the thickness to 1~cm). Note that if the resolution at 100\% CCE is sub-electron, incomplete charge collection affects the effective resolution on the input signal rather than the resolution of the readout.\footnote{For the case of incomplete charge collection in detectors with single electron resolution, the resolution is not smeared due to Poisson fluctuations, but the conversion from charge to energy scale requires folding in charge collection statistics. For detectors without single electron resolution, limited CCE effectively contributes an additional Poisson smearing to the Gaussian noise PDF.} 
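The design resolutions quoted in Table~\ref{tab:detdesigns} follow from this scaling together with simple parallel-plate capacitance estimates, as the minimal sketch below illustrates; stray capacitance is neglected, which accounts for the small differences from the tabulated values.
\begin{verbatim}
EPS0 = 8.854e-14   # F/cm
EPS_R = 9.7        # relative permittivity of SiC

def sigma_q(C_det_pF):
    """HEMT-limited resolution: 28 e- x (C_det/100 pF)^(3/4)."""
    return 28.0 * (C_det_pF / 100.0) ** 0.75

def C_pF(area_cm2, thick_cm):
    # parallel-plate estimate; stray capacitance neglected
    return EPS_R * EPS0 * area_cm2 / thick_cm * 1e12

for name, A, d in [("1.0 cm cell, 0.5 cm thick", 1.00, 0.5),
                   ("0.5 cm cell, 0.5 cm thick", 0.25, 0.5),
                   ("0.2 cm segment, 0.2 cm thick", 0.04, 0.2)]:
    C = C_pF(A, d)
    print(f"{name}: C = {C:.2f} pF, sigma_q = {sigma_q(C):.2f} e-")
# -> 1.72 pF / 1.33 e-, 0.43 pF / 0.47 e-, 0.17 pF / 0.24 e-
\end{verbatim}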
We will compare this with resolutions currently achieved with other crystals, as well as with technologies of sufficiently low noise temperature to achieve sub-eV resolution, and we suggest form factors and noise temperature targets for the various DM sensitivity thresholds discussed later in this paper. Following this parameterization, we calculate the resolution as \begin{equation} \sigma_{\rm ph} = \frac{1}{\epsilon_{\rm ph}}\sqrt{S_{\rm ph}\tau_{\rm pulse}} \end{equation} where $\epsilon_{\rm ph}$ is the energy efficiency for phonon collection, $S_{\rm ph}$ is the noise power spectral density (squared NEP) of the readout in ${\rm W^2/Hz} \propto {\rm eV^2/s}$, and $\tau_{\rm pulse}$ is the duration of the signal in seconds.\footnote{$\tau_{\rm pulse}$ can also be thought of as the inverse of the bandwidth ($\tau_{\rm pulse}=2\pi/\omega_{\rm pulse}$). We use $\tau_{\rm pulse}$ rather than $\omega_{\rm pulse}$ for easier comparison with sensor response time $\tau_{\rm sensor}$, given that $\tau_{\rm pulse} = \tau_{\rm ph}+\tau_{\rm sensor}$, where $\tau_{\rm ph}$ is the phonon signal time.} This is similar to the detector treatment in Refs.~\cite{Hochberg:2015fth,diamonddetectors}, and uses the same terminology as for Transition Edge Sensor~(TES) noise modeling~\cite{Irwin}, but is written more generally for ease of comparison between readout technologies. \subsubsection{Phonon Collection Efficiency} The primary metric which determines whether a material will allow for efficient phonon collection is the phonon lifetime $\tau_{\rm life}$. As discussed in Ref.~\cite{diamonddetectors}, for pure crystals at low temperature, the lifetime is limited primarily by boundary scattering. This scaling for the phonon lifetime can be inferred from thermal conductance data, given knowledge of material density, sound speed, and crystal size. A model for the thermal conductance and its relation to the phonon lifetime is described in Appendix~\ref{app:kappa}. For diamond, it was found that boundary scattering is dominant for phonons of energy $\lesssim$10~K, implying that the bulk mean free path for phonons at and below this energy ($\lesssim 1~$meV) is much longer than the typical crystal length scale (1--10~mm) \cite{diamonddetectors}. For SiC, the thermal conductivity (at least that of 6H~\cite{SLACK1973321}) and sound speeds are close to those of diamond, so we can infer that SiC will similarly be limited by boundary scattering, at least for phonons near the pair-breaking energy of the superconducting phonon sensors. The phonon band structure calculations from Section~\ref{sec:polytypes} (calculation details in Appendix~\ref{app:first_principles_calcs}) were used to verify that low-energy acoustic phonons (below 2 THz) have bulk lifetimes ($>10$~ms) much longer than their collection timescales. For 3C, the calculated average phonon lifetime within 0--2~THz is of the order of 30~ms at 2~K. In the hexagonal polytypes, the phonon lifetimes will be shorter because of increased scattering from the variations in stacking sequences inherent to the structures, but initial calculation results at 10~K indicate that the 2H lifetimes will be within an order of magnitude of those for 3C. Consider a detector in the form of a prism of thickness $\eta$ and area $A$.
With only one type of phonon absorber, the phonon collection time-constant is~\cite{Hochberg:2015fth,diamonddetectors} \begin{equation} \tau_{\rm collect} = \frac{4\eta}{f_{\rm abs}\bar{n}_{\rm abs}c_s} \end{equation} where $f_{\rm abs}$ is the fraction of the detector surface area covered by phonon absorber material and $\bar{n}_{\rm abs}$ is the transmission probability between the bulk and the absorber. $\bar{n}_{\rm abs}$ is calculated in detail in Appendix~\ref{app:ptrans} with values in Table~\ref{tab:trans}. As a basis for comparison, the worst-case scenario in which phonons are completely thermalized at the crystal sidewalls gives a bound on the phonon lifetime of $\tau_{\rm life}\gtrsim \eta/c_s$, a single phonon crossing time across the crystal. In the following we will explore the case where boundaries are highly reflective, in which case $\tau_{\rm life} \gg \tau_{\rm collect}$, as well as the case where boundaries are sources of phonon losses, in which case $\tau_{\rm collect} \gtrsim \tau_{\rm life}$. In all cases, the phonon pulse time is determined by combining the phonon collection time with the phonon lifetime as~\cite{Hochberg:2015fth}: \begin{equation} \tau_{\rm pulse}^{-1} \approx \tau_{\rm ph}^{-1} = \tau_{\rm life}^{-1} + \tau_{\rm collect}^{-1}\,, \end{equation} where we assume the sensor is much faster than the timescale of phonon dynamics ($\tau_{\rm ph} \gg \tau_{\rm sensor}$). The overall collection efficiency is then \begin{equation} f_{\rm collect} = \frac{\tau_{\rm pulse}}{\tau_{\rm collect}} = \frac{\tau_{\rm life}}{\tau_{\rm life}+\tau_{\rm collect}}\,. \end{equation} The total detector efficiency is then given as a product of the conversion and readout efficiencies, \begin{equation} \epsilon_{\rm ph} = f_{\rm collect}\epsilon_{qp}\epsilon_{\rm trap} \end{equation} where $\epsilon_{qp}$ is the efficiency of generating quasiparticles in the phonon absorber, and $\epsilon_{\rm trap}$ is the efficiency of reading out these quasiparticles before they recombine. $\epsilon_{qp}$ has a generic limit of 60\% due to thermal phonon losses back into the substrate during the quasiparticle down-conversion process~\cite{Guruswamy_2014}, though it rises to unity as the captured energy approaches $2\Delta\sim 7k_BT_c/2$, the Cooper-pair binding energy for an absorber with transition temperature $T_c$. Meanwhile, $\epsilon_{\rm trap}$ is technology dependent. For quasiparticle-trap assisted TESs, $\epsilon_{\rm trap}$ is limited by quasiparticle diffusion and losses into the substrate, while for superconducting resonators such as KIDs, $\epsilon_{\rm trap}$ is governed by the response time of the resonator compared to the recombination lifetime of the quasiparticles. \subsubsection{Material-Limited Resolution} Since different readout technologies are possible, here we focus on the material-limited resolution of a SiC detector. We will thus consider an idealized phonon readout with a response time much faster than the characteristic phonon timescale and a benchmark noise temperature near that currently achieved by infrared photon detectors and prototype TES calorimeters.
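Before fixing sensor noise figures, the collection chain just defined can be evaluated end-to-end. The following minimal sketch uses inputs matching design A of Table~\ref{tab:calorimeterDesigns}; the sound speed $c_s \approx 14$~km/s is the value adopted below:
\begin{verbatim}
# Phonon collection chain with design-A-like inputs.
eta, f_abs, n_abs = 1.0e-2, 0.1, 0.83   # thickness [m], coverage, transmission
c_s, tau_life = 14e3, 100e-6            # sound speed [m/s], lifetime [s]

tau_collect = 4 * eta / (f_abs * n_abs * c_s)        # -> ~34 us
tau_pulse = 1.0 / (1.0/tau_life + 1.0/tau_collect)   # -> ~25 us
f_collect = tau_life / (tau_life + tau_collect)      # -> ~0.74
eps_ph = f_collect * 0.6 * 0.75  # eps_qp = 0.6, eps_trap = 0.75 -> ~0.33
print(tau_collect, tau_pulse, f_collect, eps_ph)
\end{verbatim}
The outputs reproduce the $\tau_{\rm collect}\approx 34~{\rm \mu s}$, $\tau_{\rm pulse}\approx 25~{\rm \mu s}$, $f_{\rm collect}\approx 74\%$, and $\epsilon_{\rm ph}\approx 30\%$ entries quoted for design A.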
Taking a single sensor with NEP $\sqrt{S_{s}}\sim 10^{-19}$~W/$\sqrt{\mathrm{Hz}}$ \cite{lowNEP,THzSinglePhoton,fink2020characterizing},\footnote{Here we are scaling the noise power measured in the reference to the effective volume of a single QET as characterized in Ref.~\cite{Hong}.} and assuming our idealized readout is limited by the timescale of phonon dynamics, we find a single-sensor resolution of \begin{align} \sigma_{\rm ph} &\approx 10^{-19}~\mathrm{W}/\sqrt{\mathrm{Hz}} \frac{1}{\epsilon_{\rm ph}}\sqrt{\tau_{\rm pulse}} \\ &\approx 10~\mathrm{meV}\frac{1}{f_{\rm collect}\epsilon_{\rm trap}}\sqrt{\frac{\tau_{\rm pulse}}{100\; \mathrm{\mu s}}}\, \\ & \approx \frac{10~\mathrm{meV} }{\epsilon_{\rm trap}} \sqrt{ \frac{\tau_{\rm collect}^2}{\tau_{\rm pulse} \times 100~\mathrm{\mu s} } } \end{align} where we have set $\epsilon_{\rm qp} = 0.6$. We thus see that the challenges for excellent resolution are to achieve high internal quantum efficiency between phonon absorber and phonon sensor ($\epsilon_{\rm trap}$), and to ensure fast phonon collection (short $\tau_{\rm collect}$, with $\tau_{\rm life}$ not too small compared to $\tau_{\rm collect}$). \begin{table*}[t] \centering \begin{tabular}{|c|c|c|c|c|c|} \hline & & \multicolumn{4}{c|}{Design} \\ \hline & Parameter & A & B & C & D \\ \hline & Polytype & 6H or 4H & 3C & 3C & Any \\ \hline & Phonon Absorber & \multicolumn{2}{c|}{Al} & \multicolumn{2}{c|}{AlMn} \\ $2\Delta$ & Pair-Breaking Threshold & \multicolumn{2}{c|}{700~$\mu$eV} & \multicolumn{2}{c|}{60~$\mu$eV} \\ \hline $\epsilon_{\rm qp}$ & Efficiency to generate quasiparticles in absorber & \multicolumn{4}{c|}{60\%} \\ $\epsilon_{\rm trap}$ & Efficiency to read out quasiparticles in absorber & \multicolumn{4}{c|}{75\%} \\ $\tau_{\rm ac}$ & Acoustic Phonon Lifetime (crystal limited) & \multicolumn{4}{c|}{$>30$~{$\rm\mu s$}} \\ \hline $\tau_{\rm life}$ & Assumed phonon lifetime (boundary limited) & \multicolumn{2}{c|}{$\sim$100~{$\rm\mu s$}} & \multicolumn{2}{c|}{$\sim$1~{$\rm\mu s$}} \\ $\sqrt{S_{s}/A_s}$ & Noise power per unit sensor area ($\rm W/{mm \cdot Hz^{1/2}}$) & \multicolumn{2}{c|}{$10^{-19}$} & \multicolumn{2}{c|}{$10^{-20}$\footnote{This noise power is the best currently achievable in any quantum sensor; see for example Refs.~\cite{lowNEP,THzSinglePhoton}.}} \\ $\sqrt{S_{s}}$ & Noise power per sensor ($\rm meV/s^{1/2}$) & \multicolumn{2}{c|}{600} & \multicolumn{2}{c|}{60} \\ \hline $A$ & Detector area & 45 $\rm cm^2$ & 5 $\rm cm^2$ & \multicolumn{2}{c|}{1~$\rm cm^2$} \\ $\eta$ & Detector thickness & 1~cm & 1~cm & \multicolumn{2}{c|}{4~mm} \\ $\bar{n}_{\rm abs}$ & Transmission probability to absorber & 0.83 & 0.94 & \multicolumn{2}{c|}{$\sim$0.94\footnote{AlMn films are primarily Al, containing $<$1\% Mn~\cite{deiker}, and we assume the transmission coefficient will be approximately equal to the pure Al case.}} \\ $f_{\rm abs}$ & Fractional coverage of detector surface with absorber & 0.1 & 0.7 & \multicolumn{2}{c|}{0.95} \\ $N_{s}$ & Number of sensors & 450 & 350 & \multicolumn{2}{c|}{95} \\ $\tau_{\rm collect}$ & Time scale to collect ballistic phonons & 34~{$\rm\mu s$} & 4.3~{$\rm\mu s$} & \multicolumn{2}{c|}{1.3~{$\rm\mu s$}} \\ $\tau_{\rm pulse}$ & Time scale of phonon pulse & 25~{$\rm\mu s$} & 4.2~{$\rm\mu s$} & \multicolumn{2}{c|}{0.5~{$\rm\mu s$}} \\ $f_{\rm collect}$ & Collection efficiency into absorber & 74\% & 95\% & \multicolumn{2}{c|}{45\%} \\ $\epsilon_{\rm ph}$ & Total signal efficiency for detector & $\sim$30\% & $\sim$40\% & \multicolumn{2}{c|}{20\%} \\
\hline & Detector mass & 145~g & 16~g & \multicolumn{2}{c|}{1~g} \\ \hline $\sigma_{\rm ph}$ & Resolution on phonon signal & 200~meV & 50~meV & 2~meV & $\sim$0.5~meV\footnote{This assumes 5 or fewer sensors can be used to read out the total phonon signal, and that the phonon dynamics are still the bandwidth limiting timescale.} \\ \hline \end{tabular} \caption{Reference phonon detector designs. Designs A and B assume performance parameters for currently demonstrated technology, while designs C and D assume an improvement by a factor of 10 in noise equivalent power per sensor. The main limitation affecting these designs is that, for very low thresholds, the effective phonon lifetime may be as short as a few times the crystal crossing time, due primarily to phonon thermalization at crystal boundaries. If significant phonon thermalization is allowed to occur, the phonon resolution will quickly be limited by statistical fluctuations in signal collection efficiency rather than sensor input noise. For this reason, our third and fourth designs assume an effective phonon lifetime equivalent to 3 crystal crossings, a lower-gap absorber, and 95\% sensor coverage. The limited absorption probability and realistic constraints on sensor area coverage severely limit the overall efficiency ($\epsilon_{\rm ph}$) of these designs relative to the first two reference designs. The polytype selection is primarily determined by the impedance match between the substrate and the phonon absorber. All polytypes are fairly well matched to the chosen absorbers, but the 3C polytype is a close match and maximizes phonons absorbed per surface reflection.} \label{tab:calorimeterDesigns} \end{table*} Realistically, readout noise power scales with sensor volume, and we can tie the above benchmark noise temperature to a reasonable sensor area. For current technology, a single superconducting sensor can typically read out about $A_{s}\sim$1~mm$^2$ of area, and thus we can parameterize the above equations more accurately in terms of this sensor area and detector geometry. We find that \begin{align} f_{\rm abs} &= N_{s}A_{s}/A \\ S_{\rm ph} & =N_{s}S_{s} = \frac{f_{\rm abs}A}{A_s}S_{s}. \end{align} For the above reference noise temperature and assuming $\tau_{\rm life} \gg \tau_{\rm collect}$, this gives an energy resolution of \begin{align} \sigma_{\rm ph} &\approx \frac{6~\mathrm{meV}}{\epsilon_{\rm trap}}\sqrt{\frac{V}{100~\mathrm{mm^3}}\frac{1~\mathrm{mm^2}}{A_s}\frac{0.95}{\bar{n}_{\rm abs}}\frac{14 \, \mathrm{km/s}}{c_s}} \label{eq:reslonglife} \end{align} where $V$ is the detector volume. This is the generic result that an ideal athermal detector has a resolution that scales as $\sqrt{V}$ for a given readout technology, and as $(\bar{n}_{\rm abs}c_s)^{-1/2}$ for a given crystal/phonon absorber coupling. In the opposite limit where $\tau_{\rm life} \ll \tau_{\rm collect}$, we find the resolution scales as \begin{align} \sigma_{\rm ph} \approx \frac{13 ~\mathrm{meV}}{\epsilon_{\rm trap}}\left(\frac{0.95}{\bar{n}_{\rm abs}}\right)\sqrt{\frac{1}{f_{\rm abs}} \frac{V}{100~\mathrm{mm^3}} \frac{(\eta/c_s)}{\tau_{\rm life}} } \label{eq:resshortlife} \end{align} where we have again used $A_s = 1~{\rm mm}^2$ and the sound speed in SiC. In this case, the detector design relies on high surface coverage $f_{\rm abs}$ to maximize phonon collection, and the resolution is more sensitive to the phonon transmission probability, $\bar n_{\rm abs}$.
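As a numerical cross-check, Eqs.~\eqref{eq:reslonglife} and~\eqref{eq:resshortlife} can be evaluated at their reference parameters (a minimal sketch; $\epsilon_{\rm trap}=0.75$ is assumed, as in the table):
\begin{verbatim}
import math

eps_trap = 0.75
V, A_s, n_abs, c_s = 100.0, 1.0, 0.95, 14.0   # mm^3, mm^2, -, km/s

# Long-lifetime limit, Eq. (reslonglife)
sig_long = (6.0/eps_trap) * math.sqrt((V/100)*(1/A_s)*(0.95/n_abs)*(14/c_s))

# Short-lifetime limit, Eq. (resshortlife), with f_abs = 1 and
# tau_life equal to one crystal crossing time eta/c_s
f_abs, crossings = 1.0, 1.0
sig_short = (13.0/eps_trap) * (0.95/n_abs) * math.sqrt((V/100)/(f_abs*crossings))

print(sig_long, sig_short)   # -> 8 meV and ~17 meV, roughly a factor of 2
\end{verbatim}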
For the chosen parameters this is only about twice the resolution of the long-lived phonon case, but it is more sensitive to details of sensor coverage and will be more sensitive to phonon losses both in the crystal and at the crystal-absorber interface. These estimates assume that the detector in question can be read out with sub-microsecond precision (such that $\tau_{\rm sensor} \ll \tau_{\rm ph}$, as stated earlier), while sensors at this level of power sensitivity are not necessarily capable of being read out at this rate \cite{kurinskyThesis,fink2020characterizing}; we comment on this more below. Finally, this estimate does not include phonon shot noise, which will be a significant source of additional variance in the limit of short phonon lifetime. All of this goes to say that the ideal detector design will be highly dependent on whether phonons are completely thermalized at the boundaries, or whether there is a reasonable chance of reflection of athermal phonons such that there is a non-zero survival probability for each surface interaction. \begin{figure*}[th!] \centering \includegraphics[width=0.47\textwidth]{EnergyResolution.pdf} \includegraphics[width=0.45\textwidth]{PolytypeResolution.pdf} \caption{{\bf Left:} The dependence of energy resolution on fractional surface area covered by sensors ($f_{\rm abs}$) is shown for each of the designs from Table~\ref{tab:calorimeterDesigns}. Dots indicate the resolutions quoted in the table. For designs C and D, the resolution is the same in the small $f_{\rm abs}$ limit, or where there are 5 or fewer sensors; in this limit, all sensors are assumed necessary to reconstruct events. Meanwhile, for larger $f_{\rm abs}$, the improved scaling of design D over design C results from the fixed number of sensors read out (here taken to be 5) as the detector bandwidth is increased. Also shown are devices with the current best demonstrated noise power and resolution. The TES and SNSPD benchmarks come from Refs.~\cite{fink2020characterizing,Hochberg:2019cyy}, where the shaded band corresponds to the detectors listed in Ref.~\cite{fink2020characterizing}, and the lines correspond to the best DM detector performance from the respective references. We also include two superconducting photon detectors optimized for high detection efficiency for THz photons, where the best demonstrated NEP, roughly $10^{-20}~\rm W/\sqrt{Hz}$ in both cases, is comparable to the NEP assumed for designs C and D. The quantum capacitance detector (QCD) is from Ref.~\cite{THzSinglePhoton}, and the SNS junction is from Ref.~\cite{lowNEP}. {\bf Right:} The relative change in resolution as polytype and interface transmission are changed for a range of (surface-limited) phonon lifetimes, compared to the nominal, impedance-matched 3C/Al design at the chosen phonon lifetime. The resolutions of these devices are best-case scenarios for perfect phonon detection efficiency, and thus represent a lower resolution limit for the given technology.} \label{fig:energyRes} \end{figure*} Table~\ref{tab:calorimeterDesigns} summarizes our four reference designs, with resolutions varying from 200~meV (design A) down to 500~$\mu$eV (design D). Designs A and B assume the device is read out by phonon sensors comparable to those that have currently been demonstrated, and the resolution is in the long phonon lifetime regime of Eq.~\eqref{eq:reslonglife}.
The design thresholds for these devices assume that the majority of initial phonons lie far above the absorption gap of the phonon sensor ($2\Delta \sim$~0.7~meV in Al) and that down-conversion at the crystal surfaces has a small impact on the total phonon energy absorbed by the sensors. The resolution scaling between A and B then comes just from the relative reduction of crystal volume. For designs C and D, we consider an initial phonon energy small enough that only a few phonon scattering events can occur before phonons are absorbed. This implies we will be in the short lifetime regime and need to have large coverage $f_{\rm abs} \to 1$ to avoid substantial signal loss. To attain resolutions low enough to observe single phonon production, we also assume here an order of magnitude decrease in noise power over currently demonstrated phonon sensors. Design C obeys the scaling of Eq.~\eqref{eq:resshortlife}. Design D has the same detector geometry, but here we assume that only 5 or fewer sensors need to be read out to reconstruct a signal. This provides an improvement in resolution by reducing the number of sensors read out by a factor of 20, without necessarily changing the detector or sensor properties. The timescale for this process is still the phonon crossing time for the crystal; additional resolution reduction could still be accomplished by reducing the size of the crystal, though gains would be fairly modest. In addition, the resolution for sensors using quasiparticle traps to read out phonons will hit a floor at the pair-breaking energy of the phonon absorber, which for Al is $2\Delta \sim0.7$~meV, and for AlMn (with a $T_c$ around 100 mK~\cite{deiker}) is $\sim0.06$~meV (see also Table~\ref{tab:calorimeterDesigns}). For this reason we assume that detectors with resolution $<$~50~meV (designs C and D) will need to transition to lower-gap materials; this ensures that the phonons they intend to detect can break $\gtrsim$~100 quasiparticles per sensor to minimize shot noise contributions to the noise budget. In Fig.~\ref{fig:energyRes}, we show the scaling of resolution with sensor coverage, along with our reference designs, in comparison to resolutions currently achieved by an array of superconducting sensors. These scalings are based on a fixed sensor form factor, with the given noise performance corresponding to a sensor area of $A_s\sim 1~{\rm mm}^2$, and the lines assume a fixed noise power per unit area (as described earlier in this section) for a variety of sensor coverages and crystal form factors. In all cases, significant reductions in sensor noise power are required to achieve resolutions below 100~meV even for gram-scale detectors, and we note that the detection thresholds for these detectors will be a multiple of these resolutions. This limitation is not specific to SiC but applies broadly to any solid-state phonon calorimeter using superconducting readout. In particular, we note that only designs C and D would be expected to detect single optical phonon excitations. The choice of polytype in the detector designs of Table~\ref{tab:calorimeterDesigns} leads to minor changes in expected resolution due to sound speed (which varies by 25\% between polytypes) and impedance matching between the crystal and the absorber (a difference of less than 20\%). The same sensor design ported to different polytypes can therefore vary by up to around a factor of 2 in resolution, a non-trivial amount but small compared to the range of energies considered in this paper.
Selection of polytype is therefore informed more by sample quality, mass, and ease of fabrication, as well as potential science reach, than by ultimate phonon resolution. The difference in science reach between the polytypes is the focus of the next sections of this paper. Finally, we note that the quoted resolutions apply to readout limited by phonon dynamics, and implicitly assume that the phonon sensors used to read out these signals have a higher bandwidth than the crystal collection time. For these designs, a sensor with a response time of $\sim$1~{$\rm\mu s$}~would be able to come within a factor of a few of these projected resolutions. This requirement, and the additional requirement that sensors be individually read out for design D, suggests that superconducting resonator technologies, such as Kinetic Inductance Detectors (KIDs), or switching technologies, such as superconducting nanowires, are more likely to be the technology of choice than TESs or thermal sensors. The former technologies have noise temperature and response time that are independent of thermal conductance, and are intrinsically multiplexable. The development of faster, low-$T_c$ TES detectors which are capable of frequency-domain multiplexing would allow them to be competitive at the lower thresholds quoted here. \section{Theoretical Framework}\label{sec:theoreticalFramework} We now move to describing the DM frameworks that can be probed with SiC detectors. We consider the following possible signals from sub-GeV DM interactions: elastic scattering off nuclei, scattering into electron excitations for $m_\chi \gtrsim {\rm MeV}$, phonon excitations for ${\rm keV} \lesssim m_\chi \lesssim 10\, {\rm MeV}$, and absorption of dark matter into electronic and phonon excitations for $10\, {\rm meV} \lesssim m_\chi \lesssim 100\, {\rm eV}$. In all cases, $\rho_\chi=0.3\;{\rm GeV}/{\rm cm}^3$ is the DM density, and $f_\chi({\bf v})$ is the DM velocity distribution, which we take to be the Standard Halo Model~\cite{PhysRevD.33.3495} with $v_0 = 220$ km/s, $v_{\rm Earth} = 240$ km/s, and $v_{\rm esc} = 500$ km/s. \subsection{Elastic DM-nucleus scattering} Assuming spin-independent interactions, the event rate from dark matter scattering off of a nucleus in a detector of mass $m_{\rm det}$ is given by the standard expression~\cite{lewinsmith} \begin{equation} \frac{dR}{dE_r} = \frac{ m_{\rm det} \rho_{\chi}\sigma_0}{2m_{\chi}\mu_{\chi N}^2}F^2(q) F^2_{\rm med}(q) \int_{v_{\rm min}} \frac{f_\chi({\bf v})}{v}d^3 {\bf v}. \end{equation} Here $q = \sqrt{2 m_T E_r}$ is the momentum transfer, $m_T$ is the target mass, $m_\chi$ is the DM mass, $\mu_{\chi N}$ is the reduced mass of the DM-nucleus system, $E_r$ is the recoil energy, $F(q)$ is the nuclear form factor for DM-nucleus scattering (we adopt the Helm form factor as in Ref.~\cite{lewinsmith}), and $F^2_{\rm med}(q)$ captures the momentum dependence of the mediator interaction ({\it i.e.}, long-range or short-range). The cross-section $\sigma_0$ is normalized to a target nucleus, but to compare different media, this cross-section is re-parameterized as~\cite{lewinsmith,hertel} \begin{equation} \sigma_0 = A^2\left(\frac{\mu_{\chi N}}{\mu_{\chi n}}\right)^2\sigma_{n}, \end{equation} where $A$ is the number of nucleons in the nucleus, and $\mu_{\chi n}$ is the DM-nucleon reduced mass.
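For orientation, the coherent enhancement implied by this reparameterization can be evaluated for the two SiC constituents (a minimal sketch; mass numbers rounded to $A=28$ for Si and $A=12$ for C):
\begin{verbatim}
# Coherent enhancement sigma_0/sigma_n = A^2 (mu_chiN / mu_chin)^2
# for Si (A=28) and C (A=12) at a sample DM mass.
m_n = 0.938                     # nucleon mass [GeV]
mu = lambda a, b: a * b / (a + b)

m_chi = 0.1                     # sample DM mass [GeV]
for A in (28, 12):
    enh = A**2 * (mu(m_chi, A * m_n) / mu(m_chi, m_n))**2
    print(A, enh)   # -> ~950 (Si), ~170 (C); tends to A^2 as m_chi -> 0
\end{verbatim}
The kinematic suppression for heavier targets, discussed next, partially offsets this enhancement at low DM mass.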
For a sub-GeV dark matter particle, we have $\mu_{\chi N}\rightarrow m_{\chi}$, $\sigma_0\rightarrow A^2\sigma_n$, and $F(q)\rightarrow 1$, such that \begin{equation} \frac{dR}{dE_r} \approx m_{\rm det}\frac{\rho_{\chi}A^2\sigma_n}{2m_{\chi}^3} F^2_{\rm med}(q) \int_{v_{\rm min}}\frac{f_\chi( {\bf v})}{v}d^3 {\bf v}, \end{equation} which would seem to imply that a heavier nucleus is always more sensitive to dark matter from a pure event-rate perspective. Hidden in the integral, however, is the fact that \begin{equation} v_{\rm min} = \sqrt{\frac{E_r(m_{\chi}+m_T)}{2\mu_{\chi N}m_{\chi}}} \rightarrow \sqrt{\frac{E_r m_{T}}{2m_{\chi}^2}} \end{equation} in this limit, which implies that scattering off of heavier targets is kinematically suppressed. For heterogeneous targets, the standard modification to this rate formula is to weight the event rate for a given atom by its fractional mass density. For a SiC crystal of mass $m_{\rm det}$ and equal number density of Si and C nuclei, we have the total rate \begin{equation} \left(\frac{dR}{dE_r}\right)_{\rm SiC} = \frac{1}{2m_{\rm SiC}}\left[m_{\rm Si}\left(\frac{dR}{dE_r}\right)_{\rm Si}+m_{\rm C}\left(\frac{dR}{dE_r}\right)_{\rm C}\right] \end{equation} where the rates for Si and C are computed for the given detector mass. This is a reasonable assumption for interactions in which the scattered DM particle only probes a single nucleus. For sufficiently low $E_r$, comparable to the typical phonon energy, this assumption is no longer valid. This can be seen from the fact that the interaction of DM with single or multiple phonons is an expansion in $q^2/(m_T \omega)$~\cite{trickle2019,Campbell-Deem:2019hdx}, so that we transition to the nuclear recoil regime when $E_r \gg \omega_{\rm phonon}$. In this paper we consider elastic nuclear recoils down to 0.5~eV, well above the energy of the highest optical phonon, and consider DM as acting locally on a single nucleus from the standpoint of the initial interaction. For energy depositions between the highest optical phonon energy, $\sim 120$ meV, and 0.5 eV, we expect the signal rate to be dominated by multiphonon interactions. In computing nuclear recoil limits, the behavior at low DM mass is strongly dependent on the energy threshold, while the high-mass behavior depends on the upper limit for accurate energy reconstruction. Athermal phonon calorimeters can provide very low thresholds but are intrinsically limited in dynamic range. To account for this, we assume 3 orders of magnitude of dynamic range, similar to what has been seen in detectors with ${\cal O}(\rm eV)$ thresholds~\cite{kurinsky}. This means that the upper integration limit is set to $10^3\sigma_{t}$, where the threshold $\sigma_t$ is assumed to be 5 times the resolution. \subsection{DM-phonon scattering \label{sec:phonon} } The formalism to compute single phonon excitations from DM scattering was detailed previously in Refs.~\cite{Knapen:2017ekk,Griffin:2018bjn,trickle2019}. The scattering rate per unit time and per unit target mass can be written generally as \begin{align} R = \frac{1 }{\rho_T} \frac{\rho_\chi}{m_\chi} \int d^3 {\bf v} f_\chi({\bf v}) \, \Gamma({\bf v}), \label{eq:general_rate} \end{align} where $\rho_T$ is the total target density. $\Gamma({\bf v})$ is the scattering rate per dark matter particle with velocity ${\bf v}$, given by \begin{align} \Gamma({\bf v}) \equiv \frac{\bar \sigma_{\chi} }{4 \pi \mu_{\chi n}^2} \int \frac{d^3 {\bf q}}{\Omega} \, F_{\rm med}^2(q) \, S_{\rm med}({\bf q}, \omega).
\label{eq:rate} \end{align} $\mu_{\chi n}$ is the DM-nucleon reduced mass and $\bar \sigma_\chi$ is a fiducial cross section which we will define later for specific models. $\Omega$ is the primitive cell volume, which can also be written as $(\sum_d m_d)/\rho_T$, where $d$ sums over all atoms in the cell. As above, $F^2_{\rm med}(q)$ captures the momentum dependence of the mediator interaction ({\it i.e.}, long-range or short-range). Finally, the structure factor $S_{\rm med}({\bf q}, \omega)$ encapsulates the phonon excitation rate for a given momentum transfer ${\bf q}$ and energy deposition $\omega$; note that it depends on the mediator through its couplings to the nuclei and electrons in a given target. As specific examples, we first consider a mediator that couples to nuclei in proportion to the mass number $A$, in which case \small \begin{align} S_{\rm med}({\bf q}, \omega) = \sum_{\nu,{\bf k}, {\bf G}} \frac{\delta(\omega - \omega_{\nu, {\bf k}})}{2 \omega_{\nu, {\bf k}}} |F_{N,\nu}({\bf q},{\bf k})|^2 \delta_{{\bf k} - {\bf q}, {\bf G}} \label{eq:structure_factor} \end{align} \normalsize where $\nu$ labels the phonon branch and ${\bf k}$ denotes crystal momentum within the first Brillouin zone. The ${\bf G}$ are reciprocal lattice vectors, and for sub-MeV dark matter the ${{\bf G} =0}$ piece of the sum dominates. The phonon form factor for this mediator is \small \begin{align} |F_{N,\nu}({\bf q}, {\bf k})|^2 = \left| \sum_d \frac{A_d \, {\bf q} \cdot {\bf e}_{\nu, d, {\bf k}}^* }{\sqrt{m_d}} e^{-W_d({\bf q})} \, e^{i({\bf q} - {\bf k})\cdot {\bf r}_d^0} \right|^2, \end{align} \normalsize where $d$ labels atoms in the primitive cell and ${\bf r}_d^0$ are the equilibrium atom positions. We determine the phonon eigenvectors, ${\bf e}_{\nu, d, {\bf k}}$, and band structure $\omega_{\nu,{\bf k}}$ numerically from first-principles calculations described later in this section. Finally, $W_d({\bf q})$ is the Debye-Waller factor, which we can approximate as $W_d({\bf q}) \approx 0$ since the rates for sub-MeV DM are dominated by low $q$. With this phonon form factor, sub-MeV dark matter dominantly couples to longitudinal acoustic phonons. We next consider a mediator that couples to electric charge, such as a dark photon mediator $A'$. The structure factor has the same form as in Eq.~\eqref{eq:structure_factor}, but with $F_{N,\nu}$ replaced by the phonon form factor \small \begin{align} |F_{A',\nu}({\bf q},{{\bf k}})|^2 = \left| \sum_d \frac{{\bf q} \cdot {\bf Z}_d^* \cdot {\bf e}^*_{\nu,d,{{\bf k}}} }{\epsilon_\infty \sqrt{m_d}} e^{-W_d({\bf q})} \, e^{i({\bf q} - {\bf k})\cdot {\bf r}_d^0} \right|^2, \nonumber \end{align} \normalsize where we have assumed a diagonal high-frequency dielectric constant $\epsilon_\infty$ and where ${\bf Z}_d^*$ is the matrix-valued Born effective charge of atom $d$ in the unit cell. It is the nonzero Born effective charges in polar semiconductors that permit sensitivity to these models, and it has been found that the most important excitation is the highest-energy longitudinal optical phonon mode. \subsubsection{Daily modulation \label{sec:directional} } The anisotropic crystal structures of SiC polymorphs imply a directional dependence of DM-phonon scattering. As the Earth rotates, there is a corresponding modulation in the rate over a sidereal day, which can provide a unique discriminant for a DM signal in the event of a detection.
This effect can be captured by accounting for the time-dependent direction of the Earth's velocity with respect to the lab frame in the DM velocity distribution, $f_\chi({\bf v})$. This approach to calculating the directional signal was previously taken in Ref.~\cite{Griffin:2018bjn}, where it was computed for Al$_2$O$_3$ (sapphire), which has a rhombohedral lattice structure. The rate depends on the orientation of the crystal relative to the DM wind or, equivalently, the Earth's velocity. Similar to Ref.~\cite{Griffin:2018bjn}, we choose the crystal orientation such that the $z$-axis is aligned with the Earth's velocity at $t=0$. Since the Earth's rotation axis is at an angle of $\theta_e \approx 42^\circ$ relative to the Earth's velocity, at time $t=1/2$ day, the $z$-axis of the crystal will be approximately perpendicular to the DM wind. For the rhombohedral and hexagonal lattice structures, the convention is that the $z$-axis corresponds to the primary crystal axis, and so we expect that this configuration should give a near-maximal modulation rate. \subsection{DM-electron scattering} Eqs.~\eqref{eq:general_rate} and \eqref{eq:rate} are applicable to electron scattering as well, with the appropriate substitutions. The structure factor $S({\bf q},\omega)$ for electron recoil is given by~\cite{trickle2019,Essig:2015cda} \begin{align} S(\bm{q},\omega) & = 2 \sum_{i_1,i_2} \int_{BZ} \frac{d^3k~d^3k'}{(2\pi)^6} 2\pi \delta(E_{i_2,\bm{k}'} - E_{i_1,\bm{k}} - \omega) \times \nonumber \\ &\sum_{\bm{G}} (2\pi)^3 \delta(\bm{k}' - \bm{k} + \bm{G} - \bm{q}) |f_{[i_1\bm{k},i_2\bm{k}',\bm{G}]}|^2 \end{align} where $E_{i,\bm{k}}$ is the energy of an electron in band $i$ with crystal momentum $\bm{k}$ and the ${\bf G}$ are reciprocal lattice vectors. The crystal form factor $f_{[i_1\bm{k},i_2\bm{k}',\bm{G}]}$ is given by \begin{align} f_{[i_1\bm{k},i_2\bm{k}',\bm{G}]} = \sum_{\bm{G}'} u_{i_1}^*(\bm{k'}+\bm{G}+\bm{G'})u_{i_2}(\bm{k}+\bm{G'}) \end{align} where $u_i(\bm{k})$ are the electron wavefunctions written in the plane wave basis and normalized such that \begin{align} \sum_{\bm{G}} |u_i(\bm{k}+{\bm{G}})|^2 = 1\,. \end{align} In our calculation of the electron recoil limits, we make the isotropic approximation following the formalism outlined in Ref.~\cite{Essig:2015cda}. The scattering rate per unit time and per unit target mass is then simplified to \small \begin{align}\label{eq:Rerecoil} R = \frac{\bar{\sigma}_e}{2 \rho_T \mu^2_{\chi e}} \frac{\rho_\chi}{m_\chi} \int q dq d\omega \, F_{\rm med}^2(q) S(q,\omega)\eta(v_{\rm min}(q,\omega)) \end{align} \normalsize where $\mu_{\chi e}$ is the reduced mass of the DM and electron, and the integrated dark matter velocity distribution $\eta(v_{\rm min})$ is given as in Ref.~\cite{Essig:2015cda}. The reference cross section $\bar\sigma_e$ is defined at a fixed reference momentum, which will be taken as $\alpha m_e$, with $\alpha$ the fine structure constant and $m_e$ the electron mass. Results for daily modulation and thus directional detection of electron recoil signals in SiC will be presented in future work~\cite{future}. \subsection{Absorption of sub-keV DM \label{sec:absorption} } For a number of models, the bosonic DM absorption rate can be determined in terms of the conductivity of the material and the photon absorption rate.
Then the absorption rate is given as \begin{align} R = \frac{1}{\rho_T} \frac{\rho_\chi }{m_\chi} g_{\rm eff}^2 \sigma_{\rm 1}(m_\chi) \label{eq:rate_absorb} \end{align} where $\sigma_1(m_\chi)$ is the real part of the optical conductivity $\hat\sigma$ of the material, namely the absorption of photons with frequency $\omega = m_{\chi}$, and $g_{\rm eff}$ is an effective coupling constant appropriate to each model~\cite{Hochberg:2016sqx,Griffin:2018bjn}, as will be detailed below. The conductivity of the material can be obtained from measurements or by calculation. For $m_\chi $ greater than the electron band gap, we use measurements on amorphous SiC thin films from Ref.~\cite{SiCdata}. These data do not capture the differences between polymorphs of SiC, with band gaps ranging from 2.36 eV to 3.25 eV for those considered here, but we expect the differences to be small for $m_\chi$ well above the electron band gap. For $m_\chi$ below the electron band gap, absorption can occur into single optical phonons as well as multiphonons. In this case, limited data or calculations are available at sub-Kelvin temperatures. To gain further insight, we can use an analytic approximation for the dielectric function~\cite{Griffin:2018bjn}: \begin{align} \hat \epsilon(\omega) = \epsilon_\infty \prod_\nu \frac{\omega_{{\rm LO},\nu}^2 - \omega^2 + i \omega \gamma_{{\rm LO},\nu}}{\omega^2_{{\rm TO},\nu} -\omega^2 + i \omega \gamma_{{\rm TO},\nu}}, \label{eq:permittivity} \end{align} with a product over all optical branches, where $\gamma$ is the phonon linewidth and TO (LO) abbreviates transverse (longitudinal) optical phonons. The dielectric function is related to the complex conductivity $\hat \sigma(\omega)$ by $\hat \epsilon(\omega) = 1 + i \hat \sigma/\omega$. We separately consider the dielectric function parallel to the c-axis, $\hat \epsilon_\parallel(\omega)$, and perpendicular to the c-axis, $\hat \epsilon_\perp(\omega)$. In SiC, there is a strong optical phonon branch for each of these directions, corresponding to the highest energy optical phonons ($A_1$ in the parallel direction, $E_1$ in the perpendicular direction)~\cite{mutschke1999infrared}. For these phonons, the LO and TO frequencies are compiled in Ref.~\cite{mutschke1999infrared}, where the values are nearly identical across all polymorphs. Because there are very limited low-temperature measurements of the phonon linewidths, we use $\gamma_{\rm LO} = 2.6/\textrm{cm}$ and $ \gamma_{\rm TO} = 1.2/\textrm{cm}$ in all cases. These values come from our calculations of the linewidth of the optical phonons in the 3C polymorph, and are also in agreement with experimental data~\cite{3Clinewidths}. The calculation of linewidths is discussed in Appendix \ref{app:first_principles_calcs}. Here we only consider absorption into the strongest phonon branch for the parallel and perpendicular directions. Accounting for the fact that the DM polarization is random, we take an absorption rate averaged over these phonon modes, $\langle R \rangle = \frac{2}{3}R_\perp + \frac{1}{3}R_{\parallel}$. With the above approximations, we find that the absorption rate for the strongest mode is nearly identical across all polymorphs. However, depending on the polymorph, there are additional lower energy optical phonons with weaker absorption, which can have large mixing of transverse and longitudinal polarizations. Furthermore, there is absorption into multiphonons.
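The single-oscillator form of Eq.~\eqref{eq:permittivity} is simple to evaluate numerically. In the sketch below, $\epsilon_\infty \approx 6.5$ and $\omega_{\rm TO} \approx 797/\textrm{cm}$, $\omega_{\rm LO} \approx 970/\textrm{cm}$ are illustrative inputs of roughly the magnitude reported for SiC in the literature (assumed here, not quoted from the text), and the linewidths are those given above:
\begin{verbatim}
import numpy as np

# Single strong branch of Eq. (permittivity); all inputs in cm^-1.
eps_inf, w_TO, w_LO = 6.5, 797.0, 970.0   # illustrative values (assumed)
g_TO, g_LO = 1.2, 2.6                     # linewidths used in the text

def eps(w):
    return eps_inf * (w_LO**2 - w**2 + 1j*w*g_LO) / \
                     (w_TO**2 - w**2 + 1j*w*g_TO)

w = np.linspace(700.0, 1050.0, 8)
sigma1 = w * np.abs(eps(w).imag)   # absorptive part; the overall sign
print(np.round(sigma1, 1))         # depends on the e^{-/+ i w t} convention
\end{verbatim}
The absorptive response is strongly peaked at the TO resonance, with the LO frequency controlling where $|\hat\epsilon|$ becomes small.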
While these weaker-mode and multiphonon contributions are not included in our analytical computation, we expect the qualitative behavior of the low-mass absorption rate to be well captured by the range formed by the available measurements from Ref.~\cite{SiCdata} and the above calculation. \section{Results}\label{sec:results} \subsection{DM with scalar nucleon interactions} \begin{figure*}[t!] \begin{center} \includegraphics[width=0.98\textwidth]{SiC_massivemed_combined.pdf} \caption{ \label{fig:neutron_sigma_massive} Reach and daily modulation for DM interactions mediated by a scalar coupling to nucleons, assuming a massive mediator. {\bf Left:} All reach curves are obtained assuming kg-year exposure and zero background. For single phonon excitations relevant for $m_\chi \lesssim 10$ MeV, we show two representative thresholds of 1~meV (solid lines) and 80~meV (dotted) for the different SiC polytypes. We also show the reach for a superfluid He target~\cite{Knapen:2016cue}. The dashed lines show sensitivity to nuclear recoils assuming a threshold of 0.5 eV. In the shaded region, it is expected that the dominant DM scattering is via multiphonons (see discussion in Refs.~\cite{Campbell-Deem:2019hdx,trickle2019}). {\bf Right:} The daily modulation of the DM-phonon scattering rate as a function of DM mass, where the quantity shown corresponds exactly to the modulation amplitude for a purely harmonic oscillation. The modulation is much smaller for scattering into acoustic phonons with $\omega > 1$ meV, so we only show scattering into optical phonons with $\omega > 80$ meV. The modulation amplitude is generally largest for 2H and smallest for 3C. The inset compares the phase of the modulation among the polymorphs for $m_\chi$ = 80 keV. } \end{center} \end{figure*} \begin{figure*}[t!] \begin{center} \includegraphics[width=0.98\textwidth]{SiC_masslessmed_combined.pdf} \caption{ \label{fig:neutron_sigma_massless} Similar to Fig.~\ref{fig:neutron_sigma_massive}, but for DM interactions mediated by a massless scalar coupling to nucleons. In this case, we also compare with the reach of another polar material, GaAs, for acoustic and optical branch thresholds~\cite{Griffin:2018bjn}. } \end{center} \end{figure*} For dark matter with spin-independent scalar interactions with nucleons, we consider both the massive and massless mediator limits, corresponding to different choices of the mediator form factor $F^2_{\rm med}(q)$. A discussion of the astrophysical and terrestrial constraints on both cases can be found in Ref.~\cite{Knapen:2017xzo}. For the massive scalar mediator coupling to nucleons, the form factor is $F^2_{\rm med}(q) = 1$. The sensitivity of SiC to this model is shown in the left panel of Fig.~\ref{fig:neutron_sigma_massive} for the various SiC polytypes and for a few different experimental thresholds. For an energy threshold of $\omega > 0.5$ eV, we show the reach for nuclear recoils in a SiC target and compare with a representative target containing heavy nuclei. The DM-phonon rate is determined using Eq.~\eqref{eq:rate}, where the fiducial cross section is $\bar \sigma_\chi \equiv \sigma_n$ and $\sigma_n$ is the DM-nucleon scattering cross section. With a threshold of $\omega > 1$~meV, it is possible to access DM excitations into single acoustic phonons, which provide by far the best sensitivity. While this threshold would be challenging to achieve, we show it as a representative optimistic scenario where access to single acoustic phonons is possible.
The reach here is primarily determined by the speed of sound~\cite{Campbell-Deem:2019hdx}, and is thus fairly similar for all crystal structures. For comparison with additional polar crystal targets, see Ref.~\cite{Griffin:2019mvc}. When the threshold is $\omega \gtrsim 20$--$30$ meV, the only excitations available are optical phonons. For DM which couples to mass number, there is a destructive interference in the rate to excite optical phonons, resulting in significantly worse reach~\cite{Knapen:2017ekk,Cox:2019cod}. In Fig.~\ref{fig:neutron_sigma_massive}, we also show a representative optical phonon threshold of $\omega > 80$ meV, as this is just below the cluster of optical phonons of energy 90--110 meV present in all polymorphs (see Fig.~\ref{fig:SiC_phonons}). Note that the reach for $\omega > 30$ meV is not significantly different from that for $\omega > 80$ meV, due to the destructive interference mentioned above. While the optical phonon rate is much smaller than the acoustic phonon rate, the same destructive interference allows for a sizeable directionality in the DM scattering rate, and thus daily modulation. The right panel of Fig.~\ref{fig:neutron_sigma_massive} gives the daily modulation for DM scattering into optical phonons with threshold $\omega > 80$ meV. We find that the lowest modulation is for the 3C polytype, as expected given its higher degree of symmetry, and the largest modulation is found in the 2H polytype. While the other polytypes of SiC can give modulation comparable to 2H, they contain many more phonon branches, which can wash out the signal. We also note that the modulation could be even larger with a lower threshold on the optical phonons, as was the case for sapphire in Ref.~\cite{Griffin:2018bjn}. However, if the threshold is reduced all the way to $\omega > 1$~meV such that acoustic phonons are accessible, the modulation is much smaller. In the massless mediator limit, we assume dark matter couples to nucleons through a scalar with mass $m_\phi \ll m_\chi v \sim 10^{-3} m_\chi$. For sub-MeV DM, constraints on this model are much less severe than in the heavy mediator case~\cite{Knapen:2017xzo}. We can then approximate the DM-mediator form factor as \begin{align} F^2_{\rm med}(q) = \left( \frac{q_0}{q} \right)^4 \end{align} where $q_0 = m_{\chi} v_0$ is a reference momentum transfer. In this case $\sigma_n$ is a reference cross section for DM-nucleon scattering with momentum transfer $q_0$. The projected sensitivity to the massless mediator model from single-phonon excitations in SiC is shown in the left panel of Fig.~\ref{fig:neutron_sigma_massless}. Here we also show the reach for a GaAs target, which has a lower sound speed and thus more limited reach at low DM mass~\cite{Griffin:2018bjn}. For comparison with additional polar crystal targets, see Ref.~\cite{Griffin:2019mvc}. The daily modulation amplitude for a massless scalar mediator is shown in the right panel of Fig.~\ref{fig:neutron_sigma_massless}. Similar to the massive mediator case, we only have a sizeable modulation for scattering into optical phonon modes, and find that 2H (3C) tends to give the largest (smallest) amplitude. We conclude with a brief discussion of how SiC compares with other commonly considered target materials for DM with scalar nucleon interactions. Because SiC has a high sound speed similar to that of diamond, the sensitivity to acoustic phonon excitations extends to lower DM mass than in Si, Ge, or GaAs.
Furthermore, depending on the polytype of SiC, the daily modulation in SiC is expected to be much larger than in Si, Ge, GaAs, and diamond. The latter materials have cubic crystal structures where the atoms in a unit cell have identical or very similar mass, so we expect the modulation to be similar to that of GaAs, found to be sub-percent level in Ref.~\cite{Griffin:2018bjn}. In terms of both reach and directionality, SiC is perhaps most similar to sapphire, and has advantages over many other well-studied target materials. \subsection{DM-electron interactions \label{sec:DMelectron_result} } \begin{figure*}[t!] \begin{center} \includegraphics[width=0.49\textwidth]{ER_Reach_F1.pdf} \includegraphics[width=0.49\textwidth]{ER_Reach_Fq2.pdf} \caption{ The reach into DM-electron scattering parameter space for 1 kg-year of exposure of select polytypes of SiC for heavy~({\bf left}) and light~({\bf right}) mediators. For comparison, we also show the reach of Si and diamond, assuming a threshold of one electron or energy sensitivity down to the direct band gap in the given material. The reach of Si given a 2-electron threshold is shown for comparison, for the case that charge leakage substantially limits the reach at 1 electron. Relic density targets from Ref.~\cite{CosmicVisions} are shown as thick blue lines for the freeze-in and freeze-out scenarios, respectively. The grey shaded region includes current limits from SENSEI~\cite{barak2020sensei}, SuperCDMS HVeV~\cite{HVeV2020}, DAMIC~\cite{DAMIC_ERDM}, Xenon10~\cite{Essig:2017kqs}, Darkside~\cite{DarksideER}, and Xenon1T~\cite{Aprile:2019xxb}. \label{fig:electronRecoil}} \end{center} \end{figure*} We now present our results for DM that scatters with electrons through the exchange of a scalar or vector mediator (that is not kinetically mixed with the photon). Our results for the reference cross section $\bar \sigma_e$ of Eq.~\eqref{eq:Rerecoil} are given in Fig.~\ref{fig:electronRecoil} for the heavy ({\it left}) and light ({\it right}) mediator cases, with form factors \begin{align} F_{\rm med}^2(q) = \begin{cases} 1 & \textrm{ heavy mediator} \\ (\alpha m_e)^4/q^4 & \textrm{ light mediator} \end{cases} \label{eq:F_med_ER} \end{align} For comparison, we also show the reach of Si and diamond. Thick blue curves indicate relic density targets from Ref.~\cite{CosmicVisions}. The grey shaded region shows existing limits from SENSEI~\cite{barak2020sensei}, SuperCDMS HVeV~\cite{HVeV2020}, DAMIC~\cite{DAMIC_ERDM}, Xenon10~\cite{Essig:2017kqs}, Darkside~\cite{DarksideER}, and Xenon1T~\cite{Xenon1T}. The results of Fig.~\ref{fig:electronRecoil} show that the reach of SiC for DM-electron scattering is similar to that of diamond at high mass for the case of a light mediator, and comparable to the silicon two-electron reach for the heavy mediator case. The relation of the reach between SiC polytypes is similar to that found in Figs.~\ref{fig:neutron_sigma_massive} and~\ref{fig:neutron_sigma_massless}, in that the majority of the difference at low mass can be attributed to the different band gaps. We do observe, however, that the reach of 3C at high mass is roughly half an order of magnitude less than that of the hexagonal polytypes, despite having the smallest band gap. This can be understood by noticing that the density of states near the conduction band minimum is smaller than in the other polytypes, thus limiting the available phase space.
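In these electron-recoil processes the typical momentum transfer is of order the reference value $\alpha m_e$; the sketch below compares it with the reciprocal-lattice scales of SiC cells (the lattice constants are approximate literature values, assumed here only for illustration):
\begin{verbatim}
import math

# Compare the reference momentum transfer alpha*m_e with reciprocal-
# lattice scales 2*pi*hbar/L; lattice constants are approximate values.
alpha, m_e, hbar_c = 1/137.036, 511e3, 197.327   # -, eV, eV*nm
q_ref = alpha * m_e                              # -> ~3.7 keV

for name, L_nm in (("hexagonal a", 0.308), ("15R c-axis", 3.78)):
    q_BZ = 2 * math.pi * hbar_c / L_nm           # eV
    print(name, round(q_BZ), "eV vs q_ref", round(q_ref), "eV")
# The long 15R cell gives a ~0.3 keV zone dimension, well below alpha*m_e.
\end{verbatim}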
The reach of 15R is also significantly worse than that of the other polytypes because its small Brillouin zone is poorly matched to the typical momentum transfer (a few keV). We learn that SiC can probe DM-electron scattering processes in a manner complementary to silicon and diamond. As mentioned earlier, prospects for directional detection of electron recoil signals in the various polytypes of SiC will be described in future work~\cite{future}. \subsection{DM with dark photon interactions} \begin{figure*}[t!] \begin{center} \includegraphics[width=0.98\textwidth]{SiC_darkphoton_combined.pdf} \caption{ \label{fig:darkphoton} Reach and daily modulation for DM-phonon interactions mediated by a massless dark photon. {\bf Left:} The reach is shown assuming kg-year exposure and zero background. The reach from single optical phonon excitations in SiC (solid lines) is similar for all the polytypes, while the dotted lines in the same colors show the electron recoil reach from Fig.~\ref{fig:electronRecoil}. The thick solid blue line is the predicted cross section if all of the DM is produced by freeze-in interactions~\cite{Essig:2011nj,Dvorkin:2019zdi}, and the shaded regions are constraints from stellar emission~\cite{Vogel:2013raa,Chang:2018rso} and Xenon10~\cite{Essig:2017kqs}. We also show the reach from phonon excitations in other polar materials, GaAs and Al$_2$O$_3$~\cite{Knapen:2017ekk,Griffin:2018bjn}, and from electron excitations in an aluminum superconductor~\cite{Hochberg:2015fth} and in Dirac materials, shown here for the examples of ZrTe$_5$ and a material with a gap of $\Delta = 2.5$ meV~\cite{Hochberg:2017wce}. (For clarity, across all materials, all electron recoil curves are dotted and all phonon excitation curves are solid.) {\bf Right:} The daily modulation of the DM-phonon scattering rate as a function of DM mass, where the quantity shown corresponds exactly to the modulation amplitude for a purely harmonic oscillation. The modulation is negligible in the 3C polytype due to its high symmetry, and is largest in 2H. The inset compares the phase of the modulation among the polytypes for $m_\chi$ = 80 keV. } \end{center} \end{figure*} We now consider a DM candidate with mass $m_\chi$ that couples to a dark photon $A'$ of mass $m_{A'}$, where the dark photon has a kinetic mixing $\kappa$ with the Standard Model photon, \begin{equation} {\cal L}\supset -\frac{\kappa}{2} F_{\mu\nu} F'^{\mu\nu}\,. \end{equation} We again take two representative limits of this model: scattering via a massive or a nearly massless dark photon. For a massive dark photon, the electron-scattering cross section $\bar \sigma_e$ in terms of model parameters is \begin{align} \bar \sigma_e = \frac{16 \pi \, \kappa^2 \alpha_\chi \alpha \, \mu_{\chi e}^2 }{\left[(\alpha m_e)^2 + (m_{A'})^2\right]^2} \end{align} and the DM-mediator form factor is $F_{\rm med}^2(q) = 1$. For the parameter space below $m_\chi \approx 1$~MeV, there are strong astrophysical and cosmological constraints~\cite{Essig:2015cda,Knapen:2017xzo} and the reach from exciting optical phonons is limited, so we do not consider DM-phonon scattering. The electron scattering reach is the same as in the heavy mediator limit of the previous section, shown in the left panel of Fig.~\ref{fig:electronRecoil}. For a nearly massless dark photon, we consider both electron recoils and optical phonon excitations.
Optical phonons are excited through the mediator coupling to the ions (nucleus plus core electrons), which is given in terms of the Born effective charges discussed in Section~\ref{sec:phonon}. For comparison with the literature, we will show both the electron recoil and optical phonon reach in terms of the electron-scattering cross section. This electron-scattering cross section is defined at a reference momentum transfer and given in terms of the dark fine structure constant $\alpha_\chi = g_\chi^2/(4\pi)$: \begin{align} \bar \sigma_e = \frac{16 \pi \, \kappa^2 \alpha_\chi \alpha \, \mu_{\chi e}^2 }{(\alpha m_e)^4} \end{align} where $\alpha$ is the fine structure constant and $\mu_{\chi e}$ is the DM-electron reduced mass. As a result, for phonon scattering, the relevant cross section $\bar \sigma_\chi$ in Eq.~\eqref{eq:rate} is \begin{align} \bar \sigma_\chi \equiv \frac{\mu_{\chi n}^2}{\mu_{\chi e}^2} \bar \sigma_e. \end{align} The DM-mediator form factor for both electron and phonon scattering is \begin{align} F_{\rm med}^2(q) = \left( \frac{\alpha m_e}{q} \right)^4. \end{align} The reach of the different polytypes of SiC in the light mediator limit of this model is shown in the left panel of Fig.~\ref{fig:darkphoton}. The reach for $m_\chi > 1$~MeV is from DM-electron scattering, and is the same as in the light mediator limit of the previous section (shown in the right panel of Fig.~\ref{fig:electronRecoil}). Although there is an additional in-medium screening for dark photon mediators compared to Section~\ref{sec:DMelectron_result}, we expect this to be a small effect for a relatively high-gap material such as SiC. The sensitivity of SiC for $m_\chi < 1$~MeV is from exciting optical phonons, and is very similar across all polytypes. This is because the DM dominantly excites the highest energy optical phonon~\cite{Griffin:2018bjn}, which has the largest dipole moment and has similar energy in all cases. Furthermore, the coupling of the DM to this phonon is characterized by an effective Fr\"{o}hlich coupling that depends only on the phonon energy, $\epsilon_\infty$, and $\epsilon_0$~\cite{Knapen:2017ekk}. Again, it can be seen in Table~\ref{tab:properties} that all of these quantities are quite similar across the different polytypes. For completeness, we also show existing constraints from stellar emission~\cite{Vogel:2013raa,Chang:2018rso} and Xenon10~\cite{Essig:2017kqs}; projections for other materials such as Dirac materials~\cite{Hochberg:2017wce}, superconductors~\cite{Hochberg:2015fth}, and polar materials~\cite{Knapen:2017ekk,Griffin:2018bjn}; and target relic DM candidate curves~\cite{Essig:2011nj,Dvorkin:2019zdi}. There are larger differences between polytypes in directional detection, which depends on the details of the crystal structure. The results for DM-phonon scattering are provided in the right panel of Fig.~\ref{fig:darkphoton}. Similar to the case of DM with scalar nucleon interactions, we find that 3C has the smallest modulation due to its higher symmetry, while 2H has the largest modulation. Comparing with other proposed polar material targets such as GaAs and sapphire, the reach of SiC for dark-photon-mediated scattering does not extend as low in DM mass because of the higher LO phonon energy. However, the directional signal is similar in size to that of sapphire and substantially larger than in GaAs. For additional proposed experiments or materials that can probe this parameter space, see for example Refs.~\cite{Griffin:2019mvc,Berlin:2019uco,Coskuner:2019odd,Geilhufe:2019ndy}.
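The conversion between the two reference cross sections above is a pure reduced-mass rescaling, as the short sketch below illustrates at sample sub-MeV masses:
\begin{verbatim}
# sigma_chi = (mu_chi_n / mu_chi_e)^2 * sigma_e at sample DM masses.
m_e, m_n = 0.511e6, 938.3e6      # eV
mu = lambda a, b: a * b / (a + b)

for m_chi in (1e5, 1e6):         # 100 keV and 1 MeV
    print(m_chi, (mu(m_chi, m_n) / mu(m_chi, m_e))**2)
    # -> ~1.4 at 100 keV, ~8.7 at 1 MeV
\end{verbatim}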
\subsection{Absorption of dark photon dark matter} \begin{figure*}[t!] \begin{center} \includegraphics[width=0.48\textwidth]{SiC_abs_HP_gray.pdf} \includegraphics[width=0.48\textwidth]{SiC_abs_ALP_bandgap.pdf} \caption{ \label{fig:abs} Absorption of kinetically mixed dark photons ({\it left}) and axion-like particles ({\it right}). {\bf Left:} Projected reach at 95\% C.L. for absorption of kinetically mixed dark photons. The expected reach for a kg-year exposure of SiC is shown by the solid and dashed black curves (using the data of Ref.~\cite{SiCdata} and the strongest phonon branch, respectively). Projected reach for germanium and silicon~\cite{Hochberg:2016sqx}, diamond~\cite{diamonddetectors}, Dirac materials~\cite{Hochberg:2017wce}, polar crystals~\cite{Griffin:2018bjn}, molecules~\cite{Arvanitaki:2017nhi}, superconducting aluminum~\cite{Hochberg:2016ajh}, and WSi nanowire~\cite{Hochberg:2019cyy} targets are indicated by the dotted curves. Constraints from stellar emission~\cite{An:2013yua,An:2014twa}, DAMIC~\cite{Aguilar-Arevalo:2016zop}, SuperCDMS~\cite{Agnese:2018col}, Xenon~\cite{An:2014twa} data, and a WSi nanowire~\cite{Hochberg:2019cyy} are shown by the shaded orange, green, purple, light blue, and blue regions, respectively. {\bf Right:} Projected reach at 95\% C.L. for absorption of axion-like particles. The reach of a kg-year exposure of SiC is shown by the solid black curve, where only excitations above the band gap are assumed. The reach for semiconductors such as germanium and silicon~\cite{Hochberg:2016sqx}, diamond~\cite{diamonddetectors}, and superconducting aluminum~\cite{Hochberg:2016ajh} targets is depicted by the dotted curves. Constraints from Xenon100~\cite{Aprile:2014eoa}, LUX~\cite{Akerib:2017uem}, and PandaX-II~\cite{Fu:2017lfc} data and from white dwarfs~\cite{Raffelt:2006cw} are shown by the shaded red and orange regions. Constraints arising from (model-dependent) loop-induced couplings to photons are indicated by the shaded blue regions~\cite{Grin:2006aw,Arias:2012az}, while the QCD axion region is given in shaded gray. } \end{center} \end{figure*} Taking a dark photon with mass $m_{A'}$ and kinetic mixing $\kappa$ to be the dark matter candidate, the effective coupling $g_{\rm eff}^2$ in the absorption rate of Eq.~\eqref{eq:rate_absorb} must account for the in-medium kinetic mixing. Thus we have $g_{\rm eff}^2 = \kappa_{\rm eff}^2$, with in-medium mixing of \begin{align} \kappa_{\rm eff}^2 = \frac{\kappa^2 m_{A'}^4}{\left[m_{A'}^2 - \mbox{Re}~\Pi(\omega) \right]^2 + \mbox{Im}~\Pi(\omega)^2} = \frac{\kappa^2}{|\hat \epsilon(\omega) |^2}, \end{align} where $\Pi(\omega) = \omega^2(1 - \hat \epsilon(\omega))$ is the in-medium polarization tensor in the relevant limit of $|{\bf q}| \ll \omega$. The projected reach of SiC into the parameter space of kinetically mixed dark photons is shown in the left panel of Fig.~\ref{fig:abs}. As discussed in Section~\ref{sec:absorption}, we consider absorption into electron excitations using measurements of the optical conductivity of SiC from Ref.~\cite{SiCdata} (solid curve) as well as absorption into the strongest optical phonon mode for low masses (dashed curve). These black curves indicate the 95\% C.L. expected reach in SiC for a kg-year exposure, corresponding to 3 events.
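The in-medium suppression is straightforward to evaluate once $\hat\epsilon(\omega)$ is modeled; a minimal sketch, reusing the illustrative single-oscillator inputs assumed in Section~\ref{sec:absorption}:
\begin{verbatim}
# kappa_eff^2 / kappa^2 = 1/|eps(omega)|^2 with the same illustrative
# oscillator inputs as before (cm^-1; approximate values, assumed).
eps_inf, w_TO, w_LO = 6.5, 797.0, 970.0
g_TO, g_LO = 1.2, 2.6

def eps(w):
    return eps_inf * (w_LO**2 - w**2 + 1j*w*g_LO) / \
                     (w_TO**2 - w**2 + 1j*w*g_TO)

for w in (500.0, 797.0, 970.0, 2000.0):   # m_A' = omega, in cm^-1
    print(w, 1.0 / abs(eps(w))**2)
# Enhanced near the LO energy (|eps| small), suppressed at the TO resonance.
\end{verbatim}
This is why the phonon absorption reach is concentrated around the LO phonon energy.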
For comparison, we also show in dotted curves the projected reach of superconducting aluminum targets~\cite{Hochberg:2016ajh} and WSi nanowires~\cite{Hochberg:2019cyy}, semiconductors such as silicon, germanium~\cite{Hochberg:2016sqx} and diamond~\cite{diamonddetectors}, Dirac materials~\cite{Hochberg:2017wce}, polar crystals~\cite{Griffin:2018bjn} and molecules~\cite{Arvanitaki:2017nhi}. Stellar emission constraints~\cite{An:2013yua,An:2014twa} are shown in shaded orange, while the terrestrial bounds from DAMIC~\cite{Aguilar-Arevalo:2016zop}, SuperCDMS~\cite{Agnese:2018col}, Xenon data~\cite{An:2014twa} and a WSi superconducting nanowire~\cite{Hochberg:2019cyy} are shown in shaded gray. As is evident, SiC is a realistic target material with prospects to probe deep into uncharted dark photon parameter space over a broad range of masses, from ${\cal O}(10\; {\rm meV})$ to tens of eV.
\subsection{Absorption of axion-like particles}
Next we consider an axion-like particle (ALP) $a$ with mass $m_a$ that couples to electrons via
\begin{equation}
{\cal L}\supset \frac{g_{aee}}{2 m_e} (\partial_\mu a)\bar e \gamma^\mu \gamma^5 e\,.
\end{equation}
The absorption rate on electrons can be related to the absorption of photons via the axioelectric effect, and the effective coupling in the absorption rate of Eq.~\eqref{eq:rate_absorb} is then given by
\begin{equation} \label{eq:rateAxion}
g_{\rm eff}^2= \frac{3 m_a^2}{4 m_e^2} \frac{g_{aee}^2}{e^2}\,.
\end{equation}
Because the ALP couples directly to electrons, we consider only absorption above the electron band gap. (Relating the couplings of the sub-gap phonon excitations is less straightforward due to the spin-dependence of the ALP coupling.) The projected reach for a kg-year exposure is shown in the right panel of Fig.~\ref{fig:abs} by the solid black curve. For comparison, we show the reach of superconducting aluminum~\cite{Hochberg:2016ajh} targets as well as silicon~\cite{Hochberg:2016sqx}, germanium~\cite{Hochberg:2016sqx} and diamond~\cite{diamonddetectors} by the dotted curves. Constraints from white dwarfs~\cite{Raffelt:2006cw}, Xenon100~\cite{Aprile:2014eoa}, LUX~\cite{Akerib:2017uem} and PandaX-II~\cite{Fu:2017lfc} are also shown. Constraints from the model-dependent loop-induced couplings to photons are indicated by shaded blue~\cite{Grin:2006aw,Arias:2012az}. The QCD axion region of interest is shown in shaded gray. We learn that SiC detectors can reach unexplored ALP parameter space complementary to stellar emission constraints.
\section{Discussion}
In this paper we proposed the use of SiC for direct detection of light DM. With advantages over silicon and diamond---including its polar nature and its many stable polytypes---we have shown that SiC would serve as an excellent detector across many different DM channels and many mass scales: DM-nuclear scattering (direct and via single or multiple phonon excitations) down to ${\cal O}(10\, \rm keV)$ masses, DM-electron scattering down to ${\cal O}(10\, \rm MeV)$ masses, dark photon absorption down to ${\cal O}(10\, \rm meV)$ masses, and axion-like particle absorption down to ${\cal O}(10\, \rm meV)$ masses, with prospects for directional detection as well. In particular, the high optical phonon energy in SiC (higher than that of sapphire), coupled with the high sound speed of all polytypes and the long intrinsic phonon lifetime, makes SiC an ideal substrate for calorimetric phonon readout.
There is substantial reach for dark photon coupled DM at higher energy thresholds than in competing materials, and the presence of a strong bulk plasmon in SiC makes it a promising follow-up material for potential inelastic interactions of DM at the energy scales of the multi-phonon regime, as described in Refs.~\cite{kurinsky2020dark,Kozaczuk_2020}. In fact, since SiC exists in many stable polytypes, it allows us to compare the influence of crystal structure, and hence bonding connectivity, on the suitability of a target for the various dark matter channels. Broadly, we find similar sensitivities and reach across the calculated polytypes, as expected for a set of materials comprised of the same stoichiometric combination of elements. For DM-nucleon and DM-phonon interactions, we find very similar reach given the similar phonon spectra of the SiC polytypes. One difference is that polytypes with smaller unit cells will have the advantage of longer intrinsic phonon lifetimes, as higher unit cell complexity increases phonon scattering. More variation in reach among the polytypes, however, is found for DM-electron scattering, owing to the variation in electronic band gaps across the SiC family. This trend in band gap variation among the SiC polytypes is well discussed in the literature and is a result of third-nearest-neighbor effects~\cite{Park1994}. We indeed see that the decrease in band gap with increasing unit cell size in the H polytypes correspondingly leads to better reach, as expected. Materials-by-design routes explored for dark matter detection have focused on band gap tuning~\cite{Inzani_et_al:2020} and on materials metrics for improved electron and phonon interactions~\cite{geilhufe2018materials,Griffin:2019mvc,geilhufe2020dirac,catena2020atomic}. A key advantage of SiC over other target proposals is its prospect for {\it directionality-by-design}---given the similar performance in reach across the polytypes, we can select a material that is optimized for directional detection. Our results indicate that, as expected, the highly symmetric cubic phase, 3C, exhibits no daily modulation, whereas the maximal modulation is achieved for the 2H phase. The 2H phase has inequivalent in-plane and out-of-plane crystallographic axes and so naturally has an anisotropic directional response. We further find that this effect diminishes as the number of out-of-plane hexagonal units increases (decreasing the regularity of the unit cell), since the directional response becomes integrated out over repeated unit cells. As discussed earlier, one of the primary benefits of using SiC over other carbon-based crystals is the availability of large samples of the 4H and 6H polytypes; the 3C polytype is not currently at the same level of fabrication scale, and the 2H, 8H and 15R polytypes are scarce in the literature and not made in significant quantities. Charge mobility measurements for existing SiC samples indicate that the purity of these crystals is not yet at the level of comparable diamond and silicon, and there are few measurements of intrinsic phonon properties at cryogenic temperatures. In order to further develop SiC devices, studies of charge transport and phonon lifetime in a range of samples need to be undertaken, so that current limitations can be understood and vendors can work to improve crystal purity. Device fabrication, on the other hand, is expected to be fairly straightforward, given past experience with basic metallization and the similarity of SiC to both diamond and Si.
The availability of large boules of SiC, in contrast to diamond, means that scaling to large detector masses is much more commercially viable and cost effective. The material response of the SiC polytypes should also be better characterized. In particular, the non-ionizing energy loss of nuclear recoils needs to be modeled and characterized; photo-absorption cross sections at cryogenic temperatures are needed, both above and below the gap; and the quantum yield of ionizing energy deposits needs to be better understood. SiC has already been shown to be much more radiation hard than Si, but further studies of radiation-induced defects will benefit both the use of SiC as a detector and the understanding of vacancies used in quantum information storage. More practical studies of breakdown voltage and electron/hole saturation velocity will also inform detector modeling and readout.
\begin{acknowledgements}
We would like to thank Simon Knapen for early collaboration, and Rouven Essig and Tien-Tien Yu for clarifications regarding \texttt{QEDark}. We would also like to thank Lauren Hsu for feedback on an early paper draft. The work of YH is supported by the Israel Science Foundation (grant No. 1112/17), by the Binational Science Foundation (grant No. 2016155), by the I-CORE Program of the Planning and Budgeting Committee (grant No. 1937/12), by the German Israel Foundation (grant No. I-2487-303.7/2017), and by the Azrieli Foundation. TL is supported by an Alfred P. Sloan Foundation fellowship and the Department of Energy under grant DE-SC0019195. Parts of this document were prepared by NK using the resources of the Fermi National Accelerator Laboratory (Fermilab), a U.S. Department of Energy, Office of Science, HEP User Facility. Fermilab is managed by Fermi Research Alliance, LLC (FRA), acting under Contract No. DE-AC02-07CH11359. TCY is supported by the U.S. Department of Energy under contract number DE-AC02-76SF00515. SMG and KI were supported by the Laboratory Directed Research and Development Program of LBNL under the DoE Contract No. DE-AC02-05CH11231. Computational resources were provided by the National Energy Research Scientific Computing Center and the Molecular Foundry, DoE Office of Science User Facilities supported by the Office of Science of the U.S. Department of Energy under Contract No. DE-AC02-05CH11231. The work performed at the Molecular Foundry was supported by the Office of Science, Office of Basic Energy Sciences, of the U.S. Department of Energy under the same contract.
\end{acknowledgements}
\section{Introduction}
In recent years, the possibility of using small or micro-spacecraft in interplanetary missions has been drawing the attention of scientists and engineers around the world who are interested in reducing both the development time and the cost of a mission without significantly affecting its scientific return. The first deep-space micro-spacecraft, PROCYON \cite{campagnola2015low}, was developed in little more than a year in 2014 by the University of Tokyo and JAXA, at a very low cost compared to standard-size spacecraft. Despite the malfunctioning of the main thruster, the PROCYON mission has been ubiquitously called a success, paving the way for similar mission concepts by other space agencies. In 2018, NASA launched the first two interplanetary CubeSats, part of the MarCO (Mars Cube One) mission \cite{asmar2014mars}, which successfully accomplished their goal of providing a real-time communication link to Earth during the entry, descent, and landing phase of the InSight lander. The same year, ESA's first stand-alone CubeSat mission for deep space, M–Argo (Miniaturised – Asteroid Remote Geophysical Observer), was announced \cite{walker2017miniaturised}; it is likely to be ready for launch in mid-2021 at the earliest. Low-thrust electric propulsion is a key technology for enabling small/micro-satellite interplanetary missions, as it provides the spacecraft with a significantly lower specific propellant consumption. However, because of the limited budget, micro-spacecraft generally mount components with a low technological readiness level. This increases the risk of incurring unexpected control execution errors and/or missed thrust events (MTEs) during any of the long thrusting periods. In addition, small spacecraft have limited ground station access, and larger uncertainties in the state knowledge (i.e., in the observations for orbit determination) should be expected with respect to standard missions. Typically, when designing the mission, engineers take these uncertainties into account \textit{a posteriori}~\cite{rayman2007coupling, laipert2015automated}, by means of time-consuming iterative procedures that often lead to suboptimal solutions and over-conservative margins. This design methodology is particularly unsuitable for micro-spacecraft missions, where large propellant margins and system redundancy are almost completely ruled out. In this respect, recent works have attempted to address the robust design of interplanetary trajectories by using novel optimization techniques. As an example, the problem of designing optimal risk-aware trajectories, which guarantee the safety of the spacecraft when it operates in uncertain environments, was addressed by applying chance-constrained optimal control~\cite{ono2013probabilistic}, combined with a convex optimization approach to deal with impulsive maneuvers~\cite{oguri2019risk}, or with a direct/indirect hybrid optimization method to deal with continuous thrust~\cite{oguri2019risk2}. Stochastic Differential Dynamic Programming (SDDP) was applied to interplanetary trajectory design in the presence of Gaussian-modeled state uncertainties \cite{ozaki2018stochastic, ozaki2020tube}. Also, the robust design of a low-thrust interplanetary transfer to a near-Earth asteroid was performed by using evidence theory to model epistemic uncertainties in the performance of the main thruster and in the magnitude of the departure hyperbolic excess velocity~\cite{dicarlo2019robust}.
Belief-based transcription procedures for the stochastic optimal control problem were proposed for the robust design of space trajectories under stochastic and epistemic uncertainties~\cite{greco2018intrusive, greco2020direct}, incorporating also navigation analysis in the formulation to update the knowledge of the spacecraft state in the presence of observations~\cite{greco2020robust}.
\subsection{Deep Learning in Spaceflight Mechanics}
The interest in the application of deep learning techniques to optimally and robustly solve control problems has increased rapidly in recent years, especially for space applications. In this context, the term G\&CNet{} (namely, Guidance and Control Network) was coined at the European Space Agency \cite{izzo2019survey} to refer to an on-board system that provides real-time guidance and control functionalities to the spacecraft by means of a Deep Neural Network (DNN) that replaces traditional guidance and control architectures. DNNs are among the most versatile and powerful machine learning tools, thanks to their unique capability of accurately approximating complex, nonlinear input-output functions, provided that a sufficiently large amount of data (the training set) consisting of sample input-output pairs is available~\cite{hornik1990universal}. Two alternative, and quite different, approaches can be used to train a G\&CNet{} to solve an optimal control problem (OCP), depending on what training data are used and how they are collected. In \textit{Behavioral Cloning} (BC), given a set of trajectories from an expert (that is, labeled observation-control pairs), the network is trained to reproduce (or clone) the expert behavior. Usually, these trajectories are obtained as the solution of a (deterministic) optimal control problem with randomized boundary conditions. Behavioral cloning has been successfully used to train a fast-execution G\&CNet{} to control a spacecraft during a fuel-optimal low-thrust Earth-Venus transfer~\cite{izzo2019interplanetary}, as well as during a landing maneuver in a simplified dynamical model~\cite{sanchez2016learning}. This approach is computationally efficient, and it benefits from state-of-the-art implementations of supervised learning algorithms \cite{tensorflow2015-whitepaper}. However, it has a number of downsides that make it unsuitable for robust trajectory design. In fact, the effectiveness of BC rapidly worsens when the G\&CNet{} is asked to solve problems that fall outside the set of expert demonstrations it was trained on. As a consequence, when dealing with Stochastic Optimal Control Problems (SOCPs), a drop in performance (or even divergence) may occur when, because of uncertainty, the flight trajectory starts moving away from the training set domain, typically populated by solutions of deterministic OCPs. To recover correct behavior, a DAGGER (Dataset Aggregation) algorithm can be used. In this case, the solution process features an additional loop where new training data are provided ``on-line'' by an expert (e.g., an OCP solver) as they are required to cover previously unknown situations. This approach has been effectively exploited to improve the network accuracy in controlling a lander during a powered descent on the Lunar surface~\cite{furfaro2018deep}. However, the effectiveness of BC for robust trajectory design remains doubtful, especially when solutions of deterministic OCPs are used as expert demonstrations.
Recently, an attempt has been made to train a network by BC with a training set encompassing trajectories perturbed by random MTEs \cite{rubinsztejn2019neural}, showing promising results. However, the possibility of having other types of state and control uncertainties has not been addressed yet. A different approach is represented by \textit{Reinforcement Learning} (RL), which involves learning from experience rather than from expert demonstrations. In RL, a software agent (e.g., the G\&CNet{}) autonomously learns how to behave in a (possibly) unknown dynamical environment, modeled as a Markov Decision Process (MDP), so as to maximize some utility-based function that plays the role of the merit index in traditional optimal control problems. Differently from the BC approach, there is no pre-assigned data set of observation-control pairs to learn from, so the agent is not told in advance what actions to take in a given set of states. Instead, the agent is left free to explore the environment, by repeatedly interacting with a sufficiently large number of realizations of it. The only feedback the agent receives is a numerical reward collected at each time step, which helps the agent understand how good or how bad its current performance is. In this framework, the final goal of the RL agent is to learn the control policy that maximizes the expected cumulative sum of rewards over a trajectory. Because MDPs allow only scalar reward functions, a careful choice, or shaping, of the reward is mandatory to efficiently guide the agent during training, while ensuring compliance with (any) problem constraints. Deep RL methods have obtained promising results in a number of spaceflight dynamics problems, such as low-thrust interplanetary trajectory design \cite{miller2019low, miller2019interplanetary, sullivan2020using}, 3-DoF and 6-DoF landing guidance with application to a powered descent~\cite{gaudet2020deep}, trajectory optimization in the cislunar environment \cite{scorsoglio2019actor, lafarge2020guidance}, and the design of guidance algorithms for rendezvous and docking maneuvers~\cite{broida2019spacecraft, hovell2020deep}. This paper investigates the use of Reinforcement Learning for the robust design of a low-thrust interplanetary trajectory in the presence of uncertainty. Specifically, uncertainties on the spacecraft state, caused by unmodeled dynamical effects, on the observations, because of inaccurate orbit determination, and on the applied control, due to execution errors and missed thrust events, are considered in the present analysis. RL has been selected as the optimization algorithm since it has the clear advantage of not requiring the \textit{a priori} generation of any optimal trajectory to populate the training set, as data are gathered by directly running the current best-found control policy in the stochastic environment. In this way, the agent progressively and autonomously improves the performance and robustness of its control policy, in order to achieve the mission goals regardless of the uncertainties that may arise. This feature makes RL an ideal candidate for the problem at hand. At present, most of the research encompassing RL for spacecraft trajectory design deals exclusively with deterministic environments. Thus, one of the main contributions of this paper is the investigation of the possible extension of RL applicability to stochastic scenarios. The paper is organized as follows.
First, the optimization problem is formulated as a Markov Decision Process, and the mathematical models used to describe the state, observation, and control uncertainties acting on the system are defined. The expression of the reward function, which includes both the merit index and the problem constraints (e.g., fixed final spacecraft position and velocity), is given as well. Next, after a brief introduction of the basic concepts and notation of Reinforcement Learning, the RL algorithm used in this work, named Proximal Policy Optimization, is described in detail. Furthermore, the configuration selected for the DNN and the values used for the algorithm hyper-parameters are reported. Then, numerical results are presented for the case study of the paper, that is, a time-fixed low-thrust Earth-Mars rendezvous mission. Specifically, the effect of each source of uncertainty on the system dynamics is analysed independently, and the obtained results are compared in terms of trajectory robustness and optimality. Finally, the reliability of the obtained solutions is assessed by means of Monte Carlo simulations. A section of conclusions ends the paper.
\section{Problem Statement}
This paper investigates the use of RL algorithms for the design of robust low-thrust interplanetary trajectories. For the sake of comparison with other research papers~\cite{ozaki2018stochastic, ozaki2020tube}, a three-dimensional time-fixed minimum-fuel Earth-Mars rendezvous mission is considered as a test case. The spacecraft leaves the Earth with zero hyperbolic excess velocity, and it is assumed to move in a Keplerian dynamical model under the sole influence of the Sun. The mission goal is to match Mars position and velocity at the final time, with minimum propellant consumption. The values of the initial position $\bm{r}_\Earth$ and velocity $\bm{v}_\Earth$ of the Earth, the final position $\bm{r}_\Mars$ and velocity $\bm{v}_\Mars$ of Mars, the total transfer time $t_f$, the initial spacecraft mass $m_0$, and the spacecraft engine parameters (maximum thrust $T_{max}$ and effective exhaust velocity $u_{eq}$) are the same as in the paper by Lantoine and Russell \cite{lantoine2012hybrid}, and are reported in Table~\ref{tab:data}. In all simulations, the physical quantities have been made non-dimensional by using as reference values the Earth-Sun mean distance $\bar{r} = \SI{149.6e6}{km}$, the corresponding circular velocity $\bar{v} = \sqrt{{\mu_\Sun}/{\bar{r}}}$, and the initial spacecraft mass $\bar{m} = m_0$.
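For reference, the snippet below (a sketch of ours, using the values in Table~\ref{tab:data}) computes the non-dimensionalization constants; the derived time unit $\bar{t} = \bar{r}/\bar{v}$ is implied by the choice of $\bar{r}$ and $\bar{v}$.
\begin{verbatim}
import numpy as np

MU_SUN = 132712440018.0           # [km^3/s^2]
R_REF  = 149.6e6                  # Earth-Sun mean distance [km]
V_REF  = np.sqrt(MU_SUN / R_REF)  # circular velocity at R_REF [km/s]
T_REF  = R_REF / V_REF            # derived time unit [s]
M_REF  = 1000.0                   # initial spacecraft mass [kg]

print(f"v_ref = {V_REF:.4f} km/s, t_ref = {T_REF / 86400:.2f} days")
# v_ref ~ 29.78 km/s, t_ref ~ 58.13 days
\end{verbatim}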
\begin{table}[htbp]
\caption{Problem data.}
\label{tab:data}
\centering
\begin{tabular}{c c}
\hline
Variable & Value\\
\hline
$N$ & 40 \\
$t_f,\, \si{days}$ & $358.79$ \\
$T_{max},\, \si{\newton}$ & $0.50$ \\
$u_{eq},\, \si{\kilo\meter/\second}$ & $19.6133$ \\
$m_0,\, \si{\kilo\gram}$ & $1000$ \\
$\mu_\Sun,\, \si{\kilo\meter^3/\second^2}$ & $132712440018$ \\
$\bm{r}_\Earth,\, \si{\kilo\meter}$ & $[-140699693,\, -51614428,\, 980]^T$\\
$\bm{v}_\Earth,\, \si{\kilo\meter/\second}$ & $[9.774596,\, -28.07828,\, 4.337725 \times 10^{-4}]^T$\\
$\bm{r}_\Mars,\, \si{\kilo\meter}$ & $[-172682023,\, 176959469,\, 7948912]^T$\\
$\bm{v}_\Mars,\,\si{\kilo\meter/\second}$ & $[-16.427384,\, -14.860506,\, 9.21486 \times 10^{-2}]^T$\\
\hline
\end{tabular}
\end{table}
The stochastic effects considered here are \textit{state uncertainties}, which refer to the presence of unmodeled dynamics; \textit{observation uncertainties}, related to measurement noise and/or inaccuracies in orbit determination that lead to imperfect knowledge of the spacecraft state; and \textit{control uncertainties}, which account for both random actuation errors (i.e., in the direction and magnitude of the thrust) and \textit{single} or \textit{multiple MTEs}, which correspond to null-thrust occurrences.
\subsection{Markov Decision Process}
Let us briefly introduce the mathematical formulation of a generic Markov Decision Process (MDP), which is required to properly set up the mathematical framework of deep RL algorithms. Let $\bm{s}_k \in S \subset \mathbb{R}^n$ be a vector that completely identifies the \textit{state} of the system (e.g., the spacecraft) at time $t_k$. In general, the complete system state at time $t_k$ is not available to the controller, which instead relies on an \textit{observation} vector $\bm{o}_k \in O \subset \mathbb{R}^m$. Observations might be affected by noise or uncertainty, and are thus written as a function of a random vector $\bm{\omega}_{o,k} \in \Omega_o \subset \mathbb{R}^{m_w}$. The commanded \textit{action} $\bm{a}_k$ at time $t_k$ is the output of a feedback control policy $\pi : O \rightarrow A$, that is: $\bm{a}_k = \pi(\bm{o}_k) \in A \subset \mathbb{R}^l$. The actual \textit{control} $\bm{u}_k \in A$ differs from the commanded action due to possible execution errors, modeled as a function of a stochastic control disturbance vector $\bm{\omega}_{a,k} \in \Omega_a \subset \mathbb{R}^{l_w}$. A stochastic, discrete-time dynamical model $f$ is considered for the system state. The uncertainty on the system dynamics at time $t_k$ is modeled as a random vector $\bm{\omega}_{s,k} \in \Omega_s \subset \mathbb{R}^{n_w}$.
As a result, the dynamical system evolution over time is described by the following equations:
\begin{align}
\bm{s}_{k+1} &= f(\bm{s}_k, \bm{u}_k, \bm{\omega}_{s, k}) \label{eq:MDP} \\
\bm{o}_{k} &= h(\bm{s}_{k}, t_k, \bm{\omega}_{o, k}) \label{eq:MDP4} \\
\bm{u}_k &= g(\bm{a}_k, \bm{\omega}_{a,k}) \label{eq:MDP3}\\
\bm{a}_k &= \pi(\bm{o}_k) \label{eq:MDP2}
\end{align}
The goal is to find the optimal control policy $\pi^\ast$ that maximizes the expected value of the discounted sum of rewards, which, in an episodic form, is:
\begin{equation}
J = \underset{\tau\sim \pi}{\mathbb{E}} \left[ \sum_{k = 0}^{N-1} { \gamma^k R(\bm{s}_k, \bm{u}_k, \bm{s}_{k+1}) } \right] \label{eq:obj}
\end{equation}
where $R(\bm{s}_k, \bm{u}_k, \bm{s}_{k+1})$ is the reward associated with transitioning from state $\bm{s}_k$ to state $\bm{s}_{k+1}$ under control $\bm{u}_k$, $\gamma \in (0,1]$ is a discount factor that is used to either encourage long-term planning ($\gamma = 1$) or favor short-term rewards ($\gamma \ll 1$), and $N$ is the number of steps in one episode. Note that $\mathbb{E}_{\tau\sim \pi}$ here denotes the expectation taken over a trajectory $\tau$, that is, a sequence of state-action pairs $\tau = \left\{ (\bm{s}_0,\,\bm{a}_0) ,\, \ldots \, (\bm{s}_{N-1},\,\bm{a}_{N-1}) \right\}$ sampled according to the closed-loop dynamics in Eqs.~\eqref{eq:MDP}-\eqref{eq:MDP2}. Note also that, in an episodic setting, $J = V^\pi(\bm{s}_0)$, where $V^\pi(\bm{s}_k)$ is the value function, defined as the expected return obtained by starting from state $\bm{s}_k$ and acting according to policy $\pi$ until the end of the episode:
\begin{equation}
V^{\pi}(\bm{s}_k) = \underset{\tau\sim \pi}{\mathbb{E}} \left[ \sum_{k' = k}^{N-1} { \gamma^{k'} R(\bm{s}_{k'}, \bm{u}_{k'}, \bm{s}_{k'+1}) } \right] \label{eq:Vpi}
\end{equation}
\subsection{Formulating an Earth-Mars Mission as a Markov Decision Process}
This general model is now specified for the Earth-Mars transfer problem at hand. During the mission, the spacecraft state $\bm{s}_k$ at any time step $t_k = k\, t_f / N,\, k \in [0,N]$, is identified by its inertial position $\bm{r}$ and velocity $\bm{v}$ with respect to the Sun, and by its total mass $m$:
\begin{equation}
\bm{s}_k = \left[\bm{r}_k^T, \bm{v}_k^T, m_k \right]^T \in \mathbb{R}^7
\end{equation}
The low-thrust trajectory is approximated as a series of ballistic arcs connected by impulsive $\Delta V$s, similar to the well-known Sims-Flanagan model~\cite{sims1999preliminary}. The magnitude of the $k$-th impulse is limited by the amount of $\Delta V$ that could be accumulated over the corresponding trajectory segment by operating the spacecraft engine at maximum thrust $T_{max}$:
\begin{equation}
\Delta V_{max, k} = \frac{T_{max}}{m_k} \frac{t_f}{N} \label{eq:DVmax-k}
\end{equation}
So, the commanded action at time $t_k$ corresponds to an impulsive $\Delta V$:
\begin{equation}
\bm{a}_k = \Delta \bm{V}_k \in [-\Delta V_{max, k}, \Delta V_{max, k}]^3 \subset \mathbb{R}^3 \label{eq:action}
\end{equation}
Since the spacecraft moves under Keplerian dynamics between any two time steps, in a deterministic scenario the spacecraft state can be propagated analytically with a closed-form transition function:
\begin{equation}
\begin{bmatrix} \bm{r}_{k+1} \\ \bm{v}_{k+1} \\ m_{k+1} \\ \end{bmatrix} = f(\bm{r}_k, \bm{v}_k, m_k, \Delta \bm{V}_k) = \begin{bmatrix} \hat f_k \bm{r}_k + \hat g_k (\bm{v}_k + \Delta \bm{V}_k) \\ \dot{\hat f}_k \bm{r}_k + \dot{\hat g}_k (\bm{v}_k + \Delta \bm{V}_k)\\ m_k \, \mbox{exp}\left({-\frac{|\Delta \bm{V}_k|}{u_{eq}}}\right) \\ \end{bmatrix} \label{eq:dyn_kep}
\end{equation}
where $\hat f_k$ and $\hat g_k$ are the Lagrange coefficients at the $k$-th step, defined as in Ref.~\citenum{bate1971fundamentals}, and the mass update is obtained through the Tsiolkovsky rocket equation. At time $t_f$, the final $\Delta V$ is calculated so as to match Mars velocity, that is:
\begin{equation}
\Delta \bm{V}_N = \min{\left(|\bm{v}_\Mars - \bm{v}_N|, \Delta V_{max,N}\right)} \frac{\bm{v}_\Mars - \bm{v}_N}{|\bm{v}_\Mars - \bm{v}_N|} \label{eq:DVmax-f}
\end{equation}
and the final spacecraft state is evaluated as:
\begin{align}
\bm{r}_f &= \bm{r}_N \\
\bm{v}_f &= \bm{v}_N + \Delta \bm{V}_N \\
m_f &= m_N \, \mbox{exp}\left({-{|\Delta \bm{V}_N|}/{u_{eq}}}\right)
\end{align}
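To make the transition function concrete, the following Python sketch (ours, with hypothetical function names) implements Eq.~\eqref{eq:dyn_kep}: it solves Kepler's equation for the change in eccentric anomaly by Newton iteration (assuming elliptic orbits), builds the Lagrange coefficients, and applies the Tsiolkovsky mass update.
\begin{verbatim}
import numpy as np

MU_SUN = 132712440018.0  # [km^3/s^2]

def kepler_step(r0, v0, dt, mu=MU_SUN):
    """Propagate (r0, v0) by dt on a Keplerian (elliptic) orbit
    via the Lagrange f and g coefficients."""
    r0n  = np.linalg.norm(r0)
    a    = 1.0 / (2.0 / r0n - np.dot(v0, v0) / mu)  # vis-viva
    sig0 = np.dot(r0, v0) / np.sqrt(mu)
    dE   = np.sqrt(mu / a**3) * dt                  # initial guess
    for _ in range(50):                             # Newton iteration
        F  = (a**1.5 * (dE - np.sin(dE)) + sig0 * a * (1 - np.cos(dE))
              + r0n * np.sqrt(a) * np.sin(dE) - np.sqrt(mu) * dt)
        dF = (a**1.5 * (1 - np.cos(dE)) + sig0 * a * np.sin(dE)
              + r0n * np.sqrt(a) * np.cos(dE))
        dE -= F / dF
        if abs(F / dF) < 1e-12:
            break
    f = 1 - (a / r0n) * (1 - np.cos(dE))
    g = dt - np.sqrt(a**3 / mu) * (dE - np.sin(dE))
    r1  = f * r0 + g * v0
    r1n = np.linalg.norm(r1)
    fd  = -np.sqrt(mu * a) * np.sin(dE) / (r1n * r0n)
    gd  = 1 - (a / r1n) * (1 - np.cos(dE))
    return r1, fd * r0 + gd * v0

def step(r, v, m, dV, dt, u_eq=19.6133):
    """One deterministic segment: impulse dV, coast dt, mass update."""
    r1, v1 = kepler_step(r, v + dV, dt)
    return r1, v1, m * np.exp(-np.linalg.norm(dV) / u_eq)

# Example: one 8.97-day segment starting from Earth's state (Table 1)
r_E = np.array([-140699693.0, -51614428.0, 980.0])
v_E = np.array([9.774596, -28.07828, 4.337725e-4])
print(step(r_E, v_E, 1000.0, np.zeros(3), 358.79 / 40 * 86400))
\end{verbatim}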
The (deterministic) observations collected at time $t_k$ are:
\begin{equation}
\bm{o}_k = \left[\bm{r}_k^T, \bm{v}_k^T, m_k, t_k \right]^T \in \mathbb{R}^8
\end{equation}
The value selected for the total number of time steps $N$ is reported in Table~\ref{tab:data}.
\subsubsection{State Uncertainties.}
For the sake of simplicity, uncertainties on the spacecraft dynamics are modeled as additive Gaussian noise on position and velocity at time $t_k$, $k \in (0, N]$, that is:
\begin{equation}
\bm{\omega}_{s, k} = \begin{bmatrix} \delta \bm{r}_k \\ \delta \bm{v}_k \end{bmatrix} \sim \mathcal{N}(\bm{0}_{6}, \bm{R}_{s,k}) \in \mathbb{R}^6
\end{equation}
where $\bm{R}_{s,k} = \mbox{diag}\left(\sigma_r^2\bm{I}_3, \sigma_v^2\bm{I}_3 \right)$ is the covariance matrix, $\bm{I}_{n}$ (respectively, $\bm{0}_{n}$) indicates an identity (respectively, null) matrix of dimension $n \times n$ (respectively, $n \times 1$), and $\sigma_r, \sigma_v$ are the standard deviations on position and velocity. So, the stochastic dynamical model is written as:
\begin{equation}
\begin{bmatrix} \bm{r}_{k+1} \\ \bm{v}_{k+1} \\ m_{k+1} \\ \end{bmatrix} = f(\bm{r}_k, \bm{v}_k, m_k, \bm{u}_k) + \begin{bmatrix} \delta \bm{r}_{k+1} \\ \delta \bm{v}_{k+1} \\ 0 \\ \end{bmatrix}
\end{equation}
\subsubsection{Observation Uncertainties.}
The uncertainty in the knowledge of spacecraft position and velocity due to errors in orbit determination is modeled as additive Gaussian noise on the deterministic observations at time $t_k$:
\begin{equation}
\bm{o}_k = \begin{bmatrix} \bm{r}_{k} \\ \bm{v}_{k} \\ m_{k} \\ t_k \end{bmatrix} + \begin{bmatrix} \delta \bm{r}_{o, k} \\ \delta \bm{v}_{o, k} \\ 0 \\ 0 \end{bmatrix}
\end{equation}
being:
\begin{equation}
\bm{\omega}_{o, k} = \begin{bmatrix} \delta \bm{r}_{o,k} \\ \delta \bm{v}_{o,k} \end{bmatrix} \sim \mathcal{N}(\bm{0}_{6}, \bm{R}_{s,k}) \in \mathbb{R}^6
\end{equation}
\subsubsection{Control Uncertainties.}
Control execution errors are modeled as a small three-dimensional rotation of the commanded $\Delta V$ vector, defined by the Euler angles $(\delta \phi, \delta \vartheta, \delta \psi)$, and a slight variation $\delta u$ of its modulus. The random variables $\delta \phi, \delta \vartheta, \delta \psi$ and $\delta u$ are assumed to be Gaussian, with standard deviations $\sigma_{\phi}, \sigma_{\vartheta}, \sigma_{\psi}$ and $\sigma_{u}$. So, the control disturbance vector at time $t_k$ is:
\begin{equation}
\bm{\omega}_{a, k} = \begin{bmatrix} \delta \phi_k \\ \delta \vartheta_k \\ \delta \psi_k \\ \delta u_k \end{bmatrix} \sim \mathcal{N}(\bm{0}_{4}, \bm{R}_{a,k}) \in \mathbb{R}^4
\end{equation}
where $\bm{R}_{a,k} = \mbox{diag}\left(\sigma_\phi^2, \sigma_\vartheta^2, \sigma_\psi^2, \sigma_u^2 \right)$ is the covariance matrix. The actual control $\bm{u}$ can be written as a function of the commanded action $\bm{a}$ at time $t_k$, $k \in [0, N)$, as:
\begin{equation}
\bm{u}_k = g(\bm{a}_k, \bm{\omega}_{a,k}) = (1 + \delta u_k) \bm{A}_k \bm{a}_k
\end{equation}
where the rotation matrix $\bm{A}_k$ is evaluated, under the small-angle assumption, as:
\begin{equation}
\bm{A}_k = \begin{bmatrix} 1 & - \delta \psi_k & \delta \vartheta_k\\ \delta \psi_k & 1 & - \delta \phi_k \\ - \delta \vartheta_k & \delta \phi_k & 1 \end{bmatrix}
\end{equation}
It is worth noting that, although the control disturbance vector is Gaussian, the effect obtained on the applied control is definitely non-Gaussian; for this reason, the solution methods of Refs.~\citenum{ozaki2018stochastic} and \citenum{ozaki2020tube} may not be applicable.
\subsubsection{Missed Thrust Events.}
Besides small control execution errors, the effect of one or more consecutive MTEs over the course of the mission is also investigated. An MTE is modeled as a complete lack of thrust, even when commanded, that occurs at a randomly chosen time $t_{\hat k} \in [0, N)$, so that $\bm{u}_{\hat k}=\bm{0}_{3}$. With probability $1-p_{mte}$ the missed thrust is recovered and never occurs again for the remaining steps; otherwise, the MTE persists for an additional time step. This procedure is repeated, but the MTE may last at most $n_{mte}$ successive time steps, that is, from $t_{\hat k}$ to $t_{\hat k +n_{mte}-1}$. The values used for the standard deviations and the other uncertainty model parameters introduced so far are reported in Table~\ref{tab:sigma}.
\begin{table}[htbp]
\caption{Uncertainty model parameters.}
\label{tab:sigma}
\centering
\begin{tabular}{c c c c c c c c}
\hline
$\sigma_r,\, \si{\kilo\meter}$ & $\sigma_v,\, \si{\kilo\meter/\second}$ & $\sigma_\phi,\, \si{deg}$ & $\sigma_\vartheta,\, \si{deg}$ & $\sigma_\psi,\, \si{deg}$ & $\sigma_u$ & $p_{mte}$ & $n_{mte}$ \\
\hline
$1.0$ & $0.05$ & $1.0$ & $1.0$ & $1.0$ & $0.05$ & $0.1$ & $3$ \\
\hline
\end{tabular}
\end{table}
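The control-error and MTE models described above translate directly into a few lines of code. The sketch below (ours; the names are hypothetical) draws the Euler angles and magnitude error from their Gaussian distributions, applies the small-angle rotation matrix, and samples a missed-thrust window with the persistence rule just described.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def perturb_control(a, sig_ang=np.deg2rad(1.0), sig_u=0.05):
    """Execution errors: small-angle rotation (d_phi, d_theta, d_psi)
    of the commanded impulse plus relative magnitude error du."""
    d_phi, d_th, d_psi = rng.normal(0.0, sig_ang, 3)
    du = rng.normal(0.0, sig_u)
    A = np.array([[ 1.0,   -d_psi,  d_th ],
                  [ d_psi,  1.0,   -d_phi],
                  [-d_th,   d_phi,  1.0 ]])
    return (1.0 + du) * A @ a

def sample_mte(N, p_mte=0.1, n_mte=3):
    """Missed-thrust window: start k_hat uniform in [0, N); the event
    persists with probability p_mte, for at most n_mte steps."""
    k_hat, length = rng.integers(0, N), 1
    while length < n_mte and rng.random() < p_mte:
        length += 1
    return set(range(k_hat, k_hat + length))

# Usage: zero the control on the missed-thrust steps
missed = sample_mte(N=40)
u_k = perturb_control(np.array([0.1, -0.2, 0.0]))
if 12 in missed:          # step index k = 12, for example
    u_k = np.zeros(3)
\end{verbatim}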
\subsubsection{Reward Function.}
The objective of the optimization procedure is to maximize the (expected) final mass of the spacecraft, while ensuring compliance with the terminal rendezvous constraints on position and velocity. For this reason, the reward $r_k$ collected by the agent at time $t_k$, for $k \in (0, N]$, is defined as:
\begin{equation}
r_k = -\mu_k -\lambda_1 \, e_{u,k-1} - \lambda_2 \, e_{s,k} \label{eq:rew}
\end{equation}
where:
\begin{align}
\mu_k &= \Delta m_k = \begin{cases} m_{k-1} - m_k & \mbox{if}\,\, k < N \\ m_{N-1} - m_f & \mbox{if}\,\, k = N \end{cases} \\
e_{u,k} &= \max{\left(0, |\bm{u}_{k}| - \Delta V_{max,k} \right)} \\
e_{s,k} &= \begin{cases} 0 & \mbox{if}\,\, k < N \\ \max{\left(0, \max{\left(\frac{|\bm{r}_f - \bm{r}_\Mars|}{|\bm{r}_\Mars|}, \frac{|\bm{v}_f - \bm{v}_\Mars|}{|\bm{v}_\Mars|}\right) - \varepsilon } \right)} & \mbox{if}\,\, k = N \end{cases}
\end{align}
Here $\mu_k$ is the cost function, that is, the consumed propellant mass, $e_{u,k}$ is the violation of the constraint on the maximum $\Delta V$ magnitude admissible for that segment (see Eq.~\ref{eq:DVmax-k}), and $e_{s,k}$ is the violation of the constraint acting on the final state of the spacecraft, up to a given tolerance $\varepsilon = \SI{e-3}{}$. The penalty factors $\lambda_1 = 100$ and $\lambda_2=50$ are used in the present work.
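A direct transcription of the reward (again a sketch of ours, with hypothetical names) may look as follows; note that the $\Delta V$-bound penalty at step $k$ refers to the control applied on the previous segment, as in Eq.~\eqref{eq:rew}.
\begin{verbatim}
import numpy as np

LAM1, LAM2, EPS = 100.0, 50.0, 1e-3

def reward(dm, u_prev, dV_max_prev, k, N,
           r_f=None, v_f=None, r_M=None, v_M=None):
    """Reward at step k: propellant cost plus weighted violations of the
    per-segment Delta-V bound and, at k = N, the terminal constraints."""
    e_u = max(0.0, np.linalg.norm(u_prev) - dV_max_prev)
    e_s = 0.0
    if k == N:
        err = max(np.linalg.norm(r_f - r_M) / np.linalg.norm(r_M),
                  np.linalg.norm(v_f - v_M) / np.linalg.norm(v_M))
        e_s = max(0.0, err - EPS)
    return -dm - LAM1 * e_u - LAM2 * e_s
\end{verbatim}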
\section{Reinforcement Learning}
\input{RL}
\section{Numerical Results}
\input{results.tex}
\section{Conclusion}
This paper presented a deep Reinforcement Learning (RL) framework to deal with the robust design of low-thrust interplanetary trajectories in the presence of different sources of uncertainty. The stochastic optimal control problem is first reformulated as a Markov Decision Process. Then, a state-of-the-art RL algorithm, named Proximal Policy Optimization (PPO), is adopted for the problem solution, and its prominent features with respect to similar policy-gradient methods are outlined. Preliminary numerical results were reported for a three-dimensional Earth-Mars mission, considering separately the effect of different types of uncertainty, namely, uncertainties on the dynamical model, on the observations, and on the applied control, as well as the presence of a single or of multiple consecutive missed thrust events. The obtained results show the capability of PPO to solve simple interplanetary transfer problems, such as the Earth-Mars mission considered here, in both deterministic and stochastic scenarios. The solution found in the deterministic case is in good agreement with the optimal solution provided by an indirect method. However, the high computational cost necessary to train the neural network discourages the use of a model-free RL algorithm in that circumstance. The power of RL becomes apparent when dealing with stochastic optimal control problems, where traditional methods are either cumbersome, impractical, or simply impossible to apply. Although the reported results are only preliminary, the presented solutions seem very promising, in terms of both payload mass and constraint enforcement. The methodology proposed here is quite general and can be implemented, with appropriate changes, to cope with a variety of spacecraft missions and uncertainty models. Also, extensions to arbitrary stochastic dynamical models (e.g., with possibly complex non-Gaussian perturbations) are straightforward. This is a major advantage with respect to other techniques presented in the literature based on ad-hoc extensions of traditional optimal control methods. The preliminary results proposed here pave the way for reinforcement learning approaches in the robust design of interplanetary trajectories. Additional work is necessary in order to increase both the efficiency of the learning process and the reliability of the solutions. The high computational cost calls for the use of asynchronous algorithms, where the two processes of policy rollout (for collecting experience) and policy update (for learning) run in parallel, so as to best exploit the massive parallelization allowed by high-performance computing clusters. Also, the use of recurrent neural networks should be investigated when dealing with non-Markov dynamical processes, as in the case of partial observability and multiple, correlated, missed thrust events. However, the most crucial point seems to be enhancing the constraint-handling capability of RL algorithms. The adoption of the $\varepsilon$-constraint relaxation is a modest contribution in that direction. More advanced formulations of the problem, such as the Constrained Markov Decision Process (CMDP), should be investigated in the future for this purpose.
\bibliographystyle{AAS_publication}
\subsection{Deterministic Optimal Trajectory}
The ability of the presented methodology to deal with traditional, deterministic, optimal control problems is investigated first, by comparing the solution provided by the control policy $\pi^{unp}$ trained in the deterministic (unperturbed) environment (see Eq.~\eqref{eq:dyn_kep}) with the solution of the original Earth-Mars low-thrust transfer problem, found by using a well-established indirect optimization method that the authors have applied to other interplanetary transfers \cite{colasurdo2014tour}. The two solutions are very close to each other in terms of trajectory and control direction. Also, the final mass of the RL solution ($\SI{600.23}{kg}$) is in good agreement with the (true) optimal mass obtained by the indirect method ($\SI{599.98}{\kilo\gram}$). The slight difference is partly due to the fact that RL satisfies the terminal constraints with a lower accuracy ($10^{-3}$ in RL vs $10^{-9}$ in the indirect method), and partly due to the (approximate) discrete-time, impulsive dynamical model adopted in the MDP transcription. However, when applying RL to the solution of deterministic optimal control problems, two major downsides arise. First, terminal constraints cannot be explicitly accounted for, and constraint violations must be introduced in the reward function as (weighted) penalty terms. As a result, the accuracy of constraint satisfaction is generally looser than with traditional methods for solving optimal control problems.
Second, RL is quite computationally intensive, even when applied to problems as simple as the deterministic rendezvous mission considered here. This is mainly because PPO is a model-free algorithm; hence, the knowledge of the underlying (analytical) dynamical model is not exploited at all. The only way the agent can obtain satisfactory results is to acquire as much experience (i.e., as many samples) as possible about the environment. In this respect, the solution of the deterministic problem reported here took about 2--3 hours (depending on the desired accuracy on the constraints) on a computer equipped with an Intel Core i7-9700K CPU @\SI{3.60}{\giga\Hz}, whereas the indirect method took just a few seconds.
\subsection{Robust Trajectory Design}
Besides the unperturbed, deterministic mission scenario (labeled ${unp}$), the following stochastic case studies are considered: i) state uncertainties ($st$), ii) observation uncertainties ($obs$), iii) control uncertainties ($ctr$), iv) a single missed thrust event ($mte,1$), and v) multiple missed thrust events ($mte,2$). Training the G\&CNet{} in one of these environments leads to the definition of a corresponding policy, named $\pi^{unp}$, $\pi^{st}$, $\pi^{obs}$, $\pi^{ctr}$, $\pi^{mte,1}$, and $\pi^{mte,2}$, respectively. For each policy, the reference trajectory, which should be understood as robust to that source of uncertainty, is obtained by applying in the unperturbed environment a (deterministic) version of the policy (i.e., one that always takes the action corresponding to the largest probability, instead of sampling from the probability distribution $\bm{a}\sim\pi(\cdot|\bm{s})$), and recording the commands and spacecraft states along the flight.
\begin{figure} [!htbp]
\centering
\includegraphics[width=0.75\textwidth, trim={0cm 0.5cm 0cm 1cm},clip]{Figures/comparison/compare2D.pdf}
\caption{Earth-Mars trajectories corresponding to different robust policies. Differences with respect to the unperturbed policy trajectory are up-scaled by 5 for illustration purposes.}
\label{fig:robust_comp}
\end{figure}
Figure~\ref{fig:robust_comp} shows the robust trajectories obtained after a training that lasts up to $\SI{200}{\mega steps}$, which roughly corresponds to 10--12 computing hours. For each case, only the best solution found during training (also accounting for the closed-loop behavior described in the next section) is reported. To add robustness, these trajectories tend to approach Mars orbit earlier than the optimal solution does, so as to maximize the probability of meeting the terminal constraints even in the presence of late perturbations and/or control errors.
\input{Tables/tab-robust}
Table~\ref{tab:nom} summarizes the main features of these trajectories, that is, the final spacecraft mass $m_f$, the constraint violations $\Delta r_f$ and $\Delta v_f$, and the cumulative reward $J$, as well as some environment-specific training settings. The solutions corresponding to a policy trained in a stochastic environment with perturbations on either the state ($\pi^{st}$), the observations ($\pi^{obs}$), or the control direction and magnitude ($\pi^{ctr}$) satisfy the terminal constraints within the given tolerance ($10^{-3}$). In those cases, robustness is obtained by sacrificing less than 1--2\% of the payload mass.
Instead, the solutions $\pi^{mte,1}$ and $\pi^{mte,2}$, trained in the MTE environments, tend to slightly overshoot Mars orbit, since they account, even in the unperturbed scenario, for the possible presence of MTEs, whose probability of occurrence during training has been exaggerated in this work for research purposes (at least one MTE must occur in any environment realization). Also, the final spacecraft mass obtained in these two cases is considerably lower than in the previous ones. In all presented cases, the error on the final velocity is zero. This result should not surprise the reader. In fact, the last $\Delta V$ is computed algebraically as the difference between the final spacecraft velocity and Mars velocity. Thus, the velocity constraint is automatically satisfied whenever this (computed) $\Delta V$ has a magnitude lower than the maximum admissible by the Sims-Flanagan model (see Eq.~\ref{eq:DVmax-f}).
\begin{figure}[!htbp]
\centering
\begin{subfigure}[t]{.49\textwidth} \centering \includegraphics[width = 1\textwidth]{Figures/comparison/controls_unperturbed.pdf} \caption{Unperturbed policy.} \label{fig:dv_nom} \end{subfigure}%
\hfill
\begin{subfigure}[t]{.49\textwidth} \centering \includegraphics[width = 1\textwidth]{Figures/comparison/controls_state.pdf} \caption{State uncertainties.} \label{fig:dv_state} \end{subfigure}
\begin{subfigure}[t]{.49\textwidth} \centering \includegraphics[width = 1\textwidth]{Figures/comparison/controls_observation.pdf} \caption{Observation uncertainties.} \label{fig:dv_obs} \end{subfigure}%
\hfill
\begin{subfigure}[t]{.49\textwidth} \centering \includegraphics[width = 1\textwidth]{Figures/comparison/controls_control.pdf} \caption{Control uncertainties.} \label{fig:dv_con} \end{subfigure}
\begin{subfigure}[t]{.49\textwidth} \centering \includegraphics[width = 1\textwidth]{Figures/comparison/controls_single_MTE.pdf} \caption{Single MTE.} \label{fig:dv_mte} \end{subfigure}%
\hfill
\begin{subfigure}[t]{.49\textwidth} \centering \includegraphics[width = 1\textwidth]{Figures/comparison/controls_multi_MTE.pdf} \caption{Multiple MTEs.} \label{fig:dv_mtes} \end{subfigure}
\caption{Magnitude of the $\pmb{\Delta V}$s along the robust trajectories. Dashed lines indicate the maximum $\pmb{\Delta V}$ admitted at each time step by the Sims-Flanagan model.}
\label{fig:DVrob}
\end{figure}
Figure~\ref{fig:DVrob} shows the distribution of the magnitude of the $\Delta V$s over the flight time for the different robust trajectories. Dashed lines indicate the maximum allowed $\Delta V$ at each step, according to the Sims-Flanagan model (see Eq.~\eqref{eq:DVmax-k}). As a general comment, the robust trajectories trained in the stochastic environments show a lower $\Delta V$ magnitude at the beginning and at the end of the transfer with respect to the optimal deterministic solution $\pi^{unp}$, and, correspondingly, a higher magnitude in the central portion of the transfer. This sub-optimal distribution of the thrust is responsible for the additional propellant consumption of the robust solutions. Also, the applied $\Delta V$ is consistently lower than the maximum available in all cases except the unperturbed one, where an almost bang-off-bang pattern, which is the expected solution of this kind of optimal control problem, may be recognized. This is a distinctive feature of robust trajectories, which must satisfy the constraint on the maximum admissible value of $\Delta V$ while leaving room for efficient correction maneuvers.
As a final remark, in the two solutions obtained with policies $\pi^{mte,1}$ and $\pi^{mte,2}$, the last $\Delta V$ is considerably smaller than in the other solutions, probably because the two policies try to ensure compliance with the final velocity constraint regardless of the possible occurrence of an MTE near the final time.
\subsection{Closed-Loop Mission Analysis}
\begin{figure}[!htbp]
\centering
\hfill
\begin{subfigure}[t]{.49\textwidth} \centering \includegraphics[width = 0.95\textwidth]{Figures/state/MC2D_LQ.png} \caption{$\pi^{st}$} \label{fig:MC_pert} \end{subfigure}
\hfill
\begin{subfigure}[t]{.49\textwidth} \centering \includegraphics[width = 0.95\textwidth]{Figures/observation/MC2D_LQ.png} \caption{$\pi^{obs}$} \label{fig:obs} \end{subfigure}%
\begin{subfigure}[t]{.49\textwidth} \centering \includegraphics[width = 0.95\textwidth]{Figures/control/MC2D_LQ.png} \caption{$\pi^{ctr}$} \label{fig:control} \end{subfigure}
\hfill
\begin{subfigure}[t]{.49\textwidth} \centering \includegraphics[width = 0.95\textwidth]{Figures/MTE/MC2D_LQ.png} \caption{$\pi^{mte,1}$} \label{fig:MTE} \end{subfigure}%
\begin{subfigure}[t]{.49\textwidth} \centering \includegraphics[width = 0.95\textwidth]{Figures/MTEs/MC2D_LQ.png} \caption{$\pi^{mte,2}$} \label{fig:MTEs} \end{subfigure}
\hfill
\begin{subfigure}[t]{.49\textwidth} \centering \includegraphics[width = 0.95\textwidth]{Figures/nominal/MC2D_state_LQ.png} \caption{$\pi^{unp}$} \label{fig:MC_det} \end{subfigure}%
\caption{Monte Carlo simulations realized by running each policy in the correspondingly-perturbed environment, except for the unperturbed policy, which is run in the state-perturbed one. Differences are exaggerated by a factor 5 for illustration purposes.}
\label{fig:MCs}
\end{figure}
The closed-loop performance of the G\&CNet{s} in their respective (stochastic) training environments is measured by using a Monte Carlo approach, which consists of running each state-feedback policy in 500 randomly-generated environment realizations, or episodes. Figure~\ref{fig:MCs} shows the results of the Monte Carlo campaigns in terms of the spacecraft trajectory in each randomly-generated episode. Specifically, in each figure, the dark-blue line represents the robust trajectory, with the light-blue arrows indicating the $\Delta V$s, while each gray line represents the trajectory obtained in one of the randomly-generated episodes. One may notice that, for policies $\pi^{st}$, $\pi^{obs}$, $\pi^{ctr}$ and $\pi^{mte,1}$, the Monte Carlo trajectories have a greater dispersion in the central part of the mission, which tends to reduce, and disappears almost entirely, as the final time approaches, because of the imposed terminal constraints. This is not completely true for the case of multiple MTEs, where a small number of trajectories (15 out of 500) clearly miss the target by far.
\input{Tables/tab-MC}
Table~\ref{tab:MC} shows that RL is able to cope with all these different stochastic scenarios effectively. Indeed, despite the severity of the considered perturbations/uncertainties, the success rate (that is, the percentage of solutions that meet the final constraints within the prescribed accuracy) is rather high: over 70\% in most cases, and up to 80\% when only additive Gaussian state perturbations are considered. For the sake of comparison, we also report the results obtained by running policy $\pi^{unp}$ in the state-perturbed stochastic environment (Fig.~\ref{fig:MC_det}).
While the differences between the robust trajectories corresponding to policies $\pi^{unp}$ and $\pi^{st}$ seem minimal, the effects in the closed-loop simulations are apparent. Indeed, in none of the episodes was policy $\pi^{unp}$ able to reach the imposed accuracy on the terminal state, while policy $\pi^{st}$ succeeds in $80\%$ of the cases. Similar results are obtained when running policy $\pi^{unp}$ in any of the other proposed perturbed (stochastic) environments. More precisely, the success rate is zero in both the state- and observation-uncertainty environments, it is 8\% in the case of control uncertainties on thrust magnitude/direction, and it roughly doubles in the case of single (18.8\%) or multiple (16.8\%) MTEs. The preliminary results found with policies $\pi^{mte,1}$ and $\pi^{mte,2}$ show that RL performance deteriorates substantially in the presence of MTEs. By looking at Figure~\ref{fig:errors}, it is clear that, in most cases ($69.8\%$ with a single MTE, $70.4\%$ with multiple MTEs), the policy manages to recover from the complete absence of thrust and meets the final constraints within the imposed tolerance ($10^{-3}$). However, in a few unfortunate scenarios, the MTE occurs at ``crucial points'' of the trajectory, that is, near the final time, and the policy is not able to compensate for the missing $\Delta V$s in any way. As a result, the terminal constraints cannot be met. This fact is confirmed by the high variance obtained on the constraint violation in both mission scenarios (single and multiple MTEs). Analogously, the drop in payload mass (Fig.~\ref{fig:mass}) highlights the most critical points in terms of thrust efficiency. Note that the final rise in the achieved payload mass stems from the looser satisfaction of the terminal constraints and might therefore be misleading. \begin{figure}[!htbp] \centering \begin{subfigure}[t]{.47\textwidth} \centering \includegraphics[width = 1\textwidth]{Figures/MTE/MTE_error.pdf} \caption{Terminal constraint violation.} \label{fig:errors} \end{subfigure}% \hfill \begin{subfigure}[t]{.51\textwidth} \centering \includegraphics[width = 1\textwidth]{Figures/MTE/MTE_mass.pdf} \caption{Final spacecraft mass.} \label{fig:mass} \end{subfigure} \caption{Terminal constraint violation (a) and final spacecraft mass (b) obtained with policy $\pmb{\pi^{mte,1}}$ by varying the MTE location $\pmb{\hat{k}}$.} \label{fig:mte_analysis} \end{figure}
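The sensitivity analysis of Figure~\ref{fig:mte_analysis} can be sketched with the same hypothetical hooks used in the Monte Carlo snippet above, by forcing the (single) MTE to start at each segment $\hat{k}$ in turn; \texttt{make\_env(k\_hat)} is an assumed constructor, not a library function.
\begin{verbatim}
def mte_location_sweep(policy, make_env, n_steps=40):
    """Terminal constraint violation and final mass as functions of
    the MTE location k_hat (cf. Fig. mte_analysis)."""
    violations, masses = [], []
    for k_hat in range(n_steps):
        env = make_env(k_hat)            # MTE forced to start at k_hat
        obs, done = env.reset(), False
        while not done:
            obs, _, done, info = env.step(policy(obs))
        violations.append(max(info["err_r"], info["err_v"]))
        masses.append(info["m_f"])
    return violations, masses
\end{verbatim}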
\section{Introduction} In recent years, the possibility of using small or micro-spacecraft in interplanetary missions has been drawing the attention of scientists and engineers around the world who are interested in reducing both the development time and the cost of a mission without significantly affecting its scientific return. The first deep-space micro-spacecraft, PROCYON \cite{campagnola2015low}, was developed in little more than a year in 2014 by the University of Tokyo and JAXA, at a very low cost compared to standard-size spacecraft. Despite the malfunctioning of the main thruster, the PROCYON mission has been widely regarded as a success, paving the way for similar mission concepts by other space agencies. In 2018, NASA launched the first two interplanetary CubeSats, part of the MarCO (Mars Cube One) mission \cite{asmar2014mars}, which successfully accomplished their goal of providing a real-time communication link to Earth during the entry, descent, and landing phase of the InSight lander. The same year, ESA's first stand-alone deep-space CubeSat mission, M–Argo (Miniaturised – Asteroid Remote Geophysical Observer), was announced \cite{walker2017miniaturised}; it is likely to be ready for launch in mid-2021 at the earliest. Low-thrust electric propulsion is a key technology for enabling small/micro-satellite interplanetary missions, as it provides the spacecraft with a significantly lower propellant consumption. However, because of the limited budget, micro-spacecraft generally mount components with a low technological readiness level. This increases the risk of incurring unexpected control execution errors and/or missed thrust events (MTEs) during any of the long thrusting periods. In addition, small spacecraft have limited ground station access, and larger uncertainties in the state knowledge (i.e., in the observations for orbit determination) should be expected with respect to standard missions. Typically, when designing the mission, engineers take these uncertainties into account \textit{a posteriori}\cite{rayman2007coupling, laipert2015automated}, by means of time-consuming iterative procedures that often lead to suboptimal solutions and over-conservative margins. This design methodology is particularly unsuitable for micro-spacecraft missions, where large propellant margins and system redundancy are almost completely ruled out. In this respect, recent works have attempted to address the robust design of interplanetary trajectories by using novel optimization techniques. As an example, the problem of designing optimal risk-aware trajectories, which guarantee the safety of the spacecraft when it operates in uncertain environments, was addressed by applying chance-constrained optimal control~\cite{ono2013probabilistic}, combined either with a convex optimization approach, to deal with impulsive maneuvers~\cite{oguri2019risk}, or with a direct/indirect hybrid optimization method, to deal with continuous thrust~\cite{oguri2019risk2}. Stochastic Differential Dynamic Programming (SDDP) was applied to interplanetary trajectory design in the presence of Gaussian-modeled state uncertainties \cite{ozaki2018stochastic, ozaki2020tube}.
Also, the robust design of a low-thrust interplanetary transfer to a near-Earth asteroid was performed by using evidence theory to model epistemic uncertainties in the performance of the main thruster and in the magnitude of the departure hyperbolic excess velocity~\cite{dicarlo2019robust}. Belief-based transcription procedures for the stochastic optimal control problem were proposed for the robust design of space trajectories under stochastic and epistemic uncertainties~\cite{greco2018intrusive, greco2020direct}, also incorporating navigation analysis in the formulation to update the knowledge of the spacecraft state in the presence of observations~\cite{greco2020robust}. \subsection{Deep Learning in Spaceflight Mechanics} The interest in the application of deep learning techniques to optimally and robustly solve control problems has been rapidly increasing in recent years, especially for space applications. In this context, the term G\&CNet{} (namely, Guidance and Control Network) was coined at the European Space Agency \cite{izzo2019survey} to refer to an on-board system that provides real-time guidance and control functionalities to the spacecraft by means of a Deep Neural Network (DNN) that replaces traditional guidance and control architectures. DNNs are among the most versatile and powerful machine learning tools, thanks to their unique capability of accurately approximating complex, nonlinear input-output functions, provided that a sufficiently large amount of data (training set) consisting of sample input-output pairs is available \cite{hornik1990universal}. Two alternative, and quite different, approaches can be used for training a G\&CNet{} to solve an optimal control problem (OCP), depending on what training data are used and how they are collected. In \textit{Behavioral Cloning} (BC), given a set of trajectories from an expert (that is, labeled observation-control pairs), the network is trained to reproduce (or clone) the expert behavior. Usually, these trajectories are obtained as the solution of a (deterministic) optimal control problem with randomized boundary conditions. Behavioral cloning has been successfully used to train a fast-execution G\&CNet{} to control a spacecraft during a fuel-optimal low-thrust Earth-Venus transfer~\cite{izzo2019interplanetary}, as well as during a landing maneuver in a simplified dynamical model~\cite{sanchez2016learning}. This approach proved to be computationally efficient, and it benefits from state-of-the-art implementations of supervised learning algorithms \cite{tensorflow2015-whitepaper}. However, it shows a number of downsides that make it unsuitable for robust trajectory design. In fact, the BC effectiveness rapidly worsens when the G\&CNet{} is asked to solve problems that fall outside the set of expert demonstrations it was trained on. As a consequence, when dealing with Stochastic Optimal Control Problems (SOCPs), a drop in performance (or even divergence) may occur when, because of uncertainty, the flight trajectory starts moving away from the training set domain, which is typically populated by solutions coming from deterministic OCPs. To recover a correct behavior, a DAGGER (Dataset Aggregation) algorithm can be used. In this case, the solution process features an additional loop where new training data are provided ``on-line'' by an expert (e.g., an OCP solver) as they are required to cover previously unknown situations.
This approach has been effectively exploited to improve the network accuracy in controlling a lander during a powered descent on the Lunar surface~\cite{furfaro2018deep}. However, the effectiveness of BC for robust trajectory design remains doubtful, especially when solutions from deterministic OCPs are used as expert demonstrations. Recently, an attempt has been made to train a network by BC with a training set encompassing trajectories perturbed by random MTEs \cite{rubinsztejn2019neural}, showing promising results. However, the possibility of having other types of state and control uncertainties has not been addressed yet. A different approach is represented by \textit{Reinforcement Learning} (RL), which involves learning from experience rather than from expert demonstrations. In RL, a software agent (e.g., the G\&CNet{}) autonomously learns how to behave in a (possibly) unknown dynamical environment, modeled as a Markov Decision Process (MDP), so as to maximize some utility-based function that plays the role of the merit index in traditional optimal control problems. In contrast to the BC approach, there is no pre-assigned data set of observation-control pairs to learn from, so the agent is not told in advance what actions to take in a given set of states. Instead, the agent is left free to explore the environment, by repeatedly interacting with a sufficiently large number of its realizations. The only feedback the agent receives is a numerical reward collected at each time step, which helps the agent understand how good or bad its current performance is. In this framework, the final goal of the RL agent is to learn the control policy that maximizes the expected cumulative sum of rewards over a trajectory. Because MDPs allow only scalar reward functions, a careful choice, or shaping, of the reward is mandatory to efficiently guide the agent during training, while ensuring compliance with any problem constraints. Deep RL methods have obtained promising results in a number of spaceflight dynamics problems, such as low-thrust interplanetary trajectory design \cite{miller2019low, miller2019interplanetary, sullivan2020using}, 3-DoF and 6-DoF landing guidance with application to a powered descent~\cite{gaudet2020deep}, trajectory optimization in the cislunar environment \cite{scorsoglio2019actor, lafarge2020guidance}, and the design of guidance algorithms for rendezvous and docking maneuvers~\cite{broida2019spacecraft, hovell2020deep}. This paper aims to investigate the use of Reinforcement Learning for the robust design of a low-thrust interplanetary trajectory in the presence of uncertainty. Specifically, uncertainties in the spacecraft state (caused by unmodeled dynamical effects), in orbit determination (because of inaccurate state knowledge), and in the applied control (due to execution errors and missed thrust events) will be considered in the present analysis. RL has been selected as the optimization algorithm since it has the clear advantage of not requiring the \textit{a priori} generation of any optimal trajectory to populate the training set, as data are gathered by directly running the current best-found control policy in the stochastic environment. In this way, the agent is able to progressively and autonomously improve the performance and robustness of its control policy, so as to achieve the mission goals regardless of the uncertainties that may arise. This feature makes RL an ideal candidate for the problem at hand.
At present, most of the research encompassing RL for spacecraft trajectory design deals exclusively with deterministic environments. Thus, one of the main contributions of this paper is the investigation of the extension of RL applicability to stochastic scenarios. The paper is organized as follows. First, the optimization problem is formulated as a Markov Decision Process, and the mathematical models used to describe the state, observation, and control uncertainties acting on the system are defined. The expression of the reward function, which includes both the merit index and the problem constraints (e.g., fixed final spacecraft position and velocity), is given as well. Next, after a brief introduction of the basic concepts and notation of Reinforcement Learning, the RL algorithm used in this work, named Proximal Policy Optimization, is described in detail. Furthermore, the configuration selected for the DNN and the values used for the algorithm hyper-parameters are reported. Then, numerical results are presented for the case study of the paper, that is, a time-fixed low-thrust Earth-Mars rendezvous mission. Specifically, the effect of each source of uncertainty on the system dynamics is analysed independently, and the obtained results are compared in terms of trajectory robustness and optimality. Finally, the reliability of the obtained solutions is assessed by means of Monte Carlo simulations. A section of conclusions ends the paper. \section{Problem Statement} This paper investigates the use of RL algorithms for the design of robust low-thrust interplanetary trajectories. For the sake of comparison with other research papers~\cite{ozaki2018stochastic, ozaki2020tube}, a three-dimensional time-fixed minimum-fuel Earth-Mars rendezvous mission is considered as a test case. The spacecraft leaves the Earth with zero hyperbolic excess velocity, and it is assumed to move in a Keplerian dynamical model under the sole influence of the Sun. The mission goal is to match Mars position and velocity at the final time, with minimum propellant consumption. The values of the initial position $\bm{r}_\Earth$ and velocity $\bm{v}_\Earth$ of the Earth, the final position $\bm{r}_\Mars$ and velocity $\bm{v}_\Mars$ of Mars, the total transfer time $t_f$, the initial spacecraft mass $m_0$, and the spacecraft engine parameters (maximum thrust $T_{max}$ and effective exhaust velocity $u_{eq}$) are the same as in the paper by Lantoine and Russell \cite{lantoine2012hybrid}, and are reported in Table~\ref{tab:data}. In all simulations, the physical quantities have been made non-dimensional by using as reference values the Earth-Sun mean distance $\bar{r} = \SI{149.6e6}{km}$, the corresponding circular velocity $\bar{v} = \sqrt{{\mu_\Sun}/{\bar{r}}}$, and the initial spacecraft mass $\bar{m} = m_0$.
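In code, one consistent choice of units is sketched below; the numerical values come from Table~\ref{tab:data}, while the derived time unit and all variable names are ours.
\begin{verbatim}
import numpy as np

MU_SUN = 132712440018.0            # km^3/s^2
R_REF = 149.6e6                    # km, Earth-Sun mean distance
V_REF = np.sqrt(MU_SUN / R_REF)    # km/s, circular velocity at R_REF
T_REF = R_REF / V_REF              # s, derived time unit
M_REF = 1000.0                     # kg, initial spacecraft mass

# Example: Earth's initial state in nondimensional units
r_earth = np.array([-140699693.0, -51614428.0, 980.0]) / R_REF
v_earth = np.array([9.774596, -28.07828, 4.337725e-4]) / V_REF
\end{verbatim}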
\begin{table}[htbp] \caption{Problem data.} \label{tab:data} \centering \begin{tabular}{c c} \hline Variable & Value\\ \hline $N$ & 40 \\ $t_f,\, \si{days}$ & $358.79$ \\ $T_{max},\, \si{\newton}$ & $0.50$ \\ $u_{eq},\, \si{\kilo\meter/\second}$ & $19.6133$ \\ $m_0,\, \si{\kilo\gram}$ & $1000$ \\ $\mu_\Sun,\, \si{\kilo\meter^3/\second^2}$ & $132712440018$ \\ $\bm{r}_\Earth,\, \si{\kilo\meter}$ & $[-140699693,\, -51614428,\, 980]^T$\\ $\bm{v}_\Earth,\, \si{\kilo\meter/\second}$ & $[9.774596,\, -28.07828,\, 4.337725 \times 10^{-4}]^T$\\ $\bm{r}_\Mars,\, \si{\kilo\meter}$ & $[-172682023,\, 176959469,\, 7948912]^T$\\ $\bm{v}_\Mars,\,\si{\kilo\meter/\second}$ & $[-16.427384,\, -14.860506,\, 9.21486 \times 10^{-2}]^T$\\ \hline \end{tabular} \end{table} The stochastic effects considered here are \textit{state uncertainties}, which refer to the presence of unmodeled dynamics, \textit{observation uncertainties}, related to measurement noise and/or inaccuracies in orbit determination that lead to imperfect knowledge of the spacecraft state, and \textit{control uncertainties}, which account for both random actuation errors (i.e., in the direction and magnitude of the thrust) and \textit{single} or \textit{multiple MTEs}, which correspond to null-thrust occurrences. \subsection{Markov Decision Process} Let us briefly introduce the mathematical formulation of a generic Markov Decision Process (MDP), which is required to properly set up the mathematical framework of deep RL algorithms. Let $\bm{s}_k \in S \subset \mathbb{R}^n$ be a vector that completely identifies the \textit{state} of the system (e.g., the spacecraft) at time $t_k$. In general, the complete system state at time $t_k$ is not available to the controller, which instead relies on an \textit{observation} vector $\bm{o}_k \in O \subset \mathbb{R}^m$. Observations might be affected by noise or uncertainty, and are thus written as a function of a random vector $\bm{\omega}_{o,k} \in \Omega_o \subset \mathbb{R}^{m_w}$. The commanded \textit{action} $\bm{a}_k$ at time $t_k$ is the output of a state-feedback control policy $\pi : O \xrightarrow{} A$, that is: $\bm{a}_k = \pi(\bm{o}_k) \in A \subset \mathbb{R}^l$. The actual \textit{control} $\bm{u}_k \in A$ differs from the commanded action due to possible execution errors, modeled as a function of a stochastic control disturbance vector $\bm{\omega}_{a,k} \in \Omega_a \subset \mathbb{R}^{l_w}$. A stochastic, time-discrete dynamical model $f$ is considered for the system state. The uncertainty on the system dynamics at time $t_k$ is modeled as a random vector $\bm{\omega}_{s,k} \in \Omega_s \subset \mathbb{R}^{n_w}$.
As a result, the dynamical system evolution over time is described by the following equations: \begin{align} \bm{s}_{k+1} &= f(\bm{s}_k, \bm{u}_k, \bm{\omega}_{s, k}) \label{eq:MDP} \\ \bm{o}_{k} &= h(\bm{s}_{k}, t_k, \bm{\omega}_{o, k}) \label{eq:MDP4} \\ \bm{u}_k &= g(\bm{a}_k, \bm{\omega}_{a,k}) \label{eq:MDP3}\\ \bm{a}_k &= \pi(\bm{o}_k) \label{eq:MDP2} \end{align} The problem goal is to find the optimal control policy $\pi^\ast$ that maximizes the expected value of the discounted sum of rewards, which, in an episodic form, is: \begin{equation} J = \underset{\tau\sim \pi}{\mathbb{E}} \left[ \sum_{k = 0}^{N-1} { \gamma^k R(\bm{s}_k, \bm{u}_k, \bm{s}_{k+1}) } \right] \label{eq:obj} \end{equation} where $R(\bm{s}_k, \bm{u}_k, \bm{s}_{k+1})$ is the reward associated with transitioning from state $\bm{s}_k$ to state $\bm{s}_{k+1}$ due to control $\bm{u}_k$, $\gamma \in (0,1]$ is a discount factor used either to encourage long-term planning ($\gamma = 1$) or to favor short-term rewards ($\gamma \ll 1$), and $N$ is the number of steps in one episode. Note that $\mathbb{E}_{\tau\sim \pi}$ here denotes the expectation taken over a trajectory $\tau$, that is, a sequence of state-action pairs $\tau = \left\{ (\bm{s}_0,\,\bm{a}_0) ,\, \ldots \, (\bm{s}_{N-1},\,\bm{a}_{N-1}) \right\}$ sampled according to the closed-loop dynamics in Eqs.~\eqref{eq:MDP}-\eqref{eq:MDP2}. Also note that, in an episodic setting, $J = V^\pi(\bm{s}_0)$, where $V^{\pi}(\bm{s}_k)$ is the value function, defined as the expected return obtained by starting from state $\bm{s}_k$ and acting according to policy $\pi$ until the end of the episode: \begin{equation} V^{\pi}(\bm{s}_k) = \underset{\tau\sim \pi}{\mathbb{E}} \left[ \sum_{k' = k}^{N-1} { \gamma^{k'} R(\bm{s}_{k'}, \bm{u}_{k'}, \bm{s}_{k'+1}) } \right] \label{eq:Vpi} \end{equation} \subsection{Formulating an Earth-Mars Mission as a Markov Decision Process} This general model is now specified for the Earth-Mars transfer problem at hand. During the mission, the spacecraft state $\bm{s}_k$ at any time step $t_k = k\, t_f / N,\, k \in [0,N]$, is identified by its inertial position $\bm{r}$ and velocity $\bm{v}$ with respect to the Sun, and by its total mass $m$: \begin{equation} \bm{s}_k = \left[\bm{r}_k^T, \bm{v}_k^T, m_k \right]^T \in \mathbb{R}^7 \end{equation} The low-thrust trajectory is approximated as a series of ballistic arcs connected by impulsive $\Delta V$s, similarly to what is done in the well-known Sims-Flanagan model~\cite{sims1999preliminary}. The magnitude of the $k$-th impulse is limited by the amount of $\Delta V$ that could be accumulated over the corresponding trajectory segment by operating the spacecraft engine at maximum thrust $T_{max}$: \begin{equation} \Delta V_{max, k} = \frac{T_{max}}{m_k} \frac{t_f}{N} \label{eq:DVmax-k} \end{equation} So, the commanded action at time $t_k$ corresponds to an impulsive $\Delta V$: \begin{equation} \bm{a}_k = \Delta \bm{V}_k \in [-\Delta V_{max, k}, \Delta V_{max, k}]^3 \subset \mathbb{R}^3.
\label{eq:action} \end{equation} Since the spacecraft moves under Keplerian dynamics between any two time steps, in a deterministic scenario the spacecraft state can be propagated analytically with a closed-form transition function: \begin{equation} \begin{bmatrix} \bm{r}_{k+1} \\ \bm{v}_{k+1} \\ m_{k+1} \\ \end{bmatrix} = f(\bm{r}_k, \bm{v}_k, m_k, \Delta \bm{V}_k) = \begin{bmatrix} \hat f_k \bm{r}_k + \hat g_k (\bm{v}_k + \Delta \bm{V}_k) \\ \dot{\hat f}_k \bm{r}_k + \dot{\hat g}_k (\bm{v}_k + \Delta \bm{V}_k)\\ m_k \, \mbox{exp}\left({-\frac{|\Delta \bm{V}_k|}{u_{eq}}}\right) \\ \end{bmatrix} \label{eq:dyn_kep} \end{equation} where $\hat f_k$ and $\hat g_k$ are the Lagrange coefficients at the $k$-th step, defined as in Ref.~\citenum{bate1971fundamentals}, and the mass update is obtained through the Tsiolkovsky rocket equation. At time $t_f$, the final $\Delta V$ is calculated so as to match Mars velocity, that is: \begin{equation} \Delta \bm{V}_N = \min{\left(|\bm{v}_\Mars - \bm{v}_N|, \Delta V_{max,N}\right)} \frac{\bm{v}_\Mars - \bm{v}_N}{|\bm{v}_\Mars - \bm{v}_N|} \label{eq:DVmax-f} \end{equation} and the final spacecraft state is evaluated as: \begin{align} \bm{r}_f &= \bm{r}_N \\ \bm{v}_f &= \bm{v}_N + \Delta \bm{V}_N \\ m_f &= m_N \, \mbox{exp}\left({-{|\Delta \bm{V}_N|}/{u_{eq}}}\right) \end{align} The (deterministic) observations collected at time $t_k$ are: \begin{equation} \bm{o}_k = \left[\bm{r}_k^T, \bm{v}_k^T, m_k, t_k \right]^T \in \mathbb{R}^8 \end{equation} The value selected for the total number of time steps $N$ is reported in Table~\ref{tab:data}. \subsubsection{State Uncertainties.} For the sake of simplicity, uncertainties on the spacecraft dynamics are modeled as additive Gaussian noise on position and velocity at time $t_k$, $k \in (0, N]$, that is: \begin{equation} \bm{\omega}_{s, k} = \begin{bmatrix} \delta \bm{r}_k \\ \delta \bm{v}_k \end{bmatrix} \sim \mathcal{N}(\bm{0}_{6}, \bm{R}_{s,k}) \in \mathbb{R}^6 \end{equation} where $\bm{R}_{s,k} = \mbox{diag}\left(\sigma_r^2\bm{I}_3, \sigma_v^2\bm{I}_3 \right)$ is the covariance matrix, $\bm{I}_{n}$ (respectively, $\bm{0}_{n}$) indicates an identity (respectively, null) matrix with dimension $n \times n$ (respectively, $n \times 1$), and $\sigma_r, \sigma_v$ are the standard deviations on position and velocity. So, the stochastic dynamical model is written as: \begin{equation} \begin{bmatrix} \bm{r}_{k+1} \\ \bm{v}_{k+1} \\ m_{k+1} \\ \end{bmatrix} = f(\bm{r}_k, \bm{v}_k, m_k, \bm{u}_k) + \begin{bmatrix} \delta \bm{r}_{k+1} \\ \delta \bm{v}_{k+1} \\ 0 \\ \end{bmatrix} \end{equation} \subsubsection{Observation Uncertainties.} The uncertainty in the knowledge of the spacecraft position and velocity due to errors in orbit determination is modeled as additive Gaussian noise on the deterministic observations at time $t_k$: \begin{equation} \bm{o}_k = \begin{bmatrix} \bm{r}_{k} \\ \bm{v}_{k} \\ m_{k} \\ t_k \end{bmatrix} + \begin{bmatrix} \delta \bm{r}_{o, k} \\ \delta \bm{v}_{o, k} \\ 0 \\ 0 \end{bmatrix} \end{equation} where: \begin{equation} \bm{\omega}_{o, k} = \begin{bmatrix} \delta \bm{r}_{o,k} \\ \delta \bm{v}_{o,k} \end{bmatrix} \sim \mathcal{N}(\bm{0}_{6}, \bm{R}_{s,k}) \in \mathbb{R}^6 \end{equation} \subsubsection{Control Uncertainties.} Control execution errors are modeled as a small three-dimensional rotation of the commanded $\Delta V$ vector, defined by the Euler angles $(\delta \phi, \delta \vartheta, \delta \psi)$, and a slight variation $\delta u$ of its modulus.
The random variables $\delta \phi$, $\delta \vartheta$, $\delta \psi$, and $\delta u$ are assumed to be zero-mean Gaussian, with standard deviations $\sigma_{\phi}$, $\sigma_{\vartheta}$, $\sigma_{\psi}$, and $\sigma_{u}$. So, the control disturbance vector at time $t_k$ is: \begin{equation} \bm{\omega}_{a, k} = \begin{bmatrix} \delta \phi_k \\ \delta \vartheta_k \\ \delta \psi_k \\ \delta u_k \end{bmatrix} \sim \mathcal{N}(\bm{0}_{4}, \bm{R}_{a,k}) \in \mathbb{R}^4 \end{equation} where $\bm{R}_{a,k} = \mbox{diag}\left(\sigma_\phi^2, \sigma_\vartheta^2, \sigma_\psi^2, \sigma_u^2 \right)$ is the covariance matrix. The actual control $\bm{u}_k$ can be written as a function of the commanded action $\bm{a}_k$ at time $t_k$, $k \in [0, N)$, as: \begin{equation} \bm{u}_k = g(\bm{a}_k, \bm{\omega}_{a,k}) = (1 + \delta u_k) \bm{A}_k \bm{a}_k \end{equation} where the rotation matrix $\bm{A}_k$ is evaluated, under the small-angle assumption, as: \begin{equation} \bm{A}_k = \begin{bmatrix} 1 & - \delta \psi_k & \delta \vartheta_k\\ \delta \psi_k & 1 & - \delta \phi_k \\ - \delta \vartheta_k & \delta \phi_k & 1 \end{bmatrix} \end{equation} It is worth noting that, although the control disturbance vector is Gaussian, the effect obtained on the applied control is decidedly non-Gaussian; for this reason, the solution methods in Refs.~\citenum{ozaki2018stochastic} and \citenum{ozaki2020tube} may not be applicable. \subsubsection{Missed Thrust Events.} Besides small control execution errors, the effect of one or more consecutive MTEs over the course of the mission is also investigated. An MTE is modeled as a complete lack of thrust, even when commanded, that first occurs at a randomly chosen time $t_{\hat k}$, $\hat k \in [0, N)$, so that $\bm{u}_{\hat k}=\bm{0}_{3}$. With probability $1-p_{mte}$, the missed thrust is recovered at the next time step and does not occur again for the remainder of the episode; otherwise, the MTE persists for one additional time step. This process repeats, with the limitation that the MTE may last at most $n_{mte}$ successive time steps, that is, from $t_{\hat k}$ to $t_{\hat k +n_{mte}-1}$. The values used for the standard deviations and the other uncertainty model parameters introduced so far are reported in Table~\ref{tab:sigma}. \begin{table}[htbp] \caption{Uncertainty model parameters.} \label{tab:sigma} \centering \begin{tabular}{c c c c c c c c} \hline $\sigma_r,\, \si{\kilo\meter}$ & $\sigma_v,\, \si{\kilo\meter/\second}$ & $\sigma_\phi,\, \si{deg}$ & $\sigma_\vartheta,\, \si{deg}$ & $\sigma_\psi,\, \si{deg}$ & $\sigma_u$ & $p_{mte}$ & $n_{mte}$ \\ \hline $1.0$ & $0.05$ & $1.0$ & $1.0$ & $1.0$ & $0.05$ & $0.1$ & $3$ \\ \hline \end{tabular} \end{table}
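Putting the pieces together, one segment of the stochastic dynamics (control error or MTE, ballistic arc, mass update, and additive state noise) can be sketched as below. The snippet assumes a Lagrangian propagation routine with signature \texttt{propagate\_lagrangian(r0, v0, tof, mu)}, as provided, e.g., by the open-source \textit{pykep} library; all function and variable names are ours, and the default standard deviations follow Table~\ref{tab:sigma} (units of km, km/s, kg, s).
\begin{verbatim}
import numpy as np
from pykep import propagate_lagrangian

def apply_control_error(a_k, rng, sig_ang=np.deg2rad(1.0), sig_u=0.05):
    """u_k = g(a_k, w_a): small random rotation plus magnitude error.
    `rng` is a numpy random Generator."""
    d_phi, d_theta, d_psi = rng.normal(0.0, sig_ang, 3)
    du = rng.normal(0.0, sig_u)
    A = np.array([[1.0, -d_psi, d_theta],
                  [d_psi, 1.0, -d_phi],
                  [-d_theta, d_phi, 1.0]])   # small-angle rotation
    return (1.0 + du) * (A @ a_k)

def step(r, v, m, a_k, dt, mu, u_eq, rng, mte=False,
         sig_r=1.0, sig_v=0.05):
    """One segment: impulse, Keplerian coast, mass and noise updates."""
    r, v, a_k = map(np.asarray, (r, v, a_k))
    u_k = np.zeros(3) if mte else apply_control_error(a_k, rng)
    r1, v1 = propagate_lagrangian(r, v + u_k, dt, mu)  # Lagrange f, g
    m1 = m * np.exp(-np.linalg.norm(u_k) / u_eq)       # Tsiolkovsky
    r1 = np.asarray(r1) + rng.normal(0.0, sig_r, 3)    # state noise
    v1 = np.asarray(v1) + rng.normal(0.0, sig_v, 3)
    return r1, v1, m1
\end{verbatim}
Observation noise is handled analogously, by adding Gaussian $\delta\bm{r}_{o}$ and $\delta\bm{v}_{o}$ to the propagated state before passing it to the policy.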
\subsubsection{Reward Function.} The objective of the optimization procedure is to maximize the (expected) final mass of the spacecraft, while ensuring compliance with the terminal rendezvous constraints on position and velocity. For this reason, the reward $r_k$ collected by the agent at time $t_k$, for $k \in (0, N]$, is defined as: \begin{equation} r_k = -\mu_k -\lambda_1 \, e_{u,k-1} - \lambda_2 \, e_{s,k} \label{eq:rew} \end{equation} where: \begin{align} \mu_k &= \Delta m_k = \begin{cases} m_{k-1} - m_k & \mbox{if}\,\, k < N \\ m_{N-1} - m_f & \mbox{if}\,\, k = N \end{cases} \\ e_{u,k} &= \max{\left(0, |\bm{u}_{k}| - \Delta V_{max,k} \right)} \\ e_{s,k} &= \begin{cases} 0 & \mbox{if}\,\, k < N \\ \max{\left(0, \max{\left(\frac{|\bm{r}_f - \bm{r}_\Mars|}{|\bm{r}_\Mars|}, \frac{|\bm{v}_f - \bm{v}_\Mars|}{|\bm{v}_\Mars|}\right) - \varepsilon } \right)} & \mbox{if}\,\, k = N \end{cases} \end{align} Here $\mu_k$ is the cost function, that is, the consumed propellant mass, $e_{u,k}$ is the violation of the constraint on the maximum $\Delta V$ magnitude admissible for that segment (see Eq.~\ref{eq:DVmax-k}), and $e_{s,k}$ is the violation of the constraint acting on the final state of the spacecraft, up to a given tolerance $\varepsilon = 10^{-3}$. The penalty factors $\lambda_1 = 100$ and $\lambda_2=50$ are used in the present work.
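A literal transcription of Eq.~\eqref{eq:rew} is shown below; the names are ours and all quantities are nondimensional. Note that the $\Delta V$ penalty charged at step $k$ refers to the control applied on the previous segment, $e_{u,k-1}$.
\begin{verbatim}
def reward(k, N, m_prev, m_k, u_prev_norm, dv_max_prev,
           err_r, err_v, lam1=100.0, lam2=50.0, eps=1e-3):
    """Reward r_k of Eq. (rew): propellant cost plus penalties."""
    mu_k = m_prev - m_k                         # propellant consumed
    e_u = max(0.0, u_prev_norm - dv_max_prev)   # Delta-V bound violation
    e_s = max(0.0, max(err_r, err_v) - eps) if k == N else 0.0
    return -mu_k - lam1 * e_u - lam2 * e_s
\end{verbatim}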
\section{Reinforcement Learning} \input{RL} \section{Numerical Results} \input{results.tex} \section{Conclusion} This paper presented a deep Reinforcement Learning (RL) framework to deal with the robust design of low-thrust interplanetary trajectories in the presence of different sources of uncertainty. The stochastic optimal control problem is first reformulated as a Markov Decision Process. Then, a state-of-the-art RL algorithm, named Proximal Policy Optimization (PPO), is adopted for the problem solution, and its prominent features over similar policy-gradient methods are outlined. Preliminary numerical results were reported for a three-dimensional Earth-Mars mission, by considering separately the effect of different types of uncertainties, namely, uncertainties in the dynamical model, in the observations, and in the applied control, as well as the presence of single or multiple, consecutive, missed thrust events. The obtained results show the capability of PPO to solve simple interplanetary transfer problems, such as the Earth-Mars mission considered here, in both deterministic and stochastic scenarios. The solution found in the deterministic case is in good agreement with the optimal solution provided by an indirect method. However, the high computational cost necessary to train the neural network discourages the use of a model-free RL algorithm in that circumstance. The power of RL becomes apparent when dealing with stochastic optimal control problems, where traditional methods are either cumbersome, impractical, or simply impossible to apply. Although the reported results are only preliminary, the presented solutions seem very promising, in terms of both payload mass and constraint enforcement. The methodology proposed here is quite general and can be applied, with appropriate changes, to a variety of spacecraft missions and uncertainty models. Also, extensions to arbitrary stochastic dynamical models (e.g., with possibly complex non-Gaussian perturbations) are straightforward. This is a major advantage with respect to other techniques presented in the literature, which are based on ad-hoc extensions of traditional optimal control methods. The preliminary results proposed here pave the way for reinforcement learning approaches to the robust design of interplanetary trajectories. Additional work is necessary in order to increase both the efficiency of the learning process and the reliability of the solutions. The high computational cost calls for the use of asynchronous algorithms, where the two processes of policy rollout (for collecting experience) and policy update (for learning) run in parallel, so as to fully exploit the massive parallelization offered by high-performance computing clusters. Also, the use of recurrent neural networks should be investigated when dealing with non-Markovian dynamical processes, as in the case of partial observability and multiple, correlated, missed thrust events. However, the most crucial point seems to be enhancing the constraint-handling capability of RL algorithms. The adoption of the $\varepsilon$-constraint relaxation is a modest contribution that goes in that direction. More advanced formulations of the problem, such as the Constrained Markov Decision Process (CMDP), should be investigated in the future for this purpose. \bibliographystyle{AAS_publication}
\subsubsection{Remark.} The framework described above has been derived for a (perfectly observable) Markov Decision Process. When perfect knowledge of the state is not available, or when the observations differ from the state, the same RL algorithm can still be used; in this case, the Actor and Critic networks directly take the observation $\bm{o}_k$ as input. \subsection{Implementation Details} The results presented in this work have been obtained by using the PPO implementation of Stable Baselines \cite{stable-baselines2018}, an open-source library containing a set of improved implementations of RL algorithms based on OpenAI Baselines. The scientific library \textit{pykep}\cite{izzo2012pykep}, developed at the European Space Agency, was used for the astrodynamics routines. The selected G\&CNet{} consists of two separate head networks, one for the control policy and the other for the value function, each composed of two hidden layers. The G\&CNet{} architecture is summarized in Table~\ref{tab:net}, which reports the number of neurons in each layer and the activation function used by the neurons. The tuning of the PPO hyper-parameters was performed by using Optuna~\cite{optuna2019}, an open-source framework for automated hyper-parameter search.
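As an illustration, the training setup can be reproduced along the following lines. The sketch assumes the TensorFlow-based Stable Baselines API (\texttt{PPO2}) and a Gym-compatible wrapper \texttt{env} of the rendezvous MDP (the wrapper itself is hypothetical); the numerical settings mirror Tables~\ref{tab:net} and \ref{tab:hyper}.
\begin{verbatim}
import tensorflow as tf
from stable_baselines import PPO2
from stable_baselines.common.policies import MlpPolicy

def build_trainer(env, total_timesteps=int(2e8)):
    """PPO agent with separate 64x64 tanh policy/value heads."""
    policy_kwargs = dict(act_fun=tf.nn.tanh,
                         net_arch=[dict(pi=[64, 64], vf=[64, 64])])
    model = PPO2(MlpPolicy, env,
                 gamma=0.9999,                        # discount factor
                 lam=0.99,                            # GAE parameter
                 learning_rate=lambda p: 2.5e-4 * p,  # alpha; p goes 1 -> 0
                 cliprange=lambda p: 0.3 * p,         # clip parameter
                 vf_coef=0.5, ent_coef=4.75e-8,       # c1 and c2
                 noptepochs=30,                       # n_opt
                 policy_kwargs=policy_kwargs)
    model.learn(total_timesteps=total_timesteps)
    return model
\end{verbatim}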
The hyper-parameter optimization was performed on a deterministic version of the RL environment, with a budget of 500 trials, each with a maximum of $\SI{3e5}{}$ training steps. The optimal values of the hyper-parameters, used in all simulations, are reported in Table~\ref{tab:hyper}. \begin{table}[!htb] \begin{minipage}{.45\textwidth} \centering \caption{Network Configuration.}\label{tab:net} \centering \begin{tabular}{c c c} \hline & Policy & Value \\ & network & network\\ \hline Layer 1 & 64 & 64\\ Layer 2 & 64 & 64\\ Output & 3 & 1 \\ Activation & tanh & tanh \\ \hline \end{tabular} \end{minipage} \hfill \begin{minipage}{.45\textwidth} \begin{threeparttable} \caption{PPO hyperparameters.} \label{tab:hyper} \centering \begin{tabular}{c c} \hline Hyper-parameter & Value\\ \hline $\gamma$ & $0.9999$ \\ $\lambda$ & $0.99$ \\ $\alpha$ & $2.5 \times 10^{-4} \left(1 - {t}/{T} \right)$\tnote{$\star$} \\ $\epsilon$ & $0.3 \left(1 - {t}/{T}\right)$\tnote{$\star$} \\ $c_1$ & $0.5$ \\ $c_2$ & $4.75 \times 10^{-8}$ \\ $n_{opt}$ & $30$\\ \hline \end{tabular} \begin{tablenotes}\footnotesize \item [$\star$] $t$ indicates the current training step, $T$ the total number of training steps \end{tablenotes} \end{threeparttable} \end{minipage}% \end{table} When dealing with constrained optimization problems, constraint violations are typically included as penalty terms inside the reward function (see Eq.~\eqref{eq:rew}). In these cases, a penalty-based $\varepsilon$-constraint method, similar to those sometimes used in stochastic global optimization \cite{federici2020eos}, proved helpful to enforce constraints more gradually during optimization, allowing the agent to explore the solution space to a greater extent at the beginning of the training process. For this reason, as a modest original contribution of this paper, the constraint satisfaction tolerance $\varepsilon$ also varies during training, according to a piecewise-constant decreasing schedule: \begin{equation} \varepsilon = \begin{cases} 0.01 & \mbox{for the first } T/2 \mbox{ training steps}\\ 0.001 & \mbox{for the second } T/2 \mbox{ training steps} \end{cases} \end{equation}
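In code, the schedule is a one-liner (names are ours); the loose initial tolerance rewards policies that approach the target without yet meeting the final accuracy, which eases early exploration, while the tight value matches the accuracy used in the reported results.
\begin{verbatim}
def constraint_tolerance(step, total_steps):
    """Piecewise-constant tolerance: 1e-2 for the first half of
    training, 1e-3 afterwards (see the equation above)."""
    return 1e-2 if step < total_steps / 2 else 1e-3
\end{verbatim}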
\section{Introduction} A geometric graph is a graph drawn in the plane with straight-line edges. Throughout this paper we additionally assume that all geometric graphs are noncrossing. Let $G$ be a given (noncrossing) geometric graph. We want to augment $G$ with a geometric matching on the vertices of $G$ such that no edges cross in the augmentation. We call such a (geometric) matching \emph{compatible} with~$G$. Note that our definition of a compatible matching implies that the matching is noncrossing and avoids the edges of~$G$. Questions regarding compatible matchings were first studied by Rappaport~et~al.~\cite{Rap89,RIT86}. Rappaport~\cite{Rap89} proved that it is {\sf NP}-hard to decide whether for a given geometric graph $G$ there is a compatible matching $M$ such that $G+M$ is a (spanning) cycle. Recently, Akitaya~et~al.~\cite{AKRST19} confirmed a conjecture of Rappaport and proved that this holds even if~$G$ is a perfect matching. Note that in this case also $M$ is necessarily a perfect matching. However, for some compatible perfect matchings $M$ the union $G+M$ might be a collection of several disjoint cycles. There are graphs $G$ that do not admit any compatible perfect matching, even when $G$ is a matching. Such matchings were studied by Aichholzer~et~al.~\cite{CompMatchingConj}, who proved that each $m$-edge perfect matching $G$ admits a compatible matching of size at least $\frac{4}{5} m$. Ishaque~et~al.~\cite{disjoint_matchings} confirmed a conjecture of Aichholzer~et~al.~\cite{CompMatchingConj} which says that any perfect matching $G$ with an even number of edges admits a compatible perfect matching. For a geometric graph $G$ let $d(G)$ denote the size of a largest compatible matching of $G$, and for a family $\mathcal{F}$ of geometric graphs let~$d(\mathcal{F})=\min\{d(G)\mid G\in\mathcal{F}\}$. Aichholzer~et~al.~\cite{aght-cmgg-11} proved that for the family~$T_n$ of all $n$-vertex geometric trees we have $\frac{1}{10}n \leq d(T_n) \leq \frac{1}{4} n$, and for the family~$P_n$ of all $n$-vertex simple polygons we have $\frac{n-3}{4} \leq d(P_n) \leq \frac{1}{3} n$. We continue this line of research and consider the following problems. Given a polygon, we first show that it is {\sf NP}-complete to decide whether the polygon admits a compatible perfect matching. Then we ask for the \enquote{worst} compatible matchings for a given polygon. That is, we search for small maximal compatible matchings, where a compatible matching $M$ is maximal if there is no compatible matching $M'$ that strictly contains $M$. We study such matchings also for the larger families of $d$-regular geometric graphs. The first studied problem can also be phrased as follows: Given a geometric cycle, can we add edges to obtain a cubic geometric graph? In the last section, we consider a related augmentation problem. Given a geometric graph, we show that it is {\sf NP}-complete to decide whether the graph can be augmented to a graph of minimum degree five. The corresponding problem for the maximum vertex degree asks to add a \emph{maximal} set of edges to the graph such that the maximum vertex degree remains bounded from above by a constant. This problem is known to be {\sf NP}-complete for maximum degree at most~seven~\cite{jansen}. A survey of Hurtado and T\'oth~\cite{HT13} discusses several other augmentation problems for geometric graphs.
Specifically, it is {\sf NP}-hard to decide whether a geometric graph can be augmented to a cubic geometric graph~\cite{p-acg-12}, and also whether an abstract planar graph can be augmented to a cubic planar graph (not preserving any fixed embedding)~\cite{HRR15}. Besides the problems mentioned in that survey, decreasing the diameter~\cite{CGKPSTW17} and the continuous setting (where every point along the edges of an embedded graph is considered as a vertex) have received considerable attention~\cite{BBCGL19,DCGSS17}. \section{Compatible Perfect Matchings in Polygons} \begin{theorem}\label{thm:perfMatchingHard} Given a simple polygon, it is {\sf NP}-complete to decide whether it admits a compatible perfect matching. \end{theorem} \begin{proof} The problem is obviously in {\sf NP}: as a certificate, one can merely provide the added edges. {\sf NP}-hardness is shown by a reduction from \textsc{positive planar 1-in-3-SAT}. In this problem, shown to be {\sf NP}-hard by Mulzer and Rote~\cite{mulzer_rote}, we are given an instance of 3-SAT with a planar variable--clause incidence graph (i.e., the graph whose vertices are the variables and clauses, which are connected by an edge if and only if the variable occurs in the clause) and no negative literals; the instance is considered satisfiable if and only if there is exactly one true variable per clause. For a given 1-in-3-SAT formula, we take an embedding of its incidence graph and replace its elements by gadgets. We first show that finding compatible matchings for a set of disjoint simple polygons is hard, and we then show how to connect the individual polygons to obtain a single polygon. Our construction relies on a gadget that restricts the possible matching edges of vertices. In particular, we introduce a polygonal chain whose vertices need to be matched to each other in any perfect matching. This is achieved by the \emph{twin-peaks gadget} shown in \fig{fig:twin_peaks}. \begin{figure}[tb] \centering \includegraphics[width=\textwidth]{twin_peaksNEW} \caption{(a) This gadget allows for simulating a \enquote{bend} in the polygon without a vertex that needs to be matched. The construction is scaled such that the eight points marked with squares do not see any other point outside of the gadget (in particular, narrowing it horizontally). (b) A possible matching is shown in red.} \label{fig:twin_peaks} \end{figure} The gadget is scaled such that the eight vertices in its interior (which are marked with squares in \fig{fig:twin_peaks}) do not see any vertices outside of the gadget. (We say that a vertex \emph{sees} another vertex if the relative interior of the segment between them does not intersect the polygon.) The two topmost vertices must be matched to the vertices directly below them, as the latter do not see any other (nonadjacent) vertices. The remaining six \enquote{square} vertices do not have a geometric perfect matching on their own, so any geometric perfect matching containing them must connect them to the two bottommost vertices. Clearly, such a matching exists. We now present the remaining gadgets (\emph{wire}, \emph{split}, and \emph{clause}) for our reduction. The ideas are inspired by the reduction of Pilz~\cite{p-acg-12}, who showed that augmenting an arbitrary geometric graph to a crossing-free cubic graph is {\sf NP}-complete. In the following illustrations, vertices of degree two are drawn as dots. The other vertices in the figures represent sufficiently small twin-peaks gadgets.
The \emph{wires} propagate the truth assignment of a variable. A wire consists of a sequence of polygons, each containing four vertices of degree two (ignoring twin-peak vertices). There are only two possible global matchings for these vertices; see \fig{fig:gadgets-wire}(a). A \emph{bend} in a wire can be drawn as shown in \fig{fig:gadgets-wire}(b). The truth assignment of a wire can be duplicated by a \emph{split gadget}; see \fig{fig:gadgets-wire}(c). A variable is represented by a cyclic wire with split gadgets. Recall that in our reduction, we do not need negated variables. \begin{figure}[tb] \centering \includegraphics[width=\textwidth]{gadgets-wire} \caption{(a) A wire gadget and its two truth states (one in dashed, the other in solid red). (b) A bend in a wire gadget. (c) A split gadget that transports the truth setting of one wire to two other ones. This is used for representing the variables.} \label{fig:gadgets-wire} \end{figure} The \emph{clause gadget} is illustrated in \fig{fig:gadgets}, where the wires enter from the top. The vertices there can be matched if and only if exactly one of them is connected to a wire that is in the true state. The vertices at the bottom of the gadget make sure that if there are exactly two wires in the false state, then we can add an edge to them. Hence, this set of polygons has a compatible perfect matching if and only if the initial formula is satisfiable. \begin{figure}[htb] \centering \includegraphics[width=.8\textwidth,page=2]{gadgets} \caption{The clause gadget. The visibility among the vertices of degree two is indicated by the lighter lines. Exactly one vertex of degree two of the part in the circle must be connected to a wire above that carries the true state.} \label{fig:gadgets} \end{figure} It remains to \enquote{merge} the polygons of the construction into one simple polygon. Observe that two neighboring polygons can be merged by a small tunnel using four new bends with twin-peaks gadgets, as in Fig.~\ref{fig:mergestep}, without affecting the possible compatible perfect matchings of the other vertices. We can consider the incidence graph to be connected (otherwise the reduction splits into disjoint problems). Hence, we can always merge two distinct neighboring polygons until there is only a single polygon left. \begin{figure}[tb] \centering \includegraphics{mergestep} \caption{Merging neighboring polygons to a single polygon.} \label{fig:mergestep} \end{figure} \end{proof} \section{Compatible Maximal Matchings in Geometric Graphs} For a geometric graph $G$ let $\mathrm{mm}(G)$ denote the minimum size of a maximal compatible matching of $G$, and for a family $\mathcal{F}$ of geometric graphs let $\mathrm{mm}(\mathcal{F})=\min\{\mathrm{mm}(G)\mid G\in\mathcal{F}\}$. For a geometric graph $G$ and a maximal compatible matching $M$ we define the following parameters (illustrated in Fig.~\ref{fig:lemma1}): \begin{itemize} \item $i_{GM}$ denotes the number of isolated vertices in $G+M$, \item $\Delta_{GM}$ denotes the number of triangular faces in $G$ incident to unmatched vertices only, \item $\sigma_{GM}$ denotes the number of faces of $G+M$ incident to matched vertices only, \item $\nu_{GM}$ denotes the number of edges $uv$ in $G$ where $u$ is unmatched, $v$ is matched, and $uv$ is incident to a reflex angle at $u$ in $G+M$ (see \fig{fig:matchedUnmatchedInc}), \item $r^u_{GM}$ and $r^m_{GM}$ denote the number of unmatched and matched vertices incident to a reflex angle in $G+M$, respectively.
\end{itemize} Here, we call an angle reflex if it is strictly larger than~$\pi$ (vertices of degree $1$ in $G+M$ have an angle of $2\pi$, and no angle is considered at isolated vertices). Analogously, we call an angle convex if it is at most~$\pi$. We assume that the vertices of the considered graphs are in general position, that is, no three vertices are collinear. \begin{figure}[htb] \centering \includegraphics{lemma1} \caption{A geometric graph $G$ (black) and a maximal compatible matching $M$ (red). Here $i_{GM}=\Delta_{GM}= 1$, $\sigma_{GM}=2$, $\nu_{GM}=10$, $r^u_{GM}=11$, and $r^m_{GM}=10$.} \label{fig:lemma1} \end{figure} The following lemma gives a general lower bound on the size of any maximal matching in terms of the parameters introduced above. We use this bound later to derive specific lower bounds for various classes of geometric graphs. \begin{lemma}\label{lem:mmm-generalBound} For each geometric graph $G$ and each maximal compatible matching~$M$ of $G$ we have \begin{eqnarray*}2\abs{V(G)} + \nu_{GM}+2\, \sigma_{GM} - r^u_{GM} - 2\,r^m_{GM} -\; \sum_{\mathclap{u\in V(M)}}\;d_G(u) - \Delta_{GM} -2 \leq 2 \abs{E(M)}.\end{eqnarray*} \end{lemma} \begin{proof} We subdivide the plane into cells as follows. First, draw a rectangle enclosing $G$ in the outer face (with four vertices and four edges). For each isolated vertex in $G+M$ (one after the other) draw two collinear straight-line edges, both starting at that vertex and extending until they hit some already drawn edges $e$ and $e'$. The direction of these new edges is arbitrary as long as they do not hit any vertex. Their endpoints become new vertices (subdividing $e$ and $e'$). Similarly, for each vertex $u\in V(G)$ incident to some reflex angle in the resulting drawing we draw (one after the other) a straight-line edge starting at $u$. The direction of this new edge is chosen such that it cuts the reflex angle at $u$ into two convex angles and such that it stops on some already drawn edge (but not a vertex), which is then subdivided by a new vertex. Avoiding vertices is possible since the points are in general position. See \fig{fig:subdivdeMax}. \begin{figure}[tb] \centering \includegraphics{subdivideMaxNeu} \caption{The geometric graph (black) with maximal matching (red) from Fig.~\ref{fig:lemma1} where each reflex angle is cut by a gray edge.} \label{fig:subdivdeMax} \end{figure} Let~$D$ denote the final plane graph. Then each bounded face in $D$ is convex and $D$ is connected. Further, $D$ has exactly $\abs{V(G)}+ r^u_{GM} + r^m_{GM} + 2\,i_{GM}+4$ vertices and $\abs{E(G)} + \abs{E(M)} + 2 (r^m_{GM} + r^u_{GM} + 2\,i_{GM})+4$ edges (each edge starting at an isolated vertex and each edge cutting a reflex angle creates a new vertex and subdivides an existing edge into two parts). By Euler's formula, the number $F_D$ of faces in $D$ is exactly \begin{equation*} F_D = \abs{E(D)}-\abs{V(D)}+2=\abs{E(G)} - \abs{V(G)} + \abs{E(M)} + r^m_{GM} + r^u_{GM} + 2\,i_{GM} + 2. \end{equation*} Let $U=V(G)\setminus V(M)$ denote the set of unmatched vertices of $G$ and let $F_i$ denote the number of faces in $D$ with exactly $i$ vertices of $U$ in their boundary. Each isolated vertex in $G+M$ is incident to exactly two faces of $D$, each vertex~$u\in U$ not incident to a reflex angle in $G+M$ is incident to exactly $d_G(u)$ faces of $D$, and each remaining vertex $u\in U$ is incident to exactly $d_G(u)+1$ faces of~$D$.
Therefore \begin{equation} 2\,i_{GM} + r^u_{GM} + \sum_{u\in U}d_G(u) = \sum_{i\geq 1}i\, F_i.\label{eq:1} \end{equation} Consider two vertices in $U$ incident to a common face $F$ in $D$. The line segment connecting these two vertices is an edge of $G$, as otherwise $M$ would not be maximal. So either $F$ has at most two vertices from $U$ or $F$ is a triangular face of $G$ incident to vertices from $U$ only. This shows that $F_3=\Delta_{GM}$ and $F_i=0$ for each $i\geq 4$. Further, each face incident to a vertex that is isolated in $G+M$ is not incident to any other unmatched vertex. Similarly, for each edge counted by~$\nu_{GM}$ there is a face in $D$ with only one unmatched vertex in its boundary; see \fig{fig:matchedUnmatchedInc}. \begin{figure}[tb] \centering \includegraphics[width=.25\textwidth]{matchedUnmatchedInc} \caption{An edge $uv\in E(G)$ where $u\in V(G)\setminus V(M)$ and $v\in V(M)$ with a reflex angle at~$u$ (in $G+M$). Then $u$ is the only vertex from $V(G)\setminus V(M)$ incident to the face $F$ (obtained by cutting the reflex angle at $u$) since $M$ is maximal.} \label{fig:matchedUnmatchedInc} \end{figure} Hence~$F_1\geq 2\, i_{GM} + \nu_{GM}$. The outer face does not contain any vertices of $U$ and hence~$F_0\geq 1+\sigma_{GM}$. Combining these observations with \eqref{eq:1} and $F_2=F_D-F_0-F_1-F_3$ yields \begin{align*} &2\,i_{GM} + r^u_{GM} + \sum_{u\in U}d_G(u)\\ = \quad&F_1 + 2\, F_2 + 3\, \Delta_{GM}\\ = \quad&2\, F_D-2\, F_0-F_1 + \Delta_{GM}\\ \leq\quad &2\abs{E(G)} - 2\abs{V(G)} + 2\abs{E(M)}\\ & + 2\,i_{GM} + 2\,r^m_{GM} +2\,r^u_{GM} + \Delta_{GM} - \nu_{GM} -2\, \sigma_{GM} +2. \end{align*} Now the desired result follows using $\sum\limits_{u\in U}d_G(u)= 2\abs{E(G)} - \sum\limits_{u\in V(M)}d_G(u).$ \end{proof} The bound of Lemma~\ref{lem:mmm-generalBound} is particularly useful for regular graphs. \begin{theorem}\label{thm:mmm-regulargraphs} Consider an $n$-vertex geometric graph $G$. \begin{enumerate}[label=\bf\alph{enumi}),topsep=2pt] \item If $G$ is $0$-regular (a point set) we have $\mathrm{mm}(G) \geq \frac{n-1}{3}$. \item If $G$ is $1$-regular (a perfect matching) we have $\mathrm{mm}(G) \geq \frac{n-2}{6}$. \item If $G$ is $2$-regular (disjoint polygons) we have $\mathrm{mm}(G) \geq \frac{n-3}{11}$. \end{enumerate} All these bounds are tight for infinitely many values of $n$. \end{theorem} \begin{proof} First consider a $0$-regular $n$-vertex graph $G$ (a point set). Then $r^u_{GM}=0$, $r^m_{GM}=2 \abs{E(M)}$, $\nu_{GM}=\Delta_{GM}=0$, and $\sigma_{GM}\geq 0$ for any maximal compatible matching $M$ of~$G$. By Lemma~\ref{lem:mmm-generalBound} we have $2n - 4\abs{E(M)} -2 \leq 2 \abs{E(M)}$. This shows $\mathrm{mm}(G)\geq (n-1)/3$. This is tight due to the graphs $G$ and the maximal matchings given in \fig{fig:tightMMM} (left). \begin{figure}[tb] \centering \includegraphics{tightMMM2} \caption{Geometric graphs (black) with minimal maximal compatible matchings (red).} \label{fig:tightMMM} \end{figure} Next consider a $1$-regular $n$-vertex graph $G$. Every vertex of $G$ is incident to a reflex angle in $G+M$. Then $\Delta_{GM}=0$, $\nu_{GM}\geq 0$, $r^u_{GM}=n-2\abs{E(M)}$, $r^m_{GM}=2 \abs{E(M)}$, and $\sigma_{GM}\geq 0$ for any maximal compatible matching $M$ of $G$. By Lemma~\ref{lem:mmm-generalBound} we have $n - 4\abs{E(M)} -2 \leq 2 \abs{E(M)}$. This shows $\mathrm{mm}(G)\geq (n-2)/6$. This is tight due to the graphs $G$ and the maximal matchings given in \fig{fig:tightMMM} (middle). Finally consider a $2$-regular $n$-vertex geometric graph $G$.
Each vertex in $V(G)\setminus V(M)$ is incident to a reflex angle in $G+M$. Then $\nu_{GM}\geq 0$, $r^u_{GM}=n-2\abs{E(M)}$, $r^m_{GM}\leq 2 \abs{E(M)}$, $\sigma_{GM}\geq 0$, and $\Delta_{GM}\leq (n-2\abs{E(M)})/3$ for any maximal compatible matching $M$ of $G$. By Lemma~\ref{lem:mmm-generalBound} we have $n - 6\abs{E(M)} - (n-2\abs{E(M)})/3 -2 \leq 2 \abs{E(M)}$. This shows $\mathrm{mm}(G)\geq (n-3)/11$. This is tight due to the graph $G$ and the maximal matching $M$ given in \fig{fig:tightMMM} (right), as an infinite family is obtained by repeatedly replacing an arbitrary triangle with a (scaled) copy of $G+M$. \end{proof} \begin{theorem}\label{thm:maxMatchingPoly} Let $n\geq 4$ and let $P_n$ denote the family of all $n$-vertex polygons. Then $\mathrm{mm}(P_n)\geq \frac{1}{7}n$ for all $n$ and this bound is tight for infinitely many values of $n$. \end{theorem} \begin{proof} The construction in Fig.~\ref{fig:maximalBnd} shows that for infinitely many values of $n$ there is an $n$-vertex polygon with a compatible maximal matching of size $\frac{n}{7}$. This shows $\mathrm{mm}(P_n)\leq \frac{n}{7}$ for infinitely many values of $n$. \begin{figure}[tb] \centering \includegraphics{tightPolygon} \caption{A polygon (black) with a maximal matching (red) with $\frac{n}{7}$ edges (here $n=42$). Note that there are exactly two matching edges between the $14$ vertices in the gray area which can be repeated along a cycle arbitrarily often.} \label{fig:maximalBnd} \end{figure} It remains to prove the lower bound. Let $P$ be an $n$-vertex polygon with a maximal compatible matching~$M$. Since $n\geq 4$, we have $\abs{E(M)}\geq 1$, $\Delta_{PM}=0$, $r^m_{PM}\leq 2\abs{E(M)}$, and $\sigma_{PM}\geq 0$. Let $U=V(P)\setminus V(M)$ denote the unmatched vertices of $P$ and let $E_{UM}$ denote the set of edges $uv$ in $P$ where $u\in U$ and $v\in V(M)$. Each vertex in~$U$ is incident to a reflex angle. Hence $r^u_{PM}=n-2\abs{E(M)}$ and $\nu_{PM}=\abs{E_{UM}}$. There are $2+\abs{E(M)}$ faces in $P+M$. Each of them either has no vertex from $U$ in its boundary or has at least two edges from~$E_{UM}$ in its boundary. So~$P+M$ has $2+\abs{E(M)}-\sigma_{PM}$ faces incident to at least two edges from~$E_{UM}$ each. Each edge in $E_{UM}$ is on the boundary of two faces of $P+M$. Together we have $2\abs{E_{UM}} \geq 2(2+\abs{E(M)}-\sigma_{PM})$ and hence $\nu_{PM}+\sigma_{PM}\geq 2+\abs{E(M)}$. Combining these observations with Lemma~\ref{lem:mmm-generalBound} yields~$\abs{E(M)}\geq n/7$, because \begin{eqnarray*} 2n+2+\abs{E(M)}-n+2\abs{E(M)}- 4\abs{E(M)}-4\abs{E(M)}-2&\leq& \\ 2n+ \nu_{PM}+2\, \sigma_{PM} - r^u_{PM} - 2\,r^m_{PM} -\; \sum_{\mathclap{u\in V(M)}}\;d_P(u) - \Delta_{PM} -2 &\leq & 2 \abs{E(M)} \end{eqnarray*} Since the first expression equals $n-5\abs{E(M)}$, this gives $7\abs{E(M)}\geq n$. \end{proof} For nonregular (abstract) graphs $\hat{G}$, determining a geometric drawing $G$ minimizing $\mathrm{mm}(G)$ seems harder. For an integer $n$ and a real number $d$ with $0\leq d\leq 3$, let $\mathcal{F}_d^n$ denote the family of all (noncrossing) geometric graphs with $n$ vertices and at most $dn$ edges. Further let $\mathrm{mm}(d)=\liminf\limits_{n\to\infty}\min\{\mathrm{mm}(G)/n\mid G\in\mathcal{F}_d^n\}$. For each $n$ and each $d\geq 2$ the set $\mathcal{F}_d^n$ contains a triangulation of a convex polygon (on $2n-3$ edges). This shows $\mathrm{mm}(d)=0$ for $d\geq 2$. Theorem~\ref{thm:mmm-regulargraphs} shows $\mathrm{mm}(0)=1/3$ and $\mathrm{mm}(1/2)\leq 1/6$. The construction in the following lemma shows $\mathrm{mm}(d)\leq (2-d)/13$ for $7/10<d<2$.
\begin{lemma}\label{lem:constructionMMM} For any integers $m$, $n$ with $n\geq 5$ and $\frac{7n+95}{10}\leq m \leq 2n+2$, there is a geometric graph on $n$ vertices and $m$ edges with a maximal compatible matching of size $\ceil{\frac{2n-m+3}{13}}$. \end{lemma} \begin{proof} Let $k=\ceil{\frac{2n-m+3}{13}}$. Then $k\geq 1$ since $m \leq 2n+2$. First suppose that $2n-m+3$ is divisible by $13$, that is, $m=2n + 3 -13k$. We shall construct a geometric graph on $n$ vertices and $m$ edges with a maximal compatible matching of size $k$. Choose a (noncrossing, geometric) perfect matching $M$ of $2k$ points in convex position and an (inner) triangulation of that geometric graph. See \fig{fig:constructionMMM} (left). \begin{figure}[tb] \centering \includegraphics{constructionMMM} \caption{A geometric graph (black) with a maximal compatible matching (red).} \label{fig:constructionMMM} \end{figure} There are $2k-2$ triangular faces and $2k$ edges in the boundary of the outer face. Place an isolated edge in the interior of each triangular face. Further, for all but one of the outer edges $e$, place another (tiny) isolated edge close to $e$ in the outer face (so that there are no visibilities among these). So far there are $2k + (4k-4) + (4k-2) =10k-6$ vertices and $(3k-3)+(2k-2)+(2k-1)=7k-6$ edges not in $M$. Close to the remaining outer edge we place a triangulation $T$ of a convex polygon on $n-10k+6$ vertices (so that there are no visibilities between these vertices and the isolated edges not in $M$). See \fig{fig:constructionMMM} (right). Note that $n-10k+6 \geq 2$ since $m\geq \frac{7n+95}{10}$. So the graph $T$ contains $2n-20k+9=m-7k+6$ edges. In total, the final graph has $n$ vertices and $m$ edges, and $M$ is a maximal compatible matching by construction. It remains to consider the case that $2n-m+3$ is not divisible by $13$. In this case we apply the construction above with $m'=2n + 3 -13k$ edges. To add the remaining $m-m'\leq 12$ edges, we replace the triangulation $T$ by an appropriate triangulation of another point set that has some interior points (and hence more edges). \end{proof}
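The bookkeeping in the above construction is easy to double-check mechanically. The following Python sketch (for the divisible case $m = 2n+3-13k$; the function and variable names are ours) verifies the vertex and edge counts.
\begin{verbatim}
def construction_counts(n, m):
    # Divisible case: 2n - m + 3 = 13k with k >= 1.
    k = (2 * n - m + 3) // 13
    assert k >= 1 and m == 2 * n + 3 - 13 * k
    v_core = 2 * k + (4 * k - 4) + (4 * k - 2)        # = 10k - 6 vertices
    e_core = (3 * k - 3) + (2 * k - 2) + (2 * k - 1)  # = 7k - 6 edges
    v_tri = n - 10 * k + 6     # triangulation T of a convex polygon
    e_tri = 2 * v_tri - 3      # = 2n - 20k + 9 = m - 7k + 6 edges
    assert v_core + v_tri == n and e_core + e_tri == m
    return k

print(construction_counts(100, 164))   # n = 100, m = 164 gives k = 3
\end{verbatim}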
\section{Augmenting to Minimum Degree Five} In this section, we show that augmenting to a geometric graph with minimum degree five is {\sf NP}-complete. \begin{theorem}\label{thm:minDeg5Hard} Given a geometric crossing-free graph $G$, it is {\sf NP}-complete to decide whether there is a set of compatible edges $E$ such that $G+E$ has minimum degree five. \end{theorem} \begin{proof} The problem is obviously in {\sf NP}: as a certificate, one can provide the added edges. {\sf NP}-hardness is shown by a reduction from \textsc{monotone planar rectilinear 3-SAT}. \begin{figure}[htb] \centering \includegraphics{figures/monotone3sat} \caption{A monotone planar 3-SAT instance with a corresponding embedding.} \label{fig:mon3sat} \end{figure} In this problem, shown to be {\sf NP}-hard by de Berg and Khosravi~\cite{dBK2010}, we are given an instance of monotone (meaning that each clause contains only negative or only positive literals) 3-SAT with a planar variable--clause incidence graph. In the rectilinear embedding, the variables and clauses are represented by rectangles. All variable rectangles lie on a horizontal line. The clauses with positive variables lie above the variables and the clauses with negative variables below. The edges connecting the clause gadgets to the variable gadgets are vertical line segments and no edges cross. See \fig{fig:mon3sat}. \begin{figure}[tb] \centering \includegraphics[width=0.6\textwidth]{figures/MinDegWirePiece2Darker} \caption{A (geometric) subgraph whose copies will form a wire gadget.} \label{fig:wirepiece} \end{figure} For a given monotone planar 3-SAT formula, we take an embedding of its incidence graph (as discussed) and replace its elements by gadgets. Note that the corresponding rectilinear layout can be computed in polynomial time and has coordinates whose size is bounded by a polynomial~\cite{TT89}. We use a \emph{wire gadget} that propagates the truth assignments; see \fig{fig:wirepiece}. It consists of a linear sequence of similar subgraphs, each containing exactly four vertices of degree~four (all other vertices have degree at least five). The gray areas contain subgraphs where all vertices have degree at least five. The main idea is that we need to add an edge at each of the vertices of degree four surrounding the big gray squares. But due to blocked visibilities, this can only be achieved by a \enquote{windmill} pattern, which has to synchronize with the neighboring parts; see \fig{fig:wirestates}. Thus, we have exactly two ways to add edges in order to augment the wire to a graph with minimum degree five. \begin{figure}[tb] \centering \includegraphics[width=0.97\textwidth]{figures/MinDegWireStates2} \caption{The wire gadget with its only two possible augmentations, associated with the assignments true (a) and false (b).} \label{fig:wirestates} \end{figure} A bend in a wire is shown in \fig{fig:bendsplit}. The truth assignment of a wire can be duplicated by the \emph{split gadget}, as shown in \fig{fig:bendsplit}. \begin{figure}[tb] \centering \includegraphics[width=0.8\textwidth,page=2]{figures/MinDegBendSplit} \caption{A bent wire (a) and the split gadget (b).} \label{fig:bendsplit} \end{figure} A variable is represented by a long wire with split gadgets. Recall that in our reduction, all variables lie on a horizontal line. The clauses with positive variables lie above and the ones with negated variables lie below this line. We can control whether a variable or a negated variable is transmitted to the clause gadget by choosing appropriate positions for the corresponding split gadgets. In particular, if we translate the split gadget along the wire by one position to the left or right and keep the truth assignment of the wire, the orientation of the augmentation at the position of the new split gadget is flipped. The \emph{clause gadget} is illustrated in \fig{fig:clause}. The wires enter from the left, the right, and below (respectively, above). The 7-gon in the middle of the clause gadget can be augmented to a subgraph with minimum degree five if and only if it is connected to at least one wire in the true state. See also \fig{fig:clausestates}. \end{proof} \begin{figure}[tb] \centering \includegraphics[width=0.5\textwidth,page=3]{MinDegreeGadgets2Final} \caption{A clause gadget; the three bold segments represent that the corresponding literals are set to true.
The central 7-gon (blue) can be augmented to a subgraph of minimum degree five if and only if at least one literal is true.} \label{fig:clause} \end{figure} \begin{figure}[tb] \centering \includegraphics[width=0.9\textwidth]{MinDegClauseStates} \caption{ The three valid possibilities to augment the 7-gon in the clause gadget if one literal is true.} \label{fig:clausestates} \end{figure} \section{Conclusions} We study how many noncrossing straight-line matching edges can be drawn on top of a geometric graph $G$ without crossing or using the edges of $G$. From an algorithmic point of view, we show that it is hard to decide whether a perfect matching can be drawn on top of a polygon in this way. Our results on minimal maximal matchings show that a greedy algorithm will always draw at least $\frac{n}{7}$ edges on top of any $n$-vertex polygon. However, there are instances where it may draw no more than this number of edges, although larger compatible matchings exist. We are interested in how the function $\mathrm{mm}(G)$ (the size of a minimal maximal compatible matching of $G$) behaves among all geometric graphs $G$ on $n$ vertices and at most $dn$ edges for any value $d\in[0,3]$. Our results show that degree constraints (like $d$-regularity) help to determine $\mathrm{mm}(G)$ and also increase the value of $\mathrm{mm}(G)$ (compared to graphs of the same average degree). Indeed, we show that any $2$-regular graph has at least $(n-3)/11$ edges in any maximal compatible matching, while the construction in Lemma~\ref{lem:constructionMMM} shows that there is a geometric graph $G$ on $n$ vertices with $n$ edges and $\mathrm{mm}(G)\leq\lceil (n+3)/13\rceil$. We do not know whether there is a family of such geometric graphs with values of~$\mathrm{mm}(G)$ (asymptotically) even smaller than $n/13$. It is also not clear for which graphs $\mathrm{mm}(G)$ is maximized. For some drawings of empty graphs $G$ we have~$\mathrm{mm}(G)=\lceil\frac{n}{3}\rceil$. Is this the (asymptotically) largest possible value? \paragraph*{\bf Acknowledgments.} This work was initiated during the 15th European Research Week on Geometric Graphs (GG Week 2018) in Pritzhagen, Germany. We thank Kevin Buchin, Michael Hoffmann, Wolfgang Mulzer and Nadja Seiferth for helpful discussions. \bibliographystyle{splncs04}
\section{Additional Details} \label{app:details} In this appendix, we provide extra details on the low-energy excitations, followed by a mean-field analysis of the superconducting states, starting from the lattice model in \eqnref{eq:H}. We consider the low-density and strongly interacting limit: \begin{equation} n-1 = \delta \ll 1 \quad\text{and}\quad t_{ij} \ll V_2 \end{equation} where $V_n$ is the Coulomb repulsion between $n$-th nearest-neighbors (see \figref{fig:neighbors}). We also assume that the spins are fully polarized, e.g. by an external magnetic field. \subsection{Electrostatics} \label{app:electrostatics} \begin{figure} \subfloat[]{\includegraphics[width=.79\columnwidth]{excitationEd}} \hspace{.02\columnwidth} \subfloat[]{\raisebox{2cm}{\includegraphics[width=.17\columnwidth]{3e}}} \caption{% (a) A detailed excitation phase diagram of the lowest energy-per-charge ($E/q$) excitation. The red dots mark estimates for a slightly twisted WSe$_2$/WS$_2$ inside a dielectric environment with permittivity $\epsilon=3$ and for different distances $d$ to metallic gates. In the ``other'' region, other excitations have the smallest $E/q$, such as the charge-$3e$ excitation shown in (b). The bottom-left region (white with red stripes) is inaccessible (when $V_{n\geq3}=0$) as it would require $\Delta<0$. The dashed lines are drawn for $V_{n\geq3}=0$. See \appref{app:electrostatics} for more details. }\label{fig:detailedExcitations} \end{figure} \begin{figure} \includegraphics[width=.6\columnwidth]{neighbors} \caption{% $n^\text{th}$ nearest-neighbors on a honeycomb lattice. Throughout this work, $V_n$ denotes the Coulomb repulsion between $n^\text{th}$ nearest-neighbors. We sometimes differentiate between $V_n^{AA}$ and $V_n^{BB}$ for repulsions between a pair of sites on the $A$ or $B$ sublattice. $n^\text{th}$ nearest-neighbors are separated by $a_n$ with $a_{n=1,2,3,4,5}/a_1 = 1, \sqrt{3}, 2, \sqrt{7}, 3$. $L_\text{M} = a_2$ is the moir\'e period. }\label{fig:neighbors} \end{figure} \begin{figure} \subfloat[\label{fig:dipoleCrystal}]{\includegraphics[width=.45\columnwidth]{dipoleCrystal}} \hspace{.05\columnwidth} \subfloat[\label{fig:snake}]{\includegraphics[width=.45\columnwidth]{snake}} \caption{% Possible instabilities of the charge transfer insulator. (a) A Wigner crystal of dipoles, which is a possible state when the dipole energy is negative, $E_d<0$. (b) The charge density wave that occurs in the CDW region (white) of \figref{fig:detailedExcitations} when $V_{n\geq4}=0$ and $V_3 \ll V_2$. }\label{fig:CDW1} \end{figure} In \figref{fig:excitation} of the main text, we showed which of the following three excitations has the smallest energy per charge, $E/q$: \begin{align} \text{hole ($q=1$):} & & E_{-e} & \label{eq:Eq}\\ \text{trimer ($q=2$):} & & E_t &= 2E_e + E_d - 2V_1 + 3V_2^{BB} \nonumber\\ \begin{array}{c} \text{electronic} \\ \text{polaron} \end{array} (q=1): & & E_{ep} &= E_e + E_d - V_1 + V_2^{BB} \nonumber \end{align} Notably, determining which of these three excitations has the smallest $E/q$ only requires knowing $E_d/V_1$ and $V_2^{BB}/V_1$; $\Delta$ and all $V_n$ are effectively absorbed into these two ratios. It is remarkable that charge-$2e$ pairing can occur from purely repulsive interactions. The nontrivial charge-transfer insulating background is essential for the trimer stability; a trimer is not stable in vacuum \cite{tetron}. As an aid to intuition, we give an even simpler example of how this can occur in a finite system in \appref{app:cluster}.
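To illustrate why only these two ratios matter, consider the trimer--polaron comparison: the electron energy $E_e$ drops out of the difference of the energies per charge. A minimal Python sketch (ours; it encodes only the two expressions above) follows; the analogous comparisons involving the hole energy $E_{-e}$ reduce to the same two ratios, as stated above.
\begin{verbatim}
def trimer_minus_polaron_per_charge(Ed, V1, V2BB):
    # E_t/2 - E_ep = -E_d/2 + V_2^{BB}/2: the electron energy E_e cancels,
    # so this comparison depends only on E_d/V_1 and V_2^{BB}/V_1.
    E_t_over_2 = (Ed - 2.0 * V1 + 3.0 * V2BB) / 2.0   # + E_e (dropped)
    E_ep = Ed - V1 + V2BB                              # + E_e (dropped)
    return E_t_over_2 - E_ep                           # < 0: trimer favored

print(trimer_minus_polaron_per_charge(Ed=1.2, V1=1.0, V2BB=0.5))
\end{verbatim}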
In \figref{fig:detailedExcitations}, we show a more detailed diagram of the smallest-$E/q$ excitation when arbitrary excitations are considered. We also check for instabilities (shown in \figref{fig:CDW1}) of the charge transfer insulator, which occur in the ``CDW'' region of \figref{fig:detailedExcitations} and when $E_d<0$. The energies of the other excitations and of these instabilities depend on more than just $E_d/V_1$ and $V_2^{BB}/V_1$. Therefore, we show the locations of these other excitations and instabilities for the simple case when $V_{n\geq3}=0$. In \figref{fig:detailedExcitations}, dashed lines are used to depict boundaries that assume $V_{n\geq3}=0$. \begin{table} \subfloat[$d = 4\text{nm}$]{\begin{tabular}{c|cccccc} $n$ & 0 & 1 & 2 & 3 \\\hline $V_n^{AA}$ & 3.7769 & & 0.2292 & \\ $V_n^{AB}$ & & 0.9479 & & 0.1340 \\ $V_n^{BB}$ & 2.9828 & & 0.2472 & \end{tabular}} \\ \subfloat[$d = 7\text{nm}$]{\begin{tabular}{c|cccccc} $n$ & 0 & 1 & 2 & 3 \\\hline $V_n^{AA}$ & 4.2407 & & 0.4599 & \\ $V_n^{AB}$ & & 1.2998 & & 0.3239 \\ $V_n^{BB}$ & 3.4132 & & 0.4780 & \end{tabular}} \\ \caption{% The values of $V_n$ (in units of $\frac{e^2}{\epsilon L_\text{M}} = \frac{205.7}{\epsilon}\text{meV}$ with $L_\text{M} = 7\text{nm}$) used to estimate the location (denoted by red dots in \figref{fig:detailedExcitations}) of slightly twisted WSe$_2$/WS$_2$ in the excitation phase diagram. }\label{tab:Vn} \end{table} We also estimate where in the phase diagram a slightly twisted WSe$_2$/WS$_2$ with a moir\'e period $L_\text{M} = 7\text{nm}$ could lie, which we show in \figref{fig:detailedExcitations} using red dots. The locations are calculated using $\Delta = 14.9\text{meV}$ and the values of $V_n$ shown in \tabref{tab:Vn}. $\Delta$ and $V_n$ were calculated using Wannier orbitals and a Coulomb interaction $V(r)$ that is screened by a pair of metallic gates, each a distance $\pm d$ from the TMD heterobilayer. \cite{LiuZhandPrivate} If the gates are modeled as perfect conductors, the screened Coulomb interaction can be calculated using the method of image charges, yielding\footnote{% A single parallel conductor results in \mbox{$V(r) \propto \frac{1}{r} - 1/\sqrt{r^2 + (2d)^2}$}.} \begin{equation} V(r) = \frac{e^2}{\epsilon} \sum_{z \in \mathbb{Z}} \, \frac{(-1)^z}{\sqrt{r^2 + (2dz)^2}} \label{eq:2gate} \end{equation} When $r \gg d$, $V(r)$ decays exponentially. In \figref{fig:BerkeleyData}, we point out possible experimental evidence for insulating pair density waves of trimers in recently observed resistivity peaks \cite{BerkeleyTMD}. See also \refcite{ShanTMD} for evidence for additional charge orders. \begin{figure} \includegraphics[width=\columnwidth]{BerkeleyData} \caption{% Resistance of the Berkeley group's WSe$_2$/WS$_2$ moir\'e device as a function of gate voltage, which determines the electron filling, $n$. The figure is copied from \refcite{BerkeleyTMD}. We add vertical lines to indicate various Wigner crystal fillings. The blue line at filling $n = 9/7$ is particularly interesting because it shows a large resistance peak at the same filling as the pair density wave in \figref{fig:9_7_pairCrystal}. Energetically favorable charge density waves at filling $n=3/2$ are shown in \figref{fig:CDW 1_2}. }\label{fig:BerkeleyData} \end{figure}
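As an aside, the alternating image-charge sum in Eq.~\eqref{eq:2gate} converges rapidly and is simple to evaluate numerically. The following Python sketch (the truncation order and sample values are our own choices) also exhibits the exponential suppression for $r \gg d$.
\begin{verbatim}
import math

def screened_coulomb(r, d, eps=1.0, zmax=200):
    # Eq. (2gate) in units of e^2: alternating image-charge sum,
    # truncated at |z| <= zmax (the +z and -z terms are equal).
    total = 1.0 / r                       # z = 0: bare Coulomb term
    for z in range(1, zmax + 1):
        total += 2.0 * (-1) ** z / math.sqrt(r**2 + (2.0 * d * z) ** 2)
    return total / eps

for r in (1.0, 4.0, 8.0, 16.0):           # with d = 4, lengths in nm
    print(r, screened_coulomb(r, d=4.0))
\end{verbatim}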
\begin{figure} \subfloat[$\delta = 1/2$]{\includegraphics[width=.45\columnwidth]{3_2_CDW}} \hspace{.05\columnwidth} \subfloat[$\delta = 1/2$]{\includegraphics[width=.45\columnwidth]{3_2_pairCrystal}} \\ \caption{% A charge density wave state at doping $\delta=1/2$ that is typical of the (a) hole and (b) trimer regimes of the excitation phase diagram in \figref{fig:detailedExcitations}. }\label{fig:CDW 1_2} \end{figure} \subsection{Mean Field Theory of Superconductivity} \label{app:MFT} Here, we study the mean-field theory of trimer superconductivity. Suppose that we are near the edge of the trimer region of the phase diagram, so that the trimer binding energy, $\epsilon_b$ [\eqnref{eq:Eb}], is small: \begin{equation} 0 < \epsilon_b \ll V_2 \end{equation} We will also assume that excitations other than the charge-$e$ hole and charge-$2e$ trimers (such as the dipole and electronic polaron) have a large energy cost $\sim V_2$, so that they can be ignored. The low-energy Hamiltonian thus consists of only the mobile holes $c_{\bf k}$ on the cations and bosonic trimers $b_a$ centered on the anions: \begin{align} H_\text{eff} &= \sum_{\bf k} \epsilon({\bf k}) c^\dagger_{\bf k} c_{\bf k} + (\epsilon_0 - 2\mu) \sum_a b_a^\dagger b_a \nonumber\\ &- \frac{\tilde{g}({\bf k-k'})}{2\sqrt{N}} \sum_{\bf k,k'} (b_{\bf k+k'}^\dagger c_{\bf k} c_{\bf k'} + h.c.) + \cdots \label{eq:HeffApp} \end{align} where $b_{\bf k}^\dagger = N^{-1/2} \sum_a e^{i {\bf k}\cdot a} b_a^\dagger$ and $N$ is the number of anion sites. The first term gives the dispersion of the holes, which hop on the triangular lattice of cations. At low density~($\delta \ll 1$), $\epsilon({\bf k})$ can be expanded about its band minima $\pm {\bf K} = (0,\pm\frac{4\pi}{3 L_\text{M}})$: \begin{equation} \epsilon({\bf k \pm K}) = \frac{1}{2m} k^2 - \mu + O(k^3) \label{eq:epsilon} \end{equation} $m \approx 2L_\text{M}^2/(27t_2)$, where $t_n$ is the hopping energy between $n$-th nearest-neighbors. We have shifted $\epsilon({\bf k})$ so that $\epsilon(\pm {\bf K}) = 0$. The second term sets the energy cost of trimers. $b^\dagger_a$ creates a trimer centered at the anion $a$, and $b^\dagger_{\bf k}$ creates a trimer with momentum $\bf k$. $\epsilon_0 \approx 6t_2 - \epsilon_b$ is the energy difference between a trimer and two holes at the band minima $\pm {\bf K}$. The last term accounts for the conversion between a trimer and a pair of holes. The triangular symmetry of the cation sublattice constrains the conversion amplitude to the following form: $\tilde{g}({\bf q}) = \tilde{g} \sum_{i=1}^3 \frac{2}{3\sqrt{3}} \sin({\bf q} \cdot {\bf r}_i) + \cdots$ where ``$\cdots$'' denotes higher-order moments and ${\bf r}_i = \sqrt{3} \sin(\frac{2\pi}{3} i) \hat{\bf x} - \sqrt{3} \cos(\frac{2\pi}{3} i) \hat{\bf y}$ are second-nearest-neighbor displacements. The normalization is such that $\tilde{g}({\bf K}) = - \tilde{g}(- {\bf K}) = \tilde{g}(-2{\bf K}) = \tilde{g}$.
$\tilde{g}$ can be calculated perturbatively; the leading contribution comes from the process shown in \figref{fig:resonance} and has amplitude $\tilde{g} \sim t_1 t_2/V_2$.\footnote{% The perturbative approximation $\tilde{g} \sim t_1 t_2/V_2$ formally also requires assuming $V_4 \ll V_2$ so that $V_4$ only results in a perturbative correction to the final state energy in \figref{fig:resonance}.} Expanding $H_\text{eff}$ about the band minima $\bf \pm K$ leads to \begin{align} H_\text{eff} &\approx \sum_{\bf k, \pm} \epsilon_k c^\dagger_{\bf k, \pm} c_{\bf k, \pm} + (\epsilon_0 - 2\mu) \sum_a b_a^\dagger b_a \nonumber\\ &- \frac{\tilde{g}}{\sqrt{N}} \sum_{\bf k,k'} (b_{\bf k+k'}^\dagger c_{\bf k, -} c_{\bf k', +} + h.c.) + \cdots \label{eq:HeffApp2} \end{align} where $c_{{\bf k},\pm} \sim c_{\bf k \pm K}$ denotes the new fermion operators expanded about $\bf \pm K$ and $\epsilon_k = \frac{1}{2m} k^2 - \mu$ is the dispersion. The ``$\cdots$'' in $H_\text{eff}$ denotes other terms that could be included in $H_\text{eff}$. We will ignore these terms in the following mean-field analysis because we do not expect them to be relevant in the resonantly-paired superconductivity regime of interest. To justify this, consider two potentially important kinds of terms that we are omitting. The first is a trimer kinetic energy term, $-t_t \sum_{a'a} b_{a'}^\dagger b_a$, where $t_t \sim t^3/V^2$ is the trimer hopping energy, resulting from the perturbative process shown in \figref{fig:BEC}. However, we expect that near resonance this term is negligible compared to the effective boson dispersion resulting from the coupling $\tilde{g}$ to the fermions. The second kind consists of 4-fermion interactions, such as $V_{ij} n_i n_j$. However, we will soon see [from \eqnref{eq:Delta resonant}] that near resonance, the fermion and boson operators scale as $c \sim b \sim O(\sqrt{\delta})$ at low density $\delta$. Therefore the first term in $H_\text{eff}$ is $O(t \delta)$; the third term contributes $O(\delta^{3/2} \tilde{g}) \sim O(\delta^{3/2} t^2/V_2)$; and a 4-fermion interaction would contribute $O(\delta^2 V)$. Thus, we expect that the 4-fermion interaction is negligible when $\delta^2 V \ll \delta^{3/2} t^2/V$, i.e. at sufficiently low doping $\delta \ll (t/V)^4$. To make a connection to $H_\delta$ in \eqnref{eq:Hdelta} of the main text, note that $\psi^\dagger \sim L_\text{M}^{-1} c^\dagger$ and $\phi^\dagger \sim L_\text{M}^{-1} b^\dagger$. Then $g$ in $H_\delta$ and $\tilde{g}$ in $H_\text{eff}$ [\eqnref{eq:HeffApp2}] are related by $g \sim L_\text{M} \tilde{g}$. In the following, we will use lattice units where the distance between nearest-neighbor sites is $1$, so that the distance between next-nearest-neighbors is $L_\text{M} = \sqrt{3}$.
To make analytical progress, we consider the following mean-field approximation: \begin{gather} \begin{aligned} b_a^\dagger b_a &= b_a^\dagger \langle b_a \rangle + \langle b_a^\dagger \rangle b_a - \langle b_a^\dagger \rangle \langle b_a \rangle + (b_a^\dagger - \langle b_a^\dagger \rangle) (b_a - \langle b_a \rangle) \\ &\approx b_a^\dagger \langle b_a \rangle + \langle b_a^\dagger \rangle b_a - \langle b_a^\dagger \rangle \langle b_a \rangle \end{aligned} \\ b_{\bf k+k'}^\dagger c_{\bf k,-} c_{\bf k',+} \approx \langle b_{\bf k+k'}^\dagger \rangle c_{\bf k,-} c_{\bf k',+} \end{gather} With this approximation, the low-energy Hamiltonian becomes quadratic: \begin{gather} \begin{aligned} H_\text{MF} =& \sum_{\bf k} \begin{pmatrix} c_{\bf +k,+} \\ c_{-\bf k,-}^\dagger \end{pmatrix}^\dagger \begin{pmatrix} +\epsilon_k & -\Delta_b \\ -\Delta_b & -\epsilon_k \end{pmatrix} \begin{pmatrix} c_{\bf +k,+} \\ c_{-\bf k,-}^\dagger \end{pmatrix} \\ &\;\;\;+ \frac{\epsilon_0 - 2\mu}{\tilde{g}^2} \Delta_b^2 \end{aligned} \\ \Delta_b = \tilde{g} \langle b_a \rangle \nonumber \end{gather} $\Delta_b$ is the superconducting order parameter; we assume $\Delta_b > 0$ without loss of generality. The ground state energy density is \begin{align} \frac{E_\text{MF}}{N} &= - \int_E D(E) \sqrt{E^2 + \Delta_b^2} + \frac{\epsilon_0 - 2\mu}{\tilde{g}^2} \Delta_b^2 \\ D(E) &= \int_{\bf k} \delta(E - \epsilon_k) = \begin{cases} 2\pi m & -\mu < E < W \\ 0 & \text{otherwise} \end{cases} \label{eq:DoS} \end{align} where $D(E)$ is the density of single-particle states, and $\int_{\bf k} = \int \frac{\mathrm{d}^2{\bf k}}{(2\pi)^2} \Theta(W - \epsilon_k)$ integrates over momentum states with energy $\epsilon_k < W$. $W$ is a UV cutoff, which can be taken to be $W = (2\pi m)^{-1} - \mu \approx (2\pi m)^{-1}$ so that $\int_E D(E) = 1$; this is roughly equal to the bandwidth $9t_2 \approx 2m^{-1}$. Evaluating the integral yields \begin{align} \frac{E_\text{MF}}{N} = -\pi m &\Bigg[ W \sqrt{W^2 + \Delta_b^2}+ \Delta_b^2 \log\!\left(\!W + \sqrt{W^2 + \Delta_b^2}\right) \nonumber\\ & \!\!\!+\mu\; \sqrt{\,\mu^2\, + \, \Delta_b^2} + \Delta_b^2 \log\!\left(\mu\, +\, \sqrt{\,\mu^2\, +\, \Delta_b^2}\right) \nonumber\\ & \!\!\!- 2\Delta_b^2 \log\Delta_b \Bigg] + \frac{\epsilon_0 - 2\mu}{\tilde{g}^2} \Delta_b^2 \label{eq:MF E} \end{align} The superconducting order parameter $\Delta_b$ can be calculated by minimizing the energy as a function of $\Delta_b$, which yields \begin{equation} \Delta_b = \dfrac{\sqrt{W^2 + \mu^2 + 2W\mu \cosh\frac{\epsilon_0 - 2\mu}{\pi m \tilde{g}^2}}}{\sinh\frac{\epsilon_0 - 2\mu}{\pi m \tilde{g}^2}} \end{equation} $\Delta_b$ depends strongly on the chemical potential $\mu$, which can be obtained from the filling constraint: \begin{align} \delta &= \delta_c + 2\delta_b \nonumber \\ \delta_c &= \langle c_i^\dagger c_i \rangle = 4\pi m \mu \label{eq:delta MF}\\ \delta_b &= \langle b_a^\dagger b_a \rangle \approx \frac{\Delta_b^2}{\tilde{g}^2} \nonumber \end{align} $\delta_c$ is the density of holes. $\delta_b$ is the density of bosonic trimers, which we approximate at the mean-field level: $\langle b_a^\dagger b_a \rangle \approx \langle b_a^\dagger \rangle \langle b_a \rangle = \Delta_b^2/\tilde{g}^2$. There are two regimes: (1) BCS superconductivity when $\frac{\epsilon_0 - 2\mu}{\pi m \tilde{g}^2} \gg 1$, and (2) resonantly-paired superconductivity when $\epsilon_0 \approx 2\mu$.
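These two regimes can be made concrete by solving the gap and filling equations numerically. The Python sketch below is illustrative only: the parameter values are our own choices, and we assume (as holds for these parameters) that the total filling increases monotonically with $\mu$, so that $\mu$ can be found by bisection.
\begin{verbatim}
import math

def gap(mu, m, g, eps0, W):
    # Closed-form mean-field order parameter Delta_b(mu) from the text.
    x = (eps0 - 2.0 * mu) / (math.pi * m * g**2)
    return math.sqrt(W**2 + mu**2 + 2.0 * W * mu * math.cosh(x)) / math.sinh(x)

def filling(mu, m, g, eps0, W):
    # delta = delta_c + 2*delta_b, with delta_c = 4*pi*m*mu and
    # delta_b ~ (Delta_b / g)^2, following Eq. (delta MF).
    return 4.0 * math.pi * m * mu + 2.0 * (gap(mu, m, g, eps0, W) / g) ** 2

def solve_mu(delta, m, g, eps0, W):
    # Bisection on (0, eps0/2); filling diverges as mu -> eps0/2.
    lo, hi = 1e-12, 0.5 * eps0 - 1e-9
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if filling(mid, m, g, eps0, W) < delta:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

m, g, eps0 = 1.0, 0.01, 0.02      # illustrative (non-physical) values
W = 1.0 / (2.0 * math.pi * m)     # UV cutoff chosen as in the text
for delta in (0.01, 0.2):         # BCS-like vs resonantly-paired filling
    mu = solve_mu(delta, m, g, eps0, W)
    print(delta, mu, gap(mu, m, g, eps0, W))
\end{verbatim}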
\subsubsection{BCS Superconductivity Regime} When $\frac{\epsilon_0 - 2\mu}{\pi m \tilde{g}^2} \gg 1$, the order parameter can be approximated as \begin{equation} \Delta_b \approx 2\sqrt{W\mu} \, \exp\!\left(-\frac{\epsilon_0 - 2\mu}{2\pi m \tilde{g}^2}\right) \label{eq:Delta weak} \end{equation} and a BCS superconductivity regime occurs, in which $\Delta_b$ is exponentially small.\footnote{% The $\sqrt{W\mu}$ prefactor in $\Delta_b$ [in \eqnref{eq:Delta weak}] comes from the limits of integration ($-\mu$ to $W$) in \eqnref{eq:DoS}. \eqnref{eq:Delta weak} is only valid when $\mu>0$. If $\mu=0$, then note that the ground state energy in the $\mu=0$ limit is equivalent to the energy in the $\mu=W$ limit if the mass is halved; i.e. $E_\text{MF}|_{\mu=0} = E_\text{MF}|_{\mu=W}^{m\to m/2}$ in \eqnref{eq:MF E}. Therefore if $\mu=0$, then $\Delta_b \approx 2W \exp\!\left(-\frac{\epsilon_0 - 2\mu}{\pi m \tilde{g}^2}\right)$ [by replacing $\mu\to W$ and $m \to m/2$ in \eqnref{eq:Delta weak}], which is significantly smaller than the expression in \eqnref{eq:Delta weak} when $0 < \mu$ and $\frac{\epsilon_0 - 2\mu}{\pi m \tilde{g}^2} \gg 1$ due to the missing factor of $\frac{1}{2}$ in the exponent.} As a result, the boson density is very small ($\delta_b \ll \delta$), which allows us to approximately solve for the chemical potential from \eqnref{eq:delta MF}: \begin{equation} \mu \approx \frac{\delta}{4\pi m} \end{equation} This regime is very similar to BCS superconductivity. This can be understood by integrating out the boson to obtain a 4-fermion interaction $\tilde{g}' c_+^\dagger c_-^\dagger c_- c_+$ with $\tilde{g}' \sim \frac{\tilde{g}^2}{\epsilon_0 - 2\mu}$. In terms of $\tilde{g}'$, the order parameter $\Delta_b$ scales in exactly the same way as the BCS order parameter (in two spatial dimensions): $\Delta_b \sim \Delta_\text{BCS} \sim \sqrt{W\mu} e^{-1/D \tilde{g}'}$, where $D = 2\pi m$ is the density of states from \eqnref{eq:DoS}. Note that in this regime, the boson density is very small, so the $\tilde{g}$ coupling term in $H_\text{eff}$ [\eqnref{eq:HeffApp2}] contributes very little to the energy. Therefore, the terms in the ``$\cdots$'' of $H_\text{eff}$ are likely to play an important role and possibly result in other kinds of symmetry breaking. So although BCS-like superconductivity results when the ``$\cdots$'' terms are dropped, a more detailed analysis is needed to determine the true ground state in this regime when the ``$\cdots$'' terms are included. \subsubsection{Resonantly-Paired Superconductivity Regime} The boson density diverges as the chemical potential approaches its maximum value: $\mu \to \epsilon_0/2$. Approximating $\mu \approx \epsilon_0/2$ allows us to solve for the boson density $\delta_b$ in \eqnref{eq:delta MF}, which can be used to express the order parameter:\footnote{% A similar equation for the boson density in three spatial dimensions appears in Eq.\,(6.8) of \refcite{GurarieRadzihovskyFeshbach}.} \begin{align} \Delta_b &\approx \tilde{g} \sqrt{\delta_b} \label{eq:Delta resonant}\\ \delta_b &\approx \frac{1}{2} \delta - \pi m \epsilon_0 \nonumber \end{align} Due to the significantly larger boson density, the order parameter $\Delta_b \sim \tilde{g}$ is much larger in this resonantly-paired regime than in the BCS regime, where $\Delta_b \sim e^{-1/\tilde{g}^2}$ [\eqnref{eq:Delta weak}].
\newpage \section{Valence Skipping in a 4-site Cluster} \label{app:cluster} A toy model for trimer stability is obtained by considering a 4-site cluster \cite{4cluster} of the Hamiltonian $H_0$ in \eqnref{eq:H} at a chemical potential $\mu$: \begin{equation} \begin{split} \includegraphics{4siteModel} \end{split} \quad \begin{split} H_4 &= \Delta \, (n_2 + n_3 + n_4) \\ &+ V_1 \, n_1 (n_2 + n_3 + n_4) \\ &+ V_2 \, (n_2 n_3 + n_3 n_4 + n_4 n_2) \\ &- \mu \, (n_1 + n_2 + n_3 + n_4) \end{split} \label{eq:H4} \end{equation} Each site has either 0 or 1 fermions, $n_i = 0,1$, which is physically relevant when a large on-site Hubbard interaction prevents double occupancy. The ground state phase diagram of the 4-site cluster is shown in \figref{fig:4site}. The ``no trimer'' region is analogous to a charge transfer insulator, while the ``trimer'' region is analogous to a trimer excitation of the charge transfer insulator. \begin{figure} \includegraphics[width=\columnwidth]{4sitePhaseDiagram} \caption{% Ground states of the 4-site cluster model in \eqnref{eq:H4} with $\Delta = V_1$. Valence skipping (where the charge of the ground state jumps by two) \cite{VarmaMissingValence} occurs along the thick line. Black and red dots denote filled orbitals with $n_i=1$. }\label{fig:4site} \end{figure} The ground states are easiest to understand in an ideal limit where $V_2=0$, $\Delta=V_1$, and $\mu=\frac{3}{2} V_1$. Then $H_4 = V_1 \, (n_1 - \frac{1}{2})(n_2 + n_3 + n_4 - \frac{3}{2})$ up to an additive constant, and it is simple to see that the ``no trimer'' and ``trimer'' states are degenerate ground states.
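Since the cluster has only $2^4=16$ occupation patterns, the phase diagram in \figref{fig:4site} can be reproduced by brute-force enumeration. A small Python sketch (ours) follows; in the ideal limit it indeed returns the degenerate pair $(1,0,0,0)$ and $(0,1,1,1)$, whose charges differ by two.
\begin{verbatim}
from itertools import product

def ground_states(V1, V2, Delta, mu, tol=1e-12):
    # Enumerate all 16 patterns (n1, n2, n3, n4) of H_4 and return the
    # minimum energy together with the configurations attaining it.
    best, argbest = float("inf"), []
    for n1, n2, n3, n4 in product((0, 1), repeat=4):
        E = (Delta * (n2 + n3 + n4)
             + V1 * n1 * (n2 + n3 + n4)
             + V2 * (n2 * n3 + n3 * n4 + n4 * n2)
             - mu * (n1 + n2 + n3 + n4))
        if E < best - tol:
            best, argbest = E, [(n1, n2, n3, n4)]
        elif abs(E - best) <= tol:
            argbest.append((n1, n2, n3, n4))
    return best, argbest

# Ideal limit V2 = 0, Delta = V1, mu = 3*V1/2:
print(ground_states(V1=1.0, V2=0.0, Delta=1.0, mu=1.5))
\end{verbatim}
\end{document}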
\section{Introduction} \label{sec:intro} The post-Newtonian (PN) approximation has played a crucial role in the data analysis of the recent discovery by the LIGO-Virgo detectors of gravitational waves (GW) generated by the coalescence of two neutron stars~\cite{GW170817}. It also constitutes an important input for validating the early inspiral phase of the coalescence of two black holes~\cite{GW150914, LIGOrun1}. In this paper, motivated by the need of improving the accuracy of template waveforms generated by inspiralling compact binaries (neutron stars or black holes), notably in view of the future LISA mission, we tackle the computation of the mass-type quadrupole moment of compact binary systems (without spins) at the high 4PN approximation level.\footnote{Here 4PN refers to the terms of order $1/c^8$ in the waveform and energy flux, \textit{i.e.} beyond the Einstein quadrupole formula. Seen as a small dissipative radiation-reaction effect, this corresponds to the order 6.5PN or $1/c^{13}$ in the equations of motion, beyond the Newtonian acceleration.} This calculation is part of our current program to obtain the orbital phase evolution and waveform of inspiralling binaries up to the 4.5PN approximation. So far, the main results achieved in this program are:\footnote{See Ref.~\cite{BlanchetLR} for a review of previous (pre-4PN) works.} \begin{enumerate} \item The non-linear effects in the GW propagation (so-called tails-of-tails-of-tails) that are responsible for the 4.5PN coefficient in the energy flux for circular orbits~\cite{MBF16}. This result has been confirmed by an independent PN reexpansion of resummed waveforms~\cite{MNagar17}; \item The conservative part of the equations of motion up to the 4PN order, obtained from the Fokker Lagrangian in harmonic coordinates~\cite{BBBFMa, BBBFMb, BBBFMc, MBBF17, BBFM17}. This result has also been obtained by means of the canonical Hamiltonian formalism of general relativity~\cite{JaraS13, JaraS15, DJS14, DJS15eob, DJS16}, and by the effective field theory (EFT)~\cite{GR06,FS4PN, FStail, GLPR16,Po16, FS19, FPRS19, Blumlein20}. \end{enumerate} The next steps are the computation of the multipole moments of the compact binary source to consistent order. To control the GW energy flux at the 4PN order, we essentially need the mass-type quadrupole moment at the 4PN order (it is currently known at the 3.5PN order~\cite{BIJ02, BI04mult, BFIS08, FMBI12}) and the current-type quadrupole moment at the 3PN order; indeed, the other moments are already known, see for instance~\cite{BFIS08}. In particular, the mass-type octupole moment at the 3PN order has been computed in~\cite{FBI15}. To control the waveform and polarisation modes at the 4PN order, we need more precision; for instance, the current quadrupole and mass octupole moments would be needed up to the 3.5PN order. The multipole moments are defined within the so-called MPM-PN formalism, which consists in computing the external field of the source by means of a multipolar-post-Minkowskian (MPM) expansion~\cite{BD86, B87, BD92, BDI95}, which is matched to the inner field obtained by direct PN iteration~\cite{B95, B98mult, PB02, BFN05}. Two alternative approaches that are also able to compute multipole moments at high PN orders are the direct integration of the relaxed field equations (DIRE)~\cite{WW96} and the EFT~\cite{LMRY19}, both being currently developed at the 2PN order. See also Ref.~\cite{compere18} for a discussion of various alternative definitions of multipole moments for radiating space-times.
The present paper is devoted to the (long and demanding) computation of the mass quadrupole moment of compact binary systems at the 4PN order. As in the problem of the equations of motion, a very important aspect of the calculation is the proper use of regularizations. At the 3PN order, it was found that ultra-violet (UV) divergences occur, due to the model of point particles, and have to be dealt with by means of dimensional regularization (DR)~\cite{DJSdim, BDEI04}. At the 4PN order in the equations of motion, not only are there UV divergences, but also infra-red (IR) ones, linked to the presence of tails at the 4PN order, and those must also be cured by means of DR~\cite{BBBFMc}. In particular, it was shown that DR completely resolves the problem of ambiguity parameters in the 4PN equations of motion, including the ones associated with IR divergences~\cite{MBBF17}. Such a feature of DR was also pointed out in the EFT approach~\cite{PR17}, and used to obtain equivalent ambiguity-free results~\cite{FS19, FPRS19, Blumlein20}. In the present computation we shall use DR for the UV divergences but leave open the problem of the IR divergences. Indeed, this problem requires a separate analysis and is independent of the long calculations we perform in this paper. Therefore we shall continue to use, as in all our previous works~\cite{BlanchetLR}, the Hadamard partie finie procedure, consisting in using the natural regulator $r^B=\vert\mathbf{x}\vert^B$ in the multipole moments and selecting the finite part in the expansion when $B\to 0$ (thus throwing away the poles $1/B^n$). This specific procedure comes as a direct consequence of the matching between the MPM exterior field and the near zone PN metric in 3 dimensions~\cite{B95, B98mult, PB02, BFN05}. However, the recent work on the equations of motion showed that a combination of the Hadamard procedure together with DR is required in the presence of IR divergences. This is the so-called ``$\eta\varepsilon$'' regularization scheme introduced in Ref.~\cite{MBBF17}, which we prefer to rename here the ``$B\varepsilon$'' regularization, since the extra parameter $\eta$ (in addition to $\varepsilon=d-3$) is nothing but $B$ in the Hadamard partie finie process. In the $B\varepsilon$ regularization, the limit $B\to 0$ is considered first and shown to be finite (\textit{i.e.}, no poles $\propto 1/B$) for generic non-integer values of $\varepsilon$~\cite{MBBF17}. Then the limit $\varepsilon\to 0$ is applied, and this reduces to standard DR; in particular, the poles $\propto 1/\varepsilon$ are renormalized by appropriate shifts of the trajectories. Again, we postpone to future work the task of understanding whether the Hadamard partie finie IR regularization should be replaced by the ``$B\varepsilon$'' regularization. At the 4PN order for non-spinning black-hole binaries (and at the 2.5PN order for spinning black holes), the GW flux and phasing will require the inclusion of absorption effects by the black-hole horizons \cite{PS95,TMT97,Alvi01,Po08,Chatz12}. The MPM-PN formalism developed for point-particle binaries is not suitable for computing these effects. As usual, they have to be added to the results computed here. The plan of this paper is as follows. In Sec.~\ref{sec:mult} we recall from previous works the definitions of the general mass multipole moments (of order $\ell$) in $d$ dimensions.
In Sec.~\ref{sec:MQPot} we express the 4PN quadrupole moment in terms of elementary potentials, and find that all the terms can be computed, thanks in particular to the techniques of super-potentials and of surface integrals. The problem of the dimensional regularization of UV divergences (including distributional derivatives of singular functions) is dealt with in Sec.~\ref{sec:dimregUVgen}, and the computation of the various types of potentials and terms is done in Sec.~\ref{sec:ComputePot}. Crucial to this calculation is the application of certain UV shifts of the trajectories determined in previous works on the 4PN equations of motion. Finally we present in Sec.~\ref{sec:resultMQ} the 4PN mass quadrupole moment in the case of circular orbits. The complete expression of the 4PN metric in $d$ dimensions is relegated to Appendix~\ref{app:PNpotentials}; the shifts are given in Appendix~\ref{app:shift}; finally we give in Appendix~\ref{app:MQAsPot} the exhaustive list of all the terms composing the 4PN quadrupole moment in terms of elementary potentials. \section{The mass-type multipole moments} \label{sec:mult} We first provide the expression of the $\ell$-th order mass-type multipole moment of a general isolated source in 3 dimensions~\cite{B98mult}:\footnote{The notation is: $L = i_1\cdots i_\ell$ for a multi-index composed of $\ell$ multipolar indices $i_1, \cdots, i_\ell$; $iL = i i_1\cdots i_\ell$ for a multi-index with $\ell+1$ indices; $x_L = x_{i_1}\cdots x_{i_\ell}$ for the product of $\ell$ spatial vectors $x^i = x_i$. The symmetric-trace-free (STF) projection is denoted by $\hat{x}_L = \mathrm{STF}(x_{i_1}\cdots x_{i_\ell})$, or sometimes using brackets surrounding the indices, for instance $x_{\langle L}v_{P\rangle}$. Similarly, $\partial_L = \partial_{i_1}\cdots \partial_{i_\ell}$ for the product of $\ell$ partial derivatives $\partial_i=\partial/\partial x^i$, and $\hat{\partial}_L = \mathrm{STF}( \partial_{i_1}\cdots \partial_{i_\ell})$. In the case of summed-up (dummy) multi-indices $L$, we do not write the $\ell$ summations from 1 to 3 over their indices. The superscript $(n)$ denotes $n$ time derivatives, and an overbar indicates a PN-expanded quantity.} \begin{align} \label{ValueILGeneral} I_L(t)&= \mathop{\mathrm{FP}}_{B=0} \int \mathrm{d}^3\mathbf{x} \left(\frac{r}{r_0}\right)^B \int^1_{-1} \mathrm{d} z\biggl\{ \delta_\ell(z)\,\hat{x}_L\,\overline{\Sigma} -\frac{4(2\ell+1)}{c^2(\ell+1)(2\ell+3)} \,\delta_{\ell+1}(z) \,\hat{x}_{iL} \,\overline{\Sigma}^{(1)}_i \nonumber\\ & \qquad \qquad \qquad \qquad + \frac{2(2\ell+1)}{c^4(\ell+1)(\ell+2)(2\ell+5)}\,\delta_{\ell+2}(z) \,\hat{x}_{ijL} \,\overline{\Sigma}^{(2)}_{ij} \biggr\} (\mathbf{x}, t+z r/c) \,. \end{align} The source terms are defined from the PN expansion of the components of the pseudo stress-energy tensor of the matter system in harmonic coordinates, \textit{i.e.} $\overline{\tau}^{\mu\nu}$ where the overbar means the PN expansion, as (with $\overline{\tau}^{ii}=\delta_{ij}\overline{\tau}^{ij}$) \begin{equation}\label{Sigma} \overline{\Sigma} = \frac{\overline{\tau}^{00}+\overline\tau^{ii}}{c^2}\,,\qquad\overline{\Sigma}_i = \frac{\overline{\tau}^{0i}}{c}\,, \qquad\overline{\Sigma}_{ij} = \overline{\tau}^{ij}\,.
\end{equation} The pseudo stress-energy tensor is defined from the gauge-fixed Einstein field equations as \begin{equation}\label{EFE} \Box h^{\mu\nu}=\frac{16\pi G}{c^4}\tau^{\mu\nu}\,, \end{equation} where $\Box\equiv\Box_\eta$ is the flat d'Alembertian operator and the field variable is defined by $h^{\mu\nu} = \sqrt{-g}\,g^{\mu\nu}-\eta^{\mu\nu}$, with $g=\text{det}(g_{\rho\sigma})$ and $\eta^{\mu\nu}=\text{diag}(-1,1,1,1)$. It obeys the usual harmonic coordinates condition $\partial_\nu h^{\mu\nu}=0$. The pseudo tensor is composed of a matter part and a gravitational part that we denote as \begin{equation}\label{tau} \tau^{\mu\nu} = \vert g\vert T^{\mu\nu}+\frac{c^4}{16\pi G}\,\Lambda^{\mu\nu}\,, \end{equation} where $T^{\mu\nu}$ is the matter stress-energy tensor, and $\Lambda^{\mu\nu}$ represents the gravitational source term, which is a complicated non-linear, at least quadratic, functional of $h^{\rho\sigma}$ and its first and second space-time derivatives. An important feature of Eq.~\eqref{ValueILGeneral} is the presence of the Hadamard finite part (FP) when $B\rightarrow 0$, with the regularization factor $(r/r_0)^B$ where $r=\vert \mathbf{x} \vert$ and $r_0$ is an arbitrary length scale. The role of this finite part operation is to deal with the IR divergences of the multipole moments, which arise because the PN-expanded integrand is only valid in the near zone and typically diverges at spatial infinity (when $r\rightarrow +\infty$). This specific regularization of the multipole moments is actually imposed by the matching between the inner PN field and the outer MPM field, given the particular way the MPM metric is generated at each post-Minkowskian order~\cite{B98mult}. However, as mentioned in Sec.~\ref{sec:intro}, more work is required to investigate whether the IR divergences should be treated instead by a variant of DR (\textit{i.e.}, the $B\varepsilon$ regularization of Ref.~\cite{MBBF17}). Concerning the UV divergences, they may be cured using the Hadamard partie finie regularization up to the 2PN order. As is well known, this technique fails at the 3PN and 4PN orders, where DR has to be systematically used. In our practical calculation, we first perform an evaluation using the Hadamard partie finie for the UV divergences, and then systematically add the appropriate correction accounting for the DR. The functions $\overline{\Sigma}$, $\overline{\Sigma}_i$ and $\overline{\Sigma}_{ij}$ in the integrand of Eq.~\eqref{ValueILGeneral} are evaluated at the spatial point $\mathbf{x}$ and at time $t+z r/c$ where $r=\vert\mathbf{x}\vert$. In addition there is the extra integration variable $z$ entering the auxiliary function \begin{equation}\label{delta} \delta_\ell(z) = \frac{(2\ell+1)!!}{2^{\ell+1}\ell!}(1-z^2)^\ell\,, \qquad \int_{-1}^{1} \mathrm{d} z\,\delta_\ell(z) = 1\,. \end{equation} In fact, since we are going to compute the PN expansion of $I_L$, the integral over $z$ is given as an explicit asymptotic PN series, using the property of the functions $\delta_\ell(z)$ that \begin{subequations} \label{integralOverZ} \begin{align} \int_{-1}^1\mathrm{d} z \,\delta_\ell(z) \,\overline{\Sigma} (\mathbf{x}, t+ z r/c) &= \sum_{k=0}^{+\infty} \alpha_{k,\ell}\left(\frac{r}{c}\frac{\partial}{\partial t}\right)^{2k} \overline{\Sigma}(\mathbf{x}, t)\,,\\ \text{with}\quad\alpha_{k,\ell} &\equiv \frac{(2\ell+1)!!}{(2k)!! (2\ell + 2k +1)!!}\,.
\end{align} \end{subequations} Therefore, $I_L(t)$ is just a sum of time derivatives of 3-dimensional integrals of PN quantities depending on the pseudo stress-energy tensor $\overline{\tau}^{\mu\nu}(\mathbf{x}, t)$. In the case of a compact binary source we use the usual point-particle stress-energy tensor for the matter. We write the components of the matter tensor for two particles as \begin{equation}\label{Tmunu} T^{\mu\nu} = \mu_1 \,v_1^\mu v_1^\nu\,\delta(\mathbf{x} - \bm{y}_1) + 1 \leftrightarrow 2 \,, \end{equation} where $v_1^\mu=\mathrm{d} y_1^\mu/\mathrm{d} t$ is the coordinate velocity of the particle, $v_1^\mu = (c, v_1^i)$, and $\delta$ is the 3-dimensional Dirac function. We have (with $m_1$ the constant PN mass) \begin{equation}\label{mu} \mu_1(t) = \frac{1}{\sqrt{-(g)_1}}\frac{m_1}{\sqrt{-(g_{\mu\nu})_1 \frac{v_1^\mu v_1^\nu}{c^2}}} \,, \end{equation} where the index $1$ indicates that the metric has to be evaluated at the location $\mathbf{x} = \bm{y}_1$; in the self-gravitating case, self-field divergences are removed by means of DR. As we shall use DR for the UV divergences, we require the generalization of the expression of the multipole moments to $d$ dimensions. The formula for the general $\ell$-th mass-type multipole moment reads~\cite{BDEI04} \begin{eqnarray}\label{ILexpr2} I_L(t)&=&\frac{d-1}{2(d-2)}\mathop{\mathrm{FP}}_{B=0}\int \mathrm{d}^d \mathbf{x}\left(\frac{r}{r_0}\right)^B \biggl\{\hat{x}_L\,\overline{ \Sigma}_{[\ell]} \nonumber\\ &&\qquad\qquad -\frac{4(d+2\ell-2)} {c^2(d+\ell-2)(d+2\ell)}\,\hat{x}_{iL}\, \overline{\Sigma}^{(1)}_{i[\ell+1]} \nonumber\\ &&\qquad\qquad +\frac{2(d+2\ell-2)} {c^4(d+\ell-1)(d+\ell-2)(d+2\ell+2)} \,\hat{x}_{ijL}\, \overline{\Sigma}^{(2)}_{ij[\ell+2]} \nonumber\\ &&\qquad\qquad - \frac{4(d-3)(d+2\ell-2)}{c^2(d-1)(d+\ell-2)(d+2\ell)} B \,\hat{x}_{iL}\,\frac{x_j}{r^2} \,\overline{\Sigma}_{ij[\ell+1]} \biggr\}(\mathbf{x},t)\,. \end{eqnarray} The overall $d$-dependent factor in front is such that~\eqref{ILexpr2} reduces to the usual Newtonian-looking expression of the multipole moments in the Newtonian approximation, given by $I_L = m_1 \,\hat{y}_1^{L} + 1 \leftrightarrow 2 + \mathcal{O}(c^{-2})$. The last term of~\eqref{ILexpr2} will in fact not contribute, because of the $B$ and $d-3$ factors appearing simultaneously. To see this, one splits the integral into a near-zone contribution $r<\mathcal{R}$ and a far-zone one $r>\mathcal{R}$. In the near-zone integral one can directly apply the limit $B\to 0$, since there are no IR divergences (hence no poles $\propto 1/B$), so that it vanishes because of the explicit factor $B$; the far-zone integral, which is UV finite, can be computed with $d=3$ and vanishes because of the explicit factor $d-3$. Both contributions are thus separately zero. In future work we shall investigate the fate of the last term in Eq.~\eqref{ILexpr2} within the $B\varepsilon$ regularization. As before, the source terms are defined from the PN expansion of the pseudo stress-energy tensor in $d$ dimensions, $\overline{\tau}^{\mu\nu}$, defined by the Einstein field equations in harmonic coordinates, which take the same form as in Eqs.~\eqref{EFE}--\eqref{tau}, except that the Newton constant there reads $G=\ell_0^{d-3}G_\text{N}$, where $\ell_0$ is the characteristic length scale associated with DR and $G_\text{N}$ is the Newton constant in 3 dimensions. We have \begin{equation}\label{Sigmad} \overline{\Sigma} = \frac{2}{d-1}\frac{(d-2)\overline{\tau}^{00}+\overline{\tau}^{ii}}{c^2}\,, \end{equation} while $\overline{\Sigma}_i$ and $\overline{\Sigma}_{ij}$ take the same form as in~\eqref{Sigma}.
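As an elementary consistency check, one can verify that the $d$-dependent coefficients of Eq.~\eqref{ILexpr2} smoothly reduce, when $d\to 3$, to those of the 3-dimensional expression~\eqref{ValueILGeneral}: the overall factor $\frac{d-1}{2(d-2)}$ tends to one, while \begin{equation} \frac{4(d+2\ell-2)}{(d+\ell-2)(d+2\ell)} \longrightarrow \frac{4(2\ell+1)}{(\ell+1)(2\ell+3)}\,, \qquad \frac{2(d+2\ell-2)}{(d+\ell-1)(d+\ell-2)(d+2\ell+2)} \longrightarrow \frac{2(2\ell+1)}{(\ell+1)(\ell+2)(2\ell+5)}\,, \end{equation} and the last term disappears, carrying as it does the explicit factors $B$ and $d-3$ discussed above.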
Of course the matter terms are still given by the point mass expressions~\eqref{Tmunu}--\eqref{mu}, but now with $\delta=\delta^{(d)}$, the Dirac function in $d$ dimensions. The generalization of Eqs.~\eqref{delta}--\eqref{integralOverZ} to $d$ dimensions reads \begin{equation}\label{Sellcompact} \overline{\Sigma}_{[\ell]}(\mathbf{x},t)=\int_{-1}^1 \mathrm{d} z \,\delta_\ell^{(\varepsilon)} (z) \,\overline{\Sigma}(\mathbf{x},t+zr/c)\,, \end{equation} where we have set $\varepsilon = d-3$, and \begin{equation}\label{deltal} \delta_\ell^{(\varepsilon)} (z) \equiv \frac{\Gamma\left(\ell+\frac{3}{2}+\frac{\varepsilon}{2}\right)}{ \Gamma\left(\frac{1}{2}\right)\Gamma \left(\ell+1+\frac{\varepsilon}{2}\right)} \,(1-z^2)^{\ell+\frac{\varepsilon}{2}}, \qquad\int_{-1}^{1} \mathrm{d} z\,\delta_\ell^{(\varepsilon)}(z) = 1\,. \end{equation} In practice we only need the formal PN expansion \begin{equation}\label{series} \overline{\Sigma}_{[\ell]}(\mathbf{x},t) = \sum_{k=0}^{+\infty}\alpha_\ell^k \left(\frac{r}{c}\frac{\partial}{\partial t}\right)^{2k}\overline\Sigma(\mathbf{x},t)\,, \end{equation} with the numerical coefficients now given by \begin{equation}\label{coeffs} \alpha_\ell^k = \frac{1}{2^{2k} k!}\frac{\Gamma(\ell+\frac{d}{2})}{\Gamma(\ell+\frac{d}{2}+k)}\,. \end{equation} It is very useful to define, for the matter stress-energy tensor, the following matter currents \begin{equation}\label{sigma} \sigma = \frac{2}{d-1}\frac{(d-2)T^{00}+T^{ii}}{c^2}\,,\qquad\sigma_i = \frac{T^{0i}}{c}\,, \qquad \sigma_{ij} = T^{ij}\,, \end{equation} that are given in the case of compact binary systems by \begin{subequations}\label{mattercurrent} \begin{align} \sigma &= \tilde{\mu}_1 \,\delta (\mathbf{x} - \bm{y}_1) + 1 \leftrightarrow 2\,, \\ \sigma_i &= \mu_1 v_1^i \,\delta (\mathbf{x} - \bm{y}_1) + 1 \leftrightarrow 2\,, \\ \sigma_{ij} &=\mu_1 v_1^i v_1^j \,\delta (\mathbf{x} - \bm{y}_1) + 1 \leftrightarrow 2\,, \end{align} \end{subequations} where $\bm{y}_1=(y_1^i)$ is the particle's position, $\bm{v}_1=\mathrm{d}\bm{y}_1/\mathrm{d} t=(v_1^i)$ the coordinate velocity, and we have introduced, besides $\mu_1$, which keeps the same expression as in 3 dimensions, see Eq.~\eqref{mu}, the useful tilded version \begin{equation}\label{mutilde} \tilde{\mu}_1 = \frac{2}{d-1}\left(d-2 + \frac{v_1^2}{c^2}\right) \mu_1\,. \end{equation} The matter current densities~\eqref{mattercurrent} generate all the compact-support terms in the expression of the quadrupole moment. \section{The quadrupole moment as a function of potentials} \label{sec:MQPot} \subsection{The elementary potentials} \label{sec:pot} To start our derivation of the quadrupole moment we need to inject into~\eqref{ILexpr2} the PN metric $\overline{h}^{\mu\nu}$, which is an explicit solution of the Einstein field equations~\eqref{EFE} valid in the near zone (we recall that the overbar means PN expansion). In our recent computation of the equations of motion by means of the Fokker Lagrangian, the order to which the PN metric had to be expanded was given by the so-called ``$n+2$'' method~\cite{BBBFMa}. In the case of the mass quadrupole moment, such a method does not exist, and $\overline{h}^{\mu\nu}$ has to be expanded to higher PN order. We find that the metric components $\overline{h}^{00}$, $\overline{h}^{0i}$ and $\overline{h}^{ij}$ are to be expanded up to orders $c^{-8}$, $c^{-7}$ and $c^{-8}$ inclusive, respectively.
Thus, we need $\overline{h}^{00}$ and $\overline{h}^{0i}$ at 3PN order (recall that $c^{-8}$ in $\overline{h}^{00}$ actually corresponds to 3PN), and $\overline{h}^{ij}$ at the 4PN order. Building on previous works~\cite{BIJ02, BI04mult, BDEI05dr, FBI15} we parametrize the metric with appropriate PN elementary retarded potentials, namely scalar potentials $V$, $K$, $\hat{X}$ and $\hat{T}$, vector potentials $V_i$, $\hat{R}_i$ and $\hat{Y}_i$, and tensor ones $\hat{W}_{ij}$, $\hat{Z}_{ij}$ and $\hat{M}_{ij}$. The structure of our parametrization in 3 dimensions is \begin{subequations}\label{metric4PN3} \begin{align} \overline{h}^{00} &= - \frac{4V}{c^{2}} - \frac{2}{c^{4}} \left( \hat{W} + 4 V^2\right) - \frac{8}{c^{6}} \left( \hat{X} + \cdots \right) - \frac{64}{c^{8}} \left( \hat{T} + \cdots \right) + \mathcal{O}(c^{-10})\,, \\ \overline{h}^{0i} &= \frac{4V_{i}}{c^{3}} + \frac{8}{c^{5}} \left( \hat{R}_{i} + V_{i} V\right) + \frac{16}{c^7}\left( \hat{Y}_{i} + \cdots\right) + \mathcal{O}(c^{-9})\,,\\ \overline{h}^{ij} &= - \frac{4}{c^{4}} \Bigl(\hat{W}_{ij} - \frac{1}{2}\delta_{ij} \hat{W}\Bigr) - \frac{16}{c^{6}} \Bigl(\hat{Z}_{ij} - \frac{1}{2}\delta_{ij} \hat{Z}\Bigr) - \frac{32}{c^8}\Bigl( \hat{M}_{ij} + \cdots \Bigr) + \mathcal{O}(c^{-10})\,. \end{align} \end{subequations} The ellipsis symbolizes non-linear products of these elementary potentials. The complete expression of the metric at 4PN order in $d$ dimensions is given in Eqs.~\eqref{metric4PN} of Appendix~\ref{app:PNpotentials}. The potentials obey nested flat space-time wave equations. Some have compact support, like \begin{equation}\label{V3} \Box V = - 4 \pi G\, \sigma \,,\qquad \Box V_{i} = - 4 \pi G\, \sigma_{i} \,, \end{equation} while there are many quadratic non-linear terms (sometimes called ``$\partial V\partial V$'') such as in \begin{equation}\label{W3} \Box\hat{W}_{ij} = -4\pi G\bigl(\sigma_{ij} -\delta_{ij}\,\sigma_{kk} \bigr) - \partial_i V \partial_j V\,, \end{equation} and higher-order terms (called ``non-compact'') such as the cubic term $\hat{W}_{ij}\,\partial_{ij}V$ in \begin{equation}\label{X3} \Box\hat{X} = - 4 \pi G V \sigma_{ii}+\hat{W}_{ij} \,\partial_{ij} V+2 V_i \partial_t \partial_i V+ V \partial_t^2 V + \frac{3}{2} (\partial_t V)^2 - 2 \partial_i V_j \partial_j V_i\,. \end{equation} See Eqs.~\eqref{defpotentials}--\eqref{defpotentials4PN} for thorough definitions of all these potentials in $d$ dimensions.
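To give the flavour of these non-compact sources, one may insert the Newtonian approximation $V = G m_1/r_1 + G m_2/r_2 + \mathcal{O}(c^{-2})$ into the quadratic source of Eq.~\eqref{W3}, which then splits into ``self'' parts attached to each particle and an ``interaction'' part mixing the two: \begin{equation} \partial_i V \partial_j V = G^2\left[\frac{m_1^2\, n_1^i n_1^j}{r_1^4} + \frac{m_2^2\, n_2^i n_2^j}{r_2^4} + \frac{2\, m_1 m_2\, n_1^{(i} n_2^{j)}}{r_1^2\, r_2^2}\right] + \mathcal{O}(c^{-2})\,, \end{equation} where $n_1^i=(x^i-y_1^i)/r_1$ and $n_2^i=(x^i-y_2^i)/r_2$. The self parts are straightforward to integrate, while the interaction part requires the elementary kernel $g$ introduced in Sec.~\ref{sec:noncompact}; this split will be exploited in Sec.~\ref{sec:Wij2PN}.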
Among these note that the only purely 4PN potential which is needed for the 4PN quadrupole moment is $\hat{M}_{ij}$ (new with the present paper), which obeys the equation in 3 dimensions: \begin{align}\label{Mij3d} \Box\hat{M}_{ij} ={}& G \pi \Bigl[\Bigl(-4 V_{i} V_{j} + \delta^{ij} (2 V_{a} V_{a} + \hat{X})\Bigr) \sigma + 4\bigl(\hat{R}_{(i} + V V_{(i}\bigr) \sigma_{j)} - 2 \hat{W}_{a(i} \sigma_{j) a} \\ &\quad - 4 V^2 \sigma_{ij} + \delta^{ij} \Bigl(-2 \hat{R}_{b} \sigma_{b} - 2 V V_{m} \sigma_{m} + \frac{1}{2} \hat{W}_{kl} \sigma_{kl} + V^2 \sigma_{p p} - \frac{1}{2} \hat{W} \sigma_{q q}\Bigr)\Bigr] \nonumber\\ & - \partial_t V_{i} \partial_t V_{j} + V_{a} \partial_t \partial_{a}\hat{W}_{ij} + 2 V_{(i} \partial_{a}V_{j)} \partial_{a}V - \partial_t \hat{W}_{(i}{}_{a} \partial_{a}V_{j)}\nonumber\\ & + \frac{1}{2} \hat{W}_{ab} \partial_{ab} \hat{W}_{ij} - \frac{1}{2} \partial_{a}\hat{W}_{i b} \partial_{b}\hat{W}_{ja} + \frac{1}{2} \hat{W}_{(i}{}_{a} \partial_{a}V \partial_{j)}V - \frac{1}{4} \partial_{i}\hat{W}_{ab} \partial_{j}\hat{W}_{ab} \nonumber\\ & - 2\partial_{a}V_{(i} \bigl( \partial_{j)}\hat{R}_{a} + V_{a} \partial_{j)}V\bigr) - 2 \partial_{a}\hat{R}_{(i} \partial_{j)}V_{a} + \partial_t \hat{W}_{(i}{}_{a} \partial_{j)}V_{a} + \partial_{b}\hat{W}_{a(i} \partial_{j)}\hat{W}_{ab}\nonumber\\ & + 2\partial_{(i}V_{a} \partial_{j)}\hat{R}_{a} + \Bigl(- 2\partial_t \hat{R}_{(i} + \frac{1}{2} V_{(i} \partial_t V - \partial_{(i}\hat{X}\Bigr) \partial_{j)}V + V \Bigl(\frac{1}{2} \partial_t^{2} \hat{W}_{ij} - 2 \partial_t V_{(i} \partial_{j)}V\Bigr)\nonumber\\ & + \delta_{ij} \Bigl[ - \frac{1}{2} V_{a} \partial_t \partial_{a}\hat{W} - \frac{1}{4} V^2 \partial_t^2 V + \partial_t \hat{R}_{a} \partial_{a}V - \frac{1}{4} \hat{W}_{ab} \partial_{ab}\hat{W} + \partial_{a}V_{b} \partial_{b}\hat{R}_{a}\nonumber\\ &\quad - \frac{1}{8} \hat{W}_{ab} \partial_{a}V \partial_{b}V - \frac{1}{2} \partial_t \hat{W}_{ab} \partial_{a}V_{b} - \frac{1}{4} \partial_{a}\hat{W}_{bc} \partial_{c}\hat{W}_{ab} - \frac{1}{4} V_{a} \partial_t V \partial_{a}V \nonumber\\ &\quad + V \Bigl(\frac{3}{8} (\partial_t V)^2 - \frac{1}{2} V_{a} \partial_t \partial_{a}V - \frac{1}{4} \partial_t^{2} \hat{W} + \partial_t V_{a} \partial_{a}V - \frac{1}{4} \hat{W}_{ab} \partial_{ab}V + \frac{1}{2} \partial_{a}V_{b} \partial_{b}V_{a}\Bigr) \Bigr]\,.\nonumber \end{align} Inserting the PN metric into the mass quadrupole moment, we obtain the full expression in terms of the latter PN potentials. However we do not know explicitly all the potentials (either in $3$ or $d$ dimensions), since they are solutions of complicated wave equations such as~\eqref{Mij3d}. Thus, crucial simplifications of the result have to be performed first, in order to put the expression into computable form; see the complete result for all the terms in Appendix~\ref{app:MQAsPot}. \subsection{The method of super-potentials} \label{sec:superpot} The first technique we have used in order to be able to compute all the terms at the 4PN order is the method of ``super-potentials''. Many of the most difficult terms are of the form $\phi\,P$ where $\phi$ is a simple potential or derivative of a potential, and $P$ is a complicated potential whose expression in the whole space is not known. For instance $P$ could be the 4PN potential $\hat{M}_{ij}$ entering the spatial components of the metric and obeying the equation~\eqref{Mij3d}. On the other hand, in our case $\phi$ is one of the following potentials: $\partial_{ij} V$, $\partial_{t}\partial_i V$ or $\partial_j V_i$. 
To compute the integral $\int \mathrm{d}^{3} \mathbf{x} \,r^B\,\hat{x}_L \,\phi \,P$ (in 3 or $d$ dimensions) we notice that $\hat{x}_L \,\phi$ may be recast in the form of a Laplace operator acting on some solution $\Psi^{\phi}_L$: \begin{equation}\label{DeltaPsiL} \Delta \Psi^{\phi}_L = \hat{x}_L \phi\,. \end{equation} Assuming that $\Psi^{\phi}_L$ can be constructed analytically, a mere integration by parts yields a volume integral whose source is known explicitly, namely $-\int \mathrm{d}^{3} \mathbf{x} \,r^B\, \Psi^{\phi}_L \,\Delta P$, plus terms that are essentially surface integrals at infinity when the Hadamard finite part is applied. Now, as it turns out, it is possible to construct the solution $\Psi^{\phi}_L$ by defining the super-potentials of $\phi$ as the hierarchy of solutions $\phi_{2k}$ of the sequence of Poisson equations \begin{equation}\label{Deltaphi2k} \Delta\phi_{2k+2}=\phi_{2k}\,, \end{equation} together with $\phi_0=\phi$. We thus have $\Delta^k\phi_{2k}=\phi$. The solution of Eq.~\eqref{DeltaPsiL} is then given in analytic closed form as~\cite{BFW14b} \begin{equation}\label{PsiL} \Psi^{\phi}_L = \Delta^{-1} \bigl(\hat{x}_{L}\,\phi\bigr) = \sum_{k=0}^{\ell}\frac{(-2)^k\ell!}{(\ell-k)!}\,x_{\langle L-K}\partial_{K\rangle}\phi_{2k+2}\,. \end{equation} This formula has been derived by induction in $3$ dimensions in~\cite{BFW14b}, but the proof works as well in $d$ dimensions and no extra factor needs to be added. The precise choice of the Poisson solutions involved in the above algorithm is irrelevant for this particular problem, hence the operator $\Delta^{-1}$ has not been precisely defined in Eq.~\eqref{PsiL}. However, it is convenient in practice to take $\Delta^{-1}=\widetilde{\Delta^{-1}}$, where $\widetilde{\Delta^{-1}}=\mathrm{FP}_{B=0}\Delta^{-1}(r/r_0)^B$ represents the Poisson integral regularized at infinity by means of the Hadamard finite part prescription. With this tool in hand, we can thus transform the integral we were looking for into the much more tractable form \begin{equation}\label{tractable} \int \mathrm{d}^{3} \mathbf{x} \,r^B\,\hat{x}_L \,\phi P = \int \mathrm{d}^{3} \mathbf{x} \,r^B\Bigl(\Psi^{\phi}_L \Delta P + \partial_i\Bigl[ \partial_i \Psi^{\phi}_L P - \Psi^{\phi}_L \partial_i P\Bigr] \Bigr)\,. \end{equation} The first term involves the source of the potential $P$, and is therefore computable, while the second one is a surface term, which is also computable [see Sec.~\ref{sec:intpart}]. For instance, in the case where $P=\hat{M}_{ij}$, with $\hat{M}_{ij}$ being the 4PN tensor potential, we will replace $\Delta\hat{M}_{ij}$ by the source given explicitly by Eq.~\eqref{Mij3d}, which is correct since we are already at the maximal 4PN order and thus $\hat{M}_{ij}$ is merely Newtonian. In general $\phi$ will be equal to some derivative of a compact-support potential, such as $\phi=\partial_{ab}V$ where $V$ is given in~\eqref{V3}, but let us illustrate the computation of the super-potentials with the simpler case $\phi=V$. The compact source of this potential is $\sigma=\tilde{\mu}_1 \delta (\mathbf{x}-\mathbf{y_1})+1\leftrightarrow 2$, where $\tilde{\mu}_1$ is a function of time defined by Eq.~\eqref{mutilde}.
Using the symmetric propagator (we neglect the odd dissipative effects, which do not impact our calculation here\footnote{We have checked explicitly that dissipative contributions that are even in powers of $1/c$, due for instance to the coupling of two odd terms, never arise in the mass quadrupole at the 4PN order.}) we have \begin{subequations}\label{formV} \begin{align} V &= \sum_{k=0}^{+\infty} \left(\frac{\partial}{c\partial t}\right)^{2k} U_{2k}\,,\\ \text{where}\quad U_{2k} &= - 4 \pi G \,\Delta^{-k-1} \bigl[ \tilde{\mu}_1 \delta (\mathbf{x}-\mathbf{y_1}) \bigr] + 1\leftrightarrow 2 \,. \end{align} \end{subequations} From that definition we see that $U_{2k}$ is the super-potential of the Newtonian potential $U$ obeying the Poisson equation $\Delta U = -4\pi G \sigma$. It is straightforward to compute the functions $U_{2k}$ using Appendix B of~\cite{BDE04} and we find \begin{equation}\label{U2k} U_{2k} = \frac{G\,\tilde{k}}{2^k (2k)!!} \,\frac{\Gamma(2-\frac{d}{2})}{\Gamma(k+2-\frac{d}{2})} \,\tilde{\mu}_1 \,r_1^{2k+2-d} + 1\leftrightarrow 2 \,, \end{equation} generalizing Eq.~(4.18) in~\cite{BFW14b} to $d$ dimensions. Finally, from Eqs.~\eqref{formV} we see that the super-potentials of $V$ are given in terms of those of $U$ by \begin{equation}\label{V2k} V_{2k} = \sum_{j=0}^{+\infty} \left(\frac{\partial}{c\partial t}\right)^{2j} U_{2k+2j}\,. \end{equation} By inserting~\eqref{V2k} into~\eqref{PsiL} we can obtain at each PN order an explicit expression for the super-potential $\Psi_L^V$. To compute the analogous quantity for some space-time derivative of a potential, for instance $\Psi_L^{\partial_{ab}V}$ or $\Psi_L^{\partial_t\partial_{a}V}$, we proceed in a similar way, since all the space and time derivatives commute. Finally we are able to compute in a straightforward way all the super-potentials we need [see Table~\ref{tab:listPotOrder}]. \subsection{Integrations by parts and surface terms} \label{sec:intpart} The second technique to simplify the expression of the quadrupole moment is to integrate some terms by parts and transform volume integrals into simpler surface integrals at infinity. For instance, we systematically rewrite integrands involving the double gradient of a simple compact-support potential like $V$, defined in~\eqref{V3}, and a difficult one $P$ (with non-compact support) as \begin{equation} \label{intpart} \partial_i V \partial_i P = \frac{1}{2} \Bigl[\Delta (V P) - V \Delta P - P \Delta V\Bigr]\,. \end{equation} The second term in~\eqref{intpart} is much simpler because it contains (modulo higher PN corrections) the source of the potential $P$, \textit{i.e.} $\Delta P = S + \mathcal{O}(c^{-2})$. The third term is also easy to evaluate because it depends only on the value of the potential $P$ at the location of the particles, since $V$ has a compact support: $\Delta V = - 4 \pi G \sigma + \mathcal{O}(c^{-2})$. As for the first term in~\eqref{intpart}, it yields an example of a so-called ``Laplacian term'', coming, after integration by parts, from the differentiation of the regularization factor $(r/r_0)^B$. It is made of a surface integral at infinity, plus a possible volume integral whose expression in terms of the potentials is significantly simpler than the original one, see below. The surface integrals of the Laplacian terms, as well as the analogous so-called ``divergence terms'', are very easy to integrate within the Hadamard finite part prescription. Therefore, we keep the terms in Laplacian or divergence form as much as possible.
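For instance, the last term in~\eqref{intpart} localizes onto the particles: at the lowest order, inserting $\Delta V = - 4 \pi G \sigma + \mathcal{O}(c^{-2})$ with the compact-support density~\eqref{mattercurrent} yields, schematically, \begin{equation} \int \mathrm{d}^3 \mathbf{x} \left(\frac{r}{r_0}\right)^B \hat{x}_L\, P \,\Delta V = - 4\pi G\,\tilde{\mu}_1\, \hat{y}_1^L\, (P)_1 + 1\leftrightarrow 2 + \mathcal{O}(c^{-2})\,, \end{equation} where $(P)_1$ denotes the (UV-regularized) value of the potential $P$ at the point $\bm{y}_1$, to be computed as explained in Sec.~\ref{sec:potpart}.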
Following~\cite{BI04mult} the generic Laplacian term, \textit{e.g.}, coming from the first term in the right-hand side of~\eqref{intpart} with $G=\frac{1}{2}V P$, reads \begin{equation}\label{LaplacianTerm1} T_L = \mathop{\mathrm{FP}}_{B=0} \int \mathrm{d}^3 \mathbf{x} \left( \frac{r}{r_0}\right)^B \!\!\hat{x}_L\, r^{2k} \Delta G\,, \end{equation} where the factor $r^{2k}$ (with $k\in \mathbb{N}$) arises when applying the formula~\eqref{series} that implements the integration with respect to $z$. Integrating the Laplacian by parts, we obtain \begin{align}\label{LaplacianTerm2} T_L &= 2k(2k+2\ell+1) \mathop{\mathrm{FP}}_{B=0} \int \mathrm{d}^3 \mathbf{x} \left( \frac{r}{r_0}\right)^B\!\! \hat{x}_L\, r^{2k-2} G \nonumber \\ &+ \mathop{\mathrm{FP}}_{B=0} B (B + 4k+2\ell +1) \,r_{0}^{-B}\int_{r > \mathcal{R}} \mathrm{d}^3 \mathbf{x} \,r^{B-2} \hat{x}_L \, r^{2k} G \,. \end{align} Thanks to the prefactor $B$, the second integral can be restricted to the far zone: $r>\mathcal{R}$, where $\mathcal{R}$ is an arbitrary length. Indeed, the matching equation which leads to the expressions of the multipole moments is originally applied for smooth matter distributions, so that the metric is smooth everywhere in the near zone. As such, $G$ is also smooth there and, due to the factor $B$, the near-zone contribution is zero after the FP procedure. (In practice, the point-particle approximation leads to UV divergences, but these are separately treated by DR, in which the FP plays no role.) Because of the $B$ prefactor in~\eqref{LaplacianTerm2}, we need to look at the $1/B$ pole in the integral. This pole can only come from a radial integral of the form $\int_\mathcal{R}^{+\infty} \mathrm{d} r \,r^{B-1} = - \mathcal{R}^B/B$. Thus, considering the asymptotic expansion of $G$ when $r \to \infty$, which we denote by $\mathcal{M}(G)$ as it is identical to the multipole expansion, we find that the pole comes only from the term of order $r^{-2k-\ell -1}$ in that expansion. At the 4PN order, we also obtain a logarithmic dependence in the asymptotic expansion of some of the potentials. Hence, if we define $X_p(\mathbf{n})$ and $X_p^{\ln} (\mathbf{n})$ to be the coefficients of $r^{-p -1}$ and $r^{-p -1}\ln r$ in the multipole expansion, we have \begin{equation}\label{expG} \mathcal{M}(G) = \dots + \frac{1}{r^{2k+\ell+1}}\left[X_{2k+\ell}(\mathbf{n}) + X_{2k+\ell}^{\ln}(\mathbf{n}) \ln \left(\frac{r}{r_0}\right)\right]+ o\left(r^{-2k-\ell-1}\right)\,, \end{equation} so that we finally obtain (applying the definition of the FP) \begin{align}\label{JL} T_L &= 2k(2k+2\ell+1) \mathop{\mathrm{FP}}_{B=0} \int \mathrm{d}^3 \mathbf{x} \left( \frac{r}{r_0}\right)^B\!\! \hat{x}_L\, r^{2k-2} G \nonumber \\ &+ \int \mathrm{d} \Omega \,\hat{n}_L \Bigl[-(4k+2\ell +1)X_{2k+\ell}(\mathbf{n})+ X_{2k+\ell}^{\ln} (\mathbf{n})\Bigr]\,. \end{align} The first integral is still of the form~\eqref{LaplacianTerm1}, except that the Laplacian factor $\Delta G$ has been replaced by $G$ itself. Since the latter quantity is typically the product of several potentials (or their derivatives), the integrand has actually been simplified. As for the second integral, it is a surface contribution that depends neither on the scale $\mathcal{R}$ nor on $r_0$. We have presented in~\eqref{JL} a general formula; however, with our conventions for the simplification of the 4PN quadrupole moment, we will only need the case $k=0$.
In that case, the coefficient of the first term in Eq.~\eqref{JL} vanishes, and the only remaining task consists in evaluating the angular integral involving the $1/r^{\ell+1}$ coefficients of $G$. In particular, there is then no need to know (and in general we do not know) $G$ everywhere, but just its asymptotic expansion, which is computed from the known source of the potential [see Sec.~\ref{sec:PotAtInf}]. The other surface integrals occurring are the ``divergence terms'', for instance the second term in the right-hand side of~\eqref{tractable}. Indeed, most of these terms come from the method of super-potentials. They are of the form \begin{equation} K = \mathop{\mathrm{FP}}_{B=0} \int \mathrm{d}^3 \mathbf{x} \left(\frac{r}{r_0}\right)^B \partial_i H_i\,. \end{equation} A similar reasoning to the one before shows that they depend only on the $1/r^2$ coefficient, say $Y_i(\mathbf{n})$, in the asymptotic or multipole expansion $\mathcal{M}(H_i)$ when $r\to\infty$: \begin{equation}\label{expHi} \mathcal{M}(H_i) = \dots + \frac{1}{r^2}\left[Y_i(\mathbf{n}) + Y_i^{\ln}(\mathbf{n}) \ln \left(\frac{r}{r_0}\right)\right] + o\left(r^{-2}\right)\,, \end{equation} and the FP procedure yields simply \begin{equation} K = \int \mathrm{d} \Omega \,n_i Y_i(\mathbf{n})\,. \end{equation} We find that there is no contribution from the logarithm in~\eqref{expHi}. With the previous method, we have to obtain the asymptotic expansion when $r\to+\infty$ of $G$ or $H_i$ [Eqs.~\eqref{expG} or~\eqref{expHi}], where $G$ and $H_i$ are made of products of derivatives of potentials, involving in general one potential which is not known in the whole space, for instance the 4PN potential $P=\hat{M}_{ij}$ or its trace. To find the expansion when $r\to+\infty$ of such a potential, we rely on the method explained in Sec.~\ref{sec:PotAtInf}. Once the latter two techniques --- super-potentials and surface integrals --- have been applied, one obtains an extremely long expression for the quadrupole moment as a function of potentials and super-potentials, where all terms can be explicitly integrated. The full result is presented in Appendix~\ref{app:MQAsPot}. \section{Dimensional regularization of UV divergences}\label{sec:dimregUVgen} Since our approach is based on an effective representation of the bodies by Dirac distributions, it relies on DR to deal with the divergences related to their point-like character. As stressed in Sec.~\ref{sec:intro}, this regularization was shown, both by traditional PN methods and by the EFT, to be able to tackle this issue properly and without ambiguities at the 3PN order and beyond. \subsection{Regularization of potentials and volume integrals}\label{sec:dimregUV} The computation of the volume integrals that remain in the quadrupole after the treatment of Laplacian and divergence terms requires the potentials in $d$ dimensions, but only in the form of a local expansion around the particles $\bm{y}_{1,2}$ (third column of the list of potentials presented in Table~\ref{tab:listPotOrder}), since what we ultimately need to control is the difference between the 3-dimensional potential computed by means of Hadamard's regularization and its $d$-dimensional counterpart, the parts outside the singularities cancelling in the limit $d\to 3$. The compact parts of potentials pose no problem, as we just have to iterate the symmetric propagator with the Green function of the Laplace operator in $d$ dimensions.
Notably, the Poisson solution of $\Delta u_1 = -4\pi \delta^{(d)}(\mathbf{x}-\bm{y}_1)$ reads\footnote{See Appendix B of~\cite{BDE04} for a compendium of useful formulas in $d$ dimensions.} \begin{equation}\label{greend} u_1 = \tilde{k}\,r_1^{2-d}\,,\qquad\tilde{k} = \frac{\Gamma(\frac{d-2}{2})}{\pi^{\frac{d-2}{2}}}\,. \end{equation} To compute potentials in $d$ dimensions one needs, in principle, the generalization to $d$ dimensions of functions such as $g$, given by~\eqref{Fock} in 3 dimensions. This is known in the case of $g$, but it is rather cumbersome and needed only if one wants to compute the potentials in $d$ dimensions in the whole space. However, the expansion of $g$ in $d$ dimensions around $\bm{y}_{1,2}$ is quite handy, and is given in Appendix B of~\cite{BBBFMa}. Therefore, while some potentials cannot be easily computed for any $\mathbf{x} \in \mathbb{R}^d$, we can compute them around the singularities $\bm{y}_{1,2}$, which is actually sufficient to apply DR. In Hadamard's regularization, the $3$-dimensional spatial integral is defined by the \textit{partie finie} (Pf) prescription, depending on two constants $s_1$ and $s_2$ associated with logarithmic divergences at the two singular points (see Ref.~\cite{BFreg} for a review of this regularization), say \begin{equation}\label{IHad} I_{\mathcal{R}} = \mathop{\mathrm{Pf}}_{s_1,s_2}\int_{r<\mathcal{R}} \mathrm{d}^3\mathbf{x}\,S(\mathbf{x})\, , \end{equation} where $S$ denotes the non-compact support integrand, and we limit the integration to a spherical ball $r<\mathcal{R}$ in order to keep only the UV divergences (see~\cite{BFeom} for a review). On the other hand, in DR, the integral is automatically regularized by means of the analytic continuation in the dimension $d$, so that \begin{equation}\label{Id} I^{(d)}_{\mathcal{R}} = \int_{r<\mathcal{R}} \mathrm{d}^d\mathbf{x}\,S^{(d)}(\mathbf{x})\,. \end{equation} We assume that we can compute the expansion of the source in $d$ dimensions in the vicinity of the particles, say $\bm{y}_1$. In 3 dimensions, we have the expansion around $\bm{y}_1$, \begin{equation}\label{expS} S(\mathbf{x}) = \sum_{p_0 \leqslant p \leqslant N} r_1^p \mathop{\sigma}_1{}_{\!p} (\mathbf{n}_1) + o(r_1^N) \,, \end{equation} where $r_1=\vert\mathbf{x}-\bm{y}_1\vert$ and the coefficients $\mathop{\sigma}_{1p}$ depend on the unit direction $\mathbf{n}_1=(\mathbf{x}-\bm{y}_1)/r_1$ of approach to the singularity. In $d$ dimensions, we have a similar albeit more complicated expansion (recall that $\varepsilon=d-3$) \begin{equation}\label{expSd} S^{(d)}(\mathbf{x}) = \sum_{\substack{p_0\leqslant p\leqslant N\\ q_0\leqslant q\leqslant q_1}} r_1^{p+q\varepsilon} \mathop{\sigma}_1{}^{(\varepsilon)}_{\!p,q} (\mathbf{n}_1) + o(r_1^N) \,, \end{equation} where the coefficients now depend on an extra integer $q$, reflecting the more complicated structure of the expansion, which involves powers $p+q\varepsilon$ (here both $p$, $q \in\mathbb{Z}$). Since the two expansions~\eqref{expS} and~\eqref{expSd} must agree in the limit $\varepsilon\to 0$, the relation \begin{equation}\label{eps0} \sum_{q_0\leqslant q\leqslant q_1} \mathop{\sigma}_1{}^{(0)}_{\!p,q} = \mathop{\sigma}_1{}_{\!p}\,, \end{equation} must hold for any $p$. Now, we are interested in the \textit{difference} between DR and the Hadamard partie finie, because this is precisely what we have to add to the Hadamard result~\eqref{IHad} in order to get the correct $d$-dimensional result~\eqref{Id}.
This difference is \begin{equation}\label{DI} \mathcal{D}I = I^{(d)}_{\mathcal{R}} - I_{\mathcal{R}} \,, \end{equation} of which we merely compute the pole part $\propto 1/\varepsilon$ and the finite part $\propto\varepsilon^0$ in the Laurent expansion when $\varepsilon \rightarrow 0$, the other terms vanishing in that limit. The key point is that the difference~\eqref{DI} does not depend on $\mathcal{R}$, but only on the coefficients of the expansion around the two singularities as defined by~\eqref{expSd}, modulo neglected $\mathcal{O}(\varepsilon)$ terms. We have~\cite{BDEI05dr} \begin{align}\label{IDiffHadDR} \mathcal{D}I =& \frac{\Omega_{d-1}}{\varepsilon}\sum_{q_0\leqslant q\leqslant q_1} \left[\frac{1}{q+1} + \varepsilon \ln \left(\frac{s_1}{\ell_0}\right) \right]\langle\mathop{\sigma}_1{}^{(\varepsilon)}_{\!-3,q}\rangle + 1\leftrightarrow 2\,, \end{align} where it is crucial that the angular average be performed in $d$ dimensions, \textit{i.e.}, \begin{equation}\label{IntAngul} \langle\mathop{\sigma}_1{}_{\!p,q}^{(\varepsilon)}\rangle = \int\frac{\mathrm{d}\Omega_{d-1}}{\Omega_{d-1}} \mathop{\sigma}_1{}_{\!p,q}^{(\varepsilon)}(\mathbf{n}_1)\,,\qquad \Omega_{d-1}= \frac{2\pi^{\frac{d}{2}}}{\Gamma\left(\frac{d}{2}\right)}\,. \end{equation} Here, $\mathrm{d} \Omega_{d-1}(\mathbf{n}_1)$ is the solid angle element on the $(d-1)$-dimensional sphere and $\Omega_{d-1}=\int\mathrm{d} \Omega_{d-1}$. In actual calculations, we have verified that there is no problem with the value $q=-1$ in~\eqref{IDiffHadDR}, since we always have $\langle\mathop{\sigma}_1{}^{(\varepsilon)}_{\!-3,-1}\rangle=0$. As we see from~\eqref{IDiffHadDR}, the calculation generates many UV-type poles $1/\varepsilon$, but we shall prove that all of them can be removed by the specific shift determined from the 4PN equations of motion in Refs.~\cite{BBBFMc,MBBF17}. Before we do so, however, let us point out that, at the 4PN order, not all the poles are of the form~\eqref{IDiffHadDR}: there are other poles coming directly from the computation of the potentials at the locations of the particles in DR. We now detail this more involved case. \subsection{Regularization of the potentials at the locations of the particles}\label{sec:potpart} For the integrals with compact support, only the values of the potentials at $\mathbf{x} = \bm{y}_{1,2}$ are required. Obviously, when the potential is known in $d$ dimensions, we can directly deduce its value at the particles $\bm{y}_{1,2}$. This is however not the case for some of the most difficult potentials. In particular, we have to use another method to compute the values of $\hat{X}$, $\hat{Y}_i$, $\hat{M}=\hat{M}_{ii}$ and $\hat{T}$ at $\bm{y}_{1,2}$ in $d$ dimensions. From the fourth column of Table~\ref{tab:listPotOrder}, we see that these values are required at Newtonian order, except in the case of $\hat{X}$, needed at 1PN order. Let $S$ be the source of a potential $P$ that we want to compute at the particle positions $\mathbf{x} = \bm{y}_{1,2}$. At Newtonian order for $P$, we simply have $\Delta P = S$. The source admits expansions around the singularities in 3 and $d$ dimensions, given by Eqs.~\eqref{expS} and~\eqref{expSd} above. As usual, we proceed in two steps: first Hadamard's regularization, to which we then add the corrections due to DR. We thus compute first the value of the potential $P$ at point 1 in $3$ dimensions using the Hadamard partie finie.
In that case, the potential at any field point $\mathbf{x}'$ is given by the Poisson integral of the singular source $S(\mathbf{x})$, defined in the sense of the Hadamard partie finie, \begin{equation}\label{Px} P_{\mathcal{R}}(\mathbf{x}')=-\frac{1}{4\pi}\,\mathop{\mathrm{Pf}}_{s_1,s_2} \int_{r<\mathcal{R}}\frac{\mathrm{d}^3\mathbf{x}}{\vert\mathbf{x}- \mathbf{x}'\vert}\,S(\mathbf{x})\,, \end{equation} where $s_1$ and $s_2$ are the two associated arbitrary constants. Again, we restrict the integration volume to a spherical ball $r<\mathcal{R}$ in order to focus attention on the UV divergences. Now, it has been shown in~\cite{BFreg} that the Hadamard partie finie of the potential $P$, or rather $P_{\mathcal{R}}$ when the IR part of the integral is ignored, at the location of the singular point 1, reads \begin{equation}\label{P1Had} (P_{\mathcal{R}})_1 = - \frac{1}{4\pi} \,\mathop{\mathrm{Pf}}_{s_1,s_2} \int_{r<\mathcal{R}} \frac{\mathrm{d}^3\mathbf{x}}{r_1} \,S(\mathbf{x}) + \left[\ln\left(\frac{r_1'}{s_1}\right) - 1\right] \langle\mathop{\sigma}_1{}_{\!-2}(\mathbf{n}_1)\rangle\,. \end{equation} The first term corresponds to the naive replacement of $\mathbf{x}'$ by the source point $\bm{y}_1$ in~\eqref{Px}, while the second term accounts for the presence of the logarithmic divergence $\ln r_1'=\ln\vert\mathbf{x}'-\bm{y}_1\vert$ in the limit $\mathbf{x}'\to\bm{y}_1$, the formally infinite contribution $\ln r_1'$ therein being considered to be a constant (see~\cite{BFreg} for more details). Note that the constant $s_1$ cancels out between the two terms in~\eqref{P1Had}, so that $(P_{\mathcal{R}})_1$ depends only on $s_2$ and $r_1'$. The second term in~\eqref{P1Had} contains the usual angular average in 3 dimensions, \begin{equation}\label{angle} \langle\mathop{\sigma}_1{}_{\!-2}\rangle = \int \frac{\mathrm{d} \Omega_1}{4\pi} \mathop{\sigma}_1{}_{\!-2}(\mathbf{n}_1)\,. \end{equation} After having computed the Hadamard value $(P_{\mathcal{R}})_1$ in this way, we correct it so that it corresponds to DR. In DR, the value of the potential $P^{(d)}_{\mathcal{R}}$ at the point $\bm{y}_1$ is simply obtained by replacing $\mathbf{x}'$ by $\bm{y}_1$ inside the Poisson integral in $d$ dimensions, since the regularization is taken care of by the analytic continuation in $d$. Hence \begin{equation}\label{P1dimreg} P^{(d)}_{\mathcal{R}}(\bm{y}_1) = -\frac{\tilde{k}}{4\pi}\,\int_{r< \mathcal{R}}\frac{\mathrm{d}^d\mathbf{x}}{r_1^{d-2}}\,S^{(d)}(\mathbf{x})\,, \end{equation} where $\tilde{k}$ has been defined in Eq.~\eqref{greend}. Given the results $(P_{\mathcal{R}})_1$ and $P^{(d)}_{\mathcal{R}}(\bm{y}_1)$ of the two regularizations, we define their UV difference as \begin{equation}\label{DP1} \mathcal{D}P(1) = P^{(d)}_{\mathcal{R}}(\bm{y}_1)-(P_{\mathcal{R}})_1\,, \end{equation} which is independent of $\mathcal{R}$. We only compute the pole part followed by the finite part when $\varepsilon \rightarrow 0$. The difference depends again only on the coefficients of the expansion around the two singularities as given by~\eqref{expSd}, but the formula is more involved than in~\eqref{IDiffHadDR}.
We have~\cite{BDE04} \begin{align}\label{P1DiffHadDR} \mathcal{D}P(1) =& -\frac{1}{\varepsilon (1+\varepsilon)}\sum_{q_0\leqslant q\leqslant q_1} \left(\frac{1}{q} + \varepsilon \left[\ln \left(\frac{r_1'}{\ell_0}\right) -1\right]\right)\langle\mathop{\sigma}_1{}^{(\varepsilon)}_{\!-2,q}\rangle\nonumber \\ & -\frac{1}{\varepsilon (1+\varepsilon)} \sum_{q_0\leqslant q\leqslant q_1} \left(\frac{1}{q+1} + \varepsilon \ln \left(\frac{s_2}{\ell_0}\right)\right) \sum_{\ell = 0}^{+\infty} \frac{(-)^\ell}{\ell!}\,\partial_L\left(\frac{1}{r_{12}^{1+\varepsilon}}\right)\langle n_2^L\mathop{\sigma}_2{}^{(\varepsilon)}_{\!-\ell-3,q}\rangle\,. \end{align} In the second term, the sum over $\ell$ is actually finite, since there is a maximal order of the singularity, bounded by a negative integer $p_0$ in~\eqref{expSd}. After consistently correcting the results obtained in Hadamard's regularization by means of Eqs.~\eqref{IDiffHadDR} and~\eqref{P1DiffHadDR}, the constants $s_1$, $s_2$, $r'_1$ and $r'_2$ must individually cancel out, since they are actually absent in $d$ dimensions. Instead, poles associated with logarithms of the characteristic length scale $\ell_0$ may arise. They do at the 3PN order and beyond, in both the gravitational field and the accelerations~\cite{DJSdim, BDE04, BDEI05dr}. In previous works, those entering the accelerations have been conveniently traded, by applying an unphysical shift of the worldlines, for two logarithmic constants, denoted as $\ln r'_1$, $\ln r'_2$~\cite{BDE04, BBBFMa}. Indeed, the ensuing equations of motion then have the same form as the ones derived from a purely 3-dimensional calculation based on Hadamard's treatment. In particular, $\ln r'_1$ and $\ln r'_2$ play the role of trackers for the UV divergences. A different, simpler, but very close choice of shift will be made in Sec.~\ref{sec:resultMQ}, by taking $r'_1=r'_2=r_0'$. \subsection{Distributional derivatives and the Gel'fand-Shilov formula} Finally, we need to take care of the compact-support contributions that are generated by the purely distributional part of the derivatives of potentials appearing in the non-compact terms in $d$ dimensions. For that purpose, we use the Schwartz distributional derivative~\cite{Schwartz} or, equivalently, the Gel'fand-Shilov formula~\cite{gelfand}. The distributional derivatives must imperatively be applied in $d$ dimensions in order to be well defined and to avoid the appearance of undesirable products of Dirac distributions with functions that are singular on their support, like $r_1^{-1}\delta^{(3)}(\mathbf{x}-\bm{y}_1)$. Let $P$ be one of our elementary potentials $(V, V_i, \hat{W}_{ij}, \cdots)$ presented in Appendix~\ref{app:PNpotentials}. Around the two singularities, it admits an expansion similar to~\eqref{expSd}, namely \begin{equation}\label{expPd} P = \sum_{\substack{p_0\leqslant p\leqslant N\\ q_0\leqslant q\leqslant q_1}} r_1^{p+q\varepsilon} \mathop{f}_1{}^{(\varepsilon)}_{\!p,q} (\mathbf{n}_1) + o(r_1^N) \,, \end{equation} where, in particular, the maximal divergence corresponds to the generally negative power $p_0\in\mathbb{Z}$.
Then, the distributional derivative of this potential is given by \begin{equation}\label{distrderiv} \partial_{i} P = (\partial_{i} P)_\text{ord} + D_{i}[P]\,, \end{equation} where the first term represents the ``ordinary'' piece of the derivative, while the purely distributional part reads \begin{equation}\label{distrpart} D_{i}[P] = \Omega_{d-1} \sum_{\ell=0}^{+\infty} \frac{(-)^\ell}{\ell!} \,\partial_L\delta^{(d)}(\mathbf{x}-\bm{y}_1)\,\langle n_1^{iL} \mathop{f}_1{}^{(\varepsilon)}_{\!-\ell-2,-1} \rangle + 1 \leftrightarrow 2\,, \end{equation} which is a generalized version of the Gel'fand-Shilov formula~\cite{gelfand}. We use here the notation~\eqref{IntAngul} for the angular average in $d$ dimensions, and denote by $L$ the multi-index $i_1\cdots i_\ell$ with $\ell$ indices (and $n_1^{iL}=n_1^{i}n_1^{i_1}\cdots n_1^{i_\ell}$). The only contributions to $D_{i}[P]$ thus come from the singular terms with powers $p = -\ell -2$ and with $q=-1$. Moreover, as in~\eqref{P1DiffHadDR}, the sum in the right-hand side of~\eqref{distrpart} is actually finite, since we have $\ell \leqslant -2 - p_0$. Typically, distributional spatial derivatives will contribute when computing the second derivative of a potential, say $\partial_{ij} P$. In that case, we shall have $\partial_{ij} P = (\partial_{ij} P)_\text{ord} + D_{ij}[P]$ with the distributional term given by (see~\cite{BFreg} for a review) \begin{equation}\label{distrpartij} D_{ij}[P] = D_{i}[\partial_j P] + \partial_i D_{j}[P]\,, \end{equation} where $D_i$ represents the distributional derivative operator defined by~\eqref{distrpart}. Only terms linear in the Dirac functions $\delta(\mathbf{x}-\bm{y}_{1})$ or $\delta(\mathbf{x}-\bm{y}_{2})$ are to be kept, since the product of two delta-functions, or derivatives of delta-functions, is always zero in DR. So, the partial derivatives in Eq.~\eqref{distrpartij} are to be taken as ordinary. To compute the distributional time derivative, one first defines the partial derivative with respect to the source points $\bm{y}_{1,2}$ as \begin{equation}\label{distrpart1} \mathop{D}_1{}_{i}[P] = - \Omega_{d-1} \sum_{\ell=0}^{+\infty} \frac{(-)^\ell}{\ell!} \,\partial_L\delta^{(d)}(\mathbf{x}-\bm{y}_1)\,\langle n_1^{iL} \mathop{f}_1{}^{(\varepsilon)}_{\!-\ell-2,-1} \rangle \quad\text{and}\quad 1\leftrightarrow 2\,. \end{equation} This definition is consistent with the translational invariance of $r_1=|\mathbf{x}-\bm{y}_1|$, which is nothing but the small expansion parameter of the potentials near the singularity $\mathbf{x}=\bm{y}_1$. It also implies that $\mathop{D}_{i}[P]+\mathop{D}_{1i}[P]+\mathop{D}_{2i}[P]=0$, which ensures that the partial derivatives obey $\partial_{i} P+\partial_{1i} P+\partial_{2i} P=0$; this is a consequence of the fact that the potential, as a function, depends on the trajectories only through the two distances to the field point, $\mathbf{r}_1 = \mathbf{x}-\bm{y}_{1}$ and $\mathbf{r}_2 = \mathbf{x}-\bm{y}_{2}$. Then, the distributional time derivative is naturally obtained as $\partial_{t} P = (\partial_{t} P)_\text{ord} + D_{t}[P]$, where \begin{equation}\label{distrpartt} D_{t}[P] = v_1^i\mathop{D}_1{}_{i}[P] + v_2^i\mathop{D}_2{}_{i}[P]\,. \end{equation} Mixed time-space or second time derivatives are computed using \begin{subequations}\label{distrpartttit} \begin{align} D_{it}[P] &= D_{i}[\partial_t P] + \partial_i D_{t}[P]\,,\\ D_{tt}[P] &= D_{t}[\partial_t P] + \partial_t D_{t}[P]\,.
\end{align} \end{subequations} We observe from~\eqref{distrpartij} and~\eqref{distrpartttit} that the operations of applying successive distributional derivatives do not \textit{a priori} commute. Fortunately, we have checked that this non-commutation does not affect our computation of the quadrupole moment up to the 4PN order. Let us finally mention an important point. At the 4PN order, there are terms involving the spatial derivative $\partial_i\hat{X}$ of the non-linear potential $\hat{X}$, defined by~\eqref{X3} in 3 dimensions and provided in Appendix~\ref{app:PNpotentials} in $d$ dimensions. This potential involves some cubic ``self'' terms that diverge like $1/r_1^3$ in 3 dimensions, and like $1/r_1^{3d-6}$ in $d$ dimensions, when $r_1\to 0$, as they essentially arise from the product of three Newtonian-like potentials. Now, if we were to apply the formula~\eqref{distrpart} in 3 dimensions, we would find that the derivative $\partial_i\hat{X}$ does contain a distributional term, proportional to the derivative of the Dirac function in 3 dimensions. However, in $d$ dimensions, the situation is different. Indeed, we see from~\eqref{distrpart} that $\partial_i (1/r_1^{3d-6})$ does not generate any distributional term, because the singularity corresponds to the value $q=-3$, while one needs $q=-1$ for the singularity to contribute in Eq.~\eqref{distrpart}. Therefore, the derivative $\partial_i\hat{X}$ should just be considered as ordinary. This is why we had to be careful to compute distributional derivatives only in $d$ dimensions. We found in our calculations that only second derivatives of potentials, like $\partial_{ij} P$ or $\partial_{t}^2 P$, yield distributional contributions. \section{Computation of the various types of terms} \label{sec:ComputePot} Based on the computational and regularization methods explained above, from the sum of terms composing the mass quadrupole provided in Appendix~\ref{app:MQAsPot}, we can list all the potentials required to control the mass quadrupole at the 4PN order. These potentials can be required either everywhere, when they enter non-compact support integrals, or only at the locations of the particles $\mathbf{x} = \bm{y}_{1,2}$, when they enter compact-support terms, as was seen in Sec.~\ref{sec:dimregUVgen}. For instance, the potentials at $\mathbf{x} = \bm{y}_{1}$ (excluding the super-potentials) are those that enter $\tilde{\mu}_1$ at the 4PN order and $\mu_1$ at the 3PN order [see Appendix~\ref{app:PNpotentials}]. Besides, the potentials that enter non-compact integrals are needed in $d$ dimensions, in a neighborhood of the particles $\bm{y}_{1,2}$, in order to implement the DR of the UV divergences. Finally, in order to compute the surface terms (either Laplacian or divergence terms), we need the explicit expressions of the potentials when $r\to\infty$, in 3 dimensions only. The techniques we employ to compute these potentials are well documented elsewhere~\cite{BDI95, BFP98, BFeom, BIJ02, BI04mult, FBI15}. We provide below a short summary of those techniques and outline their most salient features. Table~\ref{tab:listPotOrder} gives the different PN orders to which the various types of potentials are required for this computation.
\begin{table}[h] \begin{center} \begin{tabular}{| c || c | c | c | c |} \hline Potential & 3-dim whole space & $d$-dim near $\bm{y}_{1,2}$ & ~at $\bm{y}_{1,2}$~ & $r\to +\infty$ \tabularnewline \hline \hline $V$ & 2PN & 2PN & 3PN & 3PN \tabularnewline \hline $V_i$ & 2PN & 2PN & 2PN & 2PN \tabularnewline \hline $K$ & $\times$ & 1PN & $\times$ & $\times$ \tabularnewline \hline $\Psi_{ij}^{\partial_{ab} V}$ & 2PN & 2PN & 2PN & 2PN \tabularnewline \hline $\Psi_{ij}^{\partial_a V_b}$ & 1PN & 1PN & 1PN & 1PN \tabularnewline \hline $\Psi_{ij}^{\partial_t \partial_a V}$ & 1PN & 1PN & 1PN & 1PN \tabularnewline \hline $\Psi_{ijk}^{\partial_a V}$ & 1PN & 1PN & 1PN & 1PN \tabularnewline \hline $\hat{W}_{ij}$ & 1PN & 1PN & 1PN & 2PN \tabularnewline \hline $\hat{W}$ & 1PN & 1PN & 2PN & 2PN \tabularnewline \hline $\hat{Z}_{ij}$ & N & N & N & 1PN \tabularnewline \hline $\hat{Z}$ & N & N & N & 1PN \tabularnewline \hline $\hat{R}_{i}$ & N & N & 1PN & 1PN \tabularnewline \hline $\hat{Y}_i$ & $\times$ & $\times$ & N & N \tabularnewline \hline $\hat{X}$ & N & N & 1PN & 1PN \tabularnewline \hline $\hat{M}_{ij}$ & $\times$ & $\times$ & $\times$ & N \tabularnewline \hline $\hat{M}$ & $\times$ & $\times$ & N & N \tabularnewline \hline $\hat{T}$ & $\times$ & $\times$ & N & N \tabularnewline \hline \end{tabular} \caption{List of the PN orders required for the different potentials and super-potentials to control the 4PN mass quadrupole. The notation for the super-potential associated with the potential $\phi$ at multipolar order $\ell$ is $\Psi^{\phi}_L$, as defined by Eqs.~\eqref{Deltaphi2k}--\eqref{PsiL}. The second column corresponds to the potentials computed in $3$ dimensions in the whole space (for all $\mathbf{x}\in\mathbb{R}^3$); they are required for performing the volume integrals of the non-compact support terms. The third column corresponds to the potentials computed in $d$ dimensions, in the form of an expansion around the particles ($\bm{y}_1$ and $\bm{y}_2$); these expansions are inserted into the ``difference'' due to the DR of the UV divergences. The next column gives the values of the potentials at the locations of the particles ($\bm{y}_1$ or $\bm{y}_2$), needed for the compact-support terms, while the last one corresponds to the potentials computed in the form of an expansion at infinity, needed for the evaluation of the surface integrals. Note the particular case of the potential $K$, which always appears combined with a factor $d-3$, thus playing no role except in dimensional regularization.} \label{tab:listPotOrder} \end{center} \end{table} \subsection{Compact-support potentials}\label{sec:compact} The first potentials to compute are the compact-support potentials, \textit{i.e.} $V$, $V_i$ and $K$ in Table~\ref{tab:listPotOrder}, and the compact-support parts of more complicated potentials. The d'Alembertian of these potentials is proportional to the matter source densities $\sigma$, $\sigma_i$ or $\sigma_{ij}$ defined in Eq.~\eqref{sigma}. We need the compact-support potentials only in 3 dimensions (none of them develops a pole). They are computed using the symmetric propagator, by iteration of the Green function of the Laplace operator, say \begin{equation}\label{compact} P = \Box^{-1}_\text{sym}\delta^{(3)} = -\frac{1}{4\pi}\left[\frac{1}{r} + \frac{1}{c^2}\partial_t^2\left(\frac{r}{2}\right) + \frac{1}{c^4}\partial_t^4\left(\frac{r^3}{24}\right) + \cdots\right]\,. \end{equation} We do not include here the ``odd'' parity (dissipative) terms at orders 2.5PN and 3.5PN, as they are already known~\cite{BIJ02}.
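Note the telescoping structure of the kernel~\eqref{compact}: each term is a Poisson anti-derivative of the previous one, since in 3 dimensions \begin{equation} \Delta\left(\frac{r}{2}\right) = \frac{1}{r}\,, \qquad \Delta\left(\frac{r^3}{24}\right) = \frac{r}{2}\,, \end{equation} so that, the time derivatives formally commuting with $\Delta$, applying $\Box = \Delta - c^{-2}\partial_t^2$ to~\eqref{compact} leaves only $\Delta(-1/4\pi r)=\delta^{(3)}$, all the PN correction terms cancelling pairwise.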
The values of $\hat{Y}_i$, $\hat{M}_{ij}$ and its trace $\hat{M}$, and $\hat{T}$ have been computed at $\mathbf{x} = \bm{y}_1$ by applying successively Eqs.~\eqref{P1Had} and~\eqref{P1DiffHadDR}. For the 1PN potential $\hat{X}$ the procedure is a little more complex, but is fully explained in Sec.~IV of~\cite{BDE04} [see in particular Eq.~(4.30a) there]. In fact, the values of $\hat{X}$ and $\hat{T}$ at the point $\bm{y}_1$ have already been computed in Ref.~\cite{BDLW10a}, and we used (and recomputed) those results. Note that some of the potentials, computable in the whole space in $d$ dimensions, are finite in the ``bulk'', \textit{i.e.} outside the singularities, but develop a pole when computed at the points $\bm{y}_{1,2}$. This is the case of the potentials $\hat{W}\equiv\hat{W}_{ii}$ and $\hat{Z}$ which, according to Table~\ref{tab:listPotOrder}, are respectively needed at the 2PN and 1PN orders at the points $\bm{y}_{1,2}$, in order to obtain the effective mass $\tilde{\mu}_1$ at the 4PN order given by~\eqref{mutilde1pot}. The values of these potentials have been computed directly from their expression in the whole space, as well as by the application of the procedure relying on Eqs.~\eqref{P1Had} and~\eqref{P1DiffHadDR}. In the case of the 2PN potential $\hat{W}_{ij}$, we have obtained it in the whole space and at the points $\bm{y}_{1,2}$ following the regularization procedure [see Sec.~\ref{sec:Wij2PN}]; but we have also computed the trace $\hat{W}$ by solving the equivalent, more convenient, equation [see Eq.~\eqref{W3}] \begin{equation}\label{eqWii} \Box\Bigl(\hat{W}+\frac{V^2}{2}\Bigr) = 8\pi G\Bigl(\sigma_{ii}-\frac{1}{2}\sigma\,V\Bigr) - \frac{1}{c^2}(\partial_t V)^2\,. \end{equation} The advantage of this alternative formulation is that the non-compact support term is to be computed at relative 1PN order only. This permits us to apply the machinery of Eqs.~\eqref{P1Had} and~\eqref{P1DiffHadDR}, and its generalization at 1PN described in Ref.~\cite{BDE04}. As already said, the end result for these potentials computed at point 1 in DR includes a pole $1/\varepsilon$. For instance, the relevant combination of the potentials $\hat{W}$ and $\hat{Z}$ entering the 4PN effective mass $\tilde{\mu}_1$ [see Eq.~\eqref{mutilde1pot}] contains the pole: \begin{equation}\label{WZpole} (\hat{W})_1 + \frac{4}{c^2}(\hat{Z})_1 = -\frac{5G^4 m_{1}^2 m_{2}^2}{3 c^4 \varepsilon r_{12}^4} + \mathcal{O}(\varepsilon^0)\,. \end{equation} The effective mass $\tilde{\mu}_1$ itself will thus also contain this pole, together with several others due to the non-compact potentials $\hat{X}$, $\hat{T}$, $\hat{Y}_i$ and $\hat{M}$ computed at point 1 [see Table~\ref{tab:listPotOrder}]. We find \begin{equation}\label{mutildepole} \tilde{\mu}_1 = \frac{G^3 m_1^2 m_2}{c^8 \varepsilon r_{12}^3}\left[ \frac{53}{15} m_1 \left( 3(n_{12}v_{12})^2 - v_{12}^2 + \frac{G m_1}{r_{12}}\right) + \frac{G m_2}{r_{12}}\left(\frac{79}{5}m_1 - \frac{1}{3} m_2\right)\right] + \mathcal{O}(\varepsilon^0)\,. \end{equation} We mention also the tricky case of the potential $\hat{R}_{i}$ at the 1PN order which, when computed at point 1, contains a ``cancelled'' pole, in the sense that the (1PN generalisation of the) formula~\eqref{P1DiffHadDR} exhibits no pole; nevertheless, it contains a finite non-zero term $\mathcal{O}(\varepsilon^0)$ which does contribute to the effective mass $\tilde{\mu}_1$ at the 4PN order.
In Sec.~\ref{sec:resultMQ}, we shall show that all the UV poles coming from the volume integrals of non-compact support terms, according to Eq.~\eqref{IDiffHadDR}, and from the values at 1 of non-compact potentials, after Eq.~\eqref{P1DiffHadDR}, are removed by applying the \textit{same} specific shift as the one computed from the 4PN equations of motion in Refs.~\cite{BBBFMa, MBBF17}. This is an important verification of the present calculation, showing the consistency with the calculation of the equations of motion. \subsection{Non compact-support potentials}\label{sec:noncompact} The non-compact support terms are non-linear terms of essentially two types. One is a rather simple type called ``$\partial V \partial V$'', made of a quadratic product of derivatives of compact-support potentials, for instance $V$, $V_i$ and $K$, or the compact parts of other potentials. To the lowest PN order, we compute these terms at any field point $\mathbf{x}$ in analytic closed form using the elementary solution $g$ of the Poisson equation (see~\cite{BFeom} for more details) \begin{equation}\label{geq} \Delta g = \frac{1}{r_1 r_2}\,, \end{equation} where $r_1=\vert\mathbf{x}-\bm{y}_1\vert$ and $r_2=\vert\mathbf{x}-\bm{y}_2\vert$. The function $g$ solving~\eqref{geq}, in the sense of distributions in 3 dimensions, is the Fock function~\cite{Fock} \begin{equation}\label{Fock} g^\text{Fock} = \ln S \equiv \ln \bigl(r_1 + r_2 + r_{12}\bigr)\,, \end{equation} where $r_{12}=\vert\bm{y}_1-\bm{y}_2\vert$. When handling PN corrections, it is also necessary to consider the solutions of the iterated Poisson equations and more complicated elementary functions. These are fully reviewed in Sec.~\ref{sec:Wij2PN}, together with a subtle point about the use of those elementary functions, namely that they must be correctly matched to the exterior zone~\cite{BFeom}. As we shall see, the general way to proceed in order to ensure this matching is to use Eqs.~\eqref{eq:kernels_matching}--\eqref{eq:source_matching} below. The only potential which is needed in the whole space in 3 dimensions and involves a piece more complicated than the $\partial V \partial V$ terms is $\hat{X}$. The more involved structure of the latter piece corresponds to the cubic non-linear source term $\hat{W}_{ij}\partial_{ij}V$ in Eq.~\eqref{X3}. Fortunately, from Table~\ref{tab:listPotOrder}, this term is needed at Newtonian order only and, as it turns out, we know it in closed analytic form at that order. As shown in~\cite{BDI95, BFP98}, in order to construct it, it suffices to find the solutions of the two Poisson equations (together with $1\leftrightarrow 2$) \begin{subequations}\label{KH} \begin{align} \Delta K_1 &= 2~\partial_{ij}\left({1\over r_2}\right) \partial_{ij}\ln r_1 \,,\\ \Delta H_1 &= 2~\partial_{ij}\left({1\over r_1}\right) \frac{\partial^2 g}{\partial y_1^i\partial y_2^j} \,. \end{align} \end{subequations} Remarkably, the unique solutions of Eqs.~\eqref{KH} valid in the sense of distributions (and tending to zero when $r\to\infty$) can be written down in the explicit closed form \begin{subequations}\label{KHexpl} \begin{align} K_1 &= -\frac{1}{r_2^3} + \frac{1}{r_2r_{12}^2}-\frac{1}{r_1^2r_2}+ \frac{r_2}{2r_1^2r_{12}^2}+\frac{r_{12}^2}{2r_1^2r_2^3} +\frac{r_1^2}{2r_2^3r_{12}^2} \,, \\ H_1 &= -\frac{1}{2r_1^3}-\frac{1}{4r_{12}^3} - \frac{1}{4r_1^2r_{12}}- \frac{r_2}{2 r_1^2r_{12}^2}+\frac{r_2}{2r_1^3r_{12}} +\frac{3r_2^2}{4r_1^2r_{12}^3}+\frac{r_2^2}{2r_1^3r_{12}^2}- \frac{r_2^3}{2r_1^3r_{12}^3} \,.
\end{align}\end{subequations} With these solutions in hand, we control the non-compact potential $\hat{X}$ at the Newtonian order. Note that since $K_1$ and $H_1$ decrease at least like $1/r$ when $r\to +\infty$, there is no need to perform a matching to the far zone for these, as the matching equation~\eqref{eq:kernels_matching} [or the Poisson-type version~\eqref{multP}] is automatically satisfied. Finally, it remains to proceed with the extremely long computational task\footnote{This task is systematically done using a series of routines mainly developed by one of us (G.F.) with \textit{Mathematica} and the \textit{xAct} library~\cite{xtensor}.} of computing each of the non-compact support terms as a volume integral [see Appendix~\ref{app:MQAsPot}], using, in a first stage, the Finite Part regularization for the IR divergences and the Hadamard partie finie regularization for the UV ones. Unfortunately, the result of this computation is too long to be presented here. \subsection{The quadratic potential $\hat{W}_{ij}$ at 2PN order} \label{sec:Wij2PN} We now present in more detail the calculation of the $\partial V \partial V$ pieces of the potentials relevant for the 4PN mass quadrupole, focusing on the case of $\hat{W}_{ij}$. We start with the derivation of the elementary kernels that are used to compute the potential $\hat{W}_{ij}$ at 2PN order, as well as $\hat{Z}_{ij}$ and $\hat{R}_i$ at 1PN order, \textit{cf.} Sec.~\ref{sec:pot}. We treat as an example the potential $\hat{W}_{ij}$ at 2PN order, as it is the most cumbersome to compute and involves all the elementary kernels that are needed to derive the other potentials $\hat{Z}_{ij}$ and $\hat{R}_i$. Our calculations are valid in 3 dimensions, where this potential obeys the wave equation~\eqref{W3}. Since the first term has a compact support, it is easily computed \emph{via} iterated Poisson integrals [see Sec.~\ref{sec:compact}] and we ignore it here. The second term is non-linear and has a non-compact (NC) support; it reads \begin{subequations}\label{WijNC} \begin{align} &\Box \hat{W}^\text{(NC)}_{ij} = -\partial_i V \partial_j V\,,\\ \text{with}\quad &\Box V = -4 \pi G \tilde{\mu}_1 \delta_1 + 1 \leftrightarrow 2\,, \end{align} \end{subequations} where we recall our notation $\tilde{\mu}_1=\tilde{\mu}_1(t)$ for the effective time-dependent mass defined by Eq.~\eqref{mutilde}, see also~\eqref{mutilde1pot}. Up to 2PN order we have \begin{equation} V = \frac{G \tilde{\mu}_1}{r_1} + \frac{1}{c^2} \partial_t^2\Bigl(G \tilde{\mu}_1 \frac{r_1}{2}\Bigr) + \frac{1}{c^4} \partial_t^4\Bigl(G \tilde{\mu}_1 \frac{r_1^3}{24}\Bigr) + 1 \leftrightarrow 2 + \mathcal{O}(c^{-6})\,. \end{equation} By plugging this expression into the source term of $\hat{W}^\text{(NC)}_{ij}$, introducing the partial derivatives with respect to the source points $y^i_{1,2}$, and using the fact that $\tilde{\mu}_1$ and $\tilde{\mu}_2$ are functions of time only, we are able to express the solution up to 2PN order by means of a set of elementary kernel functions. It is convenient to split $\hat{W}^\text{(NC)}_{ij}$ up to 2PN order into ``self'' terms, which are essentially proportional to the squared mass of each particle, $\tilde{\mu}_1^2$ or $\tilde{\mu}_2^2$, and ``interaction'' terms proportional to $\tilde{\mu}_1 \tilde{\mu}_2$ (or their time derivatives): \begin{equation}\label{eq:Wijsplit} \hat{W}^\text{(NC)}_{ij} = \hat{W}^\text{self}_{ij} + \hat{W}^\text{inter}_{ij} + \mathcal{O}(c^{-6})\,.
\end{equation} The interaction part is expressed in terms of a series of elementary kernel functions $g$, $f$, $f^{12}$, $f^{21}$, $h$, $h^{12}$, $h^{21}$ and $k$ as \begin{align}\label{eq:Wij2PNinter} \mathop{\hat{W}}_{1}{\!}^\text{inter}_{ij} &= - G^2 \tilde{\mu}_1\tilde{\mu}_2\,\underset{1}{\partial_{(i}} \underset{2}{\partial_{j)}}g -\frac{G^2}{c^2}\left\{\partial_t^2\Bigl[\tilde{\mu}_1\tilde{\mu}_2\,\underset{1}{\partial_{(i}} \underset{2}{\partial_{j)}}f\Bigr] + 2 \ddot{\tilde{\mu}}_1\tilde{\mu}_2\,\underset{1}{\partial_{(i}} \underset{2}{\partial_{j)}}f^{12} \right.\nonumber\\ &\quad\quad \left. + 4 \dot{\tilde{\mu}}_1\tilde{\mu}_2 v_1^k\,\underset{1}{\partial_{k(i}} \underset{2}{\partial_{j)}}f^{12} + 2 \tilde{\mu}_1\tilde{\mu}_2 a_1^k\,\underset{1}{\partial_{k(i}} \underset{2}{\partial_{j)}}f^{12} + 2 \tilde{\mu}_1\tilde{\mu}_2 v_1^k v_1^l \,\underset{1}{\partial_{kl(i}} \underset{2}{\partial_{j)}}f^{12}\right\} \nonumber\\ &\quad - \frac{G^2}{c^4}\left\{\partial_t^4\Bigl[\tilde{\mu}_1\tilde{\mu}_2\,\underset{1}{\partial_{(i}} \underset{2}{\partial_{j)}}h\Bigr] + \partial_t^2\Bigl[2 \ddot{\tilde{\mu}}_1\tilde{\mu}_2\,\underset{1}{\partial_{(i}} \underset{2}{\partial_{j)}}k^{21} + 4 \dot{\tilde{\mu}}_1\tilde{\mu}_2 v_1^k\,\underset{1}{\partial_{k(i}} \underset{2}{\partial_{j)}}k^{21} \right.\nonumber\\ &\quad\quad \left. + 2 \tilde{\mu}_1\tilde{\mu}_2 a_1^k\,\underset{1}{\partial_{k(i}} \underset{2}{\partial_{j)}}k^{21} + 2 \tilde{\mu}_1\tilde{\mu}_2 v_1^k v_1^l \,\underset{1}{\partial_{kl(i}} \underset{2}{\partial_{j)}}k^{21}\Bigr] \right.\nonumber\\ &\quad\quad \left. + \tilde{\mu}_1\tilde{\mu}_2 a_1^m a_2^k \,\underset{1}{\partial_{m(i}} \underset{2}{\partial_{j)k}}k + \tilde{\mu}_1\tilde{\mu}_2 a_1^m v_2^k v_2^l \,\underset{1}{\partial_{m(i}} \underset{2}{\partial_{j)kl}}k + \tilde{\mu}_1\tilde{\mu}_2 v_1^m v_1^n a_2^k\,\underset{1}{\partial_{mn(i}} \underset{2}{\partial_{j)k}}k \right.\nonumber\\ &\quad\quad \left. + \tilde{\mu}_1\tilde{\mu}_2 v_1^m v_1^n v_2^k v_2^l\,\underset{1}{\partial_{mn(i}} \underset{2}{\partial_{j)kl}}k + 2 \tilde{\mu}_1\tilde{\mu}_2 c_1^k\,\underset{1}{\partial_{k(i}} \underset{2}{\partial_{j)}}h^{12} + 8\tilde{\mu}_1\tilde{\mu}_2 b_1^k v_1^l\,\underset{1}{\partial_{kl(i}} \underset{2}{\partial_{j)}}h^{12} \right.\nonumber\\ &\quad\quad \left. + 6\tilde{\mu}_1\tilde{\mu}_2 a_1^k a_1^l\,\underset{1}{\partial_{kl(i}} \underset{2}{\partial_{j)}}h^{12} + 12\tilde{\mu}_1\tilde{\mu}_2 a_1^k v_1^l v_1^m\,\underset{1}{\partial_{klm(i}} \underset{2}{\partial_{j)}}h^{12} \right.\nonumber\\ &\quad\quad \left. + 2\tilde{\mu}_1\tilde{\mu}_2 v_1^k v_1^l v_1^m v_1^n \,\underset{1}{\partial_{klmn(i}} \underset{2}{\partial_{j)}}h^{12}\right\}\,, \end{align} where we denote ${}_{1}{\partial_i} = \partial/\partial y_1^i$ and ${}_{2}{\partial_i} = \partial/\partial y_2^i$, where overdots denote time derivatives of $\tilde{\mu}_{1,2}$, and where $\bm{v}_{1,2}$ are the velocities, $\bm{a}_{1,2}$ the accelerations, $\bm{b}_{1,2}$ the time derivatives of the accelerations and $\bm{c}_{1,2}$ their second time derivatives. To the interaction terms given here, one must add the corresponding ``self'' terms, proportional to $G^2 \tilde{\mu}_1^2$ or, for instance, $G^2\dot{\tilde{\mu}}_1\tilde{\mu}_1$. The self terms are obtained from the interaction terms by taking the limit of coinciding source points $y_2^i\rightarrow y_1^i$ and replacing $\tilde{\mu}_2$ by $\tilde{\mu}_1$.
However, the terms become more divergent when performing the limit $y_2^i\rightarrow y_1^i$, and we have to carefully use the self-field regularization. With this caveat in mind we have \begin{equation}\label{eq:Wij2PNself} \mathop{\hat{W}}_{1}{\!}^\text{self}_{ij} = \lim_{\bm{y}_2\rightarrow \bm{y}_1\atop\tilde{\mu}_2\rightarrow\tilde{\mu}_1} \left[\mathop{\hat{W}}_{1}{\!}^\text{inter}_{ij}\right]\,. \end{equation} And, of course, we have to add to~\eqref{eq:Wij2PNinter} and~\eqref{eq:Wij2PNself} the terms corresponding to $1 \leftrightarrow 2$. The expression~\eqref{eq:Wij2PNinter} has been parametrized by means of the hierarchy of elementary kernel functions obeying the following Poisson equations: \begin{subequations}\label{eq:kernels_def} \begin{align} \Delta g &= \frac{1}{r_1r_2}\,, \\ \Delta f &= g\,, \qquad\quad \Delta f^{12} = \frac{r_1}{2r_2}\,, \qquad\quad \Delta f^{21} = \frac{r_2}{2r_1}\,, \\ \Delta h &= f\,, \qquad\quad \Delta h^{12} = \frac{r_1^3}{24r_2}\,, \qquad\quad \Delta h^{21} = \frac{r_2^3}{24r_1}\,,\\ \Delta k &= \frac{r_1 r_2}{4}\,, \qquad \Delta k^{12} = f^{21}\,, \qquad\qquad \Delta k^{21} = f^{12}\,, \end{align} \end{subequations} where the numerical coefficients have been introduced for later convenience, \textit{cf.} Ref.~\cite{BFeom}. \subsubsection{Computing the particular solutions} Particular solutions of the latter equations, for instance the Fock function given by~\eqref{Fock}, have been known for a long time, see \textit{e.g.}~\cite{BDI95, JaraS98, BFeom}. The general structure of these solutions consists of two parts: a regular homogeneous solution of the Laplace or iterated Laplace operator multiplied by $\ln S$ (with $S=r_1 + r_2 + r_{12}$), and a specific polynomial in $r_1$, $r_2$ and $r_{12}$. As we seek particular solutions, we are free to add a global homogeneous solution, for example by adding a numerical constant to the Fock function. This will not affect the true final solution, as the proper homogeneous function will be selected by the matching procedure described hereafter.
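The Fock function can easily be checked to solve~\eqref{geq} away from the two particles; any added constant, such as the one in $\hat g$ below, obviously drops out of the Laplacian. The following \textit{SymPy} sketch (added purely as an illustration) places $\bm{y}_1$ at the origin and $\bm{y}_2$ at $(0,0,r_{12})$:
\begin{verbatim}
# Check Delta ln(r1 + r2 + r12) = 1/(r1*r2) away from the particles.
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
d = sp.symbols('d', positive=True)           # d = r12
r1 = sp.sqrt(x**2 + y**2 + z**2)
r2 = sp.sqrt(x**2 + y**2 + (z - d)**2)

g = sp.log(r1 + r2 + d)                      # Fock function, Eq. (Fock)
lap = sum(sp.diff(g, v, 2) for v in (x, y, z))
print(sp.simplify(lap - 1/(r1*r2)))          # expected output: 0
\end{verbatim}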
Thus we have chosen to start with the following particular solutions, which differ from those in Appendix A of~\cite{JaraS98} by homogeneous solutions and will be denoted with a hat: \begin{subequations}\label{eq:kernels_part} \begin{align} \hat{g} &= \ln S + \frac{197}{810}\,, \\ \hat{f} &= \frac{1}{12}\biggl[\bigl(r_1^2+r_2^2-r_{12}^2\bigr)\biggl(\ln S - \frac{73}{810}\biggr) + r_1r_{12}+r_2r_{12}-r_1r_2\biggr]\,,\\ \hat{h} &= \frac{1}{320}\biggl[\Bigl(r_1^4+r_2^4-r_{12}^4-2r_{12}^2(r_1^2+r_2^2)+\frac{2}{3}r_1^2r_2^2\Bigr)\biggl(\ln S - \frac{37}{81}\biggr)\nonumber\\ &\qquad\quad + r_1r_2(r_{12}^2-r_1^2-r_2^2)+r_{12}(r_1+r_2)(r_1r_2-r_{12}) + \frac{4}{9}r_1^2r_2^2 + \frac{5}{3}r_{12}(r_1^3+r_2^3)\biggr]\,,\\ \hat{k} &=\frac{1}{120}\biggl[ \Bigl(r_{12}^4-3r_1^4-3r_2^4+6r_1^2r_2^2+2r_{12}^2(r_1^2+r_2^2)\Bigr)\ln S \nonumber\\ &\qquad\quad + \frac{21}{10}(r_1^4+r_2^4) -\frac{r_{12}^4}{30} + 3r_{12}(r_1^3+r_2^3) + (r_1^2+r_2^2)\Bigl(3r_1r_2-\frac{31}{15}r_{12}^2\Bigr) \nonumber\\ &\qquad\quad +r_1r_2r_{12}^2 - \frac{21}{5}r_1^2r_2^2-r_{12}(r_1+r_2)\bigl(r_{12}^2-3r_1r_2\bigr)\biggr]\,, \end{align} \end{subequations} together with the functions $\hat{f}^{12}$, $\hat{h}^{12}$ and $\hat{k}^{12}$ obtained by exchanging the field point ${\bf x}$ with the source point ${\bf y_1}$: \begin{equation}\label{eq:f12part} \hat{f}^{12} = \hat{f}\Big|_{{\bf x}\longleftrightarrow{\bf y_1}}\,,\qquad\hat{h}^{12} = \hat{h}\Big|_{{\bf x}\longleftrightarrow{\bf y_1}}\,,\qquad\hat{k}^{12} = \hat{k}\Big|_{{\bf x}\longleftrightarrow{\bf y_1}}\,, \end{equation} and similarly $\hat{f}^{21}$, $\hat{h}^{21}$ and $\hat{k}^{21}$ obtained by exchanging ${\bf x}$ and ${\bf y_2}$. It is straightforward to check that the kernel functions~\eqref{eq:kernels_part} and~\eqref{eq:f12part} satisfy the constitutive relations~\eqref{eq:kernels_def} and, in addition, the relations \begin{equation}\label{eq:extra} \Delta_1\hat{f}^{12} = \hat{g}\,,\qquad \Delta_1\hat{h}^{12} = \hat{f}^{12}\,,\qquad \Delta_1\hat{k} = \hat{f}^{21}\,,\qquad \Delta_1\hat{k}^{21} = \hat{f}\,, \end{equation} together with the relations obtained from $1\leftrightarrow 2$. The extra relations~\eqref{eq:extra} hold specifically for the homogeneous solutions chosen in Eqs.~\eqref{eq:kernels_part}. All those relations suggest that there is an underlying algebra relating those particular kernel functions at higher orders, and thus that we should be able to compute a particular solution to the general Poisson equation $\Delta \varphi_{nm} = r_1^{2n-1}r_2^{2m-1}$, with $(n,m) \in \mathbb{N}^2$. This would yield the quadratic potentials $\hat{W}_{ij}$, $\hat{Z}_{ij}$ and $\hat{R}_i$ at all even PN orders, up to the possible odd-odd couplings. \subsubsection{Matching procedure} We shall now define from the previous particular solutions some ``matched'' solutions, in such a way that a certain matching equation is fulfilled. It is convenient to first define some associated functions that obey d'Alembertian rather than Poisson equations, namely \begin{equation}\label{eq:kernels_dalembert} \Box \mathcal{G} = \frac{1}{r_1r_2}\,, \qquad \Box \mathcal{F}^{12} = \frac{r_1}{2r_2}\,, \qquad \Box \mathcal{K} = \frac{r_1r_2}{4}\,, \qquad \Box \mathcal{H}^{12} = \frac{r_1^3}{24r_2}\,.
\end{equation} Performing a PN expansion, we transform these wave equations~\eqref{eq:kernels_dalembert} into Poisson-like equations, and recover the definitions~\eqref{eq:kernels_def} of the previous Poisson kernels, with \begin{subequations}\label{eq:kernels_def_PN_exp} \begin{align} & \mathcal{G} = g + \frac{1}{c^2}\,\partial_t^2 f+ \frac{1}{c^4}\,\partial_t^4 h + \mathcal{O}(c^{-6})\,,\\ & \mathcal{F}^{12} = f^{12} + \frac{1}{c^2}\,\partial_t^2 k^{21}+ \mathcal{O}(c^{-4})\,,\\ & \mathcal{H}^{12} = h^{12} + \mathcal{O}(c^{-2})\,,\\ & \mathcal{K} = k + \mathcal{O}(c^{-2})\,. \end{align} \end{subequations} In order to have the correct prescription for the inverse d'Alembertian, we have to match the particular solutions~\eqref{eq:kernels_part} to the far zone. This is taken into account by adding some specific homogeneous solutions to Eqs.~\eqref{eq:kernels_part} (see~\cite{BFeom} for a more detailed discussion). Let $\Box\Psi = S({\bf x},t)$ be one of the wave equations~\eqref{eq:kernels_dalembert}, with some non-compact support source $S$. The matching equation states that the multipolar expansion of the solution $\Psi$, denoted $\mathcal{M}(\Psi)$, should satisfy \begin{equation}\label{eq:kernels_matching} \mathcal{M}(\Psi) = \underset{B=0}{\text{FP}} \,\Box^{-1}_R\biggl[\biggl(\frac{r}{r_0}\biggr)^B\mathcal{M}(S)\biggr] - \frac{1}{4\pi}\sum_{\ell=0}^{+\infty}\frac{(-)^\ell}{\ell !}\,\partial_L\left(\frac{1}{r}\,\mathcal{S}_L(t-r/c)\right)\,, \end{equation} where the first term is a solution of the multipole expanded wave equation $\Box \mathcal{M}(\Psi)=\mathcal{M}(S)$, defined by means of the finite part procedure at $B=0$ applied to the standard retarded integral operator $\Box^{-1}_R$, and the second term is a homogeneous solution constructed out of the multipole moments: \begin{equation}\label{eq:source_matching} \mathcal{S}_L(u) = \underset{B=0}{\text{FP}} \int \mathrm{d}^3 \mathbf{x} \left(\frac{r}{r_0}\right)^B\!x_L\,S({\bf x},u)\,, \end{equation} which are themselves integrals over the source $S$. Suppose now that we know a particular solution of the wave equation, say $\hat{\Psi}$ such that $\Box\hat{\Psi} = S$. We look for a homogeneous solution $\Psi^\text{hom}$ such that $\Psi=\hat{\Psi}+\Psi^\text{hom}$ satisfies Eqs.~\eqref{eq:kernels_matching}--\eqref{eq:source_matching}. Since the homogeneous solution is directly in the form of a multipole expansion, $\Psi^\text{hom}=\mathcal{M}(\Psi^\text{hom})$, we obtain the following relation: \begin{equation}\label{eq:recipe_matching} \Psi^\text{hom} = \widetilde{\Box_R^{-1}}\mathcal{M}\left(S\right) - \mathcal{M}\bigl(\hat{\Psi}\bigr) - \frac{1}{4\pi}\sum_{\ell=0}^{+\infty}\frac{(-)^\ell}{\ell !}\,\partial_L\left(\frac{1}{r}\,\mathcal{S}_L(t-r/c)\right)\,, \end{equation} where $\widetilde{\Box_R^{-1}}$ denotes the Hadamard-regularized retarded integral in~\eqref{eq:kernels_matching}.\footnote{Actually, in all this construction we are interested only in the conservative dynamics and we can use instead the symmetric integral operator $\widetilde{\Box_\text{sym}^{-1}}$.} The recipe~\eqref{eq:recipe_matching} completely determines the homogeneous solution, since all the terms on the right-hand side are known. We have applied this method to determine all the relevant homogeneous solutions in the kernel functions $g$, $f$, \textit{etc}.
For example, expanding the last term of Eq.~\eqref{eq:recipe_matching} and identifying the relevant PN orders, we obtain \begin{subequations}\label{eq:gfhom} \begin{align} g^\text{hom} &= \widetilde{\Delta^{-1}} \mathcal{M}\Bigl(\frac{1}{r_1r_2}\Bigr) - \mathcal{M}\bigl(\hat{g}\bigr)\,,\\ f^\text{hom} &= \widetilde{\Delta^{-2}} \mathcal{M}\Bigl(\frac{1}{r_1r_2}\Bigr) - \mathcal{M}\bigl(\hat{f}\bigr) +\frac{1}{4}\bigl(r \,Y-n^i Y_i\bigr)\,, \end{align} \end{subequations} where we have denoted (notice the STF multipole factor $\hat{x}_L$) \begin{equation} Y_L = - \frac{1}{2\pi}\,\underset{B=0}{\text{FP}} \int \mathrm{d}^3 \mathbf{x}\left(\frac{r}{r_0}\right)^B \frac{\hat{x}_L}{r_1 r_2} = \frac{r_{12}}{\ell+1} \sum_{m=0}^\ell y_1^{\langle M}y_2^{L-M\rangle}\,. \end{equation} We emphasize that although we introduced the ``Poisson'' kernels $g$, $f$, $f^{12}$, \textit{etc.} for convenience, it is better to consider the ``d'Alembertian'' kernels ($\mathcal{G}$, $\mathcal{F}^{12}$, \textit{etc.}) as the more fundamental quantities. Indeed, when working with the Poisson kernels, we have to take into account the fact that the FP operation at $B=0$ and the inverse Laplacian do not commute; thus, for instance, \begin{equation}\label{eq:noncommute} \widetilde{\Delta^{-1}}\mathcal{M}\left(f\right) \neq \widetilde{\Delta^{-3}}\mathcal{M}\Bigl(\frac{1}{r_1r_2}\Bigr)\,, \end{equation} and the matching procedure is more complicated. Nonetheless, after adding some correction terms accounting \textit{e.g.} for the non-commutation~\eqref{eq:noncommute}, the result comes out the same. Note however that the effect~\eqref{eq:noncommute} emerges at 2PN order, and only affects the computation of $h$. Having matched the particular solutions, one has constructed the correct prescription for the elementary kernels. For example, the end results for $g$ and $f$ are \begin{subequations} \begin{align} g =& \ln\left(\frac{S}{2r_0}\right)-1 \,,\\ f =& \frac{r_1r_2\,{\bf n}_1\cdot{\bf n}_2}{6} \left[\ln\left(\frac{S}{2r_0}\right)+\frac{1}{6}\right] +\frac{r_{12}r_1+r_{12}r_2-r_1r_2+2r\,{\bf n}\cdot({\bf y_1}+{\bf y_2})-3r^2}{12}\,. \end{align} \end{subequations} We do not display the other kernels, since their homogeneous solutions are rather complicated and we have described the general procedure to obtain them. In the end, by means of this technique, we have obtained in the whole space (and in 3 dimensions) the potential $\hat{W}_{ij}$ at 2PN order following Eqs.~\eqref{eq:Wij2PNinter}--\eqref{eq:Wij2PNself}, as well as the potentials $\hat{Z}_{ij}$ and $\hat{R}_{i}$ at 1PN order. Notably, this permits us to check the value of the trace $\hat{W}=\hat{W}_{ii}$ at 2PN order evaluated at point 1 against the direct calculation reported in Sec.~\ref{sec:potpart}. To conclude, let us mention a delicate issue that arises in the latter comparison. It comes from the fact that the particular solution of Eq.~\eqref{eqWii}, \begin{equation}\label{eq:Wiipart} \hat{W}^{\text{part}} = -V^2/2+ \widetilde{\Box^{-1}_R} \left[8\pi G\left(\sigma_{ii}-\frac{1}{2}\sigma\,V\right) - \frac{1}{c^2}(\partial_t V)^2\right]\,, \end{equation} is not \textit{a priori} the matched one. Indeed, applying the operator $\widetilde{\Box^{-1}_R}$ to the right-hand side of the latter equality leads to an almost identical expression for $\hat{W}$, but where $V^2$ is now replaced by $\widetilde{\Box^{-1}_R} \Box V^2$.
Thus, the homogeneous solution to be added to $\hat{W}^{\text{part}}$ is given by the ``commutator'' \begin{equation}\label{eq:Wiihom} \hat{W}^{\text{hom}}= \bigl[\widetilde{\Box^{-1}_R}, \Box\bigr]\Bigl(-\dfrac{V^2}{2}\Bigr)\,, \end{equation} which does not vanish in general. More precisely, by means of techniques similar to those used in the derivation of the formula~\eqref{JL}, one can show that, for any function admitting an asymptotic expansion near infinity with general terms of the form $f_{p,q}(\mathbf{n})\, r^{p} (\ln r)^q$ (for $p\le p_\text{max}$), one has \begin{subequations}\label{eq:commutator} \begin{align} &\bigl[\widetilde{\Box^{-1}_R},\Box\bigr]F = \sum_{\ell=0}^{+\infty} \frac{(-)^\ell}{\ell!} \sum_{s=0}^{+\infty} \Delta^{-s} \hat{x}_L \left(\frac{\mathrm{d}}{\mathrm{d} t}\right)^{2s} \hat{f}_L\, , \\ & \text{with}\quad \hat{f}_L = \!\!\!\!\!\!\sum^{+\infty}_{k=\max(0,\,\ell-p_\text{max})} \frac{(-)^k}{k!!(k-2\ell-1)!!} \left( \frac{\mathrm{d}}{\mathrm{d} t}\right)^k \left[(2k-2\ell-1) \hat{f}^L_{k-\ell,0} - \hat{f}^L_{k-\ell,1} \right]\, . \end{align} \end{subequations} Here, the coefficients $\hat{f}^L_{p,q}$ are those of the decomposition of $f_{p,q}(\mathbf{n})$ onto the spherical-harmonic functions $\hat{n}_L=\text{STF} (n_{i_1}\cdots n_{i_\ell})$ for $\ell\geqslant 0$, while $\Delta^{-s} \hat{x}_L$ represents the solution \begin{align} \Delta^{-s}\hat{x}_L = \frac{\Gamma(\ell+3/2)}{2^{2s}\Gamma(s+1)\Gamma(s+\ell+3/2)} \hat{x}_L r^{2s}\,, \end{align} of the iterated Poisson equation $\Delta^s P =\hat{x}_L$. Working with the symmetric Green function, we find by application of the formula~\eqref{eq:commutator}: \begin{align} \hat{W}^{\text{sym hom}} = -\frac{G^2 m}{c^4} \left[\frac{1}{6} I^{(4)} + \ddot{\tilde{\mu}} c^2\right]\, , \end{align} where we have introduced for convenience the total mass $m=m_1+m_2$, the effective mass $\tilde{\mu}=\tilde{\mu}_1+\tilde{\mu}_2$, and the Newtonian moment of inertia $I=m_1 \bm{y}_1^2 +1\leftrightarrow 2$ of the binary system. \subsection{Potentials at infinity} \label{sec:PotAtInf} Finally, as we have seen in Sec.~\ref{sec:MQPot}, many of the potentials are required at $r\to+\infty$ in order to compute the surface terms. For these we need only their contributions in 3 dimensions. For the potentials that are already known for any $\mathbf{x} \in \mathbb{R}^3$, we just perform their expansions when $r\to+\infty$. In particular, we have computed in the whole space the potential $\hat{W}_{ij}$ at 2PN order in Sec.~\ref{sec:Wij2PN}, which is a great help for the calculation of the surface terms. However, other potentials are not known in the whole space, namely $\hat{X}$ at 1PN order, and $\hat{T}$, $\hat{Y}_i$ as well as $\hat{M}_{ij}$ at Newtonian order. For those potentials, we proceed differently. To obtain the expansion when $r\to+\infty$ of such a potential $P$, we consider the equation that it satisfies, $\Box P = S$, where the source $S$ is known, for instance given by Eq.~\eqref{Mij3d}. For a potential at the 4PN order, the equation reduces to a Poisson equation $\Delta P = S$.
Then, we compute the asymptotic (multipole) expansion $\mathcal{M}(P)$ from the source $S$ and its multipole expansion $\mathcal{M}(S)$ by the Poisson-like version of the matching formula~\eqref{eq:kernels_matching}, namely \begin{equation}\label{multP} \mathcal{M}(P) = \mathop{\mathrm{FP}}_{B=0} \Delta^{-1} \biggl[\left(\frac{r}{r_0}\right)^B \mathcal{M}(S) \biggr]-\frac{1}{4 \pi} \sum_{\ell=0}^{+\infty} \frac{(-)^\ell}{\ell!} \partial_L \left(\frac{1}{r}\right) \mathcal{P}_L(t) \,, \end{equation} where \begin{equation}\label{calPL} \mathcal{P}_L(u) = \mathop{\mathrm{FP}}_{B=0} \int \mathrm{d}^3\mathbf{x}~ \left(\frac{r}{r_0} \right)^B x_L\,S(\mathbf{x},u)\,. \end{equation} The first term in~\eqref{multP} corresponds to integrating the multipole expansion of the known source term by term, while the second term represents a homogeneous solution parametrized by the computable multipole moments~\eqref{calPL}. In practice, to compute the first term in~\eqref{multP}, we apply the following formulas: \begin{subequations}\label{EqPotInf} \begin{align} \widetilde{\Delta^{-1}}\Bigl[ r^{\alpha} \hat{n}_{L}\Bigr] &= \frac{r^{\alpha+2} \hat{n}_{L}}{(\alpha-\ell+2)(\alpha+\ell+3)}\,, \quad\text{for $\alpha \in \mathbb{C}\setminus\bigl\{\ell-2, -\ell-3\bigr\}$} \,, \\ \widetilde{\Delta^{-1}}\Bigl[ r^{\ell-2} \hat{n}_{L}\Bigr] &= \frac{1}{2 \ell+1} \left[\ln\left(\frac{r}{r_0}\right) - \frac{1}{2 \ell +1} \right] r^\ell \hat{n}_{L} \,,\\ \widetilde{\Delta^{-1}}\Bigl[ \frac{\hat{n}_{L}}{r^{\ell+3}}\Bigr] &= -\frac{1}{2 \ell+1} \left[\ln\left(\frac{r}{r_0}\right) + \frac{1}{2 \ell +1} \right] \frac{\hat{n}_{L}}{r^{\ell+1}} \,, \end{align} \end{subequations} where we have abbreviated $\widetilde{\Delta^{-1}}=\mathrm{FP}_{B=0}\Delta^{-1}(r/r_0)^B$. Besides, for $\hat{X}$ at 1PN order, we also face the integration of terms involving a logarithm $\ln(r/r_0)$. For that, we have, in the generic case $\alpha \in \mathbb{C}\setminus\{\ell-2, -\ell-3\}$, \begin{equation}\label{intlog} \widetilde{\Delta^{-1}}\biggl[\ln\left(\frac{r}{r_0} \right) r^{\alpha} \hat{n}_{L}\biggr] = \frac{r^{\alpha+2} \hat{n}_{L}}{(\alpha-\ell+2)(\alpha+\ell+3)}\biggl[\ln\left(\frac{r}{r_0} \right) - \frac{2\alpha+5}{(\alpha-\ell+2)(\alpha+\ell+3)} \biggr]\,. \end{equation} \section{The 4PN mass quadrupole for circular orbits} \label{sec:resultMQ} We have applied the method described in this paper to compute the source mass quadrupole moment at the 4PN order in the case of circular orbits. As for the Fokker Lagrangian computation of the equations of motion~\cite{BBBFMa}, we first used Hadamard's partie finie to cure the UV divergences, and obtained a first result depending on $\ln s_1$ and $\ln s_2$. Then we computed the difference between the DR and the Hadamard partie finie regularization for the UV divergences, and obtained a new result free of $\ln s_1$ and $\ln s_2$ but containing poles in $1/\varepsilon$, as well as the DR scale $\ell_0$. Let us recall that these poles should cancel out when expressing physical observables such as the energy flux or the orbital phase of the system, but can still be present in intermediate non-gauge-invariant results such as the equations of motion or the source multipole moments. However, in that case, it is extremely useful to remove the UV poles by applying a shift of the particles' trajectories. This provides an important test of the result and also a substantial simplification.
Indeed, at the 3PN order, it was already shown that applying the shift used for the 3PN equations of motion to the 3PN source mass quadrupole moment consistently removes all the UV poles~\cite{BDE04}. At the 4PN order, the situation is a bit more complicated. The shift that we applied to the Fokker Lagrangian in order to obtain the final result for the 4PN equations of motion in~\cite{BBBFMc, MBBF17}, and from which we derived all the conserved quantities in~\cite{BBFM17}, is composed of three terms: \begin{enumerate} \item The shift $\bm{\xi}_{1,2}$ given in Appendix C of~\cite{BBBFMa},\footnote{There are some missing terms in the equations~(C3) of~\cite{BBBFMa}; the correct expression, also taking into account the final determination of the ambiguity parameters~\cite{BBBFMc, MBBF17}, is given in Eqs.~\eqref{shift4PNxidecomp}--\eqref{shift4PNxi} below.} which removed all the UV-type $1/\varepsilon$ poles in the Fokker Lagrangian; \item The shift $\bm{\chi}_{1,2}$ that was applied in~\cite{BBBFMc} and removes all the IR-type $1/\varepsilon$ poles of the Fokker Lagrangian (this shift has not yet been published in full form); \item Finally, the shift $\bm{\eta}_{1,2}$ given in Appendix A of~\cite{BBFM17}, which does not contain any pole and was merely used for convenience. \end{enumerate} For completeness, we provide in Appendix~\ref{app:shift} the full expressions of the shifts $\bm{\xi}_{1,2}$ and $\bm{\eta}_{1,2}$. Note that the shift $\bm{\chi}_{1,2}$ will not be used in the present paper, since we treat the IR divergences by means of the Finite Part regularization instead of DR. However, we intend to consider the shift $\bm{\chi}_{1,2}$ in future work, when we investigate the problem of IR divergences in the 4PN mass quadrupole moment. We have applied the sum of the shifts $\bm{\xi}_{1,2}$ and $\bm{\eta}_{1,2}$ to the 4PN quadrupole moment and checked that all the UV-type poles $1/\varepsilon$ (as well as the usual concomitant constants such as Euler's constant $\gamma_\text{E}$) cancel out, as they should. Recall that the shifts have been determined from the separate calculation of the Fokker Lagrangian and equations of motion. Furthermore, we have seen in Sec.~\ref{sec:potpart} that at the 4PN order some of the potentials needed to control the compact-support terms do contain poles. These poles combine with those coming from the DR of the volume integrals of non-compact support terms in Sec.~\ref{sec:dimregUV}. The proper cancellation of all the poles constitutes a robust check of our UV DR computations, and a major confirmation that we understand the connection between the conservative equations of motion and the multipole moments within the framework of the MPM-PN approach. The next steps are to reduce our result to the frame of the center of mass (CM) and then to the case of quasi-circular orbits. We only need the 3PN expressions of the CM coordinates, and the 3PN equations of motion for circular orbits, in order to express the mass quadrupole moment at the 4PN order in the CM frame for circular orbits. Therefore, even if our result does not yet use DR for the IR divergences, we can still consistently express it in the CM frame for circular orbits --- as the 3PN dynamics can be derived using the Finite Part regularization for the IR, and as the IR shift $\bm{\chi}_{1,2}$ only starts at the 4PN order. The result is then much more compact and is given as follows.
Finally, the UV-shifted mass quadrupole moment for circular orbits at the 4PN order, where the applied shifts $\bm{\xi}_{1,2}$ and $\bm{\eta}_{1,2}$ are given in Appendix~\ref{app:shift}, reads \begin{equation}\label{Iij} I_{ij} = \mu \left(A \, x_{\langle i}x_{j \rangle}+B \, \frac{r^2}{c^2}v_{\langle i}v_{j \rangle} + \frac{G^2 m^2\nu}{c^5r}\,C\,x_{\langle i}v_{j \rangle}\right) + \mathcal{O}\left(\frac{1}{c^{9}}\right)\,, \end{equation} where the terms up to the 4PN order are explicitly given by\footnote{The term $C$ gathers the time-odd 2.5PN and 3.5PN contributions and is given here for completeness.} \begin{subequations}\label{IijABC} \begin{align} A &= 1 + \gamma \biggl(- \frac{1}{42} - \frac{13}{14} \nu \biggr) + \gamma^2 \biggl(- \frac{461}{1512} - \frac{18395}{1512} \nu - \frac{241}{1512} \nu^2\biggr) \nonumber\\ & \quad + \gamma^3 \biggl(\frac{395899}{13200} - \frac{428}{105} \ln\biggl(\frac{r}{r_{0}} \biggr) + \biggl[\frac{3304319}{166320} - \frac{44}{3} \ln\biggl(\frac{r}{r'_{0}}\biggr) \biggr]\nu + \frac{162539}{16632} \nu^2 + \frac{2351}{33264} \nu^3 \biggr) \nonumber\\ & \quad + \gamma^4 \biggl(- \frac{1023844001989}{12713500800} + \frac{31886}{2205} \ln\biggl(\frac{r}{r_{0}} \biggr) + \biggl[- \frac{18862022737}{470870400} - \frac{2783}{1792} \pi^2 \nonumber\\ & \qquad \quad - \frac{24326}{735} \ln\biggl(\frac{r}{r_{0}} \biggr) + \frac{8495}{63} \ln\biggl(\frac{r}{r'_{0}} \biggr)\biggr] \nu + \biggl[\frac{1549721627}{40360320} + \frac{44909}{2688} \pi^2- \frac{4897}{21} \ln\biggl(\frac{r}{r'_{0}} \biggr)\biggr]\nu^2\nonumber\\ & \qquad \quad - \frac{22063949}{5189184} \nu^3 + \frac{71131}{314496} \nu^4 \biggr)\,, \\ B &=\frac{11}{21} - \frac{11}{7} \nu + \gamma \biggl(\frac{1607}{378} - \frac{1681}{378} \nu + \frac{229}{378} \nu^2\biggr) \nonumber\\ & \quad + \gamma^2 \biggl(- \frac{357761}{19800} + \frac{428}{105} \ln\biggl(\frac{r}{r_{0}} \biggr) - \frac{92339}{5544} \nu + \frac{35759}{924} \nu^2 + \frac{457}{5544} \nu^3 \biggr) \nonumber\\ & \quad + \gamma^3 \biggl(\frac{17607264287}{1589187600} - \frac{4922}{2205} \ln\biggl(\frac{r}{r_{0}} \biggr) + \biggl[\frac{5456382809}{529729200} + \frac{143}{192} \pi^2 - \frac{1714}{49} \ln\biggl(\frac{r}{r_{0}} \biggr) - \frac{968}{63} \ln\biggl(\frac{r}{r'_{0}} \biggr)\biggr] \nu \nonumber\\ & \qquad \quad + \biggl[\frac{117172607}{1681680} - \frac{41}{24} \pi^2 + \frac{968}{21} \ln\biggl(\frac{r}{r'_{0}} \biggr)\biggr] \nu^2 - \frac{1774615}{81081} \nu^3 - \frac{3053}{432432} \nu^4 \biggr)\,, \\ C &= \frac{48}{7} + \gamma \left(-\frac{4096}{315} - \frac{24512}{945}\nu \right)\,. \end{align} \end{subequations} Let us recall that this result has been obtained using the FP prescription for the IR divergences, and DR for the UV divergences. In future work, we shall switch to DR for the IR divergences as well, in the form of the $B\varepsilon$ regularization which has recently been successfully applied to the 4PN equations of motion~\cite{MBBF17}. In our notation, $\gamma$ is a PN parameter defined as \begin{equation}\label{gamma} \gamma = \frac{G m}{r c^2}\,, \end{equation} where $r=\vert\bm{y}_1-\bm{y}_2\vert$ is the radial separation in harmonic coordinates, $\bm{x}=\bm{y}_1-\bm{y}_2$ is the relative position and $\bm{v}=\bm{v}_1-\bm{v}_2$ is the relative velocity. The total mass is $m=m_1+m_2$, and the reduced mass $\mu$ and symmetric mass ratio $\nu$ are given by \begin{equation}\label{nu} \nu = \frac{\mu}{m} = \frac{m_1m_2}{(m_1+m_2)^2}\,.
\end{equation} Two constants parametrize the logarithmic terms of~\eqref{IijABC}. First, the constant $r_0$ was introduced in the Finite Part regularization for the IR, see Eq.~\eqref{ValueILGeneral}. Then there is the constant $r_0'$, associated with the UV regularization, which has been introduced by definition through the shift $\bm{\xi}_{1,2}$ in Eqs.~\eqref{shift4PNxi}.\footnote{In previous works on the 3PN/4PN equations of motion in harmonic coordinates, two gauge constants $r'_1$ and $r'_2$ in the logarithms were considered for the UV divergences instead of one~\cite{BFeom, BI03CM, BBBFMa}. In the CM frame this yielded the two convenient combinations (with $X_{1,2}=m_{1,2}/m$) \begin{align*} \ln r'_0 = X_1 \ln r'_1 + X_2 \ln r'_2\,,\\ \ln r''_0 = \frac{X_1^2 \ln r'_1 - X_2^2 \ln r'_2}{X_1-X_2}\,, \end{align*} with $r''_0$ entering specifically the expression of the particles' positions in the CM frame at the 3PN order~\cite{BI03CM}. Because of the factor $(X_1-X_2)^{-1}$ in $\ln r''_0$, there is an apparent divergence when the two masses are equal, but of course it is compensated by a factor $X_1-X_2$ in the CM relations. In the present paper, we make the choice $r_1' = r_2'$, which avoids such a spurious divergence and has the advantage that the particles' positions are exactly $y_1^i=-y_2^i$ when the masses are equal. Hence we have only one UV constant $r'_0 = r''_0 = r'_1 = r'_2$.} The result~\eqref{Iij}--\eqref{IijABC} extends to the 4PN order the expression of the mass quadrupole moment, previously known at the 3.5PN order~\cite{BIJ02, BI04mult, BFIS08, FMBI12}. It constitutes an important step in our program of completing the waveform and phase evolution of compact binary systems at the 4PN order; we stress, however, that a thorough investigation of the IR divergences at the level of the 4PN multipole moment is still required and is postponed to future work. \acknowledgments The authors would like to thank Laura Bernard for providing us with the files of the different shifts used for the 4PN equations of motion. We also thank Alejandro Boh\'e for discussions at an early stage of this work. This research made use of the computer facility Horizon Cluster funded by the Institut d'Astrophysique de Paris. We thank St\'ephane Rouberol for smoothly running this cluster for us.
\section{Introduction} In this article, we link the representation theory of easy quantum groups with interpolating categories of the kind studied by Deligne. This provides many new examples for the latter theory. Each of these examples is a subcategory of one of Deligne's categories $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_t)$ with the same objects, but restricted morphism spaces. We will start by reviewing some background material on (easy) quantum groups in order to put our results into context. However, in the course of the paper, we will mostly be using the combinatorial aspects of this theory and leave the quantum group aspects aside. \medskip There are various settings in which the term quantum group is used. Originally, quantum groups were introduced by Drinfeld \cite{Di87} and Jimbo \cite{Ji85} as Hopf algebra deformations of the universal enveloping algebras of semisimple Lie algebras. However, in this article we consider topological quantum groups in the sense of Woronowicz \cite{Wo87}. A compact matrix quantum group is a deformation of the algebra of continuous complex-valued functions on a compact matrix group. In such a non-commutative setting, Woronowicz proved a Tannaka--Krein type result \cite{Wo88} showing that any compact matrix quantum group can be fully recovered from its representation category. This was the starting point for Banica and Speicher \cite{BS09} to introduce (orthogonal) easy quantum groups. These form a subclass of compact matrix quantum groups which can be built up from purely combinatorial structures, called categories of partitions. Categories of partitions are made of set partitions with a relatively simple graphical calculus. For any category of partitions $\mathcal C$, Banica and Speicher defined a series of monoidal categories, denoted later in the present article by $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(\mathcal C,n), n\in \mathbb N_0$. An easy quantum group is then a compact matrix quantum group whose representation category is the image of some category $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(\mathcal C,n)$ under a certain fiber functor. An example of an easy quantum group is the $n$-th symmetric group $S_n$, induced by the category of all partitions $\mathcal C=P$. An honest quantum group example, where the underlying algebra is non-commutative, is Wang's \cite{Wa98} free symmetric quantum group $S_n^+$, induced by the category of all non-crossing partitions. In 2016, Raum and Weber \cite{RW16} completed the classification of all categories of partitions, and we will use this classification throughout the paper. \medskip In \cite{De07}, Deligne introduced and studied categories $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_t)$ interpolating the representation categories of all symmetric groups. Deligne's categories depend on a complex interpolation parameter $t$; they are always Karoubian (pseudo-abelian) and monoidal. However, for $t\not\in\mathbb N_0$, they turn out to be semisimple, while for $t\in\mathbb N_0$, they are not. Instead, there is a unique semisimple quotient category, the semisimplification in the sense of Barrett--Westbury (\cite{BW99}, see also \cite{EO18}). Its defining tensor ideal is formed by all negligible morphisms, that is, morphisms whose compositions with other morphisms have trace $0$ whenever they are endomorphisms.
The semisimplification of $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_t)$ in the case $t=n\in\mathbb N_0$ is equivalent to $\ensuremath{\mathop{\mathrm{Rep}}}(S_n)$, the ordinary category of representations of the $n$-th symmetric group, whose finitely many irreducible objects have a well-known parametrisation by a finite set of Young diagrams depending on $n$. This description extends to a parametrisation of the indecomposable objects in $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_t)$ by Young diagrams of arbitrary size, independently of $t$ (see \cite{CO11}). An intriguing feature of Deligne's categories is their combinatorial definition via set partitions, which looks very much like the calculus used for easy quantum groups. In fact, we have $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_n)=\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(P,n)$ for $n\in \mathbb N_0$. As categories of partitions $\mathcal C$ can be regarded as subcategories of the category of all partitions $P$, it is natural to consider interpolation categories $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(\mathcal C,t)$ such that $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_t)$ is recovered as the special case $\mathcal C=P$. The definition of such interpolation categories can also be found in Freslon \cite{Fr17}, who employed them to study a version of Schur--Weyl duality. However, they have never been studied systematically within the framework of Deligne's interpolating categories, and we intend to initiate such an endeavour. \medskip In particular, we want to study the semisimplicity and the indecomposable objects of such interpolating partition categories. The table in \Cref{fig:table-intro} summarises some known results about special cases, together with some results obtained in this paper which are new to our knowledge (more examples are considered in the last section, \Cref{sec-examples}). \begin{figure}[ht] \centering \caption{Special cases of interpolating partition categories.
Indecomposable objects are computed for seven more interpolating partition categories in \Cref{sec-examples}.} \label{fig:table-intro} \begin{longtable}{|C{2.4cm}||C{1.6cm}|C{3.4cm}|C{3cm}|C{2.1cm}|} \hline $\mathcal{C}$ & $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(\mathcal{C},t)$ & Non-semisimple & Indecomposable objects up to isomorphism & Reference \tabularnewline \hhline{=====} $P=$ \\all partitions & $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_t)$ & $t\in \mathbb N_0$ & Young diagrams of arbitrary size & \cite{CO11}\tabularnewline \hline $P_2=$ \\partitions with block size two & $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(O_t)$ & $t\in \mathbb Z$ & Young diagrams of arbitrary size & \cite{CH17} \tabularnewline \hline $P_{even}=$ partitions with even block size & $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(H_t)$ & $t\in \mathbb N_0$ & bipartitions of arbitrary size & Thm.~\ref{thm-grouptheo-semisimple}, Prop.~\ref{thm::indecomp_obj_Hn} \tabularnewline \hline $NC=$ non-crossing partitions & $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_t^+)$ & $t=2\cdot \text{cos}(j\pi/{l} )$, \\ $l\in \mathbb N_{\geq 2}, j\in \mathbb N_{\leq l-1}$ & modified Jones--Wenzl idempotents & Lem.~\ref{lem::St+_semisimple}, Lem.~\ref{lem::indecomposables_St+} \tabularnewline \hline $NC_2=$ non-crossing partitions with block size two & $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(O_t^+)$ & $t=4\cdot \text{cos}(j\pi/{l})^2$, \\ $l\in \mathbb N_{\geq 2}, j\in \mathbb N_{\leq l-1}$& Jones--Wenzl idempotents & \cite{GW02} \tabularnewline \hline $NC_{even}=$ non-crossing partitions with even block size & $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(H_t^+)$ & $t=4\cdot \text{cos}(j\pi/{l})^2$, \\ $l\in \mathbb N_{\geq 2}, j\in \mathbb N_{\leq l-1}$& finite binary sequences of arbitrary length & Lem.~\ref{lem::Ht+_semisimple}, Prop.~\ref{thm::indecomp_obj_Hn} \tabularnewline \hline \end{longtable} \end{figure} More systematically, it turns out that many results on the semisimplicity and the indecomposable objects can be derived for general interpolating partition categories $\RepCt$. We find that, as semisimplicity can be encoded in polynomial conditions, such categories are semisimple for generic values of the deformation parameter $t$, that is, for all values outside of a set of algebraic complex numbers depending on $\mathcal C$. We recall these special values of $t$ for several known special cases, before proving a general result for \emph{group-theoretical} categories of partitions, an uncountable family covering all but countably many categories of partitions (as described by \cite{RW16}). \begin{theorem}[\Cref{thm-grouptheo-semisimple}] \label{thm::main_thm_1} Let $\mathcal C$ be any group-theoretical category of partitions. Then $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(\mathcal C,t)$ is semisimple if and only if $t\not\in\mathbb N_0$. \end{theorem} In particular, this recovers and generalises known results for $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_t)$ as well as for the interpolation categories of the hyperoctahedral groups, $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(H_t)$. To prove this general result, we observe that for group-theoretical categories of partitions, certain lattices of subobjects are, in fact, sublattices of the corresponding lattices for $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_t)$.
This enables us to apply techniques developed by Knop \cite{Kn07}, originally in order to study generalisations of $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_t)$; these involve a concise analysis of the mentioned sublattices, which we carry out for arbitrary categories of partitions. \medskip We go on to derive a general parametrisation scheme for the indecomposable objects in interpolating partition categories. Since we are working in the context of Karoubian categories, the study of indecomposables amounts to an analysis of primitive idempotents in endomorphism algebras, which in our case are the algebras spanned by partitions with a fixed number of upper and lower points $k\in\mathbb N_0$. We show that the indecomposables are parametrised by the irreducible complex representations of certain finite groups, which we associate to a distinguished set of so-called projective partitions, extending the work of \cite{FW16}. Hence, up to the representation theory of certain finite groups, all indecomposable objects can be found by determining the set of projective partitions in a given partition category. This yields a general description of the indecomposable objects for all categories of partitions. We define projective partitions (\Cref{def::projPart}), the finite groups $S(p)$ associated to them (\Cref{def::GroupsSp}), and an equivalence relation among them (\Cref{def::equivprojpart}), to prove: \begin{theorem}[\Cref{thm::indecompsable_obj_by_A_k}] \label{thm::main_thm_2} Let $\mathcal C$ be a category of partitions and let $t$ be a non-zero complex number. Then the non-zero indecomposable objects in $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(\mathcal C,t)$, up to isomorphism, are in bijection with (and explicitly constructible from) the irreducible complex representations, up to isomorphism, of the finite groups $S(p)$ for $p$ in an explicit set $\mathcal{P}$ of projective partitions. \end{theorem} In particular, this is an analogue and, in fact, a generalisation of the parametrisation of the indecomposables by Young diagrams of arbitrary size for $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_t)$ explained above. From the knowledge of all indecomposables in $\RepCt$ we derive a description of the associated graded ring of the Grothendieck ring, with respect to a suitable filtration, for all $\mathcal C$ (\Cref{Grothendieck-ring}), as well as first results on the indecomposables in the semisimplification $\widehat{\RepCt}$ for group-theoretical $\mathcal C$ and $t\in\mathbb N_0$. Beyond that, we apply our general results to obtain a concrete parametrisation of the indecomposable objects in $\RepCt$ for all categories of partitions $\mathcal C$ which either contain the partition $\Pabab$ or in which all partitions are non-crossing. Moreover, we show that this parametrisation also corresponds to the known description of the indecomposables by Jones--Wenzl idempotents for the Temperley--Lieb categories $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(O^+_t)$ (\Cref{prop::Indec_Otp}), which we relate to the interpolation categories for non-crossing partitions $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S^+_t)$ by constructing a suitable monoidal equivalence (\Cref{lem::equivalence_St+_Ot+}, \Cref{lem::indecomposables_St+}). It will be interesting to convert the general result of \Cref{thm::main_thm_2} into concrete parametrisations for more families of partition categories.
Beyond that, it seems intriguing to study semisimplicity and indecomposable objects in interpolation categories of unitary easy quantum groups \cite{TW17}, corresponding to a calculus of two-colored partitions, or of linear categories of partitions \cite{GW19}, whose generators are not necessarily partitions but, more generally, linear combinations thereof. Eventually, such an analysis can be undertaken for the generalisations of partition categories described in \cite{MR19}, whose morphisms involve finite graphs. \medskip \textbf{Structure of this paper.} In Section 2, we recall the definition and classification of categories of partitions and introduce the interpolating categories $\ensuremath{\mathop{\mathrm{\underline{Rep}}}} (\mathcal C,t)$. In Section 3, we provide some general results on the semisimplicity of these categories and recall explicit computations for several known special cases. Moreover, we determine all parameters $t$ for which $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(\mathcal C,t)$ is semisimple in the case that $\mathcal C$ is group-theoretical. We start Section 4 with some general results on indecomposable objects in $\ensuremath{\mathop{\mathrm{\underline{Rep}}}} (\mathcal C,t)$ before deriving an explicit description of the indecomposables using projective partitions, as well as results on the Grothendieck rings and on the semisimplifications coming from interpolating partition categories. In Section 5 we apply our general scheme to various special cases, including $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(H_t)$ and $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(H_t^+)$, and to the well-studied example of Temperley--Lieb categories. \medskip \section{Interpolating partition categories} In this section, we introduce interpolating partition categories. To this end, we start by recalling the theory of categories of partitions, including their classification. At the end of the section, we explain how interpolating partition categories interpolate the representation categories of the corresponding easy quantum groups. \subsection{Categories of partitions} \label{ssec::categories-of-partitions} For the following definitions and examples, we refer to the initial article \cite{BS09}. For any $k,l\in \mathbb N_0$, we denote by $P(k,l)$ the set of partitions of $\{ 1,\ldots ,k,1',\ldots ,l' \}$ into disjoint, non-empty subsets. These subsets are called the \emph{blocks of $p$} and we denote their number by $\#p$. We can picture every partition $p\in P(k,l)$ as a diagram with $k$ upper and $l$ lower points, where all points in the same block of $p$ are connected by a string. \begin{align*} \begin{matrix} 1 & 2 & & k & \\ \bullet & \bullet & \ldots & \bullet &\\ & & p & & &\\ \bullet & \bullet & \quad \ldots & &\bullet \\ 1' & 2' & & &l' \\ \end{matrix} \end{align*} Note that only the connected components of a diagram of a partition are unique, not the diagram itself.
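To make this combinatorial calculus concrete, the following hedged \textit{Python} sketch (an illustration added for the reader; all names are ad hoc) encodes a partition $p\in P(k,l)$ as a set of blocks over labelled points and implements the vertical composition with loop counting, anticipating the operations (tensor product, involution, composition) recalled below. In the interpolating categories $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(\mathcal C,t)$ defined in the next subsection, each removed loop will contribute a factor $t$.
\begin{verbatim}
# Illustrative sketch (ad hoc names): a partition p in P(k,l) is a list
# of blocks over points ('u', i) (upper) and ('d', j) (lower), 1-based.
def compose(q, p):
    """Composition q.p for p in P(k,l), q in P(l,m): glue the lower
    points of p to the upper points of q, take connected components,
    and remove (but count) the loops in the middle row."""
    parent = {}
    def find(a):
        parent.setdefault(a, a)
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
    # relabel so that glued points coincide in the middle row
    relabel_p = lambda x: ('mid', x[1]) if x[0] == 'd' else ('top', x[1])
    relabel_q = lambda x: ('mid', x[1]) if x[0] == 'u' else ('bot', x[1])
    for block in p:
        pts = [relabel_p(x) for x in block]
        for a in pts:
            union(pts[0], a)
    for block in q:
        pts = [relabel_q(x) for x in block]
        for a in pts:
            union(pts[0], a)
    classes = {}
    for a in list(parent):
        classes.setdefault(find(a), set()).add(a)
    loops, blocks = 0, []
    for cls in classes.values():
        outer = {x for x in cls if x[0] != 'mid'}
        if not outer:
            loops += 1          # closed loop: removed and counted
        else:
            blocks.append(frozenset(('u', x[1]) if x[0] == 'top'
                                    else ('d', x[1]) for x in outer))
    return blocks, loops

cap = [{('u', 1), ('u', 2)}]    # {{1,2}} in P(2,0)
cup = [{('d', 1), ('d', 2)}]    # {{1',2'}} in P(0,2)
print(compose(cup, cap))        # the pair partition {{1,2},{1',2'}}, 0 loops
print(compose(cap, cup))        # ([], 1): one closed loop is removed
\end{verbatim}
The first printed composition reproduces the example $(\LPartition{}{0.6:1,2})(\UPartition{}{0.4:1,2})=\Paabb$ below, while the second closes a circle, which is removed and counted.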
We will repeatedly consider the following special partitions: \begin{alignat*}{4} &{\mathord{\uparrow}} &&=\{\{1'\}\} \in P(0,1), && \Pabab &&=\{\{1,2'\},\{2,1'\}\} \in P(2,2), \\ &\Paa &&=\{\{1,1'\}\} \in P(1,1), &&\Pabcb &&=\{\{1\},\{2,1'\},\{2'\}\}\in P(2,2),\\ &\Pab &&=\{\{1\},\{1'\}\}\in P(1,1), && \Paaaa &&=\{\{1,2,1',2'\}\}\in P(2,2),\\ &\UPartition{}{0.4:1,2} &&=\{\{1,2\}\}\in P(2,0), &&\Paabb &&=\{\{1,2\},\{1',2'\}\}\in P(2,2),\\ &\LPartition{}{0.6:1,2} &&=\{\{1',2'\}\}\in P(0,2), \qquad \qquad &&\Partition{\Pblock 0 to 0.25:1,2 \Pblock 1 to 0.75:2,3 \Pline (3,0) (1,1)} &&=\{\{1,3'\},\{2,3\},\{1',2'\}\}\in P(3,3). \end{alignat*} A \emph{category of partitions} $\mathcal C$ is a collection of subsets $\mathcal C(k,l) \subseteq P(k,l)$, $k,l\in \mathbb N_0$, containing the partitions $\UPartition{}{0.4:1,2} \in P(2,0)$ and $\Paa \in P(1,1)$, which is closed under the following operations: \begin{enumerate}[label=$\bullet$] \item The \emph{tensor product} $p\otimes q \in P(k+k',l+l')$ is the horizontal concatenation of two partitions $p\in P(k,l)$ and $q\in P(k',l')$. \item The \emph{involution} $p^* \in P(l,k)$ is obtained by turning a partition $p\in P(k,l)$ upside-down. \item Let $p\in P(k,l)$ and $q\in P(l,m)$. Then we can consider the vertical concatenation of the partitions $p$ and $q$. We may obtain connected components, called \emph{loops}, which are neither connected to upper nor to lower points. We denote their number by $l(q,p)$. The \emph{composition} $q\cdot p \in P(k,m)$ of $p$ and $q$ is the vertical concatenation, where we remove all loops. \end{enumerate} \begin{example} $\LPartition{}{0.6:1,2} \ensuremath{\otimes} \Paa \ensuremath{\otimes} \UPartition{}{0.4:1,2} = \Partition{\Pblock 0 to 0.25:1,2 \Pblock 1 to 0.75:2,3 \Pline (3,0) (1,1)}$, $(\LPartition{}{0.6:1,2})^*=\UPartition{}{0.4:1,2}$, $(\Paa)(\Paa) = \Paa$, $(\LPartition{}{0.6:1,2})(\UPartition{}{0.4:1,2})=\Paabb$, $(\Paaaa)^2=\Paaaa$, $(\Paabb)^2=\Paabb$. \end{example} For any subset $E\subseteq P=\bigsqcup_{k,l} P(k,l)$, we denote by $\langle E \rangle$ the category of partitions obtained by taking the closure of $E \cup \{ \LPartition{}{0.6:1,2}, \Paa \}$ under tensor products, involution and composition. \begin{example} We will study the following examples throughout the paper. \begin{itemize}[label=$\star$] \item The category of all partitions $P$ is obviously a category of partitions and we have $P=\langle \Pabab, {\mathord{\uparrow}}, \Paaaa \rangle$. \item The category of partitions $P_{even}:=\langle \Pabab, \Paaaa \rangle$ consists of the partitions which have only blocks of even size. \item The category of partitions $P_2:=\langle \Pabab \rangle$ consists of those partitions which have only blocks of size two. \item The category of partitions $NC:=\langle {\mathord{\uparrow}}, \Paaaa \rangle$ consists of all non-crossing partitions, i.e. partitions whose representing diagrams have no strings that cross each other. Note that this is independent of the choice of the representing diagram. \item The category of partitions $NC_{even}:=\langle \Paaaa \rangle$ consists of the non-crossing partitions which have only blocks of even size. \item The category of partitions $NC_2$ consists of those non-crossing partitions which have only blocks of size two; it is the minimal category of partitions in the sense that it is generated by $\emptyset\subset P$.
\end{itemize} \end{example} In 2016, Raum and Weber \cite{RW16} classified all categories of partitions, and we briefly summarise their results. All categories of partitions fall into one of the following cases: \begin{itemize} \item The categories of partitions $\mathcal C$ with $\Pabab \in \mathcal C$ are exactly \[ P, P_{even}, P_2, \langle \Pabab, {\mathord{\uparrow}}\ensuremath{\otimes} {\mathord{\uparrow}}, \Paaaa \rangle, \langle \Pabab, {\mathord{\uparrow}} \rangle, \langle \Pabab, {\mathord{\uparrow}}\ensuremath{\otimes} {\mathord{\uparrow}} \rangle,\] see \cite{BS09}. \item The categories of partitions $\mathcal C$ which contain only non-crossing partitions are exactly \[ NC, NC_{even}, NC_2, \langle {\mathord{\uparrow}}\ensuremath{\otimes} {\mathord{\uparrow}}, \Paaaa \rangle, \langle {\mathord{\uparrow}}\ensuremath{\otimes} {\mathord{\uparrow}} \rangle, \langle \Pabcb \rangle, \langle {\mathord{\uparrow}} \rangle,\] see \cite{BS09} and \cite{We13}. Note that $\langle \Pabab, {\mathord{\uparrow}}\ensuremath{\otimes} {\mathord{\uparrow}} \rangle = \langle \Pabab, \Pabcb \rangle$. \item The categories of partitions $\mathcal C$ with $\Pabab \notin \mathcal C$ and $\Pabcabc \in \mathcal C$ are exactly \[ \langle \Pabcabc \rangle, \langle \Pabcabc, {\mathord{\uparrow}} \ensuremath{\otimes} {\mathord{\uparrow}} \rangle, \langle \Pabcabc, \Paaaa \rangle, \langle \Pabcabc, \Paaaa,h_s \rangle,s\in \mathbb N,\] where $\Pabcabc$ denotes the partition $\{\{1,3'\},\{2,2'\},\{3,1'\}\}\in P(3,3)$ and $h_s$ denotes the partition $\{\{1,3,5,\ldots,2s-1\},\{2,4,6,\ldots,2s\}\}\in P(2s,0)$, see \cite{We13}. These are the so-called \emph{half-liberated categories}. \item The categories of partitions with \[ \Partition{\Pblock 0 to 0.25:1,2 \Pblock 1 to 0.75:2,3 \Pline (3,0) (1,1) \Pline (1.5,0.25) (2.5,0.75)} = \{\{1,2,2',3'\},\{3,1'\}\} \in P(3,3) \] are called \emph{group-theoretical}. They are indexed by all normal subgroups of $\mathbb Z_2^{*n}$ for some $n\in\mathbb N\cup\{\infty\}$ which are invariant under a certain semigroup action, and there are uncountably many such categories; see \cite{RW15}.
\item The categories of partitions $\mathcal C$ with $\Paaaa \in \mathcal C$, ${\mathord{\uparrow}} \ensuremath{\otimes} {\mathord{\uparrow}} \notin \mathcal C$ and $\Partition{\Pblock 0 to 0.25:1,2 \Pblock 1 to 0.75:2,3 \Pline (3,0) (1,1) \Pline (1.5,0.25) (2.5,0.75)} \notin \mathcal C$ are exactly those generated by the element \begin{center} \begin{tikzpicture} \coordinate [label=left:{$\pi_k=$}](O) at (0,0.5); \coordinate [label=above:{$\ldots$}](A2) at (0.75,0); \coordinate [label=above:{$\ldots$}](A7) at (2.55,0); \coordinate [label=above:{$\ldots$}](A7) at (4.35,0); \coordinate [label=above:{$\ldots$}](A7) at (6.15,0); \coordinate [label=below:{$1'$}](A1) at (0,0); \coordinate (A2) at (0.3,0); \coordinate (A3) at (1.2,0); \coordinate [label=below:{$k'$}](A4) at (1.5,0); \coordinate (A5) at (1.8,0); \coordinate (A6) at (2.1,0); \coordinate (A7) at (3,0); \coordinate [label=below:{$2k'$}](A8) at (3.3,0); \coordinate (A9) at (3.6,0); \coordinate (A10) at (3.9,0); \coordinate (A11) at (4.8,0); \coordinate [label=below:{$3k'$}](A12) at (5.1,0); \coordinate (A13) at (5.4,0); \coordinate (A14) at (5.7,0); \coordinate (A15) at (6.6,0); \coordinate [label=below:{$4k'$}](A16) at (6.9,0); \draw (A1) -- (0,1.2) -- (6.9,1.2) -- (A16); \draw (A8) -- (3.3,1.2); \draw (A9) -- (3.6,1.2); \draw (A2) -- (0.3,0.9) -- (3,0.9) -- (A7); \draw (A10) -- (3.9,0.9) -- (6.6,0.9) -- (A15); \draw (3,0.9) to [bend left] (3.9,0.9); \draw (A3) -- (1.2,0.6) -- (2.1,0.6) -- (A6); \draw (A11) -- (4.8,0.6) -- (5.7,0.6) -- (A14); \draw (2.1,0.6) -- (2.7,0.6) to [bend left] (4.2,0.6) -- (4.8,0.6); \draw (A4) -- (1.5,0.3) -- (1.8,0.3) -- (A5); \draw (A12) -- (5.1,0.3) -- (5.4,0.3) -- (A13); \draw (1.8,0.3) -- (1.9,0.3) to [bend left] (2.3,0.3) -- (2.7,0.3) to [bend left] (4.2,0.3) -- (4.6,0.3) to [bend left] (5,0.3) -- (5.1,0.3); \end{tikzpicture}\end{center} for some $k\in \mathbb N$, together with the category $\langle \pi_k \mid k\in \mathbb N \rangle$; see \cite{RW16}. \end{itemize} These cases are pairwise distinct except that $\langle \pi_1 \rangle = \langle \Paaaa \rangle = NC_{even}$ and the categories \[P,P_{even},\langle \Pabab, {\mathord{\uparrow}}\ensuremath{\otimes} {\mathord{\uparrow}}, \Paaaa \rangle,\langle \Pabcabc, \Paaaa \rangle, \langle \Pabcabc, \Paaaa,h_s \rangle,s\in \mathbb N,\] are also group-theoretical. \begin{remark} \label{rem::uparrow_in_C} Note that the only categories of partitions $\mathcal C$ with ${\mathord{\uparrow}} \in \mathcal C$ are \[ P, NC, \langle \Pabab, {\mathord{\uparrow}} \rangle, \langle {\mathord{\uparrow}} \rangle. \] \end{remark} \begin{proof} It follows from the classification that any category of partitions $\mathcal C$ which is not one of these four is generated by partitions whose sum of upper and lower points is even. Hence the sum of upper and lower points is even for any partition in $\mathcal C$, and therefore ${\mathord{\uparrow}} \notin \mathcal C$. \end{proof} \subsection{Interpolating partition categories} We refer for instance to \cite{EGNO15} and \cite{NT13} for the terminology used in this subsection. The following natural definition may be deduced from Banica--Speicher's definition of easy quantum groups in \cite{BS09}. It may also be found in \cite{Fr17}.
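Recall that in the vertical concatenation of two partitions, loops are removed, so the plain composition $q\cdot p$ forgets information; the parameter $t$ in the following definition records each removed loop as a scalar factor. For instance, in the category $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}_0(\mathcal C,t)$ defined below, we have $\UPartition{}{0.4:1,2} \, \LPartition{}{0.6:1,2} = t \, \ensuremath{\mathrm{id}}_{[0]}$, since the vertical concatenation of these two partitions produces exactly one loop.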
\begin{definition}[Interpolating partition categories] For any category of partitions $\mathcal C$ and $t\in \mathbb C$ the category $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}_0(\mathcal C,t)$ has: \begin{alignat*}{2} &\text{Objects:} &&[k], k\in \mathbb N_0 ,\\ &\text{Morphisms:} &&\ensuremath{\mathrm{Hom}}([k],[l])= \mathbb C \mathcal C(k,l),\\ &\text{Composition:} \quad &&\ensuremath{\mathrm{Hom}}([l],[m]) \times \ensuremath{\mathrm{Hom}}([k],[l]) \to \ensuremath{\mathrm{Hom}}([k],[m]), \\ & &&(q,p)\mapsto qp := t^{l(q,p)}~ q\cdot p \text{ for all } p\in \mathcal C(k,l), q\in \mathcal C(l,m), \text{ extended bilinearly.} \end{alignat*} The \emph{interpolating partition category} $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(\mathcal C,t)$ is the Karoubi envelope (or pseudo-abelian completion) of $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}_0(\mathcal C,t)$, that is, the idempotent completion of the additive completion. \end{definition} \begin{example}\label{ex::uRep} By definition, $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(P_2,t)=\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(O_t)$, the category interpolating the representation categories of the orthogonal groups $\ensuremath{\mathop{\mathrm{Rep}}}(O_n)$ introduced by Deligne in 1990 \cite{De90}, and $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(P,t)= \ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_t)$, the category interpolating the representation categories of the symmetric groups $\ensuremath{\mathop{\mathrm{Rep}}}(S_n)$ introduced by Deligne in 2007 \cite{De07}. \end{example} The tensor product of partitions turns $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(\mathcal C,t)$ into a (strict) monoidal category with unit object $\mathbf{1}=[0]$. Moreover, we can define duals in $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(\mathcal C,t)$ as follows. Any object is self-dual, i.e.
for any $k\in \mathbb N_0$ the dual object of $[k]$ is given by $[k]^{\vee} := [k]$, and the (co)evaluation maps are\\ \ \\ \begin{minipage}[t]{0.45\textwidth}\begin{center}\begin{tikzpicture} \coordinate [label=right:{ev$_{k}:[k]^{\vee} \ensuremath{\otimes} [k]\to \mathbf{1}$ given by}](O) at (-1.2,1.5); \coordinate [label=right:{ev$_k=$}](O) at (-1.2,0.6); \coordinate [label=right:{$\in P(2k,0)$,}](O) at (2.7,0.6); \coordinate [label=below:{$\ldots$}](O) at (0.45,1); \coordinate [label=below:{$\ldots$}](O) at (2.25,1); \coordinate (A1) at (0,1); \coordinate (A2) at (0.9,1); \coordinate (A3) at (1.2,1); \coordinate (A4) at (1.5,1); \coordinate (A5) at (1.8,1); \coordinate (A6) at (2.7,1); \fill (A1) circle (2pt); \fill (A2) circle (2pt); \fill (A3) circle (2pt); \fill (A4) circle (2pt); \fill (A5) circle (2pt); \fill (A6) circle (2pt); \draw (A1) -- (0,0.25) -- (2.7,0.25) -- (A6); \draw (A2) -- (0.9,0.5) -- (1.8,0.5) -- (A5); \draw (A3) -- (1.2,0.75) -- (1.5,0.75) -- (A4); \end{tikzpicture}\end{center}\end{minipage} \begin{minipage}[t]{0.45\textwidth}\begin{center}\begin{tikzpicture} \coordinate [label=right:{coev$_{k}:\mathbf{1}\to [k]^{\vee} \ensuremath{\otimes} [k]$ given by}](O) at (-1.6,0.7); \coordinate [label=right:{coev$_k=$}](O) at (-1.6,-0.2); \coordinate [label=right:{$\in P(0,2k)$.}](O) at (2.7,-0.2); \coordinate [label=above:{$\ldots$}](A2) at (0.45,-0.5); \coordinate [label=above:{$\ldots$}](A7) at (2.25,-0.5); \coordinate (A1) at (0,-0.5); \coordinate (A2) at (0.9,-0.5); \coordinate (A3) at (1.2,-0.5); \coordinate (A4) at (1.5,-0.5); \coordinate (A5) at (1.8,-0.5); \coordinate (A6) at (2.7,-0.5); \fill (A1) circle (2pt); \fill (A2) circle (2pt); \fill (A3) circle (2pt); \fill (A4) circle (2pt); \fill (A5) circle (2pt); \fill (A6) circle (2pt); \draw (A1) -- (0,0.25) -- (2.7,0.25) -- (A6); \draw (A2) -- (0.9,0) -- (1.8,0) -- (A5); \draw (A3) -- (1.2,-0.25) -- (1.5,-0.25) -- (A4); \end{tikzpicture}\end{center}\end{minipage} The categorical left and right trace, induced by the dual structure, coincide and are given by\\ \begin{center} \scalebox{.8}{ \begin{tikzpicture} \coordinate [label=left:{\scalebox{1.25}{$\ensuremath{\mathrm{tr}}(p)=$ ev$_{k} \circ (p\ensuremath{\otimes} \ensuremath{\mathrm{id}}_{[k]})~ \circ$ coev$_{k}=$}}](O) at (0,0); \coordinate [label=left:{\scalebox{1.25}{$p$}}](O) at (1,0); \coordinate [label=right:{\scalebox{1.25}{$\in \ensuremath{\mathrm{End}}([0])\cong \mathbb C$}}](O) at (3.5,0); \coordinate (A1) at (0,0.5); \coordinate (A2) at (1.5,0.5); \coordinate (A3) at (0,-0.5); \coordinate (A4) at (1.5,-0.5); \fill (A1) circle (2.5pt); \fill (A2) circle (2.5pt); \fill (A3) circle (2.5pt); \fill (A4) circle (2.5pt); \draw[dashed] (A1) -- (A2) -- (A4) -- (A3) -- (A1); \draw (2,0.5) -- (2,-0.5); \draw[gray] (2.5,0.5) -- (2.5,-0.5); \draw[gray] (3,0.5) -- (3,-0.5); \draw (3.5,0.5) -- (3.5,-0.5); \draw (A1) to [bend left=90] (3.5,0.5); \draw (A2) to [bend left=90] (2,0.5); \draw (A3) to [bend right=90] (3.5,-0.5); \draw (A4) to [bend right=90] (2,-0.5); \draw[gray] (0.5,0.5) to [bend left=90] (3,0.5); \draw[gray] (1,0.5) to [bend left=90] (2.5,0.5); \draw[gray] (0.5,-0.5) to [bend right=90] (3,-0.5); \draw[gray] (1,-0.5) to [bend right=90] (2.5,-0.5); \end{tikzpicture} } \end{center} for any $p\in \mathcal C(k,k)$. Hence $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(\mathcal{C},t)$ is a pivotal category with coinciding left and right traces. 
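As an illustration of this structure, the categorical dimension of $[k]$ can be computed directly: closing up $\ensuremath{\mathrm{id}}_{[k]}$ with the above (co)evaluation maps produces exactly $k$ loops, so that \[ \dim([k]) = \ensuremath{\mathrm{tr}}(\ensuremath{\mathrm{id}}_{[k]}) = \mathrm{ev}_{k} \circ \mathrm{coev}_{k} = t^k, \] which for $t=n\in\mathbb N_0$ recovers the dimension $n^k$ of the space $(\mathbb C^n)^{\ensuremath{\otimes} k}$ appearing in the next subsection.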
Note that we defined the evaluation and coevaluation maps slightly differently from Deligne, insofar as the $i$-th point is paired with the $(2k+1-i)$-th point, not with the $(k+i)$-th point in the above diagrams. \subsection{Interpolating partition categories and easy quantum groups}\label{subsec::easyQG} Categories of partitions were initially introduced by Banica and Speicher to define easy quantum groups. In this subsection, we recall their definition and explain how interpolating partition categories interpolate the representation categories of the corresponding easy quantum groups. For the rest of this article, however, we will only work with the interpolating partition categories themselves and no knowledge of easy quantum groups is required. Let us start by briefly recalling the theory of compact matrix quantum groups. A \emph{compact matrix quantum group} is a triple $G=(A,u,n)$ consisting of a C*-algebra $A$, an integer $n\in \mathbb N_0$ and a matrix $u\in A^{n\times n}$ such that the elements $\{ u_{ij} \mid 1\leq i,j \leq n \}$ generate $A$, the matrix $u=(u_{ij})$ is unitary with invertible transpose, and the map $\Delta :A \to A \ensuremath{\otimes} A,u_{ij}\mapsto \sum_{k=1}^n u_{ik} \ensuremath{\otimes} u_{kj}$ is a *-homomorphism, see \cite{Wo87}. A \emph{finite-dimensional (co)representation of $G$} is a matrix $v\in A^{m\times m}$ with $\Delta(v_{ij})=\sum_{k=1}^m v_{ik}\otimes v_{kj}$. A morphism between two (co)representations $v\in A^{m\times m}$ and $v'\in A^{m'\times m'}$ is a linear map $T:\mathbb C^{m} \to \mathbb C^{m'}$ with $Tv = v'T$. In particular, the matrix $u\in A^{n\times n}$ is a (co)representation of $G$, called the \emph{fundamental (co)representation}. In 1988, Woronowicz proved a Tannaka--Krein type result \cite{Wo88} for compact matrix quantum groups, showing that any such quantum group $G$ is uniquely determined by its representation category $\ensuremath{\mathop{\mathrm{Rep}}}(G)$, i.e. the category of finite-dimensional unitary (co)representations (for more details see for instance \cite[§4]{We17}). In 2009, Banica and Speicher \cite{BS09} defined for any category of partitions $\mathcal C$ and $n\in \mathbb N_0$ a fiber functor into the category of finite-dimensional Hilbert spaces \begin{align*} &\mathcal{F}: \ensuremath{\mathop{\mathrm{\underline{Rep}}}}(\mathcal C,n) \to \text{Hilb}_f \text{ with } \\ &\mathcal{F}([k])=(\mathbb C^n)^{\ensuremath{\otimes} k} \text{ for any } k\in \mathbb N_0 \text{ and } \\ &\mathcal{F}(p)\in \ensuremath{\mathrm{Hom}}((\mathbb C^n)^{\ensuremath{\otimes} k},(\mathbb C^n)^{\ensuremath{\otimes} l}) \text{ for any } p\in \mathcal C(k,l). \end{align*} A compact matrix quantum group $G=(A,u,n)$ is called an \emph{(orthogonal) easy quantum group} if there exists a category of partitions $\mathcal C$ such that $\ensuremath{\mathrm{Hom}}_{\ensuremath{\mathop{\mathrm{Rep}}}(G)} (u^{\ensuremath{\otimes} k}, u^{\ensuremath{\otimes} l})=\text{span}_{\mathbb C} \{ \mathcal{F}(p) \mid p\in \mathcal C(k,l)\}$. Tannaka--Krein duality implies that, for any category of partitions $\mathcal C$ and $n\in \mathbb N_0$, there exists an easy quantum group $(A,u,n)$ (in its maximal version), which is unique up to isomorphism, denoted by $G_n(\mathcal C)$, with $\ensuremath{\mathrm{Hom}}_{\ensuremath{\mathop{\mathrm{Rep}}}(G_n(\mathcal C))} (u^{\ensuremath{\otimes} k}, u^{\ensuremath{\otimes} l})=\text{span}_{\mathbb C} \{ \mathcal{F}(p) \mid p\in \mathcal C(k,l)\}$.
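Concretely, on a partition $p\in \mathcal C(k,l)$ the functor is given by the linear maps $T_p$ of \cite{BS09}: writing $(e_1,\ldots,e_n)$ for the standard basis of $\mathbb C^n$, we have \[ \mathcal{F}(p)(e_{i_1}\ensuremath{\otimes} \cdots \ensuremath{\otimes} e_{i_k}) = \sum_{j_1,\ldots,j_l=1}^{n} \delta_p(i,j)\, e_{j_1}\ensuremath{\otimes} \cdots \ensuremath{\otimes} e_{j_l}, \] where $\delta_p(i,j)=1$ if labelling the upper points of $p$ by $i=(i_1,\ldots,i_k)$ and the lower points by $j=(j_1,\ldots,j_l)$ yields a labelling which is constant on every block of $p$, and $\delta_p(i,j)=0$ otherwise.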
\begin{example} \label{ex::std_ex} The easy quantum group $G_n(P)$ is the triple $(C(S_n),u,n)$ where $C(S_n)$ is the algebra of complex-valued continuous functions on the symmetric group $S_n$ (regarded as a matrix group) and $u$ is the matrix of coordinate functions. Similarly, $G_n(P_{even})$ corresponds to the hyperoctahedral group $H_n=S_2 \wr S_n$ and $G_n(P_2)$ corresponds to the orthogonal group $O_n$. This is consistent with \Cref{ex::uRep}, and extending that notation, we denote $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(H_t):=\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(P_{even},t)$. The easy quantum groups $S_n^+=G_n(NC)$, $H_n^+=G_n(NC_{even})$ and $O_n^+=G_n(NC_2)$ are called free symmetric quantum groups, free hyperoctahedral quantum groups and free orthogonal quantum groups, respectively, and we denote $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_t^+):=\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(NC,t)$, $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(H_t^+):=\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(NC_{even},t)$ and $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(O_t^+):=\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(NC_2,t)$. \end{example} The definition of easy quantum groups implies that, for any category of partitions $\mathcal C$, the canonical functor $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(\mathcal C,n) \to \ensuremath{\mathop{\mathrm{Rep}}}(G_n(\mathcal C))$ is surjective on objects and morphisms (for $\mathcal C=P$ compare with \cite[Prop.~3.19.]{CO11}). In the following section we will see that $\ensuremath{\mathop{\mathrm{Rep}}}(G_n(\mathcal C))$ is in fact equivalent to the unique semisimple quotient of $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(\mathcal C,n)$. \begin{lemma} \label{lem::fiber-functor} Let $\mathcal C$ be a category of partitions, $n\in \mathbb N_0$ and consider the easy quantum group $G_n(\mathcal C)=(A,u,n)$. Then the functor $$\mathcal{G}:\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(\mathcal C,n) \to \ensuremath{\mathop{\mathrm{Rep}}}(G_n(\mathcal C)),~ [k]\mapsto u^{\ensuremath{\otimes} k},~ p\mapsto \mathcal{F}(p)$$ is full and essentially surjective. \end{lemma} \section{Semisimplicity for interpolating partition categories} In this section we analyse the categories $\RepCt$ with respect to semisimplicity. We will consider the categories from \Cref{ex::std_ex} on a case-by-case basis, before following a generic approach due to Knop to analyse $\RepCt$ for all group-theoretical categories of partitions $\mathcal C$. In both cases we use a reduction argument which shows that it suffices to check whether certain determinants vanish. We will start by explaining this reduction argument. By construction, the category $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(\mathcal C,t)$ is Karoubian (i.e., pseudo-abelian), but in general, it is not abelian. However, we can construct a unique semisimple (and hence abelian) quotient category from it, the \emph{semisimplification} $\widehat{\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(\mathcal C,t)}$. Let us recall some definitions and general results on this idea; for more details see \cite{EO18}. \begin{definition} Let $\mathcal{R}$ be a $k$-linear pivotal category over a field $k$ with coinciding left and right traces. A morphism $f:X \to Y$ in $\mathcal{R}$ is called \emph{negligible} if $\ensuremath{\mathrm{tr}}(f\circ g)=0$ for all morphisms $g:Y\to X$ in $\mathcal{R}$.
We denote by $\mathcal{N}$ the set of all negligible morphisms in $\mathcal{R}$. \end{definition} \begin{remark} The set of all negligible morphisms $\mathcal{N}$ is a tensor ideal and the quotient category $\mathcal{R}/\mathcal{N}$ is again a spherical category with $\ensuremath{\mathrm{tr}} (f + \mathcal{N}) = \ensuremath{\mathrm{tr}} (f)$ for any endomorphism $f$ in $\mathcal{R}$. \end{remark} \begin{lemma}[{\cite[Thm.~2.6.]{EO18}}] \label{rem::negl_morphisms} Let $k$ be an algebraically closed field. Let $\mathcal{R}$ be a $k$-linear Karoubian pivotal category with coinciding left and right traces such that all morphism spaces are finite-dimensional and the trace of any nilpotent endomorphism is zero. Then the quotient category \[ \widehat{\mathcal{R}} := \QR{\mathcal{R}}{\mathcal{N}} \] is a semisimple category, the \emph{semisimplification of $\mathcal{R}$}, whose simple objects correspond to the indecomposable objects of $\mathcal{R}$ of non-zero dimension. \end{lemma} To use this result for interpolation categories $\RepCt$, we observe: \begin{lemma} \label{lem-trace-nilpotent} For any category of partitions $\mathcal C$ and $t\in\mathbb C$, the trace of any nilpotent endomorphism in $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(\mathcal C,t)$ is zero. \end{lemma} \begin{proof} Let $f$ be a nilpotent endomorphism in $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(\mathcal C,t)$. Then $f$ is also a nilpotent endomorphism in $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(P,t)$. By \cite[Th.~3.24., Cor.~5.23.]{CO11} $\widehat{\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(P,t)}=\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(P,t)/\mathcal{N}$ is a semisimple category. Since the trace of any nilpotent endomorphism in a semisimple category is zero, we have $\ensuremath{\mathrm{tr}}_{\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(\mathcal C,t)}(f)=\ensuremath{\mathrm{tr}}_{\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(P,t)}(f)=\ensuremath{\mathrm{tr}}_{\widehat{\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(P,t)}} (f + \mathcal{N})=0$. \end{proof} Combining the previous two lemmas, we obtain: \begin{lemma} \label{cor-criterion} Let $\mathcal C$ be a category of partitions and $t\in\mathbb C$. The category $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(\mathcal C,t)$ is semisimple if and only if all negligible morphisms are trivial. \end{lemma} For any category of partitions $\mathcal C$, the semisimple quotient categories $\widehat{\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(\mathcal C,t)}$, $t\in \mathbb C$ interpolate the representation categories of the corresponding easy quantum groups $\ensuremath{\mathop{\mathrm{Rep}}}(G_n(\mathcal C))$, $n\in \mathbb N_0$, in the following sense (for $\mathcal C=P$ compare with \cite[Thm.~6.2.]{De07}, for $\mathcal C=P_2$ compare with \cite[Thm.~9.6.]{De07}): \begin{proposition} \label{prop::fiber-functor} Let $\mathcal C$ be a category of partitions, $n\in \mathbb N_0$ and let $\mathcal{G}:\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(\mathcal C,n) \to \ensuremath{\mathop{\mathrm{Rep}}}(G_n(\mathcal C))$ be the canonical functor described in \Cref{lem::fiber-functor}. Then the induced functor \[ \widehat{\mathcal{G}}: \widehat{\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(\mathcal C,n)} \to \ensuremath{\mathop{\mathrm{Rep}}}(G_n(\mathcal C)) \] is an equivalence of categories. \end{proposition} \begin{proof} Since $\ensuremath{\mathop{\mathrm{Rep}}}(G_n(\mathcal C))$ is semisimple, all negligible morphisms are trivial. 
As the image of a morphism $f$ under a full tensor functor is negligible if and only if $f$ is negligible, the functor $\widehat{\mathcal{G}}$ is faithful. Together with \Cref{lem::fiber-functor} we conclude that $\widehat{\mathcal{G}}$ is an equivalence of categories. \end{proof} This abstract argument can be made practical by realising that the existence of negligible endomorphisms is detected by the determinants of certain Gram matrices. \begin{definition}[{\cite[Def. 4.2.]{BC07}}] For any category of partitions $\mathcal C$, we introduce the short-hand notation $\mathcal C(k)=\mathcal C(0,k)$, denoting the partitions in $\mathcal C$ with no upper points. The \emph{Gram matrices} are given by $$ G^{(k)} := (t^{l(p^*,q)})_{p,q\in\mathcal C(k)} \quad\text{for all }k\in \mathbb N_0. $$ \end{definition} Notice that the entries of the Gram matrix are just the traces of the compositions $p^* q$. \begin{example} The following table features the entries of the Gram matrix $G^{(2)}$ for $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_t)$, where we identify $\mathcal C(2)=P(0,2)$ with $P(1,1)=\{\ensuremath{\mathrm{id}}_1,\Pab\}$ by rotating the first lower point up, so that the entries become the traces $\ensuremath{\mathrm{tr}}(p^*q)$ in $\ensuremath{\mathrm{End}}([1])$: \[ \begin{tabular}{c|cc} & $\ensuremath{\mathrm{id}}_1$ & \Pab \\ \hline $\ensuremath{\mathrm{id}}_1$ & $t$ & $t$ \\ \Pab & $t$ & $t^2$ \end{tabular} \] Its determinant is $t^2(t-1)$. Note that the Gram matrices explained here differ from those computed in \cite[Ex. 3.14.]{CO11}, which use the ``usual'' trace form in the finite-dimensional endomorphism algebras. \end{example} \begin{proposition} \label{lem::semisimple_endo} \label{lem::semisimple_determinant_general} Let $t\in \mathbb C$ and let $\mathcal C$ be a category of partitions. Then $\RepCt$ is semisimple if and only if it satisfies $\det(G^{(k)})\neq 0$ for all $k\in\mathbb N$. \end{proposition} \begin{proof} By \Cref{cor-criterion}, $\RepCt$ is semisimple if and only if it does not contain any non-trivial negligible morphisms. Now $\RepCt$ is constructed as a Karoubi envelope, that is, an idempotent completion of an additive completion, but we claim that negligibility can be traced back to the original category, $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}_0(\mathcal C,t)$ in this case. First, as any negligible morphism of a direct summand extends trivially to a negligible morphism of the full object, we only have to worry about the additive completion. We can think of its morphisms as matrices whose entries are morphisms in the original category. One sees that, for such a matrix to be a negligible morphism, all of its entries have to be negligible. Hence, $\RepCt$ is semisimple if and only if there are no non-trivial negligible morphisms $f\in \ensuremath{\mathrm{Hom}} ([k],[l])$ for any $k,l\in \mathbb N_0$. Using the duality maps to rotate upper points down, i.e. the isomorphisms $\ensuremath{\mathrm{Hom}}([k],[l])\cong \ensuremath{\mathrm{Hom}}([0],[k+l])$, we see that this is equivalent to there being no non-trivial negligible morphisms $f\in \ensuremath{\mathrm{Hom}} ([0],[k])$ for any $k\in \mathbb N_0$. Hence, $\RepCt$ is semisimple if and only if the bilinear form $$ \ensuremath{\mathrm{Hom}}([0],[k]) \times \ensuremath{\mathrm{Hom}}([0],[k]) \to \mathbb C, (p,q)\mapsto t^{l(q^*,p)} $$ (defined on the basis of partitions and extended bilinearly) is non-degenerate. The Gram matrix of this form is exactly $G^{(k)}$, and hence, the form is non-degenerate if and only if $G^{(k)}$ has trivial kernel. Thus the claim follows (note that $\det(G^{(0)})=1$). \end{proof} \begin{corollary} For any category of partitions $\mathcal C$ and any transcendental $t\in\mathbb C$, $\RepCt$ is semisimple. \end{corollary} \begin{proof} For any $k\in \mathbb N$, the determinant $\det(G^{(k)})$ is a polynomial in $t$ with integer coefficients. Moreover, it is non-zero as a polynomial: its unique term of highest degree, $t^{\sum_{p\in\mathcal C(k)}\#p}$, comes from the product over the diagonal, since $l(p^*,q)\leq \frac{1}{2}(\#p+\#q)$ with equality if and only if $p=q$. In particular, $\det(G^{(k)})$ does not vanish at any transcendental $t$. \end{proof} Let us contrast this with the case $t=0$.
\begin{lemma}\label{lem::semisimplification-t0} For any category of partitions $\mathcal C$, $\widehat{\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(\mathcal C,0)}$ is equivalent to the category of complex vector spaces. \end{lemma} \begin{proof} The morphism space $\ensuremath{\mathrm{Hom}}([k],[l])$ in $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(\mathcal C,0)$ consists of negligible morphisms if $k>0$ or $l>0$, since closing up any partition diagram with at least one point produces at least one loop, so that all the relevant traces are positive powers of $t=0$. The non-zero endomorphism $\ensuremath{\mathrm{id}}_0$ of the object $[0]$, on the other hand, is not negligible. \end{proof} Deligne showed that $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_t)$ is semisimple if and only if $t\notin \mathbb N_0$, see \cite[Thm.~2.18.]{De07}. We will show that this is also the case for all group-theoretical categories of partitions, including $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(H_t)$. Let us first recall some known examples. \begin{remark} \label{rem::TL_semisimple} The category $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(O_t^+)$ is exactly (the Karoubian version of) the Temperley--Lieb category $TL(q)$ with $t=q+q^{-1}$ (introduced in \cite{GL98}). It is well-known to be semisimple if and only if $q$ is not a $2l$-th root of unity other than $\pm 1$, i.e. $q\notin \{ e^{\frac{ij \pi}{l}} \mid l\in \mathbb N_{\geq 2}, j\in \{1,\ldots,l-1\}\}$ (for instance, this follows from results in \cite{GW02}). This implies that the category $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(O_t^+)$ is semisimple if and only if \[ t\notin \{2\cdot \cos \left(\frac{j \pi}{l}\right) \mid l\in \mathbb N_{\geq 2}, j\in \{1,\ldots,l-1\}\}. \] \end{remark} \begin{proposition} \label{lem::St+_semisimple} The category $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_t^+)$ is semisimple if and only if \[ t\notin \{4\cdot \cos^2 \left(\frac{j \pi}{l} \right) \mid l\in \mathbb N_{\geq 2}, j\in \{1,\ldots,l-1\}\}. \] \end{proposition} \begin{proof} By \cite{Tu93} or \cite[Prop.~5.37.]{Ju19}, the determinants described in \Cref{lem::semisimple_determinant_general} are all non-zero if and only if $t$ avoids the asserted set of values. The claim hence follows from \Cref{lem::semisimple_determinant_general}. \end{proof} \begin{proposition} \label{lem::Ht+_semisimple} The category $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(H_t^+)$ is semisimple if and only if \[ t\notin \{4\cdot \cos^2 \left(\frac{j \pi}{l}\right) \mid l\in \mathbb N_{\geq 2}, j\in \{1,\ldots,l-1\}\}. \] \end{proposition} \begin{proof} By \cite{Ah16}, the determinants described in \Cref{lem::semisimple_determinant_general} are all non-zero if and only if $t$ avoids the asserted set of values. The claim hence follows from \Cref{lem::semisimple_determinant_general}. \end{proof} \subsection{Semisimplicity in the group-theoretical case} In this subsection we prove our first main theorem, namely that any category $\RepCt$ associated to a group-theoretical category of partitions $\mathcal C$ is semisimple if and only if $t\notin \mathbb N_0$. In 2007, Knop \cite{Kn07} studied tensor envelopes of regular categories, a setting in which Deligne's category $\ensuremath{\mathop{\mathrm{\underline{Rep}}}} (S_t)$ arises as a special case. Using the semilattice structure of subobjects, he gives a criterion for semisimplicity for most of the tensor categories he considers, including $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_t)$.
We will mimic his proof by studying it in the special case of $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_t)$, and then generalising it to all categories $\RepCt$ associated to group-theoretical categories of partitions. The key observation which allows us to use Knop's idea is the following. If we consider Knop's work in the special case of $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_t)$, the semilattice of subobjects of $[k]$ corresponds to the meet-semilattice on partitions of $k$ points given by the refinement order. It is well-known that the (reversed) refinement order induces a lattice structure on the partitions of $k$ points, as well as on the non-crossing partitions of $k$ points; see for instance \cite{NS06}. In the following, we will use that group-theoretical categories of partitions are closed under taking common coarsenings of partitions, i.e. under the meet with respect to the refinement order, and hence that we again obtain a semilattice structure. Let us start by briefly recalling some basics on partially ordered sets and semilattices, see \cite[Ch.~9]{NS06} and \cite[Ch.~7]{Kn07}. \begin{definition}[{\cite[Def. 9.15.]{NS06}}] Let $(L,\leq)$ be a finite partially ordered set (poset). For two elements $u,v\in L$ we consider the set $\{ w\in L \mid w\leq u, w\leq v\}$. If the maximum of this set exists, it is called the \emph{meet of $u$ and $v$} and denoted by $u\wedge v$. If any two elements of $L$ have a meet, then $(L,\wedge)$ is called a \emph{meet-semilattice}. \end{definition} \begin{remark}[{\cite[Rem. 10.2.]{NS06}}] Let $L$ be a finite poset and let $L=\{u_1,\ldots,u_{|L|}\}$ be a listing. We consider the $|L|\times |L|$-matrix $M$ with \begin{align*} M_{ij}= \left\{ \begin{array}{ll} 1 & \text{if } u_i\leq u_j , \\ 0 & \text{otherwise} .\\ \end{array} \right. \end{align*} Then $M$ is invertible in $\mathbb Z^{|L|\times |L|}$ and the function \[ \mu: L\times L \to \mathbb Z, (u_i,u_j)\mapsto (M^{-1})_{ij} \] is independent of the choice of the listing. \end{remark} \begin{definition}[{\cite[Def. 10.5.]{NS06}}] Let $L$ be a finite poset. Then the above-noted function, $\mu: L\times L \to \mathbb Z$, is called the \emph{Möbius function of $L$}. \end{definition} As usual, we write $u<v$ if $u\leq v$ and $u\neq v$ for $u,v\in L$. \begin{lemma} \label{lem::cover} Let $L$ be a finite poset and let $u,v\in L$. \begin{enumerate}[label=(\roman*)] \item $\mu (u,u)=1$. \item If $v$ covers $u$, i.e. $u<v$ and there is no element $w\in L$ with $u< w < v$, then $\mu (u,v)=-1$. \end{enumerate} \end{lemma} \begin{proof} We can choose a listing of $L$ such that the matrix $M$ defining the Möbius function is unitriangular, see \cite[Ex. 10.25]{NS06}. Hence $\mu (u,u)=(M^{-1})_{uu}=1$. If $v$ covers $u$, we can additionally assume that $u$ and $v$ appear one after the other in the listing of $L$. Then \[ N:= \left( \begin{array}{ll} M_{uu} & M_{vu} \\ M_{uv} & M_{vv} \\ \end{array} \right) =\left( \begin{array}{ll} 1& 0 \\ 1& 1\\ \end{array} \right) \] is a block on the diagonal of $M$. Since $M$ is unitriangular, we have \[ \left( \begin{array}{ll} (M^{-1})_{uu} & (M^{-1})_{vu} \\ (M^{-1})_{uv} & (M^{-1})_{vv} \\ \end{array} \right) = N^{-1}= \left( \begin{array}{ll} 1& 0 \\ -1& 1\\ \end{array} \right). \] It follows that $\mu (u,v)=(M^{-1})_{uv}=-1$.
\end{proof} The M\"obius function can be helpful for computing certain determinants derived from a meet-semilattice: \begin{lemma}[{\cite[Lem.~7.1.]{Kn07}}] \label{lem::det_semilattice} Let $\phi:L\to \mathbb C$ be a function on a finite poset $L$ which is a meet-semilattice. Then, with $\mu$ the M\"obius function of $L$, \[ \det\big((\phi(u\wedge v))_{u,v\in L}\big) = \prod_{x\in L} \Big( \sum_{\substack{y\in L \\ y\leq x}} \mu(y,x) \cdot \phi(y) \Big) .\] \end{lemma} Now, we recall the definition of the refinement order on partitions and show that the partitions on $k$ lower points in a group-theoretical category of partitions form a meet-semilattice with respect to this partial order. Note that Nica and Speicher are considering the reversed refinement order in \cite{NS06}; however, to be consistent with the conventions in Knop's article \cite{Kn07}, our definition is dual to theirs. \begin{definition}[{\cite[Ch.~9]{NS06}}] Let $k,l\geq 0$ and let $p,q\in P(k,l)$ be partitions on $k+l$ points. We write $p\leq q$ if and only if each block of $q$ is completely contained in one of the blocks of $p$. The induced partial order is called the \emph{refinement order}. \end{definition} Note that $p\leq q$ holds if and only if $p$ can be obtained by coarsening the block structure of $q$, in which case we say that $p$ is \emph{coarser} than $q$; for instance, $\Paaaa \leq \Paabb$ in $P(2,2)$, as the unique block of $\Paaaa$ is the union of the two blocks of $\Paabb$. Moreover, the meet $p\wedge q$ of $p$ and $q$ exists in $P(k,l)$ and is the \emph{common coarsening}, i.e. the finest partition which is coarser than both $p$ and $q$. \begin{lemma} \label{lem::common_coarsening} Let $\mathcal C$ be a category of partitions. Then $\mathcal C$ is closed under common coarsenings if and only if $\mathcal C$ is group-theoretical. \end{lemma} \begin{proof} If $\mathcal C$ is closed under coarsening, then it contains $\Partition{\Pblock 0 to 0.25:1,2 \Pblock 1 to 0.75:2,3 \Pline (3,0) (1,1) \Pline (1.5,0.25) (2.5,0.75)}$, since this partition is a coarsening of the partition $\Partition{\Pblock 0 to 0.25:1,2 \Pblock 1 to 0.75:2,3 \Pline (3,0) (1,1)}$, which is contained in any category of partitions. See \cite[Lemma 2.3.]{RW14} for the converse implication. \end{proof} \begin{lemma} Let $\mathcal C$ be a group-theoretical category of partitions and $k\in \mathbb N_0$. Then the poset $\mathcal C(k)=\mathcal C(0,k)$ is a meet-semilattice with respect to the refinement order. \end{lemma} This allows us to give a condition for the semisimplicity of $\RepCt$, see \cite[Lemma 8.2.]{Kn07}. \begin{lemma} \label{lem::semisimple_determinant} Let $\mathcal C$ be a group-theoretical category of partitions. Then $\RepCt$ is semisimple if and only if \[ \Omega_k := \prod_{p\in \mathcal C(k)} \Big( \sum_{\substack{q\in \mathcal C(k) \\ q\leq p}} \mu_\mathcal C(q,p) \cdot t^{\#q} \Big) \neq 0 \quad \text{for all } k\in \mathbb N.\] \end{lemma} \begin{proof} By \Cref{lem::semisimple_determinant_general}, $\RepCt$ is semisimple if and only if the matrices $$ G^{(k)} = (t^{l(u^*,v)})_{u,v\in\mathcal C(k)} $$ have non-zero determinants for all $k\in\mathbb N$.
We define the map $\phi:\mathcal C(k)\to \mathbb C,~ p\mapsto t^{\#p}$. Since $\#(u\wedge v)=l(u^*,v)$ for all $u,v\in \mathcal C(k)$, \Cref{lem::det_semilattice} implies that \begin{align*} \det(G^{(k)}) &= \det\big((\phi(u\wedge v))_{u,v\in \mathcal C(k)}\big) \\ &= \prod_{p\in \mathcal C(k)} \Big( \sum_{\substack{q\in \mathcal C(k) \\ q\leq p}} \mu_\mathcal C(q,p) \cdot \phi(q) \Big) \\ &= \prod_{p\in \mathcal C(k)} \Big( \sum_{\substack{q\in \mathcal C(k) \\ q\leq p}} \mu_\mathcal C(q,p) \cdot t^{\#q} \Big) \\ &= \Omega_k. \end{align*} \end{proof} To compute the above-noted determinant, we will further factorise it. For this purpose we recall a definition of Knop's in the special case of $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_t)$. For any $k\in \mathbb N$ we set $\underline{k}:=\{1,\ldots,k\}$ and denote by $s_k\in P(k)$ the finest partition in $P(k)$, in which each block has size one. Moreover, we set $\underline{0}:=\emptyset$ and $s_0:=\ensuremath{\mathrm{id}}_0 \in P(0)$. \begin{definition}[See {\cite[§8]{Kn07}}] Let $k,l\in \mathbb N_0$ with $k\leq l$ and let $e:\underline{k}\hookrightarrow \underline{l}$ be an injective map. We define two maps \[ e_*:P(l) \to P(k) \text{ and } e^*:P(k) \to P(l)\] as follows. For any $p\in P(l)$ we label the points from the left to the right by $\underline{l}$. Then we define $e_*(p)\in P(k)$ as the restriction of $p$ to the points in $e(\underline{k})$. For any $q\in P(k)$ we define $e^*(q)\in P(l)$ as the partition with $e_*(e^*(q))=q$ such that all points in $\underline{l}\backslash e(\underline{k})$ are singletons. Moreover, we define a scalar \[ w_e := \sum_{\substack{q\in P(l) \\ e_*(q)=s_k}} \mu_P(q,s_l) \cdot t^{\#q-k} \in \mathbb C.\] \end{definition} Note that the sum runs over partitions in $P(l)$, not just over those in $\mathcal C(l)$, and hence $w_e$ is independent of the group-theoretical category of partitions we are considering. Before we go on, we consider this definition in two special cases. \begin{remark} \label{rem::k=0} We consider the case $k=0$ and $l\in \mathbb N_0$. Then there is just one map $e:\underline{0} \to \underline{l}$, since $\underline{0}=\emptyset$. Moreover, the set $P(0)$ consists of only one partition $s_0=\ensuremath{\mathrm{id}}_0$ and $\# s_0 = 0$. Thus it follows from the definition that \begin{align*} & e_*:P(l) \to P(0), p\mapsto s_0,\\ & e^*:P(0) \to P(l), s_0 \mapsto s_l,\\ & w_e = \sum_{q\in P(l)} \mu_P(q,s_l) \cdot t^{\#q}. \end{align*} \end{remark} \begin{lemma} \label{lem::w_e=1} Let $l\in \mathbb N_0$ and let $e:\underline{l}\to \underline{l}$ be a bijection. Then $w_e=1$. \end{lemma} \begin{proof} It follows from the definition that $e_*=e^*=\ensuremath{\mathrm{id}}_{P(l)}$, and hence the only partition $q\in P(l)$ with $e_*(q)=s_l$ is the partition $s_l$ itself. Hence, we have \[ w_e = \sum_{\substack{q\in P(l) \\ e_*(q)=s_l}} \mu_P(q,s_l) \cdot t^{\#q-l} = \mu_P(s_l,s_l) \cdot t^{l-l} = 1. \] \end{proof} \begin{lemma} \label{lem::determinant_in_we} Let $\mathcal C$ be a group-theoretical category of partitions. Then \[ \Omega_k = \prod_{p\in \mathcal C(k)} w_{\emptyset \hookrightarrow \underline{\#p}} \quad \text{for all } k\in \mathbb N.\] \end{lemma} \begin{proof} Let $p\in \mathcal C(k)$. Since $\mathcal C$ is a group-theoretical category of partitions, any coarsening of $p$ lies again in $\mathcal C$. Thus there is a natural bijection \[ f: \mathcal C_{\leq p} :=\{q\in \mathcal C(k)\mid q\leq p\} \to P(\#p) \] mapping a coarsening of $p$ to the partition indicating the fusion of the blocks of $p$.
It is easy to check that \begin{itemize} \item $\mu_\mathcal C(q,q')=\mu_P(f(q),f(q'))$ for all $q,q'\in \mathcal C_{\leq p}$, \item $\#q=\#(f(q))$ for all $q\in \mathcal C_{\leq p}$ and \item $f(p)=s_{\#p}$. \end{itemize} Together with \Cref{rem::k=0} it follows that \begin{align*} \Omega_k &= \prod_{p\in \mathcal C(k)} \Big( \sum_{\substack{q\in \mathcal C(k) \\ q\leq p}} \mu_\mathcal C(q,p) \cdot t^{\#q} \Big) \\ &= \prod_{p\in \mathcal C(k)} \Big( \sum_{q\in P(\#p)} \mu_P(q,s_{\#p}) \cdot t^{\#q} \Big) \\ &= \prod_{p\in \mathcal C(k)} w_{\emptyset \hookrightarrow \underline{\#p}}. \end{align*} \end{proof} Thus \Cref{lem::semisimple_determinant} and \Cref{lem::determinant_in_we} imply the following result. \begin{lemma} \label{cor::fac_determinant} Let $\mathcal C$ be a group-theoretical category of partitions. Then $\RepCt$ is semisimple if and only if $w_{\emptyset \hookrightarrow \underline{\#p}} \neq 0$ for all $k\in \mathbb N$ and $p\in \mathcal C(k)$. \end{lemma} In the following, we factorise the elements $w_{\emptyset \hookrightarrow \underline{\#p}}$ with $p\in \mathcal C(k)$. As they are independent of $\mathcal C$, we can apply \cite[Lemma 8.4.]{Kn07} in the special case of $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_t)$, which shows that the elements $w_e$ are multiplicative. \begin{lemma}[See {\cite[Lemma 8.4.]{Kn07}}] Let $k,l\in \mathbb N_0$ with $k\leq l$ and let $e:\underline{k}\hookrightarrow \underline{l}$ be an injective map. Then the pair $(e_*,e^*)$ is a Galois connection between $P(l)$ and $P(k)$, i.e.~$e_*(p)\leq q$ if and only if $p\leq e^*(q)$ for all $p\in P(l)$ and $q\in P(k)$. \end{lemma} \begin{proof} First, let $e_*(p)\leq q$, and consider two distinct points $x,y\in \underline{l}$ which lie in the same block of $e^*(q)$. As all points in $\underline{l}\backslash e(\underline{k})$ are singletons in $e^*(q)$, we have $x,y\in e(\underline{k})$. Thus $e^{-1}(x)$ and $e^{-1}(y)$ lie in the same block of $q$ and, as $e_*(p)\leq q$, they lie in the same block of $e_*(p)$. It follows that $x$ and $y$ lie in the same block of $p$, and thus $p\leq e^*(q)$. Conversely, let $p\leq e^*(q)$, and consider two points $x,y\in \underline{k}$ which lie in the same block of $q$. Then $e(x)$ and $e(y)$ lie in the same block of $e^*(q)$ and, as $p\leq e^*(q)$, they lie in the same block of $p$. It follows that $x$ and $y$ lie in the same block of $e_*(p)$, and thus $e_*(p)\leq q$. \end{proof} In the following, let us extend the meet operation (the common coarsening) $\mathbb Z$-bilinearly to $\mathbb Z$-linear combinations of partitions. \begin{lemma}[See {\cite[Lemma 8.4.]{Kn07}}] \label{lem::we_multiplicative} Let $j,k,l\in \mathbb N_0$ with $j\leq k\leq l$ and let $\underline{j} \overset{\bar{e}}{\hookrightarrow} \underline{k}\overset{e}{\hookrightarrow} \underline{l}$ be injective maps. Then we have \[ w_{e\bar{e}} = w_e w_{\bar{e}} .\] \end{lemma} \begin{proof} By \cite[Lemma 7.2.]{Kn07} we have \[ \sum_{\substack{q\in P(l) \\ q\leq p}} \mu_P(q,p) q = \Big(\sum_{\substack{r\in P(k) \\ r\leq e_*(p)}} \mu_P (r,e_*(p)) e^*(r)\Big) \wedge \Big(\sum_{\substack{s\in P(l) \\ s\leq p \\ e_*(s)=e_*(p)}} \mu_P(s,p) s\Big) \] for all $p\in P(l)$.
For $p=s_l$ we obtain \[ \sum_{q\in P(l)} \mu_P(q,s_l) q = \Big( \sum_{r\in P(k)} \mu_P (r,s_k) e^*(r)\Big) \wedge \Big( \sum_{\substack{s\in P(l) \\ e_*(s)=s_k}} \mu_P(s,s_l) s\Big) .\] We define a $\mathbb C$-linear map by the action on partitions as follows: \[ \varphi: \mathbb C P(l) \to \mathbb C, q \mapsto \left\{ \begin{array}{ll} t^{\#q-j} & (e\bar{e})_*(q)=s_j \\ 0 & \text{otherwise}\\ \end{array} \right. \] We apply $\varphi$ to both sides of the equation and obtain \[ \sum_{\substack{q\in P(l) \\ (e\bar{e})_*(q)=s_j}} \mu_P(q,s_l) t^{\#q-j} = \sum_{r\in P(k)} \sum_{\substack{s\in P(l) \\ e_*(s)=s_k}} \mu_P(r,s_k) \mu_P(s,s_l) \varphi(e^*(r)\wedge s) .\] Thus to prove that $w_{e\bar{e}} = w_e w_{\bar{e}}$, we will show \[ \varphi(e^*(r)\wedge s) = \left\{ \begin{array}{ll} (t^{\#r-j}) (t^{\#s-k})& \bar{e}_*(r)=s_j \\ 0 & \text{otherwise}\\ \end{array} \right. \] for all $r\in P(k),s\in P(l)$ with $e_*(s)=s_k$. Since $e_*(s)=s_k$ implies that \[ (e\bar{e})_*(e^*(r)\wedge s) = \bar{e}_*(r\wedge e_*(s)) = \bar{e}_*(r\wedge s_k) = \bar{e}_*(r), \] we have $(e\bar{e})_*(e^*(r)\wedge s)=s_j$ if and only if $\bar{e}_*(r)=s_j$. Since all blocks of $e^*(r)$ involving the points in $\underline{l}\backslash e(\underline{k})$ are singletons and since $e_*(s)=s_k$, the common coarsening $e^*(r)\wedge s$ has exactly $\#r$ blocks which are connected to a point in $e(\underline{k})$ and $\#s-k$ blocks which are not connected to a point in $e(\underline{k})$. It follows that $\#(e^*(r)\wedge s)=\#r+\#s-k$ and hence \[ \varphi(e^*(r)\wedge s) = t^{\#r+\#s-k-j} = (t^{\#r-j}) (t^{\#s-k}).\] \end{proof} Let us illustrate the lemma above with an example. \begin{example} Let $\mathcal C$ be a group-theoretical category of partitions, $m\in \mathbb N$ and $p\in \mathcal C(m)$. We set $l=\# p$ and consider an arbitrary injective map $e:\underline{1} \hookrightarrow \underline{\# p}$. Then $\emptyset \hookrightarrow \underline{l}$ decomposes into \[ \emptyset \hookrightarrow \{1\} \overset{e}{\hookrightarrow} \underline{l}.\] We have \begin{align*} w_{\emptyset \hookrightarrow \{1\}} &= \sum_{q\in P(1)} \mu_P(q,s_1) \cdot t^{\#q} = \mu_P(s_1,s_1) \cdot t^{1} = t \\ \text{and }w_e &= \sum_{\substack{q\in P(l) \\ e_*(q)=s_1}} \mu_P(q,s_l) \cdot t^{\#q-1} = \sum_{q\in P(l)} \mu_P(q,s_l) \cdot t^{\#q -1} \end{align*} and hence \[ w_{\emptyset \hookrightarrow \underline{l}} = \sum_{q\in P(l)} \mu_P(q,s_l) \cdot t^{\#q} = w_{\emptyset \hookrightarrow \{1\}} \cdot w_e.\] \end{example} Now, we are ready to prove our first main theorem, see \Cref{thm::main_thm_1}. \begin{theorem}\label{thm-grouptheo-semisimple} Let $\mathcal C$ be a group-theoretical category of partitions. Then $\RepCt$ is semisimple if and only if $t\notin \mathbb N_0$. \end{theorem} \begin{proof} By \Cref{cor::fac_determinant}, the category $\RepCt$ is semisimple if and only if $w_{\emptyset \hookrightarrow \underline{\#p}} \neq 0$ for all $m\in \mathbb N$, $p\in \mathcal C(m)$. Hence \Cref{lem::we_multiplicative} implies that $\RepCt$ is semisimple if and only if $w_e\neq 0$ for every map $e:\underline{k}\hookrightarrow \underline{l}$, $k,l\in\mathbb N_0$, which does not admit a factorisation $e=e_1e_2$ into injective, non-bijective maps $e_1,e_2$, i.e.~for all $e:\underline{k}\hookrightarrow \underline{k+1}$, $k\in\mathbb N_0$. Let us describe $w_e$ for a given injective map $e:\underline{k}\hookrightarrow \underline{k+1}$. Set $l=k+1$.
We can assume that $e(i)=i$ for any $i\in \underline{k}$, since this can be achieved by post-composing with a bijection $e':\underline{l}\to \underline{l}$, and $w_{e'}=1$ by \Cref{lem::w_e=1}. Thus $\{q\in P(l)\mid e_*(q)=s_k\}$ contains the partition $s_l\in P(l)$, and $X:= \{q\in P(l)\mid e_*(q)=s_k\} \backslash \{s_l\}$ consists exactly of the $k$ partitions in which the $l$-th point lies in a block of size two and all other blocks have size one. It follows that \[ w_e = \mu_P(s_l,s_l) t^{l-k} + \sum_{q\in X} \mu_P(q,s_l) t^{0} .\] Since $s_l$ covers every partition $q\in X$, we can apply \Cref{lem::cover} and conclude that \[ w_e = 1\cdot t^{1} + \sum_{q\in X} (-1)t^{0} = t-k .\] This proves our assertion that $\RepCt$ is semisimple if and only if $t\not\in\mathbb N_0$. \end{proof} Together with \Cref{cor-criterion}, our previous result implies that there are non-trivial negligible morphisms in $\RepCt$ as soon as $t\in\mathbb N_0$. To better understand negligible morphisms, we discuss some examples. \begin{definition} For any group-theoretical category of partitions $\mathcal C$, any $k,l\in \mathbb N_0$ and any partition $p\in\mathcal C(k,l)$, we define recursively $$ x_p := p - \sum_{q\lneq p} x_q \quad \in \ensuremath{\mathrm{Hom}}_\RepCt([k],[l]) . $$ \end{definition} \begin{remark}\label{rem-negligible-mor} If $t\in\mathbb N_0$, then by \cite[Rem.~3.22.]{CO11}, $x_p$ is negligible in $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_t)$ if $p$ is a partition with more than $t$ blocks (and in fact, those span the ideals of negligible morphisms in $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}_0(S_t)$). This implies that such $x_p$ are negligible in $\RepCt$ for any group-theoretical $\mathcal C$. Subtracting such negligible morphisms, we see that, modulo the tensor ideal of negligible morphisms, any morphism in $\RepCt$ is equivalent to a linear combination of partitions with at most $t$ blocks each. \end{remark} \begin{example} \label{expl-xid} If $t\in \mathbb N_0$ and $\mathcal C$ is group-theoretical, then $x_{\ensuremath{\mathrm{id}}_{t+1}}$ is a non-trivial negligible endomorphism in $\RepCt$. \end{example} \section{Indecomposable objects} In this section, we take a look at indecomposable objects in $\RepCt$ for any category of partitions $\mathcal C$. Notions like $\ensuremath{\mathrm{End}}$ and $\ensuremath{\mathrm{Hom}}$ are meant with respect to the category $\RepCt$. We prove a classification result for indecomposable objects in $\RepCt$, \Cref{thm::main_thm_2}, which is uniform in $\mathcal C$: for each category of partitions $\mathcal C$, a distinguished set of projective partitions $\cP$ will be considered which defines a system of finite groups, the union of whose irreducible complex representations will be shown to correspond to the indecomposable objects in $\RepCt$. We also consider and derive results on the Grothendieck ring and the semisimplification of $\RepCt$. \subsection{From indecomposable objects to primitive idempotents}\label{ssec::nuk} In the following, we provide a strategy which reduces the problem of classifying indecomposable objects in $\RepCt$ to a classification of primitive idempotents in certain quotient algebras. Recall the following definitions. \begin{definition} Let $R$ be a ring. Two elements $a,b\in R$ are said to be \emph{conjugate} if there exists an invertible element $c\in R$ such that $a=cbc^{-1}$. An element $e\in R$ is called \emph{idempotent} if $e^2=e$. Two idempotents $e_1,e_2\in R$ are said to be \emph{orthogonal} if $e_1e_2=e_2e_1=0$.
An idempotent $e\in R$ is called \emph{primitive} if it is non-zero and cannot be decomposed as a sum of two non-zero orthogonal idempotents. \end{definition} For $\mathcal C=P$, the following statements are discussed in \cite[Prop.~2.20]{CO11}. They follow in our more general situation from the fact that $\RepCt$ is a Karoubian category with finite-dimensional endomorphism algebras. For any object $A\in \RepCt$ and any idempotent $e\in \ensuremath{\mathrm{End}}(A)$ we denote the image of $e$ by $(A,e)$. \begin{lemma} \label{lem::lem_Krull-Schmidt} Let $\mathcal C$ be a category of partitions and $t\in \mathbb C$. \begin{enumerate}[label=(\roman*)] \item Let $k\in \mathbb N_0$ and let $e\in \ensuremath{\mathrm{End}}([k])$ be an idempotent. Then $([k],e)$ is indecomposable in $\RepCt$ if and only if $e$ is primitive. \item For any two idempotents $e,e'\in \ensuremath{\mathrm{End}}([k])$ the objects $([k],e)$ and $([k],e')$ are isomorphic if and only if $e$ and $e'$ are conjugate in $\ensuremath{\mathrm{End}}([k])$. \item For any indecomposable object $X$ of $\RepCt$ there exist a $k\in \mathbb N_0$ and a primitive idempotent $e\in \ensuremath{\mathrm{End}}([k])$ such that $X\cong ([k],e)$. \item (Krull--Schmidt property) Every object in $\RepCt$ is isomorphic to a direct sum of indecomposable objects, and this decomposition is unique up to the order of the indecomposables. \end{enumerate} \end{lemma} For any conjugacy class $c$ of idempotents in $\ensuremath{\mathrm{End}}([k])$, we denote by $([k],c)$ the corresponding isomorphism class of objects in $\RepCt$. However, we frequently identify a primitive idempotent with its conjugacy class and an object with its isomorphism class. The following well-known lemma allows us to classify primitive idempotents inductively. \begin{definition} For any algebra $B$, we denote by $\Lambda(B)$ the set of conjugacy classes of primitive idempotents of $B$. \end{definition} \begin{lemma}[{\cite[Lem.~3.3]{CO11}}] \label{lem::idempotent_corr} Let $A$ be a finite-dimensional $\mathbb C$-algebra, $\xi \in A$ an idempotent and $(\xi)=A\xi A$ the two-sided ideal of $A$ generated by $\xi$. Then there is a bijective correspondence \[ \Lambda(A) \overset{bij.}{\longleftrightarrow} \Lambda(\xi A \xi) \sqcup \Lambda(A/(\xi)) ;\] a primitive idempotent in $A$ corresponds to a primitive idempotent in the subalgebra $\xi A \xi$ as soon as it lies in $(\xi)$; otherwise, its image under the quotient map $A\to A/(\xi)$ is a primitive idempotent in $A/(\xi)$, and for each primitive idempotent in $A/(\xi)$, there is a unique lift (up to conjugation) in $A$. \end{lemma} Now we construct isomorphisms between subobjects of $[k]$ and subobjects of $[l]$ with $k\neq l$. We will distinguish the cases ${\mathord{\uparrow}} \in \mathcal C$ and ${\mathord{\uparrow}} \notin \mathcal C$, as in the latter case we have the following useful feature. \begin{lemma} \label{lem::hom_different_parity} If ${\mathord{\uparrow}} \notin \mathcal C$, then $\ensuremath{\mathrm{Hom}}([k],[l])=\{0\}$ whenever $k \not\equiv l \mod 2$. \end{lemma} \begin{proof} Assume, for a contradiction, that there exists a partition $p\in \mathcal C(k,l)$ with $k \not\equiv l \mod 2$.
By successive composition with $\Paa\ensuremath{\otimes} \cdots \ensuremath{\otimes} \Paa \ensuremath{\otimes} \LPartition{}{0.6:1,2}$ and $\Paa\ensuremath{\otimes} \cdots \ensuremath{\otimes} \Paa \ensuremath{\otimes} \UPartition{}{0.4:1,2}$ we would obtain the partition ${\mathord{\uparrow}}\in \mathcal C(0,1)$ or ${\mathord{\downarrow}} \in \mathcal C(1,0)$ and hence ${\mathord{\uparrow}} \in \mathcal C$. \end{proof} \begin{definition} For $t\neq 0$ we define the idempotents \begin{align*} \nu_0 := 0, \quad \nu_1 := \begin{cases} \frac1t \Pab & {\mathord{\uparrow}} \in \mathcal C \\ 0 & \text{else} \end{cases} , \quad \nu_k := \begin{cases} \frac1t ~ \ensuremath{\mathrm{id}}_{k-1} \ensuremath{\otimes} \Pab & {\mathord{\uparrow}} \in \mathcal C \\ \frac1t ~ \ensuremath{\mathrm{id}}_{k-2} \ensuremath{\otimes} \Paabb & \text{else} \end{cases} , \quad\text{for all }k\geq 2 \end{align*} in $\ensuremath{\mathrm{End}}_{\RepCt}([k])$, $k\in\mathbb N_0$. \end{definition} \begin{lemma} \label{lem::chi_isomorphisms} Set $d:=1$ if ${\mathord{\uparrow}}\in\mathcal C$, and $d:=2$ else. Then for $t\neq0$ and $k\geq d$, $$ ([k],\nu_k) \cong [k-d] \quad\text{in }\RepCt.$$ More precisely, for any $l<k$ with $k\equiv l \mod d$, there exists a partition $\nu \in \mathcal C(l,k)$ such that $([l],e)\cong ([k],t^{(l-k)/d} \nu e \nu^*)$, where $t^{(l-k)/d} \nu e \nu^*$ is an idempotent in the ideal $(\nu_k)$, for any idempotent $e\in \ensuremath{\mathrm{End}}([l])$. \end{lemma} \begin{proof} We set $l:=k-d$ and \[ \scalebox{.8}{\begin{tikzpicture} \coordinate [label=left:{\scalebox{1.25}{$\nu:=$}}](O) at (0,0.45); \coordinate [label=right:{\scalebox{1.25}{$\in\mathcal C(l,k),$}}](O) at (3,0.45); \coordinate [label=right:{$\ldots$}](O) at (0.35,0.5); \coordinate (A1) at (0,0); \coordinate (A2) at (1.5,0); \coordinate (A3) at (2,0); \coordinate (A4) at (2.5,0); \coordinate (B1) at (0,1); \coordinate (B2) at (1.5,1); \coordinate (B3) at (2,1); \fill (A1) circle (2.5pt); \fill (A2) circle (2.5pt); \fill (A3) circle (2.5pt); \fill (A4) circle (2.5pt); \fill (B1) circle (2.5pt); \fill (B2) circle (2.5pt); \fill (B3) circle (2.5pt); \draw (A1) -- (B1); \draw (A2) -- (B2); \draw (2.5,0.4) -- (A4); \draw (A3) -- (B3); \end{tikzpicture}} \] if ${\mathord{\uparrow}}\in\mathcal C$, or otherwise \[ \scalebox{.8}{\begin{tikzpicture} \coordinate [label=left:{\scalebox{1.25}{$\nu:=$}}](O) at (0,0.45); \coordinate [label=right:{\scalebox{1.25}{$\in\mathcal C(l,k).$}}](O) at (3.5,0.45); \coordinate [label=right:{$\ldots$}](O) at (0.35,0.5); \coordinate (A1) at (0,0); \coordinate (A2) at (1.5,0); \coordinate (A3) at (2,0); \coordinate (A4) at (2.5,0); \coordinate (A5) at (3,0); \coordinate (B1) at (0,1); \coordinate (B2) at (1.5,1); \coordinate (B3) at (2,1); \fill (A1) circle (2.5pt); \fill (A2) circle (2.5pt); \fill (A3) circle (2.5pt); \fill (A4) circle (2.5pt); \fill (A5) circle (2.5pt); \fill (B1) circle (2.5pt); \fill (B2) circle (2.5pt); \fill (B3) circle (2.5pt); \draw (A1) -- (B1); \draw (A2) -- (B2); \draw (A3) -- (B3); \draw (A5) -- (3,0.4) -- (2.5,0.4) -- (A4); \end{tikzpicture}} \] Then $\nu \nu^* = t~ \nu_k$ and $\nu^* \nu = t~\ensuremath{\mathrm{id}}_l$ and thus \begin{align*} \nu: ~&[l] \to ([k],\nu_k),\\ \frac1t \nu^*: ~&([k],\nu_k) \to [l] \end{align*} define mutually inverse isomorphisms, which also restrict to subobjects. An iterative application yields the claim. 
\end{proof} \begin{remark} The previous lemma implies that every object is isomorphic to a subobject of $[k]$ for some sufficiently large $k$, if ${\mathord{\uparrow}} \in \mathcal C$ (see the proof of \cite[Lem.~3.6]{CO11} for the case $\mathcal C=P$), while if ${\mathord{\uparrow}}\not\in\mathcal C$, then any object $X$ in $\RepCt$ is isomorphic to a subobject of $[k]\oplus[k+1]$ for some sufficiently large $k$. As there are no non-zero morphisms between $[k]$ and $[k+1]$, the endomorphism algebra of $X$ is a direct summand in $\ensuremath{\mathrm{End}}([k]\oplus[k+1])$. So in particular, in both cases $\ensuremath{\mathrm{End}}(X)$ is semisimple if $\ensuremath{\mathrm{End}}([k])$ is semisimple for any $k\in \mathbb N_0$, which can be checked by verifying that $\det(G^{(2k)})\neq0$ for all $k\in \mathbb N_0$ (see the proof of \Cref{lem::semisimple_determinant_general}). \end{remark} With \Cref{lem::chi_isomorphisms} we are now able to decide whether a given subobject of $[k]$ is isomorphic to a subobject of $[l]$ with $l\leq k$. \begin{lemma} \label{lem::Rk} Let $t\neq 0$, $k\in \mathbb N_0$ and $e\in \ensuremath{\mathrm{End}} ([k])$ a primitive idempotent. Then $([k],e)$ is isomorphic to a subobject of $[l]$ for some $l<k$ if and only if $e\in (\nu_k)$. \end{lemma} \begin{proof} Let $e\in (\nu_k)$. Then \Cref{lem::idempotent_corr} implies that $e$ is conjugate to some primitive idempotent in $\nu_k \ensuremath{\mathrm{End}} ([k]) \nu_k$ and hence we can assume that $e \in \nu_k \ensuremath{\mathrm{End}} ([k]) \nu_k$. Then $([k],e)$ is isomorphic to a subobject of $([k],\nu_k)$ and \Cref{lem::chi_isomorphisms} tells us that $([k],e)$ is isomorphic to a subobject of $[k-d]$. Now, let $e\notin (\nu_k)$. Consider an object of the form $([l],f)$ with $l<k$ and assume that it is isomorphic to $([k],e)$. Then \Cref{lem::chi_isomorphisms} together with \Cref{lem::hom_different_parity} implies that there exists an idempotent $f'\in \ensuremath{\mathrm{End}}([k])$ with $([l],f)\cong ([k],f')$ and $f'\in (\nu_k)$. But since $e\notin (\nu_k)$, the idempotents $f'$ and $e$ cannot be conjugate. Hence $([l],f)$ is not isomorphic to $([k],e)$, which is a contradiction. \end{proof} We obtain our first general description of the indecomposable objects in interpolating partition categories. \begin{definition} \label{def::Lambda} For $t\neq 0$ we set $$ \Lambda_k := \Lambda(\ensuremath{\mathrm{End}}([k]) / (\nu_k)),$$ so $\Lambda_k$ is the set of conjugacy classes of primitive idempotents in the quotient algebras defined by the idempotents $\nu_k,k\in \mathbb N_0$. For any $e\in\Lambda_k$, we denote its unique (primitive idempotent) lift in $\Lambda(\ensuremath{\mathrm{End}}([k]))$ by $L_e$ (see \Cref{lem::idempotent_corr}). \end{definition} Note that $\Lambda_0=\{\ensuremath{\mathrm{id}}_0\}$ and, identifying classes with suitable representatives, $ \Lambda_1 = \begin{cases} \{ \ensuremath{\mathrm{id}}_1 - \frac{1}{t}\Pab \} & \text{if }{\mathord{\uparrow}}\in\mathcal C \\ \{ \ensuremath{\mathrm{id}}_1 - \frac{1}{t}\Pab, \frac{1}{t}\Pab \} & \text{if }\Pab\in\mathcal C(1,1) \text{ and } {\mathord{\uparrow}}\notin\mathcal C \\ \{ \ensuremath{\mathrm{id}}_1 \} & \text{else} \end{cases} $.\\ \begin{proposition} \label{thm::indecomp_obj} For any category of partitions $\mathcal C$ and $t\in \mathbb C \backslash \{0\}$ there is a bijection \begin{align*} \phantom{\qquad \qquad} \phi: \bigsqcup_{k\geq 0} \Lambda_k \to \left\{ \begin{matrix} \text{isomorphism classes of non-zero} \\ \text{indecomposable objects in } \RepCt \end{matrix} \right\}, \Lambda_k\ni e \mapsto ([k], L_e).
\end{align*} \end{proposition} \begin{proof} By \Cref{lem::lem_Krull-Schmidt} the isomorphism classes of non-zero indecomposable objects in $\RepCt$ are in bijection with the conjugacy classes of primitive idempotents $e$ in the algebras $\ensuremath{\mathrm{End}} ([k])$, $k\in\mathbb N_0$, for which $([k],e)$ is not a subobject of $[l]$ for any $l<k$. By \Cref{lem::Rk} these are exactly the conjugacy classes of primitive idempotents in $\ensuremath{\mathrm{End}}([k])$ which do not lie in $(\nu_k)$. Now, \Cref{lem::idempotent_corr} implies that these coincide with $\Lambda_k$. \end{proof} \begin{remark} The case $t=0$ can be treated analogously by adjusting the definition of $\nu_k$ as follows: \begin{align*} \nu_0 := \nu_1 := 0, \quad \nu_2:= \begin{cases} \Paaaa & {\mathord{\uparrow}}, \Paaaa \in \mathcal C \\ 0 & \text{else} \end{cases}, \quad \nu_k := \begin{cases} \ensuremath{\mathrm{id}}_{k-2} \ensuremath{\otimes} \Paaaa & {\mathord{\uparrow}}, \Paaaa \in \mathcal C \\ \ensuremath{\mathrm{id}}_{k-2} \ensuremath{\otimes} \Partition{\Pblock 0 to 0.25:1,2 \Pblock 1 to 0.75:2,3 \Pline (3,0) (1,1)} & \text{else} \end{cases}, \quad\text{for all }k\geq 3. \end{align*} One checks, for instance, that every composition $([0],\ensuremath{\mathrm{id}}_0) \to ([l],e) \to ([0],\ensuremath{\mathrm{id}}_0)$ is a scalar multiple of a positive power of $t$, and hence zero, if $t=0$ and $l>0$. Thus, for $t=0$, we have to set $\nu_1=0$, and $\nu_2=0$ if ${\mathord{\uparrow}} \notin \mathcal C$. An analogous argument shows that the statement of \Cref{thm::indecomp_obj} is still true in the case $t=0$ with the given modifications for $\nu_k$. \end{remark} \subsection{Projective partitions} \label{ssec::projectives} In the previous subsection we reduced the problem of classifying indecomposable objects in $\RepCt$ to a classification of primitive idempotents in (certain quotients of) the endomorphism algebras. We will now provide a strategy which reduces the problem further to a combinatorial problem of computing equivalence classes of certain distinguished partitions. For the rest of this article we will assume that $t\neq 0$. Recall that we denote by $q\cdot p$ the partition obtained by the composition of $p$ and $q$ for two compatible partitions $p,q$, while we denote by $qp= t^{l(q,p)} q\cdot p$ the multiplication in $\RepCt$, where $l(q,p)$ is the number of connected components concentrated in the ``middle row'' of the vertical concatenation of $p$ and $q$. By assuming $t\neq 0$ we have $q\cdot p= t^{-l(q,p)} qp$. Note also that $p\cdot q$ is the composition in $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(\mathcal C,1)$. We fix some $k\geq 0$ and denote $E:=\ensuremath{\mathrm{End}}_\mathcal C([k])=\mathbb C\mathcal C(k,k)$. We will use methods of \cite{FW16} and we start by recalling some definitions: \begin{definition} A block (= connected component) of a partition $p\in P(k,l)$ is called a \emph{through-block} if it contains upper points as well as lower points. We denote the number of through-blocks by $t(p)$. Moreover, we denote by $$ I_T := ( q\in\mathcal C(k,k): t(q)< T ) \quad\text{in }E $$ the ideal generated (or equivalently, spanned) by all partitions with less than $T$ through-blocks, for any $T\geq 0$. \end{definition} \begin{definition}[{\cite[Def. 2.7]{FW16}}] \label{def::projPart} A partition $p\in P(k,k)$ is called \emph{projective} if there exists a partition $p_0\in P(k,t(p))$ such that $p=p_0^*p_0$. For any category of partitions $\mathcal C$, we denote by ${\operatorname{Proj}_{\cC}(k)}$ the set of all projective partitions in $\mathcal C(k,k)$.
\end{definition} \begin{remark} Note that for a projective partition $p=p_0^*p_0$, $p_0$ is a partition in $P(k,t(p))$, but not necessarily in $\mathcal C(k,t(p))$. Moreover, by the structure of $p_0$, there cannot be any loops in the composition $p_0^* \cdot p_0$ and hence $p_0^*p_0$ is indeed a partition, not a scalar multiple of one. By \cite[Lemma 2.11]{FW16}, a partition $p\in \mathcal C(k,k)$ is projective if and only if $p=p^*$ and $p=p\cdot p$. Thus, $t^{-l(p,p)} p$ is an idempotent in $\ensuremath{\mathrm{End}}_{\RepCt}([k])=\mathbb C \mathcal C(k,k)$. Moreover, the partitions $q\cdot q^*$ and $q^*\cdot q$ are projective for any partition $q\in \mathcal C(k,k)$. \end{remark} \begin{example} The partitions $\Paaaa \in P(2,2)$ and $\Paabb \in P(2,2)$ are projective, but $\Partition{\Pblock 0 to 0.25:1,2 \Pblock 1 to 0.75:2,3 \Pline (3,0) (1,1)} \in P(3,3)$ is not. \end{example} The following lemma shows that we can use projective partitions to compute primitive idempotents in $E$. \begin{lemma} \label{lem-ideals-p} For any $T\geq 0$ $$ I_T = \sum_{p\in {\operatorname{Proj}_{\cC}(k)}, t(p)< T} (p) $$ and, in particular, $$ E = \sum_{p\in {\operatorname{Proj}_{\cC}(k)}} (p). $$ \end{lemma} \begin{proof} Consider $q\in \mathcal C(k,k)$ with $t(q)< T$. We set $p:=q\cdot q^* \in \mathcal C(k,k)$. By \cite[Lemma 2.11]{FW16} the partition $p$ is projective, $q=p\cdot q$, and $t(p)\leq t(q)< T$. It follows that $q=p\cdot q = t^{-l(p,q)} pq \in (p)$. This proves that the left-hand sides are contained in the right-hand sides. The opposite inclusions follow from the fact that the number of through-blocks of a product is bounded by the number of through-blocks of each factor. \end{proof} In \cite[Def. 4.1]{FW16}, Freslon and Weber associated to every projective partition a representation of the corresponding easy quantum group using the functor $\mathcal{F}$ described in \Cref{subsec::easyQG}. They observe that this representation is far from being irreducible, and go on to determine its irreducible components. Similarly, the ideals $(p)$ contain many primitive idempotents with a complicated structure. Thus, using \Cref{lem::idempotent_corr}, we will break these sets up into smaller sets of primitive idempotents, which we understand. \begin{definition} For any $p\in\mathcal C(k,k)$ we denote by $$ I_p := p E p \cap I_{t(p)} = p I_{t(p)} p $$ the ideal in $p E p$ which is spanned by all partitions with less than $t(p)$ through-blocks. \end{definition} \renewcommand\L{\mathcal{L}} \begin{proposition} \label{lem::surjection} For any primitive idempotent $e\in\Lambda(pEp/I_p)$, there is a unique primitive idempotent lift $\L_e\in\Lambda(pEp)\subset\Lambda(E)$, and the mapping \[ \L: \bigsqcup_{p\in {\operatorname{Proj}_{\cC}(k)}} \Lambda(p E p / I_p ) \to \Lambda(E), \quad e\mapsto \L_e \] is surjective. \end{proposition} \begin{proof} Recall from \Cref{lem::idempotent_corr} that we can uniquely lift (primitive) idempotents modulo any ideal which is generated by an idempotent. Since $I_p$ is the sum of ideals generated by idempotents by \Cref{lem-ideals-p}, we can repeat this process to obtain a unique primitive idempotent lift $\L_e$ for any primitive idempotent $e\in\Lambda(pEp/I_p)$. Let $f\in E$ be a primitive idempotent. Then there exists a projective partition $p\in \mathcal C(k,k)$ with $f\in pEp$; take for instance $p=\ensuremath{\mathrm{id}}_k$.
We assume that $p$ is minimal in the sense that there does not exist a projective partition $q\in \mathcal C(k,k)$ with $f\in qEq$ and $t(q)<t(p)$. If we apply \Cref{lem::idempotent_corr} inductively for all projective partitions in $I_p$, it follows, together with \Cref{lem-ideals-p}, that there exists a primitive idempotent $e\in p E p / I_p$ such that its lift $\L_e$ is conjugate to $f$. Note, in particular, that idempotents made up of partitions with at most $T$ through-blocks can be obtained as lifts of idempotents in $(p)$ for a projective partition $p$ with the same number of through-blocks $T$, for any $T\geq0$. \end{proof} Thus, in order to understand indecomposables in $\RepCt$, we have to describe the primitive idempotents in the quotients $pEp/I_p$. It turns out that this can be achieved using combinatorial methods from \cite{FW16}. In particular, we will need a certain subgroup $S(p)$ of a symmetric group which we associate to any projective partition $p$. \begin{definition}[{\cite[Def.~4.7]{FW16}}] \label{def::GroupsSp} Let $p\in{\operatorname{Proj}_{\cC}(k)}$ be a projective partition with $T:=t(p)$ through-blocks and with a decomposition $p=p_0^*p_0$ with $p_0\in P(k,T)$. For any $\sigma\in S_T$ we define $p_\sigma := p_0^* \sigma p_0$ in $P(k,k)$ and $S(p) := \{ \sigma\in S_T \mid p_\sigma \in\mathcal C(k,k) \}$. \end{definition} Note that $p=p\cdot p=p_0^*(p_0\cdot p_0^*)p_0$ implies that $p_0 \cdot p_0^*\in P(T,T)$ is a partition with at least $T$ through-blocks, hence it is a permutation. Due to its symmetric factorisation, we even get $p_0\cdot p_0^*=\ensuremath{\mathrm{id}}$. This implies that $p_\sigma \cdot p_\tau= p_{\sigma\tau}$ for $\sigma,\tau\in S_T$. As also $p_\ensuremath{\mathrm{id}}=p$, $S(p)$ is a subgroup of $S_T$. In fact, the subgroup is the same up to conjugation in $S_T$ for all choices of $p_0$. \begin{example}\label{ex::FW} If $\mathcal C=P$ is the category of all partitions, we have $S(p)=S_{t(p)}$ for all $p\in \ProjC$. It is easy to check that the same holds for $\mathcal C=P_2$, the category of partitions with only blocks of size two, and $\mathcal C=\langle \Pabab, {\mathord{\uparrow}} \rangle$, the category of partitions with blocks of size one or two. If $\mathcal C \subseteq NC$ is a category of partitions in which all partitions are noncrossing, then $S(p)=\{ \ensuremath{\mathrm{id}} \}$ for all $p\in {\operatorname{Proj}_{\cC}(k)}$. \end{example} Let us compute the groups $S(p)$ for some more examples. \begin{lemma} \label{lem::FW} Let $\mathcal C \subseteq P_{even}$ be a group-theoretical category of partitions, $k_1,\dots,k_s\geq 0$, $k:=k_1 + 2k_2 + \dots + s k_s$, $T:=k_1+\dots+k_s$, and $$ q := \{\{1,1'\}\}^{\sqcup k_1} \sqcup \{\{1,2,1'\}\}^{\sqcup k_2} \sqcup \dots \sqcup \{\{1,\dots,s,1'\}\}^{\sqcup k_s} \qquad \in P(k, T). $$ Then $p:=e_{k_1,\dots,k_s}:= q^*q \in \mathcal C(k,k)$ is a projective partition and we have $$ S(e_{k_1,k_2,\dots,k_s}) \cong S(\ensuremath{\mathrm{id}}_{k_1+k_3+\ldots})\times S_{k_2+k_4+\dots}. $$ In particular, for $\mathcal C = P_{even}$ we get $S(p)\cong S_{k_1+k_3+\ldots} \times S_{k_2+k_4+\ldots}$. \end{lemma} \begin{proof} All blocks of partitions in $\mathcal C$ have an even size, as $\mathcal C \subseteq P_{even}$. Hence, if we consider the composition $q^*\sigma q$ for some $\sigma \in S_T$ and if $q^*\sigma q \in \mathcal C$, then every string of $\sigma$ connects either two blocks of even size or two blocks of odd size.
The partition $\Partition{\Pblock 0 to 0.25:1,2 \Pblock 1 to 0.75:2,3 \Pline (3,0) (1,1) \Pline (1.5,0.25) (2.5,0.75)} \in \mathcal C$ ensures that we stay in $\mathcal C$ if we shift pairs of adjacent points through a partition. One can check that this implies that $S(p) \cong S(p_1)\times S(p_2)$ with \begin{align*} &p_1=q_1^*q_1,\quad q_1 := \{\{1,1'\}\}^{\sqcup k_1} \sqcup \{\{1,2,3,1'\}\}^{\sqcup k_3} \sqcup \dots \\ &p_2=q_2^*q_2,\quad q_2 := \{\{1,2,1'\}\}^{\sqcup k_2} \sqcup \{\{1,2,3,4,1'\}\}^{\sqcup k_4} \sqcup \dots \end{align*} Since $\{\{1,\ldots ,m,1'\}\} \in \mathcal C$ for every group-theoretical category of partitions $\mathcal C$ and odd $m\in \mathbb N$, we have $S(p_1)= S(\ensuremath{\mathrm{id}}_{k_1+k_3+\ldots})$. Moreover, any partition $p_2^*\sigma p_2$ with $\sigma \in S_{k_2+k_4+\ldots}$ is a coarsening of $r^*r$ with $r := \{\{1,2\}\}^{\sqcup k_2} \sqcup \{\{1,2\},\{3,4\}\}^{\sqcup k_4} \sqcup \dots \in \mathcal C$ and as every group-theoretical category is closed under coarsening by \cite{RW14}, it follows that $S(p_2)=S_{k_2+k_4+\ldots}$. \end{proof} The next lemma is an abstraction of Proposition 4.15 in \cite{FW16}. \begin{lemma}\label{prop::FW} Let $p\in{\operatorname{Proj}_{\cC}(k)}$ be a projective partition. Then the map $\mathbb C S(p)\to p E p$, $\sigma\mapsto p_\sigma$, induces an algebra isomorphism between $\mathbb C S(p)$ and $p E p / I_p$. \end{lemma} \begin{proof} Due to the observed multiplicativity, the map is an algebra map. Now $p E p/I_p$ is spanned by the elements $p\cdot q\cdot p+I_p$ with $q\in\mathcal C(k,k)$ for which $p\cdot q\cdot p$ is a partition with exactly $T:=t(p)$ through-blocks. As $p\cdot q \cdot p = p_0^* (p_0\cdot q\cdot p_0^*) p_0$, this means $p_0\cdot q\cdot p_0^*\in P(T,T)$ has at least $T$ through-blocks. Hence it is a permutation, and $p\cdot q\cdot p = p_{p_0\cdot q\cdot p_0^*}$ lies in the image of our map. We claim that $p_\sigma \neq p$ for any $\ensuremath{\mathrm{id}}\neq\sigma\in S_T$. Indeed, assume $p_\sigma=p$, then $$ \sigma = (p_0 \cdot p_0^*) \sigma (p_0 \cdot p_0^*) = p_0 \cdot (p_0^* \sigma p_0) \cdot p_0^* = p_0 \cdot p \cdot p_0^* = p_0 \cdot p_0^* \cdot p_0 \cdot p_0^* = \ensuremath{\mathrm{id}}_T, $$ as $p_0\cdot p_0^*=\ensuremath{\mathrm{id}}_T$. This implies that the $p_\sigma$ form a set of distinct partitions with exactly $T$ through-blocks. Hence, they are linearly independent even modulo $I_p$, and our map is bijective. \end{proof} In particular, the group algebra of the group $S(p)$ encodes the relevant information on primitive idempotents in the quotient $pEp/I_p$ for any fixed projective $p$. To investigate how primitive idempotents stemming from different projective partitions $p$ and $q$ interact in $E$, let us make the following definition: \begin{definition} Let $p\in {\operatorname{Proj}_{\cC}(k)}$ be a projective partition. We denote by $$\Lambda_k^{(p)}=\{ \L_e \mid e \in \Lambda(pEp / I_p)\}$$ the set of conjugacy classes of (primitive idempotent) lifts of all idempotents in $\Lambda(pEp / I_p)$ into $E$. \end{definition} Now, we want to study under which conditions $\Lambda_k^{(p)}\cap \Lambda_k^{(q)}\neq \emptyset$ for projective partitions $p,q\in {\operatorname{Proj}_{\cC}(k)}$. It turns out that this is exactly the case if $p$ and $q$ are equivalent in the sense of \cite[Def. 4.17]{FW16}, and then we have $\Lambda_k^{(p)}=\Lambda_k^{(q)}$.
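Before formalising this, let us illustrate the sets $\Lambda_k^{(p)}$ in the smallest non-trivial case. \begin{example} Assume $\Pab\in\mathcal C(1,1)$ and let $k=1$, so that $E=\mathbb C\mathcal C(1,1)$ is spanned by the two projective partitions $\ensuremath{\mathrm{id}}_1$ and $\Pab$. For $p=\Pab$ we have $t(p)=0$ and $I_p=0$, and since $\Pab\,\Pab=t\,\Pab$, the set $\Lambda_1^{(\Pab)}$ consists of the single class of $\frac{1}{t}\Pab$. For $p=\ensuremath{\mathrm{id}}_1$ we have $I_p=\mathbb C\Pab$ and $pEp/I_p\cong\mathbb C S_1$ by \Cref{prop::FW}, and the unique primitive idempotent lifts to $\ensuremath{\mathrm{id}}_1-\frac{1}{t}\Pab$, so $\Lambda_1^{(\ensuremath{\mathrm{id}}_1)}=\{[\ensuremath{\mathrm{id}}_1-\frac{1}{t}\Pab]\}$. In particular, $\Lambda_1^{(\ensuremath{\mathrm{id}}_1)}$ and $\Lambda_1^{(\Pab)}$ are disjoint, in accordance with the following discussion. \end{example}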
\begin{definition} \label{def::equivprojpart} Two projective partitions $p,q\in {\operatorname{Proj}_{\cC}(k)}$ are \emph{equivalent in $\mathcal C$}, denoted by $p\sim q$, if there exists a partition $r\in \mathcal C(k,k)$ such that $r\cdot r^*=p$ and $r^*\cdot r=q$. We denote the set of equivalence classes by ${\operatorname{Proj}_{\cC}(k)} / \sim$. \end{definition} Note that $p$ and $q$ being equivalent implies $t(p)=t(q)$ by \cite[Lemma 4.19]{FW16}. \begin{lemma} \label{lem::part_equiv} Two projective partitions $p,q\in {\operatorname{Proj}_{\cC}(k)}$ are equivalent if and only if the ideals $(p),(q)\unlhd E$ coincide. \end{lemma} \begin{proof} If $p$ and $q$ are equivalent, then $p=p\cdot p=r\cdot r^*\cdot r\cdot r^*=r\cdot q\cdot r^* = t^{-2l(r,q)} rqr^* \in (q)$. Similarly, we have $q\in (p)$ and hence $(p)=(q)$. Now, let $(p)=(q)$. Then we have $t(p)=t(q)$, which is the largest number of through-blocks of any partition contained in the ideal, and there exist elements $a,b\in \mathbb C \mathcal C(k,k)$ with $p=aqb$. Since $p$ and $q$ are both partitions, we can assume that $a,b$ are partitions as well and $p=a\cdot q\cdot b$. Moreover, as $p$ and $q$ are symmetric partitions, we may assume that $b=a^*$. Then $p=a\cdot q\cdot a^*=a\cdot q\cdot q\cdot a^*=(a\cdot q)\cdot (a\cdot q)^*$. Let $T:=t(p)=t(q)$, and write $q=q_0^*q_0$ for some $q_0\in P(k,T)$. As $p=p^*=p\cdot p$, we have $$ p = (a\cdot q\cdot a^*)\cdot (a\cdot q\cdot a^*)^* = (a \cdot q_0^* q_0 \cdot a^*) \cdot (a\cdot q_0^* q_0\cdot a^*) = (a \cdot q_0^*)(q_0\cdot a^* \cdot a\cdot q_0^*)(q_0\cdot a^*) . $$ Here, $q_0 \cdot a^* \cdot a \cdot q_0^*$ is a partition in $P(T,T)$ with at least $T$ through-blocks, so all blocks contain exactly one upper and one lower point. Moreover, it has a symmetric factorisation as $(q_0 \cdot a^*)\cdot (q_0 \cdot a^*)^*$, so it must be the identity partition. This means $(a\cdot q)^*\cdot (a\cdot q) = q_0^* (q_0 \cdot a^* \cdot a \cdot q_0^*) \cdot q_0 = q$, showing that $p$ and $q$ are equivalent, as desired. \end{proof} \begin{lemma} \label{lem::equiv_idemp_equiv_proj} Let $p,q\in {\operatorname{Proj}_{\cC}(k)}$ be two projective partitions. \begin{enumerate}[label=(\roman*)] \item If $p$ and $q$ are equivalent, then $\Lambda_k^{(p)}=\Lambda_k^{(q)}$. \item If $p$ and $q$ are not equivalent, then $\Lambda_k^{(p)}\cap \Lambda_k^{(q)}=\emptyset$. \end{enumerate} \end{lemma} \begin{proof} (i) By \Cref{lem::idempotent_corr} the set $\Lambda_k^{(p)}$ consists of the conjugacy classes of primitive idempotents which lie in $(p)$ but not in $I_{t(p)}$. If $p$ and $q$ are equivalent, then $(p)=(q)$ and $t(p)=t(q)$ by \Cref{lem::part_equiv}, so both descriptions agree and $\Lambda_k^{(p)}=\Lambda_k^{(q)}$. \smallskip (ii) Let $e$ be a primitive idempotent in $(p)\cap (q)$, but not in $I_p$ or $I_q$. Then we can assume that $e\in pEp$ and write \[ e = \sum_{r\in p\mathcal C(k,k)p\cap(q)} a_r r \] with $a_r\in \mathbb C$ for all $r\in \mathcal C(k,k)$. Here we use that $(q)$ is spanned by the partitions it contains. Since $e\notin I_p$, there exists a partition $r$ with $a_r\neq0$ and $t(p)$ through-blocks. By \Cref{prop::FW} $r$ lies in the span of partitions of the form $p_\sigma$ modulo $I_p$, but as both $r$ and $p_\sigma$ are partitions with $t(p)$ through-blocks, and as sets of distinct partitions are linearly independent, $r=p_{\sigma}$ for a permutation $\sigma \in S(p)$. This yields $p=p_{\ensuremath{\mathrm{id}}_{t(p)}}=p_{\sigma} \cdot p_{\sigma^{-1}} = r \cdot p_{\sigma^{-1}} = t^{-l(r,p_{\sigma^{-1}})}~ rp_{\sigma^{-1}} \in (q)$. Similarly, one can check that $q\in (p)$ and hence $(p)=(q)$.
By \Cref{lem::part_equiv} this implies that $p$ and $q$ are equivalent. \end{proof} The previous lemma together with \Cref{lem::surjection} gives the following description of the primitive idempotents in $E$. \begin{lemma} \label{lem::proj_part_final_corr} The following mapping is a bijection \[ \L: \bigsqcup_{[p]\in {\operatorname{Proj}_{\cC}(k)}/ \sim} \Lambda(p E p / I_p ) \to \Lambda(E), \quad e\mapsto \L_e . \] \end{lemma} \subsection{Parametrising indecomposable objects} The previous subsection resulted in a description of all primitive idempotents in the endomorphism algebra $\ensuremath{\mathrm{End}} ([k])$ up to conjugation, and hence a description of the indecomposable objects of the form $([k],e)$ up to isomorphism, for a fixed $k\in \mathbb N_0$. Now, in order to describe all indecomposable objects in $\RepCt$ up to isomorphism, we apply the results of \Cref{ssec::nuk} to determine those primitive idempotents $e\in \ensuremath{\mathrm{End}} ([k])$ which do not yield subobjects of $[l]$ for some $l<k$. Let us define the subset $$ \cP := \{ p\in{\operatorname{Proj}_{\cC}(k)}: k\in \mathbb N_0, p\not\in b^* \cdot \Projl \cdot b \text{ for all } 0\leq l<k, b\in \mathcal C(k, l) \} $$ of $\ProjC$. Note that the equivalence relation $\sim$ induces one on $\cP$, because if $r\cdot r^*=b^*\cdot a\cdot b$ for some $r\in\mathcal C(k,k)$, $b\in\mathcal C(k,l)$, $a\in\Projl$, then $$ r^*\cdot r = r^*\cdot (r\cdot r^*)\cdot r = r^* \cdot (b^* \cdot a\cdot b)\cdot r = (b\cdot r)^* \cdot a \cdot (b\cdot r) . $$ \begin{lemma} \label{lem::Projk_Prokl} Let $k\in \mathbb N_0$ and let $p\in {\operatorname{Proj}_{\cC}(k)}$ be a projective partition. Then the following are equivalent: \begin{enumerate} \item $p\in {\operatorname{Proj}_{\cC}(k)} \backslash \cP$, \item $([k],\L_e)$ is isomorphic to a subobject of $[l]$ for some $l<k$ for all $e\in \Lambda_k^{(p)}$, \item $([k],\L_e)$ is isomorphic to a subobject of $[l]$ for some $l<k$ for some $e\in \Lambda_k^{(p)}$. \end{enumerate} \end{lemma} \begin{proof} $(1) \Rightarrow (2)$. Let $p\in {\operatorname{Proj}_{\cC}(k)} \backslash \cP$. Then there exist $l<k$, $b\in\mathcal C(k, l)$, and $q\in\Projl$ such that $p=b^*\cdot q\cdot b$. Replacing $q$ by $b\cdot b^*\cdot q\cdot b\cdot b^*$, we can assume $b\cdot b^*\cdot q=q=q\cdot b\cdot b^*$. Then we have $l(p,p)=l(q,q)=l(b,b^*)$ and we set $\alpha =t^{-l(p,p)}$, so $\alpha p$ and $\alpha q$ are idempotent endomorphisms of the objects $[k]$ and $[l]$, respectively. As in the proof of \Cref{lem::chi_isomorphisms}, we see that $q\cdot b:([k],\alpha p) \to ([l],\alpha q)$ and $b^*\cdot q:([l],\alpha q) \to ([k],\alpha p)$ yield isomorphisms between the objects $([k],\alpha p)$ and $([l],\alpha q)$ in $\RepCt$, and hence for arbitrary subobjects. $(2) \Rightarrow (3)$ is clear. $(3) \Rightarrow (1)$. We assume that $([k],\L_e)$ is isomorphic to a subobject of $[l]$ for some $0\leq l<k$ and some $e\in \Lambda_k^{(p)}$. By \Cref{lem::surjection} there exists a projective partition $q\in \Projl$ such that $([k],\L_e)\cong ([l],\L_f)$ for some $f\in \Lambda_l^{(q)}$ and by \Cref{lem::chi_isomorphisms} we have $([l],\L_f) \cong ([k],t^{-m} \nu \L_f \nu^*)$ for some partition $\nu \in \mathcal C (l,k)$ and $m\in\mathbb N_0$. But $t^{-m} \nu \L_f \nu^* \in \Lambda_k^{(\nu q\nu^*)}$, hence $\Lambda_k^{(p)} \cap \Lambda_k^{(\nu q\nu^*)} \neq \emptyset$. Thus $p$ and $\nu q\nu^*$ are equivalent by \Cref{lem::equiv_idemp_equiv_proj}, so as $\nu q\nu^* \notin \cP$, it follows that $p\notin \cP$.
\end{proof} \begin{remark} Recall that we used distinguished idempotents $\nu_k\in \ensuremath{\mathrm{End}} ([k])$, $k\in \mathbb N_0$, to establish a correspondence between indecomposables in $\RepCt$ and primitive idempotents in $\Lambda_k = \Lambda(E/ (\nu_k))$, see \Cref{thm::indecomp_obj}. The previous lemma together with \Cref{lem::Rk} implies $$\cP = \{ p\in{\operatorname{Proj}_{\cC}(k)} : k\geq 0, p\not\in(\nu_k) \}.$$ \end{remark} We are ready to prove \Cref{thm::main_thm_2}, which reduces the computation of indecomposable objects in $\RepCt$ to the computation of equivalence classes of projective partitions. Let us denote the isomorphism classes of irreducible complex representations of a group $G$ by $\Irr(G)$. \begin{theorem} \label{thm::indecompsable_obj_by_A_k} Let $\mathcal C$ be a category of partitions and $t\in \mathbb C\backslash \{0\}$. Then transferring and lifting idempotents yields a bijection \begin{align*} \bigsqcup_{[p]\in \cP/ \sim} \Irr(S(p)) \longleftrightarrow \left\{ \begin{matrix} \text{isomorphism classes of non-zero} \\ \text{indecomposable objects in } \RepCt \end{matrix} \right\}. \end{align*} \end{theorem} \begin{proof} By \Cref{lem::proj_part_final_corr} we have a bijection \[ \L: \bigsqcup_{[p]\in {\operatorname{Proj}_{\cC}(k)}/ \sim} \Lambda(p E p / I_p ) \to \Lambda(E), \quad e\mapsto \L_e.\] The isomorphism classes of non-zero indecomposable objects in $\RepCt$ are in bijection with the conjugacy classes of primitive idempotents in $\ensuremath{\mathrm{End}} ([k])$ for which $([k],e)$ is not a subobject of $[l]$ for any $l<k$ by \Cref{lem::lem_Krull-Schmidt} and thus by \Cref{lem::Projk_Prokl} we have a bijection \begin{align*} \bigsqcup_{[p]\in \cP/ \sim} \Lambda(p E p / I_p ) \longleftrightarrow \left\{ \begin{matrix} \text{isomorphism classes of non-zero} \\ \text{indecomposable objects in } \RepCt \end{matrix} \right\}. \end{align*} By \Cref{prop::FW} the algebra $p E p / I_p $ is isomorphic to the group algebra $\mathbb C S(p)$ for any $p\in {\operatorname{Proj}_{\cC}(k)}$. Finally, the conjugacy classes of primitive idempotents of a complex group algebra correspond to the irreducible complex representations of the group, where the primitive idempotents can be interpreted as projection operators onto the respective irreducible subrepresentations inside the (semisimple) regular representation. \end{proof} \begin{example} \label{ex::P} In $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(P,t)=\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_t)$, we have the decomposition $p = p_0^* \ensuremath{\mathrm{id}}_{t(p)} p_0$ for any projective partition $p=p_0^*p_0\in\operatorname{Proj}_P(k)$. Thus $p\in \cP$ if and only if $p=\ensuremath{\mathrm{id}}_k$. Now $S(\ensuremath{\mathrm{id}}_k)=S_k$, the full symmetric group, and the indecomposables are parametrised by Young diagrams of arbitrary size. This reproduces the known results from \cite{De07, CO11} (see also Halverson and Ram's survey on partition algebras \cite{HR04}) in this case. \end{example} More examples will be considered in \Cref{sec-examples}. \subsection{Grothendieck rings} The theorem yields a description of the Grothendieck group of the additive category $\RepCt$. Since the latter category also has a monoidal structure, we want to extend this to a description of the Grothendieck ring.
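For instance, for $\mathcal C=P$ the theorem provides a basis of the Grothendieck group of $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_t)$ indexed by all Young diagrams, and the multiplication turns out to be given, up to lower order terms, by the induction product on $\bigoplus_k K(S_k)$, which identifies the associated graded ring with the ring of symmetric functions; this is made precise below.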
\newcommand{\operatorname{Ind}}{\operatorname{Ind}} \newcommand{\operatorname{Res}}{\operatorname{Res}} \newcommand{\operatorname{Hom}}{\operatorname{Hom}} Let $\ProjC:=\bigsqcup_{k\geq0}{\operatorname{Proj}_{\cC}(k)}$ be the set of projective partitions in $\mathcal C$. We observe that $\ProjC$ is a monoid with respect to the operation $\otimes$, with the empty partition $p_0\in\mathcal C(0,0)$ as identity element. The equivalence relation $\sim$ induces an equivalence relation on $\ProjC$ such that two projective partitions can be equivalent only if they are elements in ${\operatorname{Proj}_{\cC}(k)}$ for some $k\geq 0$, and the monoid operation $\otimes$ induces one on the equivalence classes $\ProjC/{\sim}$. We also observe that for any $p,q\in\ProjC$, we have an embedding $S(p)\times S(q)\to S(p\otimes q)$. For each $p\in\ProjC$, let us denote the Grothendieck group of $\ensuremath{\mathop{\mathrm{Rep}}}(S(p))$ by $K(S(p))$, that is, $K(S(p))$ is the abelian group generated by the isomorphism classes $[V]$ of (complex) $S(p)$-representations subject to the relations $[V]+[W]=[V\oplus W]$ for any two $S(p)$-representations $V,W$. Recall that $\cP$ is a certain subset of $\ProjC$, and that $\sim$ defines an equivalence relation also on $\cP$. \begin{definition} We define the ring $$R :=\bigoplus_{[p]\in \cP/{\sim}} K(S(p)) $$ with the multiplication $$ [V]\cdot[W] := \begin{cases} [ \operatorname{Ind}_{S(p)\times S(q)}^{S(p\otimes q)} (V\boxtimes W) ] & p\otimes q\in\cP \\ 0 & \text{else} \end{cases} $$ for all $V\in\ensuremath{\mathop{\mathrm{Rep}}}(S(p))$ and $W\in\ensuremath{\mathop{\mathrm{Rep}}}(S(q))$, with the identity element corresponding to the one-dimensional representation of the trivial group $S(p_0)$. \end{definition} \begin{definition} Let us assign an element in $\mathbb N_0\times\mathbb N_0$ to all objects and morphisms in $\RepCt$: to any partition $p\in\mathcal C(k,k)$ with $t(p)$ through-blocks we assign the pair of numbers $(k,t(p))$. This extends to linear combinations by taking the maximum, and to indecomposable objects by taking the minimum over all idempotents with isomorphic image, and to arbitrary objects by taking the maximum over all indecomposable summands, where we use the (total) lexicographic order. Let us denote the Grothendieck ring of $\RepCt$ by $K(\mathcal C,t)$. \end{definition} \begin{lemma} This defines an $\mathbb N_0\times\mathbb N_0$-filtration on $K(\mathcal C,t)$. \end{lemma} \begin{proof} It can be checked directly that the filtered subsets are additive subgroups which behave in the desired way under multiplication. \end{proof} We obtain the following analogue of \cite[Prop.~5.11]{De07}, which describes the associated graded of the Grothendieck ring of $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_t)$. \newcommand\gr{\operatorname{gr}} \begin{proposition} \label{Grothendieck-ring} Let $\mathcal C$ be a category of partitions and $t\in \mathbb C \backslash \{0\}$. Then the mapping $\L$ induces a ring isomorphism between $R$ and the associated graded ring $\gr K(\mathcal C,t)$. \end{proposition} \begin{proof} \Cref{thm::indecompsable_obj_by_A_k} implies that $\L$ induces a bijection of abelian groups. Consider $V_i\in\Irr(S(p_i))$ for $i\in\{1,2\}$ and $p_i\in\ProjC$.
If $p_i\in\mathcal C(k_i,k_i)$ for $k_i\geq 0$, and if $p_1\otimes p_2\in(\nu_{k_1+k_2})$, then the tensor product of the objects corresponding to $V_1$ and $V_2$ is isomorphic to an object stemming from an idempotent endomorphism of the object $[k']$ for some $0\leq k'<k_1+k_2$ (as discussed in the proof of \Cref{thm::indecomp_obj}). Hence, it has filtered degree less than $(k_1+k_2,t(p_1)+t(p_2))$, and the corresponding product in the associated graded of the Grothendieck ring is $0$. Otherwise, let $e_i$ be the primitive idempotents in $\mathbb C S(p_i)$ corresponding to $V_i$. Then the tensor product of the objects corresponding to $V_i$ in $\RepCt$ is the image of the tensor product of the idempotent lifts of the $e_i$. Modulo lower order terms in the filtration, they correspond to the idempotent $$ e:= e_1\otimes e_2 \in \mathbb C S(p_1)\otimes \mathbb C S(p_2) \subset \mathbb C S(p_1\otimes p_2) . $$ Let $(V_\lambda)_\lambda$ be a set of isomorphism classes of irreducible complex representations for $S(p_1\otimes p_2)$, with corresponding primitive idempotents $(e_\lambda)_\lambda$ in the group algebra. Then $e$ decomposes as a linear combination $e = \sum_\lambda n_\lambda e_\lambda$ with multiplicities $(n_\lambda)_\lambda$, where \begin{align*} n_\lambda &= \dim \operatorname{Hom}_{S(p_1)\times S(p_2)}(\operatorname{Res}_{S(p_1)\times S(p_2)} V_\lambda, V_1\boxtimes V_2) \\ &= \dim \operatorname{Hom}_{S(p_1\otimes p_2)}(V_\lambda, \operatorname{Ind}_{S(p_1)\times S(p_2)}^{S(p_1\otimes p_2)} V_1\boxtimes V_2) . \end{align*} This shows that the structure constants of the multiplication coincide in the two rings considered. \end{proof} We note that the ring $R$ does not depend on $t$ and the Grothendieck ring of $\RepCt$ can be viewed as a filtered deformation of $R$ with deformation parameter $t$. \begin{remark} We also note that the operation $p\mapsto p\otimes\Paa$ for a projective partition $p$ defines a partial order and yields an embedding $S(p)\to S(p\otimes\Paa)$ which turns the groups $(S(p))_{p\in\cP}$ into a direct system (whose underlying poset, however, might not be directed in general). For $\mathcal C=P$, this is the system of all symmetric groups $S_0\subset S_1\subset S_2\subset \dots$. \end{remark} \newcommand\wRepCt{\widehat{\RepCt}} \subsection{Semisimplification} Let us consider now a group-theoretical category of partitions $\mathcal C$, and let us recall (\Cref{thm-grouptheo-semisimple}) that $\RepCt$ is not semisimple if and only if $t\in\mathbb N_0$. For $t=0$, the semisimplification is trivial by \Cref{lem::semisimplification-t0}, so let us consider $t\geq 1$. In this case, we record some general observations about the semisimplification $\wRepCt$. For any $k\geq0$, $p\in{\operatorname{Proj}_{\cC}(k)}$, and $V$ in $\Irr(S(p))$, let us denote the primitive idempotent in $\mathcal C(k,k)$ corresponding to the indecomposable object $\L(V)$ according to \Cref{thm::indecompsable_obj_by_A_k} by $e_{k,p,V}$.
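For instance, in $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_t)$ the idempotent $\ensuremath{\mathrm{id}}_1-\frac{1}{t}\Pab$ has trace $t-1$, since closing up a diagram contributes a factor $t$ for every resulting connected component ($\operatorname{tr}(\ensuremath{\mathrm{id}}_1)=t$ and $\operatorname{tr}(\frac{1}{t}\Pab)=1$); such trace computations determine the dimensions entering the next lemma, and for $t=1$ the corresponding object vanishes in the semisimplification, in accordance with $\ensuremath{\mathop{\mathrm{Rep}}}(S_1)$ being trivial.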
\begin{lemma} If $t\in\mathbb N$, then $\L$ together with the quotient functor $\RepCt\to\wRepCt$ yields a bijection \begin{align*} \mathcal{V}\longleftrightarrow \left\{ \begin{matrix} \text{isomorphism classes of non-zero} \\ \text{indecomposable objects in } \widehat{\RepCt} \end{matrix} \right\}, \end{align*} where $\mathcal{V}$ is the set of isomorphism classes of those $V\in\Irr(S(p))$ for $k\geq0$, $[p]\in{\operatorname{Proj}_{\cC}(k)}/{\sim}$, $p\not\in(\nu_k)$, whose associated idempotent $e_{k,p,V}$ decomposes into a sum of primitive idempotents $(e_i)_i$ in $P(k,k)\supset \mathcal C(k,k)$ at least one of which has non-zero trace. \end{lemma} \begin{proof} By general results on the semisimplification (see \cite[Thm.~2.6]{EO18} or \Cref{rem::negl_morphisms}), the quotient functor induces a bijection between the isomorphism classes of indecomposable objects of non-zero dimension in the original category and the isomorphism classes of non-zero indecomposables in the semisimplification. The dimension in $\RepCt$ can be computed by decomposing the relevant idempotent in $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(P,t)=\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_t)$ and summing the non-negative traces of the involved idempotents in $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_t)$ (they correspond to dimensions of objects in $\ensuremath{\mathop{\mathrm{Rep}}}(S_t)=\widehat{\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_t)}$). \end{proof} This allows us to describe at least a part of the semisimplification $\wRepCt$ uniformly for all group-theoretical $\mathcal C$. \begin{proposition} If $t\in\mathbb N$, then there is a unique isomorphism class of non-zero indecomposable objects in $\wRepCt$ for each isomorphism class in $\Irr(S(p))$ for all $p\in\cP$ with $t(p)\leq t/2$, i.e.~$p$ has at most $t/2$ through-blocks. \end{proposition} \begin{proof} We record that if an idempotent $e$ in any ring lies in an ideal $I$, then any orthogonal decomposition $e=\sum_i e_i$ into idempotents satisfies $e_i=e_i e\in I$ for all $i$. Taking $I$ to be the ideal spanned by all partitions with at most $t/2$ through-blocks in $P(k,k)$ implies that decomposing the idempotent for some $V\in\Irr(S(p))$ in $P(k,k)$ results in a sum of primitive idempotents all of which lie in the span of partitions with at most $t/2$ through-blocks. Such primitive idempotents have non-zero traces by the description of the negligible primitive idempotents in $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_t)$ in \cite[Rem.~3.25]{CO11}. \end{proof} \newcommand\SurC{\operatorname{Sur}_\mathcal C} \newcommand\cQ{\mathcal{Q}} \subsection{An alternative description} Instead of using projective partitions, we note that one could alternatively consider their ``upper halves'', that is, the partitions $p_0$ appearing in (through-block) factorisations $p=p_0^*p_0$ of projective partitions $p$. Let us explain how this yields an equivalent description of indecomposable objects in interpolating partition categories. Let $\mathcal C$ be any category of partitions. \begin{definition} A partition $q\in P(k,l)$ is called \emph{surjective} if $t(q)=l$, i.e., $q$ has exactly $l$ through-blocks. For $k\in\mathbb N_0$, we set $$ \SurC(k) := \{ q\in P(k,l): l\in\mathbb N_0, t(q)=l, q^*q\in\mathcal C \} . $$ \end{definition} Note that $k\geq t(q)=l$ for any surjective partition $q\in P(k,l)$.
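Let us record a basic example. \begin{example} The partition $q=\{\{1,2,1'\}\}\in P(2,1)$ has exactly one through-block, so it is surjective, and $q^*q=\Paaaa$; hence $q\in\SurC(2)$ whenever $\Paaaa\in\mathcal C$. More generally, the upper half $p_0$ of any projective partition $p=p_0^*p_0\in{\operatorname{Proj}_{\cC}(k)}$ lies in $\SurC(k)$. \end{example}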
\begin{definition} A surjective partition $q\in\SurC(k)$ is called \emph{indecomposable surjective} if $q\notin \SurC(k')\cdot b$ for all $k'<k$ and all $b\in\mathcal C(k,k')$. We set $$ \cQ := \{ q\in\SurC(k): k\in\mathbb N_0, q\text{ indecomposable}\} . $$ \end{definition} \begin{definition} Two surjective partitions $q,q'\in P(k,l)\cap\SurC(k)$ are called \emph{equivalent} if there are partitions $r\in\mathcal C(k,k)$, $s\in P(l,l)$ such that $$ q = s\cdot q'\cdot r^* \quad\text{and}\quad q' = s^*\cdot q\cdot r . $$ \end{definition} We observe that any $s$ as in the definition must have $t(s)=l$ through-blocks, so it must be a permutation and $s^*=s^{-1}$. \begin{lemma} This defines equivalence relations $\sim$ on $\SurC(k)$ for all $k\geq 0$ and on $\cQ$. \end{lemma} \begin{proof} For the sets $\SurC(k)$, this can be verified directly. Moreover, for $q\in\SurC(k)\cap\cQ$ and an equivalent $q'\in\SurC(k)$, we see that $q'\in\cQ$ as well. \end{proof} \begin{definition} For any $q\in \SurC(k)$, we define the set $$ S_{1/2}(q) := \{\sigma\in S_{t(q)}: q^* \sigma q\in\mathcal C \} . $$ \end{definition} \begin{lemma} $S_{1/2}(q)$ is a subgroup of $S_{t(q)}$ which, up to conjugation, only depends on the equivalence class of the surjective partition $q$. \end{lemma} \begin{proof} This can be checked as in \Cref{ssec::projectives}. \end{proof} \begin{proposition} The mapping $q\mapsto q^*q$ induces a bijection between $\cQ/{\sim}$ and $\cP/{\sim}$, and $S_{1/2}(q) = S(q^*q)$ for each $q\in\cQ$. \end{proposition} \begin{proof} Most of the assertion follows directly from the definitions, but let us explain briefly why two indecomposable surjective partitions $q,q'$ in $\cQ$ define the same projective partition up to equivalence only if they are equivalent. First we note that this can be reduced to the case where $q,q'$ define the same projective partition $p$. But then $$ p = p\cdot p = q^* q\cdot q'^* q' , $$ so in particular, $\sigma:=q\cdot q'^*$ must be a permutation. This implies $$ q = q\cdot (q^*q) = q\cdot (q^*\sigma q')= \sigma q' , $$ since $q\cdot q^*=\ensuremath{\mathrm{id}}_{t(q)}$ for any surjective $q$, and $q,q'$ are equivalent, as desired. \end{proof} From \Cref{thm::indecompsable_obj_by_A_k} we obtain immediately: \begin{corollary} The indecomposables in $\RepCt$ are parametrised by the irreducible complex representations of the system of finite groups $(S_{1/2}(q))_{q\in\cQ}$. \end{corollary} Compared to the set of projective partitions $\cP$, the set $\cQ$ contains their possible (upper) halves, so the partitions in $\cQ$ are potentially smaller. However, various ``upper halves'' can produce the same projective partition, which is reflected in the slightly more complicated equivalence relation. Beyond providing an alternative approach to the description of indecomposable objects in interpolating partition categories, the set $\cQ$ can be interpreted naturally in the more general framework of Knop's tensor envelopes (\cite{Kn07}). Such a generalisation is part of an ongoing research project. \section{Indecomposable objects for some concrete examples} \label{sec-examples} In this section, we compute concrete parametrisations for the indecomposable objects in $\RepCt$ up to isomorphism for all categories of partitions $\mathcal C$ which either contain the partition $\Pabab$ or in which all partitions are noncrossing. Recall \Cref{ssec::categories-of-partitions} for the classification of these categories of partitions; for the corresponding easy quantum groups, see for instance \cite{We13}.
We conclude the section with a comparison of our results with the well-known theory of Temperley--Lieb categories. \subsection{Indecomposable objects in 13 interpolating partition categories} The categories of partitions $\mathcal C$ with $\Pabab \in \mathcal C$ are exactly the following six categories, and the corresponding easy quantum groups $G_n(\mathcal C)$, $n\in \mathbb N_0$, are all given by compact matrix groups. \begin{enumerate}[label=(\roman*)] \item $P$ is the category of all partitions and corresponds to the symmetric groups $G_n(P)=S_n$. \item $P_{even}$ is the category of partitions with even block size and corresponds to the hyperoctahedral groups $G_n(P_{even})=H_n=S_2 \wr S_n$. \item $P_2$ is the category of partitions with only blocks of size two and corresponds to the orthogonal groups $G_n(P_2)=O_n$. \item $P':=\langle \Pabab, {\mathord{\uparrow}}\ensuremath{\otimes} {\mathord{\uparrow}}, \Paaaa \rangle$ is the category of partitions with an even number of blocks of odd size and corresponds to the modified symmetric groups $G_n(\langle \Pabab, {\mathord{\uparrow}}\ensuremath{\otimes} {\mathord{\uparrow}}, \Paaaa \rangle)=S_n \times S_2=:S_n'$. \item $P_b:=\langle \Pabab, {\mathord{\uparrow}} \rangle$ is the category of partitions with blocks of size one or two and corresponds to the bistochastic groups $G_n(\langle \Pabab, {\mathord{\uparrow}} \rangle)=B_n$. \item $P'_b:=\langle \Pabab, {\mathord{\uparrow}}\ensuremath{\otimes} {\mathord{\uparrow}} \rangle = \langle \Pabab, \Pabcb \rangle$ is the category of partitions with an arbitrary number of blocks of size two and an even number of blocks of size one and corresponds to the modified bistochastic groups $G_n(\langle \Pabab, {\mathord{\uparrow}}\ensuremath{\otimes} {\mathord{\uparrow}} \rangle)=B_n \times S_2=:B_n'$. \end{enumerate} The categories of partitions $\mathcal C$ which contain only noncrossing partitions are exactly the following seven categories and all of them correspond to so-called free quantum groups. \begin{enumerate}[label=(\roman*)] \item $NC$ is the category of all noncrossing partitions and corresponds to the free symmetric quantum groups $G_n(NC)=:S_n^+$. \item $NC_{even}$ is the category of noncrossing partitions with even block size and corresponds to the hyperoctahedral quantum groups $G_n(NC_{even})=:H_n^+$. \item $NC_2$ is the category of noncrossing partitions with only blocks of size two and corresponds to the free orthogonal quantum groups $G_n(NC_2)=:O_n^+$. \item $NC':=\langle {\mathord{\uparrow}}\ensuremath{\otimes} {\mathord{\uparrow}}, \Paaaa \rangle$ is the category of noncrossing partitions with an even number of blocks of odd size and corresponds to the modified symmetric quantum groups $G_n(\langle {\mathord{\uparrow}}\ensuremath{\otimes} {\mathord{\uparrow}}, \Paaaa \rangle)=:S_n'^+$. \item $NC_b:=\langle {\mathord{\uparrow}} \rangle$ is the category of noncrossing partitions with blocks of size one or two and corresponds to the bistochastic quantum groups $G_n(\langle {\mathord{\uparrow}} \rangle)=B_n^+$. \item $NC^\#_b:=\langle {\mathord{\uparrow}}\ensuremath{\otimes} {\mathord{\uparrow}} \rangle$ is the category of noncrossing partitions with an arbitrary number of blocks of size two and an even number of blocks of size one such that, when all points are naturally arranged in a circle, the number of points between any two connected points is even; here, for instance, the upper left point is adjacent to the second upper point from the left and to the lower left point.
The corresponding quantum groups $G_n(\langle {\mathord{\uparrow}}\ensuremath{\otimes} {\mathord{\uparrow}} \rangle)=:B_n^{\#+}$ are called the freely modified bistochastic quantum groups. \item $NC'_b:=\langle \Pabcb \rangle$ is the category of noncrossing partitions with an arbitrary number of blocks of size two and an even number of blocks of size one and corresponds to the modified bistochastic quantum groups $G_n(\langle \Pabcb \rangle)=:B_n'^+$. \end{enumerate} In the following we will apply \Cref{thm::indecompsable_obj_by_A_k} to derive an explicit parametrisation of the indecomposable objects in $\RepCt$ up to isomorphism for all these categories. Recall that we have to determine equivalence classes of projective partitions in $$ \cP := \{ p\in{\operatorname{Proj}_{\cC}(k)}: k\geq0, p\not\in b^* \Projl b \text{ for all } 0\leq l<k, b\in \mathcal C(k, l) \} . $$ We already considered the case $\mathcal C=P$ in \Cref{ex::P}. The following lemma shows that some categories of partitions behave similarly. \begin{lemma} \label{lem::ex0} Let $\mathcal C \in \{P,P_2,P_b, NC, NC_2, NC_b \}$ and $t\in \mathbb C\backslash \{0\}$. Then $\cP = \{ \ensuremath{\mathrm{id}}_k \mid k\in \mathbb N_0 \}$. For $P$, $P_2$, and $P_b$, $S(p)=S_{t(p)}$ for all $p\in \ProjC$ and there exists a bijection \begin{align*} \phi: \left\{ \begin{matrix} \text{Young diagrams } \lambda \\ \text{of arbitrary size} \end{matrix} \right\} \to \left\{ \begin{matrix} \text{isomorphism classes of non-zero} \\ \text{indecomposable objects in } \RepCt \end{matrix} \right\}. \end{align*} For $NC$, $NC_2$, and $NC_b$, $S(p)=\{\ensuremath{\mathrm{id}}\}$ for all $p\in \ProjC$ and there exists a bijection \begin{align*} \phi: \mathbb N_0 \to \left\{ \begin{matrix} \text{isomorphism classes of non-zero} \\ \text{indecomposable objects in } \RepCt \end{matrix} \right\}. \end{align*} \end{lemma} \begin{proof} Consider a projective partition $p=p_0^*p_0\in {\operatorname{Proj}_{\cC}(k)}$. It is easy to check that one can choose $p_0\in \mathcal C$ and hence $p = p_0^* \ensuremath{\mathrm{id}}_{t(p)} p_0$ yields a decomposition of $p$ in $\mathcal C$. Hence $p\in \cP$ if and only if $p=\ensuremath{\mathrm{id}}_k$ for some $k\in\mathbb N_0$. We described the structure of $S(p)$ for all $p\in \ProjC$ in \Cref{ex::FW}. If $\Pabab \in \mathcal C$, then $S(p)=S_{t(p)}$ for all $p\in \ProjC$ and by \Cref{thm::indecompsable_obj_by_A_k} the indecomposables in $\RepCt$ up to isomorphism are in bijection with $\bigsqcup_{k\in \mathbb N_0} \Irr (S_k)$ and hence with Young diagrams of arbitrary size. If $\Pabab \notin \mathcal C$, then $S(p)=\{\ensuremath{\mathrm{id}}\}$ for all $p\in \ProjC$ and by \Cref{thm::indecompsable_obj_by_A_k} the indecomposables in $\RepCt$ up to isomorphism are in bijection with $\bigsqcup_{k\in \mathbb N_0} \Irr (\{ \ensuremath{\mathrm{id}} \})$ and hence with $\mathbb N_0$. \end{proof} \begin{remark} Our description reproduces the known results for $\ensuremath{\mathop{\mathrm{\underline{Rep}}}} (O_t)=\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(P_2,t)$ from \cite{De07, CH17} (see also Wenzl's original article on the Brauer algebras \cite{Wenz87-brauer}). Moreover, our description for the Temperley--Lieb category $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(O^+_t)=\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(NC_2,t)$ reproduces known results, as the indecomposable objects can be explicitly described using Jones--Wenzl idempotents.
In \Cref{subs::Ot+andSt+} we will study this in more detail to obtain an explicit description of the indecomposables of $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S^+_t)=\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(NC,t)$. \end{remark} Next we consider the categories $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_t')=\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(P',t)$ and $\ensuremath{\mathop{\mathrm{\underline{Rep}}}} (B_t')=\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(P'_b,t)$, and their free versions $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_t'^+)=\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(NC',t)$ and $\ensuremath{\mathop{\mathrm{\underline{Rep}}}} (B_t'^+)=\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(NC'_b,t)$. \begin{lemma} \label{lem::ex1} Let $\mathcal C\in \{P', P_b', NC', NC_b' \}$. Then \begin{align*} \cP = \{p \in {\operatorname{Proj}_{\cC}(k)} \mid~ &k\geq0, ~t(p)\geq k-1\} . \end{align*} \end{lemma} \begin{proof} Note that any partition $p\in \mathcal C$ has an even number of blocks of odd size. Let $p\in {\operatorname{Proj}_{\cC}(k)}$ with $t(p)<k-1$. Then there exist two upper points of $p$ which are both not in a through-block of size two. Let $b\in P(k-2,k)$ be the partition that arises from $p$ by removing these points. If both considered points are in the same block of $p$, then removing them does not change the parity of the size of the block. Otherwise, removing them changes the parity of two blocks. In both cases we obtain a partition that still has an even number of blocks of odd size. Moreover, $b$ is a noncrossing partition if $p$ is noncrossing, and $b$ has only blocks of size one or two if $p$ does. Hence $b\in \mathcal C$. As $p$ is projective, we have $p=b\cdot b^*$. We set $q:=b^* \cdot b\in \ProjC(k-2)$ and it follows that $p=p\cdot p=b\cdot b^*\cdot b\cdot b^*=b\cdot q\cdot b^* \notin \cP$. Now let $p\in {\operatorname{Proj}_{\cC}(k)}$ with $t(p)\geq k-1$. If $t(p)=k$, then $p=\ensuremath{\mathrm{id}}_k \in \cP$, so assume $t(p)=k-1$. Let us also assume that $p\notin \cP$. Then $p=b^* q b$ for some $0\leq l<k, b\in \mathcal C(k, l), q\in \Projl$. Since the number of through-blocks of a composition of partitions is less than or equal to the number of through-blocks of each composed partition, it follows that $l=k-1$, $q=\ensuremath{\mathrm{id}}_{k-1}$, and $t(b)=k-1$. But then $b$ has a total of $k$ or $k-1$ blocks, exactly one of which has odd size, which is a contradiction. \end{proof} Both the modified symmetric groups $S_n'=S_n\times S_2$ and the modified bistochastic groups $B_n'=B_n\times S_2$ are direct products involving $S_2$. Hence their irreducible representations are of the form $\rho \times \varphi$, where $\rho$ is an irreducible representation of $S_n$ or $B_n$, respectively, and $\varphi$ is one of the two irreducible representations of $S_2$. In the following we will see that the indecomposables in the corresponding interpolating partition categories have an analogous structure. \begin{lemma} Let $\mathcal C\in \{P', P_b'\}$ and $t\in \mathbb C\backslash \{0\}$. Then there exists a bijection \begin{align*} \phi: \left\{ \begin{matrix} \text{Young diagrams} \\ \text{of arbitrary size} \end{matrix} \right\} \times \{1,-1\} \longleftrightarrow \left\{ \begin{matrix} \text{isomorphism classes of non-zero} \\ \text{indecomposable objects in } \RepCt \end{matrix} \right\}.
\end{align*} \end{lemma} \begin{proof} We show that $E:=\{\ensuremath{\mathrm{id}}_k \mid k\in\mathbb N_0\} \cup \{ \ensuremath{\mathrm{id}}_k \ensuremath{\otimes} \Pab \mid k\in \mathbb N_0 \}$ is a set of representatives for all equivalence classes of projective partitions in $\cP$. Since equivalent projective partitions have the same number of through-blocks, $\ensuremath{\mathrm{id}}_k$ and $\ensuremath{\mathrm{id}}_{k-1} \ensuremath{\otimes} \Pab$ are not equivalent for any $k\in \mathbb N$ and hence all partitions of $E$ lie in different equivalence classes. Consider a partition $p\in \cP$. By \Cref{lem::ex1} we have $p=\ensuremath{\mathrm{id}}_k$ or $t(p)=k-1$; let us assume the latter. Since $\Pabab \in \mathcal C$, it is easy to check that $p$ is either equivalent to $\ensuremath{\mathrm{id}}_{k-1} \ensuremath{\otimes} \Pab$ or to $\ensuremath{\mathrm{id}}_{k-2} \ensuremath{\otimes} \Paaaa$. In the latter case $\mathcal C=P'$, but $\ensuremath{\mathrm{id}}_{k-1} \ensuremath{\otimes} \Pab=r\cdot r^*$ and $\ensuremath{\mathrm{id}}_{k-2} \ensuremath{\otimes} \Paaaa=r^*\cdot r$ with $r=\ensuremath{\mathrm{id}}_{k-2} \ensuremath{\otimes} \Paaab$, hence these partitions are equivalent. Thus $E$ is a set of representatives for all equivalence classes of projective partitions in $\cP$ and we have $S(\ensuremath{\mathrm{id}}_k)=S_k$ and $S(\ensuremath{\mathrm{id}}_k \ensuremath{\otimes} \Pab)=S_k$ for all $k\in \mathbb N_0$. Then by \Cref{thm::indecompsable_obj_by_A_k}, the indecomposables in $\RepCt$ up to isomorphism are in bijection with $\bigsqcup_{k\in \mathbb N_0} (\Irr (S_k) \sqcup \Irr (S_k))$ and hence the claim follows. \end{proof} \begin{lemma} Let $\mathcal C\in \{NC', NC_b' \}$ and $t\in \mathbb C\backslash \{0\}$. Then there exists a bijection \begin{align*} \phi: \{(k,k) \mid k\in\mathbb N_0 \} \cup \{ (k+1,k) \mid k\in \mathbb N_0\} \longleftrightarrow \left\{ \begin{matrix} \text{isomorphism classes of non-zero} \\ \text{indecomposable objects in } \RepCt \end{matrix} \right\}. \end{align*} \end{lemma} \begin{proof} We show again that $E:=\{\ensuremath{\mathrm{id}}_k \mid k\in\mathbb N_0 \} \cup \{ \ensuremath{\mathrm{id}}_k \ensuremath{\otimes} \Pab \mid k\in \mathbb N_0 \}$ is a set of representatives for all equivalence classes of projective partitions in $\cP$. As in the previous proof, all partitions of $E$ lie in different equivalence classes. Let $k\in \mathbb N$. We consider the partitions $\ensuremath{\mathrm{id}}_l \ensuremath{\otimes} \Pab \ensuremath{\otimes} \ensuremath{\mathrm{id}}_{k-1-l} \in \cP$ with $0\leq l\leq k-1$ and claim that they are all equivalent. Let $0\leq l<l'\leq k-1$ and define $r:=\ensuremath{\mathrm{id}}_l \ensuremath{\otimes} \LPartition{0.6:1}{} \ensuremath{\otimes} \ensuremath{\mathrm{id}}_{l'-l} \ensuremath{\otimes} \UPartition{0.4:1}{} \ensuremath{\otimes} \ensuremath{\mathrm{id}}_{k-1-l'} \in \mathcal C(k,k)$. Then we have $\ensuremath{\mathrm{id}}_l \ensuremath{\otimes} \Pab \ensuremath{\otimes} \ensuremath{\mathrm{id}}_{k-1-l}=rr^*$ and $\ensuremath{\mathrm{id}}_{l'} \ensuremath{\otimes} \Pab \ensuremath{\otimes} \ensuremath{\mathrm{id}}_{k-1-l'}=r^*r$ and hence these partitions are equivalent. Now, if $\mathcal C=NC_b'=\langle \Pabcb \rangle$, then any partition in ${\operatorname{Proj}_{\cC}(k)} \cap \cP$ is either $\ensuremath{\mathrm{id}}_k$ or $\ensuremath{\mathrm{id}}_l \ensuremath{\otimes} \Pab \ensuremath{\otimes} \ensuremath{\mathrm{id}}_{k-1-l}$ for some $0\leq l\leq k-1$ by \Cref{lem::ex1}.
If $\mathcal C=NC'=\langle {\mathord{\uparrow}}\ensuremath{\otimes} {\mathord{\uparrow}}, \Paaaa \rangle$, then ${\operatorname{Proj}_{\cC}(k)} \cap \cP$ additionally contains the partitions $\ensuremath{\mathrm{id}}_l \ensuremath{\otimes} \Paaaa \ensuremath{\otimes} \ensuremath{\mathrm{id}}_{k-2-l}$ with $0\leq l\leq k-2$. But as in the proof of the previous lemma they are all equivalent to the partitions $\ensuremath{\mathrm{id}}_l \ensuremath{\otimes} \Pab \ensuremath{\otimes} \ensuremath{\mathrm{id}}_{k-1-l}$ with $0\leq l\leq k-1$. Thus, in both cases, $E$ is a set of representatives for all equivalence classes of projective partitions in $\cP$. By \Cref{ex::FW} we have $S(p)=\{\ensuremath{\mathrm{id}}\}$ for all $p\in \ProjC$ and the claim follows from \Cref{thm::indecompsable_obj_by_A_k}. \end{proof} \begin{lemma} \label{lem::ex2} Let $\mathcal C=NC_b^\#$. Then \begin{align*} \cP = \{p \in {\operatorname{Proj}_{\cC}(k)} \mid~ &k\geq0, ~t(p)\geq k-1\} . \end{align*} \end{lemma} \begin{proof} Let $p\in {\operatorname{Proj}_{\cC}(k)}$ with $t(p)<k-1$. Then any non-through-block of size two encloses an even number of blocks of size one. Moreover, as $p$ is projective and hence symmetric, any through-block connects an upper point with the opposite lower point. Thus there exist two upper points that are either in the same non-through-block or adjacent points in blocks of size one. Let $b\in P(k-2,k)$ be the partition that arises from $p$ by removing these points. In both cases $b$ also has the described properties of $p$ and hence $b\in \mathcal C$. As $p$ is projective, we have $p=b\cdot b^*$ and hence $p=p\cdot p=b\cdot b^*\cdot b\cdot b^*=b\cdot q\cdot b^* \notin \cP$ with $q:=b^* \cdot b\in \ProjC(k-2)$. Now let $p\in {\operatorname{Proj}_{\cC}(k)}$ with $t(p)= k-1$ and assume that $p\notin \cP$. Then $p=b^* q b$ for some $0\leq l<k, b\in \mathcal C(k, l), q\in \Projl$. It follows that $l=k-1$, $q=\ensuremath{\mathrm{id}}_{k-1}$, and $b$ has exactly one block of size one, which is a contradiction. \end{proof} \begin{lemma} Let $t\in \mathbb C\backslash \{0\}$. Then there exists a bijection \begin{align*} \phi: \{ (k,l) \mid k\in \mathbb N_0, 0\leq l\leq k\} \longleftrightarrow \left\{ \begin{matrix} \text{isomorphism classes of non-zero} \\ \text{indecomposable objects in } \ensuremath{\mathop{\mathrm{\underline{Rep}}}} (B_t^{\#+}) \end{matrix} \right\}. \end{align*} \end{lemma} \begin{proof} By \Cref{lem::ex2} we have $\cP =\{\ensuremath{\mathrm{id}}_0\} \cup \{ \ensuremath{\mathrm{id}}_k, \ensuremath{\mathrm{id}}_{l} \ensuremath{\otimes} \Pab \ensuremath{\otimes} \ensuremath{\mathrm{id}}_{k-1-l} \mid k\in \mathbb N, 0\leq l\leq k-1\}$. One can check that $\ensuremath{\mathrm{id}}_l \ensuremath{\otimes} \Pab \ensuremath{\otimes} \ensuremath{\mathrm{id}}_{k-1-l}$ and $\ensuremath{\mathrm{id}}_{l'} \ensuremath{\otimes} \Pab \ensuremath{\otimes} \ensuremath{\mathrm{id}}_{k-1-l'}$ are equivalent for $0\leq l<l'\leq k-1$ if and only if $\ensuremath{\mathrm{id}}_l \ensuremath{\otimes} \LPartition{0.6:1}{} \ensuremath{\otimes} \ensuremath{\mathrm{id}}_{l'-l} \ensuremath{\otimes} \UPartition{0.4:1}{} \ensuremath{\otimes} \ensuremath{\mathrm{id}}_{k-1-l'}\in \mathcal C(k,k)$. But this is not the case for $\mathcal C= NC_b^\#=\langle {\mathord{\uparrow}}\ensuremath{\otimes} {\mathord{\uparrow}} \rangle$ and hence none of the partitions in $\cP$ are equivalent. Then the claim follows from \Cref{thm::indecompsable_obj_by_A_k}.
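Explicitly, one may let the pair $(k,l)$ with $l<k$ correspond to $\ensuremath{\mathrm{id}}_l \ensuremath{\otimes} \Pab \ensuremath{\otimes} \ensuremath{\mathrm{id}}_{k-1-l}$ and the pair $(k,k)$ to $\ensuremath{\mathrm{id}}_k$.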
\end{proof} \medskip To study the structure of the indecomposable objects in $\ensuremath{\mathop{\mathrm{\underline{Rep}}}} (H_t)=\ensuremath{\mathop{\mathrm{\underline{Rep}}}} (P_{even},t)$ and $\ensuremath{\mathop{\mathrm{\underline{Rep}}}} (H_t^+)=\ensuremath{\mathop{\mathrm{\underline{Rep}}}} (NC_{even},t)$, again we first determine $\cP$. \begin{lemma} \label{lem::Ht(p)_Q} Let $\mathcal C \in \{P_{even} ,NC_{even} \}$. Then \begin{align*} \cP = \{p \in {\operatorname{Proj}_{\cC}(k)} \mid~ &k\geq0, ~p \text{ has only through-blocks and any block} \\ &\text{has at most $2$ upper and at most $2$ lower points}\} . \end{align*} \end{lemma} \begin{proof} Let $p\in {\operatorname{Proj}_{\cC}(k)}$. At first we assume that $p$ has a block with at least $3$ upper points. Let $b\in P(k-2,k)$ be the partition that arises from $p$ by removing two of the upper points in the considered block. As $p$ is projective and the block has at least $3$ upper points, it follows that $p=bb^*$. Moreover, as $\mathcal C$ contains all partitions or all noncrossing partitions with even block size, $b$ also lies in $\mathcal C$. We set $q:=b^*b\in \ProjC(k-2)$ and thus we have $p=p^2=bb^*bb^*=bqb^* \notin \cP$. Now we assume that $p$ has an upper non-through-block, say of size $l\in \mathbb N$. Let $b\in P(k-l,k)$ be the partition that arises from $p$ by removing the considered block. As $p$ is projective, it follows that $p=bb^*$ and again by the structure of $\mathcal C$ the partition $b$ also lies in $\mathcal C$. We set $q:=b^*b\in \ProjC(k-l)$ and thus we have $p=p^2=bb^*bb^*=bqb^* \notin \cP$. It remains to show that any partition $p\in {\operatorname{Proj}_{\cC}(k)}$ with only through-blocks that has only blocks with at most $2$ upper and at most $2$ lower points lies in $\cP$. Consider such a partition and assume that $p\notin \cP$. Then $p=b^* q b$ for some $0\leq l<k, b\in \mathcal C(k, l), q\in \Projl$, and we assume that $l$ is chosen minimally. By the above considerations this implies that also $q$ has only through-blocks and only blocks with at most $2$ upper and at most $2$ lower points. But as $l<k$ and as all blocks have even size, the composition of partitions $b^* q b$ has to have a block that contains at least two more upper points than the corresponding block of $q$. This contradicts the assumptions on $p=b^* q b$. \end{proof} Now we compute all indecomposable objects in $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(H_t)$ up to isomorphism for $t\in \mathbb C\backslash \{0\}$. It is well-known that for $n\in \mathbb N_0$, inequivalent irreducible representations of the hyperoctahedral group $H_n$ can be indexed by bipartitions of size $n$, i.e.~pairs $(\lambda_1,\lambda_2)$ of partitions of some $n_1\leq n$ and $n_2\leq n$, respectively, with $n=n_1+n_2$ (\cite{GK76}, see also \cite{orellana}). We show that this description extends to a description of the non-isomorphic indecomposable objects in $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(H_t)=\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(P_{even},t)$ by bipartitions of arbitrary size. Recall the definition of the partitions $$ e_{k_1,k_2}:= \text{id}_{k_1} \ensuremath{\otimes} \Paaaa \ensuremath{\otimes} \ldots \ensuremath{\otimes} \Paaaa \in P_{even}(k_1+2k_2,k_1+2k_2), $$ with $k_2$ tensor factors $\Paaaa$, for any $k_1,k_2\in \mathbb N_0$ in \Cref{lem::FW}. \begin{proposition} \label{thm::indecomp_obj_Hn} \label{indecomposables_H} Let $t\in \mathbb C\backslash \{0\}$.
Then there exists a bijection \begin{align*} \phi: \left\{ \begin{matrix} \text{bipartitions } \lambda=(\lambda_1,\lambda_2) \\ \text{of arbitrary size} \end{matrix} \right\} \longleftrightarrow \left\{ \begin{matrix} \text{isomorphism classes of non-zero} \\ \text{indecomposable objects in } \ensuremath{\mathop{\mathrm{\underline{Rep}}}}(H_t) \end{matrix} \right\}. \end{align*} \end{proposition} \begin{proof} We start by showing that the set $E:=\{ e_{k_1,k_2} \mid k_1,k_2\in \mathbb N_0\}$ is a set of representatives for all equivalence classes of projective partitions in $\cP$. Let $k\in \mathbb N_0$. Since $t(e_{k_1,k_2})=k_1+k_2$ is different for all $k_1,k_2\in \mathbb N_0$ with $k_1+2k_2=k$, the partitions in $E\cap {\operatorname{Proj}_{\cC}(k)}=\{ e_{k_1,k_2} \mid k_1,k_2\in \mathbb N_0, k_1+2k_2=k\}$ are pairwise inequivalent. Let $p$ be a projective partition in $\cP$. Then $p$ has only through-blocks and any block has at most $2$ upper and at most $2$ lower points by \Cref{lem::Ht(p)_Q}. We denote by $k_1$ the number of blocks of $p$ of size $2$ and by $k_2$ the number of blocks of $p$ of size $4$. Since the crossing partition $\Pabab$ lies in $P_{even}(2,2)$, it is easy to check that $p$ and $e_{k_1,k_2}$ are equivalent. By \Cref{lem::FW} we have $S(e_{k_1,k_2})=S_{k_1}\times S_{k_2}$. Thus \Cref{thm::indecompsable_obj_by_A_k} yields a bijection between indecomposables in $\RepCt$ up to isomorphism and $\bigsqcup_{k_1,k_2\in \mathbb N_0} \Irr (S_{k_1}\times S_{k_2})$ and hence with bipartitions of arbitrary size. \end{proof} We conclude our discussion by computing all indecomposable objects in the interpolation categories $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(H_t^+)=\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(NC_{even},t)$ up to isomorphism for $t\in \mathbb C\backslash \{0\}$. In \cite[Thm.~7.3.]{BV09}, Banica and Vergnioux showed that for any $n\in \mathbb N_0$, inequivalent irreducible representations of the free hyperoctahedral quantum group $H_n^+$ are indexed by finite binary sequences (of arbitrary length, independent of $n$). We show that the non-isomorphic indecomposable objects in $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(H^+_t)$ are likewise indexed by finite binary sequences. \begin{proposition} \label{indecomposables_Hp} Let $t\in \mathbb C\backslash \{0\}$. Then there exists a bijection \[ \phi: \bigcup_{b\in \mathbb N_0} \{1,2\}^b \longleftrightarrow \left\{ \begin{matrix} \text{isomorphism classes of non-zero} \\ \text{indecomposable objects in } \ensuremath{\mathop{\mathrm{\underline{Rep}}}}(H_t^+) \end{matrix} \right\}.\] \end{proposition} \begin{proof} For any $b\in \mathbb N_0$ and $a=(a_1,\ldots,a_b)\in \{1,2\}^b$, we define a partition $e_a \in NC_{even}(k,k)$ with $k:=a_1+\dots+a_b$ by $e_a:=e_{a_1} \ensuremath{\otimes} e_{a_2} \ensuremath{\otimes} \cdots \ensuremath{\otimes} e_{a_b}$ with $e_1=\ensuremath{\mathrm{id}}_1$ and $e_2=\Paaaa$. By \cite[Lemma 5.12.]{FW16}, the set $E:=\{ e_a \mid b\in \mathbb N_0, a\in \{1,2\}^b\}$ is a set of representatives for all equivalence classes of projective partitions. Moreover, by \Cref{lem::Ht(p)_Q}, all of these partitions lie in $\cP$. Since $S(p)=\{ \ensuremath{\mathrm{id}}\}$ for all $p\in \ProjC$ by \Cref{ex::FW}, \Cref{thm::indecompsable_obj_by_A_k} yields a bijection between the indecomposables in $\RepCt$ up to isomorphism and $\bigsqcup_{e\in E} \Irr (\{ \ensuremath{\mathrm{id}} \})$ and hence the claim follows.
\end{proof} \subsection{Temperley--Lieb categories as a special case} \label{subs::Ot+andSt+} In \Cref{lem::ex0} we have seen that the indecomposable objects of the Temperley--Lieb category $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(O^+_t)=\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(NC_2,t)$ as well as of the category $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S^+_t)=\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(NC,t)$ are indexed by the nonnegative integers $\mathbb N_0$. The indecomposable objects of the Temperley--Lieb category have been studied in various settings and can be described using Jones--Wenzl idempotents, discovered by Jones \cite{Jo83}. Even though this is probably known to experts as the \emph{fattening} procedure, we give a proof that $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_{t^2}^+)$ is equivalent to a full subcategory of $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(O_t^+)$ for any $t\in \mathbb C\backslash \{0\}$. Using this, we can also specify the indecomposable objects in $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_{t}^+)$. The following inductive definition is due to Wenzl \cite{We87}. Set \begin{equation*} \mathcal{S} := \{2\cdot \cos \left(\frac{j \pi}{l}\right) \mid l\in \mathbb N_{\geq 2}, j\in \{1,\ldots,l-1\}\} , \end{equation*} then for any $t\notin\mathcal{S}$ and any $k\in \mathbb N_0$ the Jones--Wenzl idempotent $e_k\in \ensuremath{\mathrm{End}}_{\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(O_t^+)}([k])$ is recursively defined via: \begin{align*} &~e_0 = \text{id}_0,~e_1 = \text{id}_1, \\ &~e_k = e_{k-1} \ensuremath{\otimes} \ensuremath{\mathrm{id}}_1 - a_k\, (e_{k-1} \ensuremath{\otimes} \ensuremath{\mathrm{id}}_1)(\ensuremath{\mathrm{id}}_{k-2} \ensuremath{\otimes} \Paabb)(e_{k-1} \ensuremath{\otimes} \ensuremath{\mathrm{id}}_1) \qquad (k\geq 2), \end{align*} with $a_1=0$ and $a_k=(t-a_{k-1})^{-1}$ for all $k\geq 2$. \begin{example} For instance, $e_2 = \Paa\Paa - \tfrac 1t \Paabb$. \end{example} Using \Cref{thm::indecompsable_obj_by_A_k}, we recover a known result about the Temperley--Lieb categories. \begin{proposition} \label{prop::Indec_Otp} For any $t\in \mathbb C\backslash \mathcal{S}$, \[ \phi: \mathbb N_0 \to \left\{ \begin{matrix} \text{isomorphism classes of non-zero} \\ \text{indecomposable objects in } \ensuremath{\mathop{\mathrm{\underline{Rep}}}}(O_t^+) \end{matrix} \right\}, k\mapsto ([k],e_k) \] is a bijection. \end{proposition} \begin{proof} By the proof of \Cref{lem::ex0} the Jones--Wenzl idempotents are lifts of the idempotents $\ensuremath{\mathrm{id}}_k +I_k \in \ensuremath{\mathrm{End}} ([k]) / I_k$. The recursive definition implies that the identity partition appears with coefficient $1$, so the image of any Jones--Wenzl idempotent modulo $I_k$ is not zero. \end{proof} \begin{remark} If $t\in\mathcal{S}$, then only finitely many Jones--Wenzl idempotents are defined, and the last one of them generates the negligible morphisms in $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}_0(NC_2, t)$ (see \cite{GW02}). Out of the infinitely many indecomposables in $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(O^+_t)$, only finitely many are not isomorphic to the zero object in the semisimplification of $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(O^+_t)$, the category obtained as a quotient by the tensor ideal of negligible morphisms; they correspond to the finitely many Jones--Wenzl idempotents, except the last one (see, for instance, \cite{Ch14}). \end{remark} In the following we show that $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_{t^2}^+)$ is equivalent to a full subcategory of $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(O_t^+)$ for any $t\in \mathbb C\backslash \{0\}$. \begin{definition} Let $t\in \mathbb C$.
We denote by $\mathcal{D}(t)$ the full subcategory of $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(O_t^+)$ with objects \[ \{ (A,e)\in \ensuremath{\mathop{\mathrm{\underline{Rep}}}}(O_t^+) \mid A=\bigoplus_{i=1}^l [k_i], k_i\in \mathbb N_0 \text{ even, for any } 1\leq i\leq l\} .\] \end{definition} Note that $\mathcal{D}(t)$ is the Karoubi envelope of the full subcategory of $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}_0(O_t^+)$ with objects $\{ [k]\mid k\in \mathbb N_0 \text{ even} \}$. \begin{definition}[{\cite[Ex. 9.42.]{NS06}}] Let $k,l\in \mathbb N$. To any partition $p\in NC_2(2k,2l)$ we associate a partition $\widehat{p}\in NC(k,l)$ as follows. For any odd upper point $m\in \{1,3,\ldots,2k-1\}$, we insert a new point to the right of $m$. Similarly, for any odd lower point $m'\in \{1',3',\ldots,(2l-1)'\}$, we insert a new point to the right of $m'$. Then $\widehat{p}\in NC(k,l)$ is the coarsest partition on all new points such that no strings of the nested partitions cross. Note that this definition is independent of the choice of the occurring diagrams. Moreover, we set $\widehat{\ensuremath{\mathrm{id}}_0}=\ensuremath{\mathrm{id}}_0$. \end{definition} \begin{example} The following diagram shows that, for $p=\Partition{\Pblock 0 to 0.25:3,4 \Pblock 1 to 0.75:2,3 \Pline (1,0) (1,1) \Pline (2,0) (4,1)}$, we have $\widehat{p}=\Partition{\Pblock 1 to 0.6:1,2 \Pline (1,0) (1,0.6) \Psingletons 0 to 0.25:2}$: \begin{center} \scalebox{.8}{\begin{tikzpicture} \coordinate (A1) at (0,0); \coordinate (A2) at (1,0); \coordinate (A3) at (2,0); \coordinate (A4) at (3,0); \coordinate (B1) at (0,1); \coordinate (B2) at (1,1); \coordinate (B3) at (2,1); \coordinate (B4) at (3,1); \coordinate (B5) at (0.5,0); \coordinate (B6) at (2.5,0); \coordinate (B7) at (0.5,1); \coordinate (B8) at (2.5,1); \fill (A1) circle (2.5pt); \fill (A2) circle (2.5pt); \fill (A3) circle (2.5pt); \fill (A4) circle (2.5pt); \fill (B1) circle (2.5pt); \fill (B2) circle (2.5pt); \fill (B3) circle (2.5pt); \fill (B4) circle (2.5pt); \fill (B5) circle (1.5pt); \fill (B6) circle (1.5pt); \fill (B7) circle (1.5pt); \fill (B8) circle (1.5pt); \draw (A1) -- (B1); \draw (A3) -- (2,0.3) -- (3,0.3) -- (A4); \draw (A2) -- (B4); \draw (B2) -- (1,0.8) -- (2,0.8) -- (B3); \draw[dashed] (B5) -- (B7); \draw (B6) -- (2.5,0.15); \draw[dashed] (0.5,0.5) -- (1.75,0.5) -- (2.5,0.875) -- (B8); \end{tikzpicture}}\end{center} \end{example} It is well-known that the map $NC_2(2k,2l)\to NC(k,l)$, $p\mapsto \widehat{p}$, called the \emph{fattening operation}, is a bijection, see \cite[Ex. 9.42.]{NS06}. We will now show that, together with a suitable scaling, this map induces an equivalence of monoidal categories between $\mathcal{D}(t)$ and $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_{t^2}^+)$. \begin{definition} Let $t\in \mathbb C \backslash \{0\}$ and let $\sqrt{t}\in \mathbb C$ be any square root of $t$. We denote the trace in $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(O_t^+)$ and $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_{t^2}^+)$ by $\text{tr}$ and $\text{tr}_2$, respectively.
We set $$ \mathcal{G}([2k]) := [k] ,\quad \mathcal{G}(p) := a(p) \widehat{p} \quad \in NC(k,l) \qquad\text{for all }k,l\in\mathbb N_0, p\in NC_2(2k,2l) , $$ where \begin{align*} p' &:= \begin{cases} p\ensuremath{\otimes} \UPartition{}{0.4:1,2} \ensuremath{\otimes} \cdots \ensuremath{\otimes} \UPartition{}{0.4:1,2} \in NC_2(2l,2l) & l\geq k \\ p\ensuremath{\otimes} \LPartition{}{0.6:1,2} \ensuremath{\otimes} \cdots \ensuremath{\otimes} \LPartition{}{0.6:1,2} \in NC_2(2k,2k) & k\geq l \end{cases} , \\ a(p) &:= \left( \sqrt{t}\right)^{|k-l|} \frac{\text{tr}(p')}{\text{tr}_2(\widehat{p'})} . \end{align*} \end{definition} \begin{lemma} \label{lem::Rechenregeln_ap} We make the same assumptions as in the above definition and let $p\in NC_2(2k,2l)$. \begin{enumerate}[label=(\roman*)] \item We have $a(p) = \left( \sqrt{t}\right)^{|k-l|} a(p')$. \item We have $a(p \ensuremath{\otimes} \ensuremath{\mathrm{id}}_{2m})=a(p)$ for all $m\in \mathbb N_0$. \item If $k=l$, then $a(p \ensuremath{\otimes} r) = \frac{1}{t^y} a(p)$ with $r=\Paabb\ensuremath{\otimes} \cdots \ensuremath{\otimes} \Paabb \in P(2y,2y)$ for all $y\in\mathbb N_0$. \end{enumerate} \end{lemma} \begin{proof} \begin{enumerate}[label=(\roman*)] \item The claim follows directly from the definition of $\mathcal{G}$, since $a(p')=\frac{\text{tr}(p')}{\text{tr}_2(\widehat{p'})}$. \item Let $q=p \ensuremath{\otimes} \ensuremath{\mathrm{id}}_{2m}$. If $k=l$, then we have $\widehat{q}=\widehat{p}\ensuremath{\otimes} \ensuremath{\mathrm{id}}_m$ and hence \[ a(q)= \frac{\text{tr}(q)}{\text{tr}_2(\widehat{q})} = \frac{\text{tr}(p) \cdot t^{2m}}{\text{tr}_2(\widehat{p}) \cdot (t^2)^{m} } = a(p).\] Now, let $k>l$. The case $k<l$ follows analogously. Without loss of generality we assume that $m=1$. By $(i)$ we have to show that $a(q')=a(p')$.
We have \begin{center}\scalebox{0.8}{\begin{tikzpicture} \coordinate [label=left:{\scalebox{1.25}{$\ensuremath{\mathrm{tr}}(q')=\ensuremath{\mathrm{tr}}($}}](O) at (0,1); \coordinate [label=right:{\scalebox{1.25}{$)=\ensuremath{\mathrm{tr}}(p')$.}}](O) at (7,1); \coordinate [label=left:{\scalebox{1.25}{$p$}}](O) at (1.5,1); \coordinate [label=left:{$\ldots$}](O) at (5.2,0); \coordinate (A1) at (0,2); \coordinate (A2) at (5,2); \coordinate (A3) at (5.5,2); \coordinate (A4) at (6,2); \coordinate (B1) at (0,0); \coordinate (B2) at (2,0); \coordinate (B3) at (2.5,0); \coordinate (B4) at (3,0); \coordinate (B5) at (3.5,0); \coordinate (B6) at (4,0); \coordinate (B7) at (5.5,0); \coordinate (B8) at (6,0); \fill (A1) circle (2.5pt); \fill (A2) circle (2.5pt); \fill (A3) circle (2.5pt); \fill (A4) circle (2.5pt); \fill (B1) circle (2.5pt); \fill (B2) circle (2.5pt); \fill (B3) circle (2.5pt); \fill (B4) circle (2.5pt); \fill (B5) circle (2.5pt); \fill (B6) circle (2.5pt); \fill (B7) circle (2.5pt); \fill (B8) circle (2.5pt); \draw[dashed] (A1) -- (A2) -- (B2) -- (B1) -- (A1); \draw (A3) to [bend left=90] (7,2) -- (7,0) to [bend left=90] (B7) ; \draw (A4) to [bend left=90] (6.5,2) -- (6.5,0) to [bend left=90] (B8) ; \draw (A3) -- (B3); \draw (A4) -- (B4); \draw (B5) -- (3.5,0.2) -- (4,0.2) -- (B6); \draw (B7) -- (5.5,0.2) -- (6,0.2) -- (B8); \end{tikzpicture}} \end{center} Analogously one can check that $\ensuremath{\mathrm{tr}}_2 (\widehat{q'}) = \ensuremath{\mathrm{tr}}_2(\widehat{p'})$ and hence \[ a(q') = \frac{\text{tr}(q')}{\text{tr}_2(\widehat{q'})} = \frac{\text{tr}(p')}{\text{tr}_2(\widehat{p'})} = a(p') .\] \item Since $k=l$, we have $\widehat{p\ensuremath{\otimes} r} = \widehat{p} \ensuremath{\otimes} \widehat{r}$ and $\widehat{r}=\Pab \ensuremath{\otimes} \cdots \ensuremath{\otimes} \Pab \in P(y,y)$. It follows that \[ a(p\ensuremath{\otimes} r)= \frac{\text{tr}(p\ensuremath{\otimes} r)}{\text{tr}_2(\widehat{p\ensuremath{\otimes} r})} = \frac{\text{tr}(p) \cdot t^{y}}{\text{tr}_2(\widehat{p}) \cdot (t^2)^{y} } = \frac{1}{t^y} a(p).\] \end{enumerate} \end{proof} \begin{lemma} \label{lem::equivalence_St+_Ot+} $\mathcal{G}$ defines an equivalence of monoidal categories $\mathcal{D}(t)\to \ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_{t^2}^+)$ for all $t\in \mathbb C \backslash \{0\}$. \end{lemma} \begin{proof} It suffices to show that $\mathcal{G}$ is a monoidal functor since $\mathcal{G}$ is full, faithful and essentially surjective, as $p\mapsto \widehat{p}$ is a bijection. {Step 1:} We start by showing that $\mathcal{G}(q\circ p)=\mathcal{G}(q)\circ \mathcal{G}(p)$ for all $p\in NC_2(2k,2l)$ and $q\in NC_2(2l,2m)$. Comparing diagrams one can check that $\widehat{qp}=\widehat{q}\widehat{p}$. Together with \begin{align*} &\mathcal{G}(q\circ p) = t^{l(q,p)} \mathcal{G}(qp) = t^{l(q,p)} a(qp) \widehat{qp},\\ &\mathcal{G}(q)\circ \mathcal{G}(p) = a(p) a(q) (\widehat{q} \circ \widehat{p}) = a(p) a(q) (t^2)^{l(\widehat{q},\widehat{p})} (\widehat{q}\widehat{p}), \end{align*} it follows that it suffices to show that $t^{l(q,p)}a(qp)=(t^2)^{l(\widehat{q},\widehat{p})} a(p)a(q)$. {Step 1.1:} Kodiyalam and Sunder showed that for any $n\in \mathbb N_0$ the map \[ \ensuremath{\mathrm{End}}_{\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(O_t^+)}([2n]) \to \ensuremath{\mathrm{End}}_{\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_{t^2}^+)}([n]), p\mapsto \mathcal{G}(p) \] is an algebra isomorphism (see {\cite[Thm.~4.2.]{KS08}}). Hence the claim follows for $k=l=m$.
{Step 1.2:} For arbitrary $k,l,m\in \mathbb N_0$ we set $x:=\text{max}(k,l,m)$ and extend $p$ and $q$ to partitions in $NC_2(2x,2x)$ as follows: \begin{align*} &\Bar{p}:= p\ensuremath{\otimes} \UPartition{}{0.4:1,2} \ensuremath{\otimes} \cdots \ensuremath{\otimes} \UPartition{}{0.4:1,2} \ensuremath{\otimes} \LPartition{}{0.6:1,2} \ensuremath{\otimes} \cdots \ensuremath{\otimes} \LPartition{}{0.6:1,2} \in NC_2(2x,2x), \\ &\Bar{q}:= q\ensuremath{\otimes} \UPartition{}{0.4:1,2} \ensuremath{\otimes} \cdots \ensuremath{\otimes} \UPartition{}{0.4:1,2} \ensuremath{\otimes} \LPartition{}{0.6:1,2} \ensuremath{\otimes} \cdots \ensuremath{\otimes} \LPartition{}{0.6:1,2} \in NC_2(2x,2x). \end{align*} Step 1.1 implies that \begin{equation*} t^{l(\Bar{q},\Bar{p})} a(\Bar{q}\Bar{p}) = (t^2)^{l(\widehat{\Bar{p}},\widehat{\Bar{q}})}~ a(\Bar{q}) a(\Bar{p}) . \end{equation*} Moreover, by construction we have \begin{align*} &t^{l(\Bar{q},\Bar{p})} = t^{l(q,p)} t^{x-l} \\ &(t^2)^{l(\widehat{\Bar{p}},\widehat{\Bar{q}})} = (t^2)^{l(\widehat{p},\widehat{q})} (t^2)^{x-l} , \end{align*} and thus \begin{equation} t^{l(q,p)} a(\Bar{q}\Bar{p}) = t^{x-l} (t^2)^{l(\widehat{p},\widehat{q})}~ a(\Bar{q}) a(\Bar{p}). \end{equation} {Step 1.3:} We claim that \begin{align} &a(p) = \left(\sqrt{t}\right)^{x-k} \left(\sqrt{t}\right)^{x-l} a(\Bar{p}),\\ &a(q) = \left(\sqrt{t}\right)^{x-l} \left(\sqrt{t}\right)^{x-m} a(\Bar{q}),\\ &a(qp) = \left(\sqrt{t}\right)^{x-k} \left(\sqrt{t}\right)^{x-m} a(\Bar{q}\Bar{p}). \end{align} We prove the first equation since the others follow analogously. If $x\in \{k,l\}$, then we have $\Bar{p}=p'$ and hence \Cref{lem::Rechenregeln_ap}(i) implies $a(p)=\left( \sqrt{t}\right)^{|k-l|} a(p') =\left(\sqrt{t}\right)^{x-k} \left(\sqrt{t}\right)^{x-l} a(\Bar{p})$.\\ If $x=m$, then we have $\Bar{p}=p'\ensuremath{\otimes} r$ with $r=\Paabb\ensuremath{\otimes} \cdots \ensuremath{\otimes} \Paabb \in P(2y,2y)$, where $y=x-\max(k,l)$. \Cref{lem::Rechenregeln_ap}(iii) implies that $a(p')= t^{y} a(\Bar{p})$ and together with \Cref{lem::Rechenregeln_ap}(i) it follows that $a(p)=\left( \sqrt{t}\right)^{|k-l|} a(p') = \left( \sqrt{t}\right)^{|k-l|} t^{y} a(\Bar{p}) = \left(\sqrt{t}\right)^{x-k} \left(\sqrt{t}\right)^{x-l} a(\Bar{p})$. {Step 1.4:} We are ready to show that $t^{l(q,p)}a(qp)=(t^2)^{l(\widehat{q},\widehat{p})} a(p)a(q)$. We have \begin{align*} &t^{l(q,p)} a(qp) \\ \overset{(4)}{=}~~~& \left(\sqrt{t}\right)^{x-k} \left(\sqrt{t}\right)^{x-m} t^{l(q,p)}~ a(\Bar{q} \Bar{p}) \\ \overset{(1)}{=}~~~& \left(\sqrt{t}\right)^{x-k} \left(\sqrt{t}\right)^{x-m}~t^{x-l}~ (t^2)^{l(\widehat{p},\widehat{q})}~ a(\Bar{q}) a(\Bar{p}) \\ =~~~& (t^2)^{l(\widehat{p},\widehat{q})} \left(\left(\sqrt{t}\right)^{x-l} \left(\sqrt{t}\right)^{x-m} a(\Bar{q})\right) \left(\left(\sqrt{t}\right)^{x-k} \left(\sqrt{t}\right)^{x-l} a(\Bar{p})\right) \\ \overset{(2),(3)}{=}~& (t^2)^{l(\widehat{p},\widehat{q})} a(q) a(p). \end{align*} {Step 2:} It remains to show that $\mathcal{G}(p\ensuremath{\otimes} q)=\mathcal{G}(p)\ensuremath{\otimes} \mathcal{G}(q)$ for all $p\in NC_2(2k,2l), q\in NC_2(2m,2n)$. Again by comparing diagrams one can check that $\widehat{p\ensuremath{\otimes} q}=\widehat{p}\ensuremath{\otimes} \widehat{q}$ and thus we have to show that $a(p\ensuremath{\otimes} q)=a(p)a(q)$.
By Step 1 we have \begin{align*} &a(p\ensuremath{\otimes} q) \\ =~& a( (\ensuremath{\mathrm{id}}_{2l} \ensuremath{\otimes} q)(p \ensuremath{\otimes} \ensuremath{\mathrm{id}}_{2m}) ) \\ =~& a(\ensuremath{\mathrm{id}}_{2l} \ensuremath{\otimes} q) a(p \ensuremath{\otimes} \ensuremath{\mathrm{id}}_{2m}), \end{align*} where the second equality holds by Step 1, as no loops are removed in this composition and hence no powers of $t$ appear. Since $a(\ensuremath{\mathrm{id}}_{2l} \ensuremath{\otimes} q)=a(q)$ and $a(p \ensuremath{\otimes} \ensuremath{\mathrm{id}}_{2m})=a(p)$ by \Cref{lem::Rechenregeln_ap}(ii), the claim follows. \end{proof} Since there are no non-zero morphisms in $\ensuremath{\mathop{\mathrm{\underline{Rep}}}}(O_t^+)$ between subobjects of $[k_1]$ with $k_1$ even and subobjects of $[k_2]$ with $k_2$ odd, \Cref{prop::Indec_Otp} and \Cref{lem::equivalence_St+_Ot+} imply the following: \begin{proposition} \label{lem::indecomposables_St+} If $t\in \mathbb C \backslash \{4\cdot \cos \left(\frac{j \pi}{l}\right)^2 \mid l\in \mathbb N_{\geq 2}, j\in \{1,\ldots,l-1\}\}$, then \[ \phi: \mathbb N_0 \to \left\{ \begin{matrix} \text{isomorphism classes of non-zero} \\ \text{indecomposable objects in } \ensuremath{\mathop{\mathrm{\underline{Rep}}}}(S_t^+) \end{matrix} \right\}, k\mapsto ([k],\mathcal{G}(e_{2k})) \] is a bijection. \end{proposition}
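\medskip As an illustrative aside (not part of the preceding arguments), the role of the excluded sets in \Cref{prop::Indec_Otp} and \Cref{lem::indecomposables_St+} can be seen directly in the recursion for the coefficients $a_k$: the recursion breaks down precisely when it attempts to divide by zero. \begin{example} Let $t=1=2\cos\left(\frac{\pi}{3}\right)\in \mathcal{S}$. Then $a_1=0$, $a_2=(t-a_1)^{-1}=1$, and $a_3=(t-a_2)^{-1}=(1-1)^{-1}$ is undefined, so only the Jones--Wenzl idempotents $e_0$, $e_1$ and $e_2$ exist. Accordingly, $t^2=1=4\cos\left(\frac{\pi}{3}\right)^2$ lies in the set excluded in \Cref{lem::indecomposables_St+}. \end{example}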
\section{Introduction} Many classical theorems in extremal graph theory concern the maximum number of copies of a fixed graph $H$ in an $n$-vertex graph in some class $\mathcal{G}$. Here, a \emph{copy} means a subgraph isomorphic to $H$. For example, Tur\'an's Theorem determines the maximum number of copies of $K_2$ (that is, edges) in an $n$-vertex $K_t$-free graph~\citep{Turan41}. More generally, Zykov's Theorem determines the maximum number of copies of a given complete graph $K_s$ in an $n$-vertex $K_t$-free graph~\citep{Zykov49}. The excluded graph need not be complete. The Erd\H{o}s--Stone Theorem~\citep{ES46} determines, for every non-bipartite graph $X$, the asymptotic maximum number of copies of $K_2$ in an $n$-vertex graph with no $X$-subgraph. Analogues of the Erd\H{o}s--Stone Theorem for copies of $K_s$ have recently been studied by \citet{AS19,AS16}. See~\citep{MaQiu20,AKS18,Timmons19,GMV19,GP19,GSTZ19,EMSG19,GKPP18,Luo18,NesOss11} for recent related results. This paper studies similar questions when the class $\mathcal{G}$ consists of the graphs that embed\footnote{See~\citep{MoharThom} for background about graphs embedded in surfaces. For $h\geq 0$, let $\mathbb{S}_h$ be the sphere with $h$ handles. For $c\geq 0$, let $\mathbb{N}_c$ be the sphere with $c$ cross-caps. Every surface is homeomorphic to $\mathbb{S}_h$ or $\mathbb{N}_c$. The \emph{Euler genus} of $\mathbb{S}_h$ is $2h$. The \emph{Euler genus} of $\mathbb{N}_c$ is $c$. A graph $H$ is a \emph{minor} of a graph $G$ if a graph isomorphic to $H$ can be obtained from a subgraph of $G$ by contracting edges. If $G$ embeds in a surface $\Sigma$, then every minor of $G$ also embeds in $\Sigma$. } in a given surface $\Sigma$ (rather than being defined by an excluded subgraph). For a graph $H$ and surface $\Sigma$, let $C(H,\Sigma,n)$ be the maximum number of copies of $H$ in an $n$-vertex graph that embeds in $\Sigma$. This paper determines the asymptotic behaviour of $C(H,\Sigma,n)$ as $n\rightarrow\infty$ for any fixed surface $\Sigma$ and any fixed graph $H$ (which we assume is non-empty). Before stating our theorem, we mention some related results that determine $C(H,\mathbb{S}_0,n)$ for specific planar graphs $H$ where the surface is the sphere $\mathbb{S}_0$. \citet{AC84} determined $C(H,\mathbb{S}_0,n)$ precisely if $H$ is either a complete bipartite graph or a triangulation without non-facial triangles. \citet{HS79} studied $C(C_k,\mathbb{S}_0,n)$ where $C_k$ is the $k$-vertex cycle; they proved that $C(C_3,\mathbb{S}_0,n)=3n-8$ and $C(C_4,\mathbb{S}_0,n)=\frac12(n^2+3n-22)$. See~\citep{HS82,HHS01} for more results on $C(C_3,\mathbb{S}_0,n)$ and see~\citep{Alameddine80} for more results on $C(C_4,\mathbb{S}_0,n)$. \citet{GPSTZb} proved that $C(C_5,\mathbb{S}_0,n)=2n^2-10n+12$ (except for $n\in\{5,7\}$). \citet{GPSTZa} determined $C(P_4,\mathbb{S}_0,n)$ precisely, where $P_k$ is the $k$-vertex path. \citet{AC84} and independently \citet{Wood-GC07} proved that $C(K_4,\mathbb{S}_0,n)=n-3$. More generally, \citet{Wormald86} proved that if $H$ is a fixed 3-connected planar graph then $C(H,\mathbb{S}_0,n) = O(n)$. This result was independently proved by \citet{Eppstein93}, who noted the converse also holds: If $H$ is planar and $C(H,\mathbb{S}_0,n) = O(n)$ then $H$ has no $(\leq 2)$-separation. \citet{Eppstein93} asked the following two open problems: \begin{itemize} \item Characterise the subgraphs occurring $O(n)$ times in graphs of given genus. 
\item Characterise the subgraphs occurring a number of times which is a nonlinear function of $n$. \end{itemize} This paper answers both these questions (and more). We start with the following natural question: when is $C(H,\Sigma,n)$ bounded by a constant depending only on $H$ and $\Sigma$ (and independent of $n$)? We prove that $H$ being 3-connected and non-planar is a sufficient condition. In fact we prove a stronger result that completely answers the question. We need the following standard definitions. A \emph{$k$-separation} of a graph $H$ is a pair $(H_1, H_2)$ of edge-disjoint subgraphs of $H$ such that $H_1 \cup H_2=H$, $V(H_1) \setminus V(H_2) \neq \emptyset$, $V(H_2) \setminus V(H_1) \neq \emptyset$, and $|V(H_1 \cap H_2)|=k$. A $k'$-separation for some $k'\leq k$ is called a \emph{$(\leq k)$-separation}. If $(H_1,H_2)$ is a separation of $H$ with $X=V(H_1)\cap V(H_2)$, then let $H_i^-$ and $H_i^+$ be the simple graphs obtained from $H_i$ by removing and adding all edges between vertices in $X$, respectively. A graph $H$ is \emph{strongly non-planar} if $H$ is non-planar and for every $(\leq 2)$-separation $(H_1, H_2)$ of $H$, both $H_1^+$ and $H_2^+$ are non-planar. Note that every 3-connected non-planar graph is strongly non-planar. The following is our first main contribution. It says that $C(H,\Sigma,n)$ is bounded if and only if $H$ is strongly non-planar. \begin{theorem} \label{StronglyNonPlanar} There exists a function $c_{\ref{StronglyNonPlanar}}(h, g)$ such that for every strongly non-planar graph $H$ with $h$ vertices and every surface $\Sigma$ of Euler genus $g$, \begin{equation*} C(H, \Sigma, n) \leq c_{\ref{StronglyNonPlanar}}(h, g). \end{equation*} Conversely, for every graph $H$ that is not strongly non-planar and for every surface $\Sigma$ in which $H$ embeds, there is a constant $c>0$ such that for all $n\geq 4|V(H)|$, there is an $n$-vertex graph that embeds in $\Sigma$ and contains at least $cn$ copies of $H$; that is, $C(H,\Sigma,n)\geq cn$. \end{theorem} There are two striking observations about \cref{StronglyNonPlanar}. First, the characterisation of graphs $H$ does not depend on the surface $\Sigma$. Indeed, the only dependence on $\Sigma$ is in the constants. Second, \cref{StronglyNonPlanar} shows that $C(H,\Sigma,n)$ is either bounded or $\Omega(n)$. \cref{StronglyNonPlanar} is in fact a special case of the following more general theorem. The next definition is key to describing our results. A \emph{flap} in a graph $H$ is a $(\leq 2)$-separation $(A,B)$ such that $A^+$ is planar. Separations $(A,B)$ and $(C,D)$ of $H$ are \emph{independent} if $E(A^-) \cap E(C^-) = \emptyset$ and $(V(A) \setminus V(B)) \cap (V(C) \setminus V(D))=\emptyset$. If $H$ is planar and has no $(\leq 2)$-separation, then the \emph{flap-number} of $H$ is defined to be 1. Otherwise, the \emph{flap-number} of $H$ is defined to be the maximum number of pairwise independent flaps in $H$. Let $f(H)$ denote the flap-number of $H$. \begin{theorem} \label{Main} For every graph $H$ and every surface $\Sigma$ in which $H$ embeds, \begin{equation*} C(H,\Sigma,n) = \Theta( n^{f(H)} ). \end{equation*} \end{theorem} It is immediate from the definitions that $f(H)=0$ if and only if $H$ is strongly non-planar. So \cref{StronglyNonPlanar} follows from the $f(H) \leq 1$ cases of \cref{Main}.
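Although the constants in \cref{Main} are not explicit, the quantities involved can be explored empirically on small instances: the number of copies of $H$ in a host graph $G$ equals the number of injective homomorphisms from $H$ to $G$ divided by $|\mathrm{Aut}(H)|$. The following Python sketch (an illustration only, not part of the proofs) computes this count by brute force; it assumes the VF2 matcher shipped with the \texttt{networkx} library, and is feasible only for small $H$ and $G$.
\begin{verbatim}
import networkx as nx
from networkx.algorithms import isomorphism

def count_copies(H, G):
    # Copies of H in G (subgraphs of G isomorphic to H) equal the
    # number of injective homomorphisms H -> G divided by |Aut(H)|.
    monos = sum(1 for _ in
                isomorphism.GraphMatcher(G, H).subgraph_monomorphisms_iter())
    auts = sum(1 for _ in
               isomorphism.GraphMatcher(H, H).isomorphisms_iter())
    return monos // auts
\end{verbatim}
For instance, running \texttt{count\_copies} with $H=K_4$ on the planar graphs obtained by repeatedly inserting a new vertex into a facial triangle exhibits the linear growth $\Theta(n)$ predicted by \cref{Main}, in line with the exact result $C(K_4,\mathbb{S}_0,n)=n-3$ mentioned in the introduction.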
As an aside, note that \cref{Main} can be restated as follows: for every graph $H$ and every surface $\Sigma$ in which $H$ embeds, \begin{equation*} \lim_{n\to\infty} \frac{ \log C(H,\Sigma,n)}{ \log n} = f(H) . \end{equation*} The lower bound in \cref{Main} is proved in \cref{LowerBound}. \cref{Tools} introduces some tools from the literature that are used in the proof of the upper bound. \cref{StronglyNonPlanar} is proved in \cref{BoundedNumberCopies}. The upper bound in \cref{Main} is then proved in \cref{MainProof}. \cref{CompleteGraphs} presents more precise bounds on $C(H,\Sigma,n)$ when $H$ is a complete graph $K_s$. \cref{MinorClosedClasses} considers the maximum number of copies of a graph $H$ in an $n$-vertex graph in a given minor-closed class. \cref{Homomorphism} reinterprets our results in terms of homomorphism inequalities, and presents some open problems that arise from this viewpoint. Before continuing, to give the reader some more intuition about \cref{Main}, we now asymptotically determine $C(T,\Sigma,n)$ for a tree $T$. \begin{corollary} \label{TreeCopies} For every fixed tree $T$, let $\beta(T)$ be the size of a maximum stable set in the subforest $F$ of $T$ induced by the vertices with degree at most $2$. Then for every fixed surface $\Sigma$, \begin{equation*} C(T,\Sigma,n) = \Theta( n^{\,\beta(T)} ). \end{equation*} \end{corollary} \begin{proof} By \cref{Main}, it suffices to show that $\beta(T)=f(T)$. Let $I=\{v_1,\dots,v_{\beta(T)}\}$ be a maximum stable set in $F$. Let $x_i$ (and possibly $y_i$) be the neighbours of $v_i$. Let $A_i:=T[\{v_i,x_i,y_i\}]$ and $B_i:=T-v_i$. Then $(A_i,B_i)$ is a flap of $T$. Since $I$ is a stable set, for each $v_i\in I$ neither $x_i$ nor $y_i$ is in $I$, implying that $E(A_i^-)\cap E(A_j^-)=\emptyset$ for distinct $i,j\in [\beta(T)]$. Moreover, $V(A_i) \setminus V(B_i)=\{v_i\}$, so $(V(A_i) \setminus V(B_i)) \cap (V(A_j) \setminus V(B_j))=\emptyset$ for all distinct $i,j$. Hence $(A_1,B_1),\dots,(A_{\beta(T)},B_{\beta(T)})$ are pairwise independent flaps in $T$. Thus $\beta(T) \leq f(T)$. \cref{Main} then implies that $C(T,\Sigma,n) = \Omega( n^{\,\beta(T)} )$. In fact, this lower bound is easy to see directly for trees. Let $G$ be the graph obtained from $T$ by replacing each vertex $v_i \in I$ by $\floor{\frac{n-|V(T)|}{\beta(T)}}$ vertices with the same neighbourhood as $v_i$, as illustrated in \cref{fig:TreeCopies}. Then $G$ is planar with at most $n$ vertices and at least $(\frac{n-|V(T)|}{\beta(T)})^{\beta(T)}$ copies of $T$. Thus $C(T,\Sigma,n) \geq C(T,\mathbb{S}_0,n) = \Omega( n^{\beta(T)} )$ for fixed $T$. For the converse, let $(A_1,B_1),\dots,(A_{f(T)},B_{f(T)})$ be pairwise independent flaps in $T$. Choose $(A_1,B_1),\dots,(A_{f(T)},B_{f(T)})$ to minimise $\sum_{i=1}^{f(T)} |V(A_i)|$. A simple case-analysis shows that $|V(A_i)\setminus V(B_i)|=1$, and if $v_i$ is the vertex in $V(A_i)\setminus V(B_i)$, then $N(v_i)= V(A_i)\cap V(B_i)$, implying $v_i$ has degree 1 or 2 in $T$. Moreover, $v_iv_j\not\in E(T)$ for distinct $i,j\in[f(T)]$ as otherwise $E(A_i^-)\cap E(A_j^-)\neq\emptyset$. Hence $\{v_1,\dots,v_{f(T)}\}$ is a stable set of vertices in $T$ all with degree at most 2. Hence $\beta(T)\geq f(T)$. \end{proof} \begin{figure}[!ht] \centering \includegraphics{TreeCopies} \caption{(a) A tree $T$ with $\beta(T)=5$. (b) A planar graph with $\Omega(n^5)$ copies of $T$. \label{fig:TreeCopies}} \end{figure} \section{Lower Bound} \label{LowerBound} Now we prove the lower bound in \cref{Main}.
Let $H$ be an $h$-vertex graph with flap-number $k$. Let $\Sigma$ be a surface in which $H$ embeds. Our goal is to show that $C(H,\Sigma,n) = \Omega( n^k )$ for all $n\geq 4|V(H)|$. We may assume that $k \geq 2$ and $H$ is connected. Let $(A_1,B_1),\dots,(A_k,B_k)$ be pairwise independent flaps in $H$. If $(A_i,B_i)$ is a 1-separation, then let $v_i$ be the vertex in $A_i\cap B_i$. If $(A_i,B_i)$ is a 2-separation, then let $v_i$ and $w_i$ be the two vertices in $A_i\cap B_i$. Let $H'$ be obtained from $H$ as follows: if $(A_i,B_i)$ is a 2-separation, then delete $A_i-V(B_i)$ from $H$, and add the edge $v_iw_i$ (if it does not already exist). Note that $H'$ is a minor of $H$, since we may assume that whenever $(A_i,B_i)$ is a 2-separation, there is a $v_iw_i$-path in $A_i$ (otherwise $(A_i,B_i)$ can be replaced by a $(\leq 1)$-separation). Since $H$ embeds in $\Sigma$, so does $H'$. By assumption, $A_i^+$ is planar for each $i$. Fix an embedding of $A_i^+$ with $v_i$ and $w_i$ (if it exists) on the outerface (which exists since $v_iw_i$ is an edge of $A_i^+$ in the case of a 2-separation). Let $G$ be the graph obtained from an embedding of $H'$ in $\Sigma$ by pasting $q:= \floor{ \frac{n}{|V(H)|}-1}$ copies of $A_i^+$ onto $v_i$ (if $(A_i,B_i)$ is a 1-separation) and onto $v_iw_i$ (if $(A_i,B_i)$ is a 2-separation). These copies of $A_i^+$ can be embedded into a face of $H'$, as illustrated in \cref{QuadraticCopies}. Since $(V(A_i)\setminus V(B_i)) \cap (V(A_j)\setminus V(B_j)) = \emptyset$ for distinct $i,j\in[k]$, $$|V(G)| = |V(H)| + q \sum_i|V(A_i) \setminus V(B_i) | \leq (q+1) |V(H)| \leq n.$$ By construction, $G$ has at least $q^k \geq ( \frac{n}{|V(H)|}-2 )^k $ copies of $H$. Hence $C(H,\Sigma,n) = \Omega(n^k)$. \begin{figure}[!ht] \centering \includegraphics{QuadraticCopies} \caption{(a) A graph $H$ with flap-number 2. (b) A graph with $\Omega(n^2)$ copies of $H$. \label{QuadraticCopies}} \end{figure} \section{Tools} \label{Tools} To prove the upper bound in \cref{Main} we need several tools from the literature. The first is the following theorem of \citet{Eppstein93}. \begin{theorem}[\citep{Eppstein93}] \label{EppsteinCor} There exists a function $c_{\ref{EppsteinCor}}(h,g)$ such that for every planar graph $H$ with $h$ vertices and no $(\leq 2)$-separation, and every surface $\Sigma$ of Euler genus $g$, \[ C(H,\Sigma, n) \leq c_{\ref{EppsteinCor}}(h,g) n. \] \end{theorem} A second key tool is the following result by \citet{Miller-JCTB87} and \citet{Archdeacon-JGT86}. \begin{theorem}[Additivity of Euler genus~\citep{Miller-JCTB87,Archdeacon-JGT86}] \label{Additivity} For all graphs $G_1$ and $G_2$, if $|V(G_1)\cap V(G_2)|\leq 2$ then the Euler genus of $G_1\cup G_2$ is at least the Euler genus of $G_1$ plus the Euler genus of $G_2$. \end{theorem} We also use the following result of \citet{ER60}; see~\citep{ALWZ} for a recent quantitative improvement. A \emph{$t$-sunflower} is a collection $\mathcal{S}$ of $t$ sets for which there exists a set $R$ such that $X\cap Y=R$ for all distinct $X,Y\in\mathcal{S}$. The set $R$ is called the \emph{kernel} of $\mathcal{S}$. \begin{lemma}[Sunflower Lemma~\citep{ER60}] \label{sunflower} There exists a function $c_{\ref{sunflower}}(h,t)$ such that every collection of $c_{\ref{sunflower}}(h,t)$ many $h$-subsets of a set contains a $t$-sunflower. \end{lemma} Finally, we mention some well-known corollaries of Euler's Formula that we use implicitly. Every graph with $n\geq 3$ vertices and Euler genus $g$ has at most $3(n+g-2)$ edges. 
Moreover, for bipartite graphs the above bound is $2(n+g-2)$. For example, this implies that the complete bipartite graph $K_{3,2g+3}$ has Euler genus greater than $g$. \section{Strongly Non-Planar Graphs} \label{BoundedNumberCopies} Now we prove the following quantitative version of the upper bound in \cref{StronglyNonPlanar}. \begin{theorem} \label{BoundedCopies} For every strongly non-planar graph $H$ with $h$ vertices, for every surface $\Sigma$ with Euler genus $g$, if $q:=1+ g + (g+1) \binom{h-1}{2} + (2g+2)\binom{h-1}{3}$ then $C(H,\Sigma,n) < h!c_{\ref{sunflower}}(h,q)$. \end{theorem} \begin{proof} Let $G$ be an $n$-vertex graph that embeds in $\Sigma$, and let $\mathcal{H}$ be the multiset of the vertex sets of all the copies of $H$ in $G$. Assume for the sake of contradiction that $|\mathcal{H}|\geq h!c_{\ref{sunflower}}(h,q)$. Since there are at most $h!$ copies of $H$ on each $h$-subset of vertices of $G$, there is a subset $\mathcal{H}'$ of $\mathcal{H}$ of size $c_{\ref{sunflower}}(h,q)$ such that all members of $\mathcal{H}'$ are distinct. By the Sunflower Lemma, $\mathcal{H}'$ contains a $q$-sunflower $\mathcal{S}$. Let $R$ be the kernel of $\mathcal{S}$. Thus $V(H_1)\cap V(H_2)=R$ for all distinct copies $H_1$ and $H_2$ of $H$ in $\mathcal{S}$. Let $Z_1,\dots,Z_t$ be the components of the subgraphs of $G$ obtained by deleting $R$ from each copy of $H$ in $\mathcal{S}$. Since $q>1$, $|R| < h$. Therefore, each copy of $H$ contributes at least one such component. Thus $t \geq q$. Since $R$ is the kernel, $Z_1,\dots,Z_t$ are pairwise disjoint. Suppose that at least $g+1$ of the $Z_i$ have at most one neighbour in $R$. Since $H$ is strongly non-planar, each such $Z_i$ together with its at most one neighbour in $R$ induces a non-planar subgraph, and by the additivity of Euler genus on ($\leq 1$)-separations (\cref{Additivity}), $G$ has Euler genus at least $g+1$, which is a contradiction. Now assume that at most $g$ of the $Z_i$ have at most one neighbour in $R$. Suppose that more than $(g+1) \binom{|R|}{2}$ of the $Z_i$ have exactly two neighbours in $R$. Then at least $g+2$ of the $Z_i$ have the same two neighbours $x,y \in R$. Label these $Z_i$ by $Y_1,\dots,Y_{g+2}$. Let $G'$ be obtained from $G$ by contracting $G[V(Y_{g+2})\cup\{x,y\}]$ to form an edge on $xy$. For each $i\in [g+1]$, let $X_i$ be the subgraph of $G'$ induced by $V(Y_i)\cup\{x,y\}$, including the edge $xy$. By the definition of strongly non-planar, each $X_i$ is non-planar. By \cref{Additivity} again, $\bigcup_{i=1}^{g+1}X_i$ and thus $G'$ has Euler genus at least $g+1$, which is a contradiction since $G'$ is a minor of $G$. Thus at most $(g+1) \binom{|R|}{2}$ of the $Z_i$ have exactly two neighbours in $R$. Suppose that more than $(2g+2)\binom{|R|}{3}$ of the $Z_i$ have at least three neighbours in $R$. Then at least $2g+3$ of the $Z_i$ have the same three neighbours in $R$. Contract each such $Z_i$ to a single vertex, to obtain a $K_{3,2g+3}$ minor of $G$, which is a contradiction. Now assume that at most $(2g+2)\binom{|R|}{3}$ of the $Z_i$ have at least three neighbours in $R$. Thus $q\leq t \leq g + (g+1) \binom{|R|}{2} + (2g+2) \binom{|R|}{3} \leq g + (g+1) \binom{h-1}{2} + (2g+2)\binom{h-1}{3} \leq q-1$, which is a contradiction. \end{proof} \section{Proof of Main Theorem} \label{MainProof} The proof of our main theorem uses a variant of the SPQR tree, which we now introduce. \subsection{SPQRK Trees} \label{SPQRK} The \emph{SPQR tree} of a $2$-connected graph $G$ is a tree that displays all the $2$-separations of $G$.
Since we need to consider graphs which are not necessarily $2$-connected, we use a variant of the SPQR tree which we call the \emph{SPQRK tree}. Let $G$ be a connected graph. The \emph{SPQRK tree $T_G$} of $G$ is a tree, where each node $a\in V(T_G)$ is associated with a multigraph $H_a$ which is a minor of $G$. Each vertex $x \in V(H_a)$ is a vertex of $G$, that is, $V(H_a) \subseteq V(G)$. Each edge $e \in E(H_a)$ is classified either as a \emph{real} or \emph{virtual} edge. By the construction of an SPQRK tree each edge $e\in E(G)$ appears in exactly one minor $H_a$ as a real edge, and each edge $e\in E(H_a)$ which is classified real is an edge of $G$. The SPQRK tree $T_G$ is defined recursively as follows. \begin{enumerate} \item If $G$ is $3$-connected, then $T_G$ consists of a single \emph{$R$-node} $a$ with $H_a := G$. All edges of $H_a$ are real in this case. \item If $G$ is a cycle, then $T_G$ consists of a single \emph{$S$-node} $a$ with $H_a := G$. Again, all edges of $H_a$ are real in this case. \item If $G$ is isomorphic to $K_1$ or $K_2$, then $T_G$ consists of a single \emph{$K$-node} $a$ with $H_a := G$. Again, all edges of $H_a$ are real in this case. \item If $G$ is $2$-connected and has a cutset $\{x,y\}$ such that the vertices $x$ and $y$ have degree at least $3$, we construct $T_G$ inductively as follows. Let $C_1,\dots, C_r$ ($r\geq 2$) be the connected components of $G - \{x,y\}$. First add a \emph{$P$-node} $a$ to $T_G$, for which $H_a$ is the graph with $V(H_a):=\{x,y\}$ consisting of $r$ parallel virtual edges and one additional real edge if $xy$ is an edge of $G$. Next let $G_i$ be the graph $G[V(C_i)\cup \{x,y\}]$ with the additional edge $xy$ if it is not already there. Since we include the edge $xy$, each $G_i$ is $2$-connected and we can construct the corresponding SPQRK tree $T_{G_i}$ by induction. Let $a_i$ be the (unique) node in $T_{G_i}$ for which $xy$ is a real edge in $H_{a_i}$. In order to construct $T_G$, we make $xy$ a virtual edge in the node $a_i$, and connect $a_i$ to $a$ in $T_G$. \item If $G$ has a cut-vertex $x$ and $C_1, \dots, C_s$ ($s\geq 2$) are the connected components of $G - x$, then construct $T_G$ inductively as follows. First, add a \emph{$Q$-node} $a$ to $T_G$, for which $H_a$ is the graph consisting of the single vertex $x$. For each $i \in [s]$, let $G_i := G[V(C_i)\cup \{x\}]$. Since $G_i$ is connected, we can construct the corresponding SPQRK tree $T_{G_i}$ by induction. If there is a unique node $b_i \in V(T_{G_i})$ such that $x \in V(H_{b_i})$, then make $a$ adjacent to $b_i$ in $T_{G}$. If $x$ is in at least two nodes of $V(T_{G_i})$, then $x \in V(C) \cap V(D)$ for some $(\leq 2)$-separation $(C,D)$ of $G_i$. Since $G_i - x$ is connected, there must be a $P$-node $b_i$ in $T_{G_i}$ such that $x \in V(H_{b_i})$. Note that $b_i$ is not necessarily unique. Choose one such $b_i$ and make $a$ adjacent to $b_i$ in $T_G$. \end{enumerate} As a side remark, note that the SPQRK tree $T_G$ of $G$ is in fact not unique---there is some freedom in choosing $b_i$ in the last point in the definition above---however, for our purposes we do not need uniqueness, we only need that $T_G$ displays all the $(\leq 2)$-separations of $G$. The next lemma is the crux of the proof. Let $J$ and $G$ be graphs and $X$ and $Y$ be cliques in $J$ and $G$ respectively, with $|X|=|Y|$. Let $J'$ be a copy of $J$ in $G$. We say that \emph{$J'$ fixes $X$ at $Y$} if there is an isomorphism $f:V(J) \to V(J')$ such that $f(X)=Y$.
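As an aside, the number of copies of $J$ in $G$ that fix $X$ at $Y$ (the quantity bounded in the next lemma) can likewise be computed by brute force on small instances. The following sketch (an illustration only, assuming the same \texttt{networkx} matcher as in the earlier sketch) records each copy by its vertex and edge sets, and keeps those admitting an isomorphism mapping $X$ onto $Y$.
\begin{verbatim}
from networkx.algorithms import isomorphism

def copies_fixing(J, G, X, Y):
    # Copies of J in G fixing the clique X at Y: distinct subgraphs J'
    # of G admitting an isomorphism f : J -> J' with f(X) = Y.
    X, Y = set(X), set(Y)
    seen = set()
    gm = isomorphism.GraphMatcher(G, J)
    for phi in gm.subgraph_monomorphisms_iter():
        inv = {w: v for v, w in phi.items()}  # maps V(J) into V(G)
        if {inv[x] for x in X} == Y:
            seen.add((frozenset(inv.values()),
                      frozenset(frozenset((inv[a], inv[b]))
                                for a, b in J.edges)))
    return len(seen)
\end{verbatim}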
\begin{lemma} \label{lem:rootedclique} There exists a function $c_{\ref{lem:rootedclique}}(j,g)$ with the following property. Let $\Sigma$ be a surface of Euler genus $g$. Let $X$ be a clique with $|X| \leq 2$ in a planar graph $J$ with $j$ vertices, such that there does not exist independent flaps $(A,B)$ and $(C,D)$ of $J$ with $X \subseteq V(B \cap D)$. Then for every $n$-vertex graph $G$ embeddable in $\Sigma$ and every clique $Y$ in $G$ with $|Y|=|X|$, there are at most $c_{\ref{lem:rootedclique}}(j,g) n$ copies of $J$ in $G$ with $X$ fixed at $Y$. \end{lemma} \begin{proof} Let $c_{\ref{lem:rootedclique}}(j,g) := \max\{12(g+1), 2c_{\ref{EppsteinCor}}(j,g),j!(3g+3)c_{\ref{sunflower}}(j, \tbinom{j}{3}(2g+3)) \}$. Let $G$ be an $n$-vertex graph embedded in a surface $\Sigma$ of Euler genus $g$ and $Y$ be a clique in $G$ with $|Y|=|X|$. Let $(\star)$ be the property that there do not exist independent flaps $(A,B)$ and $(C,D)$ of $J$ with $X \subseteq V(B \cap D)$. If $X=\emptyset$, then $(\star)$ implies that $J$ has no $(\leq 2)$-separation. Thus, we are done by \cref{EppsteinCor}. Henceforth, we may assume that $|X| \in \{1,2\}$. Suppose $(J_1,J_2)$ is a $0$-separation of $J$ with $X \subseteq V(J_2)$. If $V(J_2) \neq X$, then $(J_2,J_1')$ is a $(\leq 2)$-separation of $J$, where $J_1'$ is obtained from $J_1$ by adding the vertices of $X$ as isolated vertices. Therefore, $(J_1,J_2)$ and $(J_2,J_1')$ contradict $(\star)$. Thus, $V(J_2)=X$. If $(A_1, A_2)$ is a $(\leq 2)$-separation of $J_1$, then $(A_1, A_2 \cup J_2)$ and $(A_2, A_1 \cup J_2)$ are two $(\leq 2)$-separations of $J$ contradicting $(\star)$. Hence, $J_1$ has no $(\leq 2)$-separations. Since $V(J_2)=X$, the number of copies of $J$ in $G$ with $X$ fixed at $Y$ is at most twice the number of copies of $J_1$ in $G$ (since there are at most two ways of fixing $X$ at $Y$). By \cref{EppsteinCor}, this is at most $2c_{\ref{EppsteinCor}}(j,g) \leq c_{\ref{lem:rootedclique}}(j,g)$. Thus, we may assume that $J$ is connected. Let $T_J$ be the SPQRK tree of $J$. Suppose $V(T_J)=\{a\}$. If $a$ is a $K$-node, then there are at most $\max\{n, 3(n+g-2)\} \leq c_{\ref{lem:rootedclique}}(j,g) n$ copies of $J$ in $G$. If $a$ is an $R$-node, then there are at most $c_{\ref{EppsteinCor}}(j,g) n \leq c_{\ref{lem:rootedclique}}(j,g) n$ copies of $J$ in $G$. If $a$ is an $S$-node and $|X|=1$, then $J\cong C_3$. If $a$ is an $S$-node and $|X|=2$, then $J\cong C_3$ or $J\cong C_4$. In either case, there is a unique maximal clique $X'$ of $J$ with $X' \cap X=\emptyset$ and $|X'| \leq 2$. Since there are at most $\max\{|V(G)|,|E(G)|\}$ choices for $X'$, there are at most $4\max\{n, 3(n+g-2)\} \leq 12(g+1)n \leq c_{\ref{lem:rootedclique}}(j,g)n$ copies of $J$ in $G$ with $X$ fixed at $Y$. We may therefore assume $|V(T_J)| \geq 2$. Moreover, by the above argument we may also assume $|V(J)| \geq 4$. Let $W$ be the set of $K$-, $S$-, and $R$-nodes of $V(T_J)$. Let $U$ be a non-empty proper subset of $W$. Define $H_U := \bigcup_{a \in U} H_a$, $\bd(H_U) := V(H_U \cap H_{W \setminus U})$, $\lambda(U) :=|\bd(H_U)|$, and $\sep(U):=(H_U, H_{W \setminus U})$. The next two claims follow from $(\star)$. \begin{claim} $T_J$ is a path such that $X \subseteq V(H_\ell)$ and $X \setminus \bd(H_\ell) \neq \emptyset$ for some leaf $\ell$ of $T_J$. \end{claim} \begin{claim} \label{claim:lambda3} Let $r$ be the other leaf of $T_J$. Then for all non-empty $U \subseteq W \setminus \{\ell, r\}$ such that $U$ is not a single $K$-node, $\lambda(U) \geq 3$.
\end{claim} The next claim also follows from $(\star)$. For completeness, we include the proof. \begin{claim} \label{claim:degree2} Let $S:=\{s \in V(J) \setminus X \mid \deg_J(s) \leq 2\}$. Then $|S| \leq 2$, $S \subseteq V(H_r)$, and if $|S|=2$, then the two vertices in $S$ are adjacent in $J$. \end{claim} \begin{proof} Since $|V(J)| \geq 4$, for each $s \in S$, $(\delta(s), J - s)$ is a flap with $X \subseteq V(J - s)$, where $\delta(s)$ is the subgraph of $J$ induced by the edges incident to $s$. Thus, by $(\star)$, $S$ is a clique in $J$, and therefore $|S| \leq 3$. Moreover, $|S|=3$ is impossible, since $|V(J)| \geq 4$ and $J$ is connected. Thus, $|S| \leq 2$. Since $(A,B)=\sep(\{r\})$ is a flap with $X \subseteq V(B)$, $(\star)$ also implies $S \subseteq V(H_r)$. \end{proof} By Claim~\ref{claim:degree2}, there exists an edge $e=uv \in E(H_r)$ such that $S \subseteq \{u,v\}$. Among all such edges, choose $e=uv$ so that $|\{u,v\} \cap \bd(H_r)|$ is minimum. \begin{claim} \label{claim:3paths} For all $w \in V(J) \setminus (X \cup \{u,v\})$, there are three internally disjoint paths in $J$ from $w$ to $X \cup \{u,v\}$, whose ends in $X \cup \{u,v\}$ are distinct. \end{claim} \begin{proof} Suppose not. By Menger's theorem, there is a $(\leq 2)$-separation $(J_1,J_2)$ of $J$ with $w \in V(J_1) \setminus V(J_2)$ and $X^+:=X \cup \{u,v\} \subseteq V(J_2)$. Since $\deg_J(x) \geq 3$ for all $x \in V(J) \setminus X^+$, it follows that $(J_1,J_2)=\sep(U)$ for some $U \subseteq W$ or $V(J_1) \cap V(J_2)$ contains a cut-vertex $c$ of $J$. Suppose the former holds. Since $X^+ \subseteq V(J_2)$, we have $U \subseteq W \setminus \{\ell, r\}$. This is a contradiction since $\lambda(U) \geq 3$ by Claim~\ref{claim:lambda3}. Thus, $V(J_1) \cap V(J_2)$ contains a cut-vertex $c$ of $J$. Let $(J_1',J_2')$ be the $1$-separation of $J$ with $V(J_1') \cap V(J_2')=\{c\}$, $X \subseteq V(J_1')$ and $\{u,v\} \subseteq V(J_2')$. For all $a,b\in [2]$, let $J_{a,b}=(J_a \cap J_b', J_{3-a} \cup J_{3-b}')=:(J_{a,b}^1, J_{a,b}^2)$. We claim that for all $a,b \in [2]$, \[ 3 \geq |V(J_1) \cap V(J_2)|+|V(J_1') \cap V(J_2')| \geq |V(J_{a,b}^1) \cap V(J_{a,b}^2)|+|V(J_{3-a,3-b}^1) \cap V(J_{3-a,3-b}^2)|. \] The first inequality is immediate since $(J_1, J_2)$ is a $(\leq 2)$-separation and $(J_1', J_2')$ is a $1$-separation. For the second inequality, consider a vertex $v \in V(J_{a,b}^1) \cap V(J_{a,b}^2)$. By definition, $v \in V(J_a) \cap V(J_b')$; and $v \in V(J_{3-a})$ or $v \in V(J_{3-b}')$. If $v \in V(J_{3-a})$, then $v \in V(J_a) \cap V(J_{3-a})$; and if $v \in V(J_{3-b}')$, then $v \in V(J_b') \cap V(J_{3-b}')$. Thus, each vertex that is counted on the RHS is also counted on the LHS. Moreover, if $v$ is counted twice on the RHS, then $v \in V(J_a) \cap V(J_b') \cap V(J_{3-a}) \cap V(J_{3-b}')$. Thus, $v$ is also counted twice on the LHS. Since $c \in V(J_{a,b}^1) \cap V(J_{a,b}^2)$ for all $a,b \in [2]$, we have $|V(J_{a,b}^1) \cap V(J_{a,b}^2)| \leq 2$ for all $a,b \in [2]$. We say that $J_{a,b}$ is \emph{proper} if $V(J_{a,b}^1) \setminus V(J_{a,b}^2) \neq \emptyset$ and $V(J_{a,b}^2) \setminus V(J_{a,b}^1) \neq \emptyset$. Thus, if $J_{a,b}$ is proper, then $J_{a,b}$ is a $(\leq 2)$-separation. Since $X \subseteq V(J_{2,1}^1)$, at most one of $J_{1,1}, J_{1,2}$, and $J_{2,2}$ is proper by $(\star)$. Suppose $J_{2,2}$ is proper. Thus, neither $J_{1,1}$ nor $J_{1,2}$ is proper.
Since $V(J_2) \setminus V(J_1) \neq \emptyset$, this implies $V(J_{1,1}^1) \setminus V(J_{1,1}^2)=\emptyset$ and $V(J_{1,2}^1) \setminus V(J_{1,2}^2)=\emptyset$. Let $d \in V(J_1) \setminus V(J_2)$. Note that $d \neq c$, since $c \in V(J_1) \cap V(J_2)$. Also, $d\in V(J_{1,1}^1)$ or $d\in V(J_{1,2}^1)$, since $d\in V(J_{1})$. First suppose that $d\in V(J_{1,1}^1)$. Then $d\notin V(J'_2)$, because $d \neq c$. Since $d\notin V(J_2)$, we deduce that $d\notin V(J_{1,1}^2)$. Hence, $d\in V(J_{1,1}^1) \setminus V(J_{1,1}^2)$, contradicting $V(J_{1,1}^1) \setminus V(J_{1,1}^2)=\emptyset$. Next, assume that $d\in V(J_{1,2}^1)$. Then $d\notin V(J'_1)$, because $d \neq c$. Since $d\notin V(J_2)$, we deduce that $d\notin V(J_{1,2}^2)$. Hence, $d\in V(J_{1,2}^1) \setminus V(J_{1,2}^2)$, contradicting $V(J_{1,2}^1) \setminus V(J_{1,2}^2)=\emptyset$. Suppose $J_{2,2}$ is not proper. Since $V(J_1) \setminus V(J_2) \neq \emptyset$, this implies $V(J_{2,2}^1) \setminus V(J_{2,2}^2)=\emptyset$. Since $\{u,v\} \subseteq V(J_{2,2}^1)$, we have $\{u,v\} \subseteq V(J_{2,2}^1) = V(J_{2,2}^1) \cap V(J_{2,2}^2) \subseteq V(J_1) \cap V(J_2)$. Thus, $V(J_1) \cap V(J_2)=\{u,v\}$, and $c \in \{u,v\}$. By symmetry, we may assume $c=u$. Note that $H_r \neq K_2$, because otherwise $J-\{u,v\}$ is connected. Suppose $c \in S=\{s \in V(J) \setminus X \mid \deg_J(s) \leq 2\}$. Since $H_r \neq K_2$, $v$ is a cut-vertex of $J$. However, by the definition of SPQRK trees, the only cut vertex of $J$ contained in $V(H_r)$ is $c$. Thus, $c \notin S$. Since $H_r \neq K_2$, this contradicts the minimality of $|\{u,v\} \cap \bd(H_r)|$ in our choice of the edge $uv$, since we could have chosen an edge of $H_r$ not incident to $c$ instead. \end{proof} Let $f=u'v'$ be an edge of $G$ and $c_f$ be the number of copies of $J$ in $G$ with $X$ fixed at $Y$ and $e$ fixed at $f$. Let $Y^+=Y \cup \{u',v'\}$. Suppose $c_f \geq j!c_{\ref{sunflower}}(j, \binom{j}{3}(2g+3))$ for some $f \in E(G)$. Since there are at most $j!$ copies of $J$ on each $j$-subset of $V(G)$, there is a family $\mathcal{V}:=\{V_1, \dots, V_t\}$ of distinct subsets of $V(G)$ each corresponding to a copy of $J$, where $t \geq c_{\ref{sunflower}}(j, \binom{j}{3}(2g+3))$ and $Y^+ \subseteq V_i$ for all $i \in [t]$. By \cref{sunflower}, $\mathcal{V}$ contains an $s$-sunflower $\mathcal F$, where $s \geq \binom{j}{3}(2g+3)$. Let $Z$ be the kernel of $\mathcal{F}$. By construction, $Y^+ \subseteq Z$. For each $F \in \mathcal{F}$ let $w_F \in F \setminus Z$. By Claim~\ref{claim:3paths}, there are three internally disjoint paths from $w_F$ to $Y^+$ in $G[F]$ whose ends in $Y^+$ are distinct for all $F \in \mathcal {F}$. For each $F \in \mathcal {F}$ let $Z_F$ be the set consisting of the first vertices of $Z$ on each of these three paths. Since $s \geq \binom{j}{3}(2g+3)$, $Z_F$ is the same for at least $2g+3$ sets in $\mathcal {F}$. Thus, $G$ contains a subdivision of $K_{3, 2g+3}$. However, this is impossible, since $K_{3, 2g+3}$ does not embed in $\Sigma$. It follows that $c_f \leq j!c_{\ref{sunflower}}(j, \binom{j}{3}(2g+3))$ for all $f \in E(G)$. Since there are at most $3(n+g-2) \leq (3g+3) n$ choices for $f$, there are at most \[ j!c_{\ref{sunflower}}(j, \tbinom{j}{3}(2g+3)) \cdot (3g+3) n \leq c_{\ref{lem:rootedclique}}(j,g) n \] copies of $J$ in $G$ with $X$ fixed at $Y$. \end{proof} The final ingredient we need is the following `flap reduction' lemma. 
\begin{lemma} \label{lem:flapreduction} Let $H$ be a graph with flap-number $k \geq 1$ and $(A_1,B_1), \dots, (A_k,B_k)$ be independent flaps in $H$ such that $A_1$ is maximal (w.r.t.\ subgraph inclusion). Then $B_1^+$ has flap-number at most $k-1$. \end{lemma} \begin{proof} First suppose that $B_1^+$ has no $(\leq 2)$-separation. If $B_1^+$ is non-planar, then $f(B_1^+)=0$ and we are done. If $B_1^+$ is planar, then $f(B_1^+)=1$. Moreover, $H$ is planar and $k \geq 2$ since $(A_1, B_1)$ and $(B_1, A_1)$ are independent flaps in $H$. We may hence assume that $B_1^+$ has a $(\leq 2)$-separation. Towards a contradiction let $(C_1, D_1), \dots , (C_k, D_k)$ be independent flaps in $B_1^+$. There must be some $\ell \in [k]$ such that $X:=V(A_1 \cap B_1)$ is not contained in $V(D_\ell)$; otherwise $H$ has flap-number at least $k+1$. (Note that this implies in particular that $X\neq \emptyset$.) By relabelling, we may assume $\ell=1$. Since $(V(C_1) \setminus V(D_1)) \cap X \neq \emptyset$ and $(V(C_1) \setminus V(D_1)) \cap (V(C_i) \setminus V(D_i))=\emptyset$ for all $i > 1$, we have $X \subseteq V(D_i)$ for all $i>1$. Let $(C_1', D_1')$ be obtained from $(C_1,D_1)$ by gluing $A_1$ to $C_1$ along $X$, and for $i > 1$, let $(C_i', D_i')$ be obtained from $(C_i,D_i)$ by gluing $A_1$ to $D_i$ along $X$. Then, $(C_1', D_1'), \dots , (C_k', D_k')$ are independent flaps in $H$. Since $A_1$ is strictly contained in $C_1'$, this contradicts the maximality of $A_1$. \end{proof} We now complete the proof of the upper bound in \cref{Main}. \begin{theorem} \label{main} There exists a function $c_{\ref{main}}(h,g)$ with the following property. For every graph $H$ with $h$ vertices and every surface $\Sigma$ of Euler genus $g$ in which $H$ embeds, \[ C(H,\Sigma, n) \leq c_{\ref{main}}(h,g)n^{f(H)}. \] \end{theorem} \begin{proof} We define $c_{\ref{main}}(h,g)$ by induction on $h$. Set $c_{\ref{main}}(1,g):=1$ for all $g$. For $h>1$, let \[ c_{\ref{main}}(h,g):=\max (c_{\ref{StronglyNonPlanar}}(h,g), c_{\ref{EppsteinCor}}(h,g), \max \{c_{\ref{main}}(h_0,g)c_{\ref{lem:rootedclique}}(j,g) \mid h_0, j < h \}). \] We proceed by induction on $k:=f(H)$. If $k=0$, then $H$ is strongly non-planar. By \cref{StronglyNonPlanar}, $C(H,\Sigma, n) \leq c_{\ref{StronglyNonPlanar}}(h,g) \leq c_{\ref{main}}(h,g)$. Thus, we may assume $k \geq 1$. If $H$ has no $(\leq 2)$-separation, then $k=1$ and $H$ is planar. By Theorem~\ref{EppsteinCor}, $C(H,\Sigma, n) \leq c_{\ref{EppsteinCor}}(h,g)n \leq c_{\ref{main}}(h,g)n$. We may hence assume that $H$ has a $(\leq 2)$-separation. Let $(A_1,B_1), \dots, (A_k,B_k)$ be independent flaps in $H$ such that $A_1$ is maximal. Let $H_0=B_1^+$, $h_0=|V(B_1^+)|$, and $G$ be an $n$-vertex graph embedded in $\Sigma$. By \cref{lem:flapreduction}, $H_0$ has flap-number at most $k-1$. Therefore, by induction, there are $c \leq c_{\ref{main}}(h_0,g)n^{k-1}$ copies of $H_0$ in $G$. For each $i \in [c]$, let $H_0^i$ be the corresponding copy of $H_0$ in $G$ and let $Y^i \subseteq V(G)$ be the image of $X:=V(A_1 \cap B_1)$ in $H_0^i$. Let $J:=A_1^+$ and $j:=|V(J)|$. Since $(A_1,B_1)$ is a flap, $J$ is planar. Moreover, there do not exist independent flaps $(A,B)$ and $(C,D)$ of $J$ with $X \subseteq V(B \cap D)$; otherwise $H$ has flap-number at least $k+1$. By \cref{lem:rootedclique}, for each $i \in [c]$, there are at most $c_{\ref{lem:rootedclique}}(j,g) n$ copies of $J$ in $G$ with $X$ fixed at $Y^i$.
Therefore, there are at most $(c_{\ref{main}}(h_0,g)n^{k-1})(c_{\ref{lem:rootedclique}}(j,g) n) \leq c_{\ref{main}}(h,g)n^k$ copies of $H$ in $G$, as required. \end{proof} \section{Copies of Complete Graphs} \label{CompleteGraphs} This section studies the maximum number of copies of a given complete graph $K_s$ in an $n$-vertex graph that embeds in a given surface $\Sigma$. The flap-number of $K_s$ equals $1$ if $s\leq 4$ and equals $0$ if $s\geq 5$. Thus \cref{Main} implies that $C(K_s,\Sigma,n)=\Theta(n)$ for $s\leq 4$ and $C(K_s,\Sigma,n)=\Theta(1)$ for $s\geq 5$. The bounds obtained in this section are much more precise than those given by \cref{Main}. Our method follows that of \citet{DFJSW}, who characterised the $n$-vertex graphs that embed in a given surface $\Sigma$ with the maximum number of complete subgraphs (in total), and then derived an upper bound on this maximum. A \emph{triangulation} of a surface $\Sigma$ is an embedding of a graph in $\Sigma$ in which each facial walk has three vertices and three edges with no repetitions. Let $G$ be a triangulation of $\Sigma$. An edge $vw$ of $G$ is \emph{reducible} if $vw$ is in exactly two triangles in $G$. And $G$ is \emph{irreducible} if no edge of $G$ is reducible~\citep{BE-IJM88,BE-IJM89,CDP-CGTA04,Sulanke06,Sulanke-KleinBottle,Sulanke-Generating,LawNeg-JCTB97,NakaOta-JGT95, JoretWood-JCTB10,Lavrenchenko}. \citet{BE-IJM88,BE-IJM89} proved that each surface has a finite number of irreducible triangulations. For $\mathbb{S}_h$ with $h\leq 2$ and $\mathbb{N}_c$ with $c\leq 4$ the list of all irreducible triangulations is known~\citep{Lavrenchenko,Sulanke-KleinBottle,LawNeg-JCTB97,Sulanke-Generating}. In general, the best known upper bound on the number of vertices in an irreducible triangulation of a surface with Euler genus $g\geq 1$ is $13g-4$, due to \citet{JoretWood-JCTB10}. Let $vw$ be a reducible edge of a triangulation $G$ of $\Sigma$. Let $vwx$ and $vwy$ be the two faces incident to $vw$ in $G$. As illustrated in \cref{ContractionSplitting}, let $G/vw$ be the graph obtained from $G$ by \emph{contracting} $vw$; that is, delete the edges $vw,wy,wx$, and identify $v$ and $w$ into $v$. The graph $G/vw$ is simple since $x$ and $y$ are the only common neighbours of $v$ and $w$. Indeed, $G/vw$ is a triangulation of $\Sigma$. Conversely, we say that $G$ is obtained from $G/vw$ by \emph{splitting} the path $xvy$ at $v$. If, in addition, $xy\in E(G)$, then we say that $G$ is obtained from $G/vw$ by \emph{splitting} the triangle $xvy$ at $v$. Note that $xvy$ need not be a face of $G/vw$. In the case that $xvy$ is a face, splitting $xvy$ is equivalent to adding a new vertex adjacent to each of $x,v,y$. \begin{figure}[!h] \centering \includegraphics{ContractionSplitting} \caption{\label{ContractionSplitting}Contracting a reducible edge.} \end{figure} \subsection{Copies of Triangles} For graphs $H$ and $G$, let $C(H,G)$ be the number of copies of $H$ in $G$. In this section, we consider the case $H=K_3$, and define the \emph{excess} of a graph $G$ to be $C(K_3,G)-3|V(G)|$. \begin{lemma} \label{TriangulationK3} For each surface $\Sigma$, every graph embeddable in $\Sigma$ with maximum excess is a triangulation of $\Sigma$. \end{lemma} \begin{proof} Let $G$ be a graph embedded in $\Sigma$ that maximises the excess. We claim that $G$ is a triangulation. Suppose on the contrary that $F$ is a non-triangular facial walk in $G$. Suppose that two vertices in $F$ are not adjacent.
Then there are vertices $v$ and $w$ at distance 2 in the subgraph induced by $F$. Thus adding the edge $vw$ `across' the face increases the number of triangles and the excess. This contradicts the choice of $G$. Now assume that $F$ induces a clique. Suppose that $F$ has at least four distinct vertices. Let $G'$ be the embedded graph obtained from $G$ by adding one new vertex `inside' the face adjacent to four distinct vertices of $F$. Thus $G'$ is embeddable in $\Sigma$, has $|V(G)|+1$ vertices, has at least $C(K_3,G)+\binom{4}{2}=C(K_3,G)+6$ triangles, and thus has excess at least the excess of $G$ plus $3$. This contradicts the choice of $G$. Now assume that $F$ has at most three distinct vertices. By \cref{ThreeDistinctVertices} below, $F=(u,v,w,u,v,w)$. Let $G'$ be the graph obtained from $G$ by adding two new adjacent vertices $p$ and $q$, where $p$ is adjacent to the first $u,v,w$ sequence in $F$, and $q$ is adjacent to the second $u,v,w$ sequence in $F$. So $G'$ is embeddable in $\Sigma$ and has $|V(G)|+2$ vertices. If $S$ is a non-empty subset of $\{p,q\}$ and $T\subseteq \{u,v,w\}$ with $|S|+|T|=3$, then $S\cup T$ is a triangle of $G'$ but not of $G$. There are $\binom{2}{1}\binom{3}{2}+\binom{2}{2}\binom{3}{1}=6+3=9$ such triangles. Thus $C(K_3,G')\geq C(K_3,G)+9$ and the excess of $G'$ is at least the excess of $G$ plus 3, which contradicts the choice of $G$. Hence no face of $G$ has repeated vertices, and $G$ is a triangulation of $\Sigma$. \end{proof} \begin{lemma} \label{ThreeDistinctVertices} Let $F$ be a facial walk in an embedded graph, such that $F$ has exactly three distinct vertices that are pairwise adjacent. Then $F=(u,v,w)$ or $F=(u,v,w,u,v,w)$. \end{lemma} \begin{proof} Say $u,v,w$ are three consecutive vertices in $F$. Then $u\neq v$ and $v\neq w$ (since there are no loops). And $u\neq w$, since if $u=w$ then $\deg(v)=1$ (since there are no parallel edges), which is not possible since $v$ is adjacent to the two other vertices in $F$. So any three consecutive vertices in $F$ are pairwise distinct. If $F$ has no repeated vertex, then $F$ is the 3-cycle $(u,v,w)$. Otherwise, $F=(u,v,w,u,\dots)$. Again, since any three consecutive vertices in $F$ are pairwise distinct, the vertex following this occurrence of $u$ must be $v$, so $F=(u,v,w,u,v,\dots)$. Repeating this argument, $F=(u,v,w,u,v,w,\dots)$. Each edge is traversed at most twice; see~\citep[Sections 3.2 and 3.3]{MoharThom}. Thus $F=(u,v,w,u,v,w)$. \end{proof} \begin{theorem} \label{ExtremalK3} Let $\phi$ be the maximum excess of an irreducible triangulation of $\Sigma$. Let $X$ be the set of irreducible triangulations of $\Sigma$ with excess $\phi$. Then the excess of every graph $G$ embeddable in $\Sigma$ is at most $\phi$. Moreover, the excess of $G$ equals $\phi$ if and only if $G$ is obtained from some graph in $X$ by repeatedly splitting triangles. \end{theorem} \begin{proof} We proceed by induction on $|V(G)|$. By \cref{TriangulationK3}, we may assume that $G$ is a triangulation of $\Sigma$. If $G$ is irreducible, then the claim follows from the definition of $X$ and $\phi$. Otherwise, some edge $vw$ of $G$ is in exactly two triangles $vwx$ and $vwy$. By induction, the excess of $G/vw$ is at most $\phi$. Moreover, the excess of $G/vw$ equals $\phi$ if and only if $G/vw$ is obtained from some graph $H\in X$ by repeatedly splitting triangles. Observe that every triangle of $G$ that is not in $G/vw$ is in $\{A\cup\{w\}:A\subseteq\{x,v,y\}, |A|=2\}$. Thus $C(K_3,G)\leq C(K_3,G/vw)+3$. Moreover, equality holds if and only if $xvy$ is a triangle.
It follows from the definition of excess that the excess of $G$ is at most $\phi$. If the excess of $G$ equals $\phi$, then the excess of $G/vw$ equals $\phi$, and $xvy$ is a triangle and $G$ is obtained from $H$ by repeatedly splitting triangles. Conversely, if $G$ is obtained from some $H\in X$ by repeatedly splitting triangles, then $xvy$ is a triangle and $G/vw$ is obtained from $H$ by repeatedly splitting triangles. By induction, the excess of $G/vw$ equals $\phi$, implying the excess of $G$ equals $\phi$. \end{proof} In general, since every irreducible triangulation of a surface $\Sigma$ with Euler genus $g$ has $O(g)$ vertices~\citep{JoretWood-JCTB10,NakaOta-JGT95}, \cref{ExtremalK3} implies that $C(K_3,\Sigma,n)\leq 3n+O(g^3)$. We now show that $C(K_3,\Sigma,n)=3n+\Theta(g^{3/2})$. The following elementary fact will be useful. For integers $s\geq 2$ and $m\geq 2$, \begin{align} \label{NewSumReciprocals} \sum_{i\geq m}\frac{1}{i^s} \leq \int_{m-1}^{\infty} i^{-s} di = \frac{1}{(s-1)(m-1)^{s-1}}. \end{align} \begin{theorem} \label{CopiesK3} For every surface $\Sigma$ of Euler genus $g$, $$3n+ (\sqrt{6}-o(1)) g^{3/2} \leq C(K_3,\Sigma,n) \leq 3n+ \frac{21}{2} g^{3/2} + O(g\log g),$$ where the lower bound holds for all $n\geq\sqrt{6g}$ and the upper bound holds for all $n$. \end{theorem} \begin{proof} First we prove the lower bound. Because of the $o(1)$ term we may assume that $g\geq 4$. Let $p:=\floor{\frac12 (7+\sqrt{24g+1})}$. Note that $p\geq 8$ and $p-\frac52>\sqrt{6g}$. The Map Colour Theorem~\citep{Ringel74} says that $K_p$ embeds in $\Sigma$. To obtain a graph with $n$ vertices embedded in $\Sigma$ repeat the following step $n-p$ times: choose a face $f$ and add a new vertex `inside' $f$ adjacent to all the vertices on the boundary of $f$. Each new vertex creates at least three new triangles. Thus $C(K_3,\Sigma,n)\geq 3(n-p) + \binom{p}{3}$ for $n\geq p$. Since $p\geq 8$ we have $\binom{p}{3} - 3p \geq \frac16(p-\frac52)^3 \geq \sqrt{6}g^{3/2}$. Thus $C(K_3,\Sigma,n)\geq 3n + \sqrt{6}g^{3/2}$. To prove the upper bound, by \cref{TriangulationK3}, it suffices to consider an $n$-vertex triangulation $G$ of $\Sigma$. First suppose that $n>13g$. Then $G$ contains an edge $e$ so that $G/e$ is another triangulation~\citep{JoretWood-JCTB10}. Then $C(K_3,G)\leq C(K_3,G/e)+3$. Since $G/e$ has $n-1$ vertices, the result follows by induction. Now assume that $n\leq 13g$. Let $v_1,\dots,v_n$ be a vertex ordering of $G$, where $v_i$ has minimum degree in $G_i:=G[\{v_1,\dots,v_i\}]$. By Euler's formula, $i\cdot \deg_{G_i}(v_i) \leq 2|E(G_i)| \leq 6(i+g)$, implying $$\deg_{G_i}(v_i)\leq 6\left(1+ \frac{g}{i}\right).$$ Let $m:=\ceil{3\sqrt{g}}$. The number of triangles $v_av_bv_i$ with $a<b<i\leq m$ is at most $\binom{m}{3} \leq \binom{3\sqrt{g}+1}{3} \leq \frac{9}{2} g^{3/2}$. Charge each triangle $v_av_bv_i$ with $a<b<i$ and $i\geq m+1$ to vertex $v_i$. For $m+1\leq i\leq n$, the number of triangles charged to $v_i$ is at most $$\binom{\deg_{G_i}(v_i)}{2}<18\left(1+\frac{g}{i}\right)^2=18\left(1+\frac{2g}{i}+\frac{g^2}{i^2}\right).$$ Thus \begin{align*} C(K_3,G) & \leq \frac{9}{2}g^{3/2} + 18\sum_{i=m+1}^{n} \left(1+\frac{2g}{i}+\frac{g^2}{i^2} \right)\\ & \leq \frac{9}{2}g^{3/2} + 18n + 36g(\ln(n)+1) + 18g^2 \sum_{i\geq m+1}\frac{1}{i^2}. \end{align*} By \eqref{NewSumReciprocals} with $s=2$, \begin{equation*} C(K_3,G) \leq \frac{9}{2}g^{3/2} + 18n + 36g + 36g\ln(n) + \frac{18g^2}{m} . 
\end{equation*} Since $m\geq 3\sqrt{g}$ and $n\leq 13g$, \begin{equation*} C(K_3,G) \leq \frac{9}{2} g^{3/2} + 270g + 36g\ln(13g) + 6g^{3/2} = \frac{21}{2} g^{3/2} + 270g + 36g\ln(13g).\qedhere \end{equation*} \end{proof} \subsection{Copies of $K_4$} In this section, we consider the case $H=K_4$, and define the \emph{excess} of a graph $G$ to be $C(K_4,G)-|V(G)|$. \begin{lemma} \label{TriangulationK4} For each surface $\Sigma$, every graph embeddable in $\Sigma$ with maximum excess is a triangulation of $\Sigma$. \end{lemma} \begin{proof} Let $G$ be a graph embedded in $\Sigma$ with maximum excess. We claim that $G$ is a triangulation. Suppose that some facial walk $F$ contains non-adjacent vertices $v$ and $w$. Let $G'$ be the graph obtained from $G$ by adding the edge $vw$. Thus $C(K_4,G')\geq C(K_4,G)$. If two common neighbours of $v$ and $w$ are adjacent, then $C(K_4,G')>C(K_4,G)$, implying that the excess of $G'$ is greater than the excess of $G$, which contradicts the choice of $G$. Now assume that no two common neighbours of $v$ and $w$ are adjacent. Let $G'':=G'/vw$. Every $K_4$ subgraph in $G'$ is also in $G''$. Thus $C(K_4,G'')\geq C(K_4,G')\geq C(K_4,G)$. Since $|V(G'')|<|V(G)|$, the excess of $G''$ is greater than the excess of $G$, which contradicts the choice of $G$. Now assume that every facial walk induces a clique in $G$. Suppose that some facial walk $F$ has at least four distinct vertices. Let $G'$ be the embedded graph obtained from $G$ by adding one new vertex `inside' the face adjacent to four distinct vertices of $F$. Thus $G'$ is embeddable in $\Sigma$, has $|V(G)|+1$ vertices, has at least $C(K_4,G)+\binom{4}{3}=C(K_4,G)+4$ copies of $K_4$, and thus has excess at least the excess of $G$ plus $3$. This contradicts the choice of $G$. Now assume that every facial walk in $G$ has at most three distinct vertices. Suppose that some facial walk $F$ is not a triangle. By \cref{ThreeDistinctVertices}, $F=(u,v,w,u,v,w)$. Let $G'$ be the graph obtained from $G$ by adding two new adjacent vertices $p$ and $q$, where $p$ is adjacent to the first $u,v,w$ sequence in $F$, and $q$ is adjacent to the second $u,v,w$ sequence in $F$. So $G'$ is embeddable in $\Sigma$ and has $|V(G)|+2$ vertices. If $S$ is a non-empty subset of $\{p,q\}$ and $T\subseteq \{u,v,w\}$ with $|S|+|T|=4$, then $S\cup T$ induces a copy of $K_4$ in $G'$ but not in $G$. There are $\binom{2}{2}\binom{3}{2}+\binom{2}{1}\binom{3}{3}=3+2=5$ such copies. Thus $C(K_4,G')\geq C(K_4,G)+5$ and the excess of $G'$ is at least the excess of $G$ plus 3, which contradicts the choice of $G$. Therefore $G$ is a triangulation of $\Sigma$. \end{proof} \begin{theorem} \label{ExtremalK4} Let $\phi$ be the maximum excess of an irreducible triangulation of $\Sigma$. Let $X$ be the set of irreducible triangulations of $\Sigma$ with excess $\phi$. Then the excess of every graph $G$ embeddable in $\Sigma$ is at most $\phi$. Moreover, the excess of $G$ equals $\phi$ if and only if $G$ is obtained from some graph in $X$ by repeatedly splitting triangles. \end{theorem} \begin{proof} We proceed by induction on $|V(G)|$. By \cref{TriangulationK4}, we may assume that $G$ is a triangulation of $\Sigma$. If $G$ is irreducible, then the claim follows from the definition of $X$ and $\phi$. Otherwise, some edge $vw$ of $G$ is in exactly two triangles $vwx$ and $vwy$. By induction, the excess of $G/vw$ is at most $\phi$.
Moreover, the excess of $G/vw$ equals $\phi$ if and only if $G/vw$ is obtained from some graph $H\in X$ by repeatedly splitting triangles. Observe that every clique of $G$ that is not in $G/vw$ is in $\{A\cup\{w\}:A\subseteq\{x,v,y\}\}$. Thus $C(K_4,G)\leq C(K_4,G/vw)+1$. Moreover, equality holds if and only if $xvy$ is a triangle. It follows from the definition of excess that the excess of $G$ is at most $\phi$. If the excess of $G$ equals $\phi$, then the excess of $G/vw$ equals $\phi$, and $xvy$ is a triangle, and $G$ is obtained from $H$ by repeatedly splitting triangles. Conversely, if $G$ is obtained from some $H\in X$ by repeatedly splitting triangles, then $xvy$ is a triangle and $G/vw$ is obtained from $H$ by repeatedly splitting triangles. By induction, the excess of $G/vw$ equals $\phi$, implying the excess of $G$ equals $\phi$. \end{proof} Since every irreducible triangulation of a surface $\Sigma$ with Euler genus $g$ has $O(g)$ vertices~\citep{NakaOta-JGT95,JoretWood-JCTB10}, \cref{ExtremalK4} implies that $C(K_4,\Sigma,n)\leq n+O(g^4)$. We now show that $C(K_4,\Sigma,n)=n+\Theta(g^{2})$. \begin{theorem} \label{CopiesK4} For every surface $\Sigma$ of Euler genus $g$, $$n+\frac32 g^{2} \leq C(K_4,\Sigma,n) \leq n + \frac{283}{24}g^2 + O(g^{3/2}),$$ where the lower bound holds for $g\geq 1$ and $n\geq\sqrt{6g}$, and the upper bound holds for all $n$. \end{theorem} \begin{proof} First we prove the lower bound. If $\Sigma=\mathbb{N}_2$ then let $p:=6$. Otherwise, let $p:=\floor{\frac12 (7+\sqrt{24g+1})}$. Since $g\geq 1$ we have $p\geq 6$. The Map Colour Theorem~\citep{Ringel74} says that $K_p$ embeds in $\Sigma$. To obtain a graph with $n$ vertices embedded in $\Sigma$, repeat the following step $n-p$ times: choose a face $f$ and add a new vertex `inside' $f$ adjacent to all the vertices on the boundary of $f$. Each new vertex creates at least one new copy of $K_4$ (since the boundary of each face is always a clique on at least three vertices). Thus $C(K_4,\Sigma,n)\geq n-p + \binom{p}{4}$ for $n\geq p$. Since $\binom{p}{4}-p\geq \frac{1}{24}(p-\frac52)^4$ and $p-\frac52>\sqrt{6g}$ we have $C(K_4,\Sigma,n)\geq n + \frac{1}{24}(\sqrt{6g})^4=n + \frac32 g^2$. Now we prove the upper bound. The claim is trivial for $g=0$, so now assume that $g\geq 1$. By \cref{ExtremalK4}, it suffices to consider an irreducible triangulation $G$. \citet{JoretWood-JCTB10} proved that $n:=|V(G)|\leq 13g$. Let $v_1,\dots,v_n$ be a vertex ordering of $G$, where $v_i$ has minimum degree in $G_i:=G[\{v_1,\dots,v_i\}]$. By Euler's formula, $$i\cdot \deg_{G_i}(v_i) \leq 2|E(G_i)| \leq 6(i+g),$$ and $$\deg_{G_i}(v_i) \leq 6\left(1+\frac{g}{i}\right).$$ Define $m:=\ceil{4\sqrt{g}}$. The number of copies of $K_4$ of the form $v_av_bv_cv_i$ with $a<b<c<i\leq m$ is at most $\binom{m}{4} \leq \binom{4\sqrt{g}+1}{4} \leq \frac{32}{3}g^2$. Charge each copy $v_av_bv_cv_i$ with $a<b<c<i$ and $i\geq m+1$ to vertex $v_i$. For $m+1\leq i\leq n$, the number of copies charged to $v_i$ is at most $$\binom{\deg_{G_i}(v_i)}{3}<36\left(1+\frac{g}{i}\right)^3 = 36 \left( \left(\frac{g}{i}\right)^3 + 3 \left(\frac{g}{i}\right)^2 + 3 \left(\frac{g}{i}\right) + 1 \right).$$ In total, $$C(K_4,G) \leq \frac{32}{3}g^2 + 36 \sum_{i=m+1}^n \left( \left(\frac{g}{i}\right)^3 + 3 \left(\frac{g}{i}\right)^2 + 3 \left(\frac{g}{i}\right) + 1 \right). $$ By \eqref{NewSumReciprocals} with $s=2$ and $s=3$, \begin{equation*} C(K_4,G) \leq \frac{32}{3}g^2 + 36 \left( \frac{g^3}{2m^2} + \frac{3g^2}{m} + 3g( \ln n + 1) + n \right).
\end{equation*} Since $m\geq 4\sqrt{g}$ and $n\leq 13g$, \begin{align*} C(K_4,G) & \leq \frac{32}{3}g^2 + 36 \left( \frac{g^2}{32} + \frac{3g^{3/2}}{4} + 3g( \ln (13g) + 1) + 13g \right)\\ & = \frac{283}{24}g^2 + 27 g^{3/2} + 108 g( \ln (13g) + 1) + 468 g. \qedhere \end{align*} \end{proof} \subsection{General Complete Graph} Now consider the case when $H=K_s$ for some $s\ge 5$. \cref{Main} shows that $C(K_s,\Sigma,n)$ is bounded for fixed $s$ and $\Sigma$. We now show how to determine $C(K_s,\Sigma,n)$ more precisely. \begin{theorem} \label{ExtremalCompleteGraph} For every integer $s\geq5$ and surface $\Sigma$ there is an irreducible triangulation $G$ such that $C(K_s,G)=\max_n C(K_s,\Sigma,n)$. \end{theorem} \begin{proof} Let $q:=\max_n C(K_s,\Sigma,n)$. Let $G_0$ be a graph embedded in $\Sigma$ with $C(K_s,G_0)=q$. As described in the proof of \cref{TriangulationK3}, we can add edges and vertices to $G_0$ to create a triangulation $G$ of $\Sigma$. Adding edges and vertices does not remove copies of $K_s$. Thus $C(K_s,G)=q$. If $G$ is irreducible, then we are done. Otherwise, some edge $vw$ of $G$ is in exactly two triangles $vwx$ and $vwy$. Let $G':=G/vw$. Then $G'$ is another triangulation of $\Sigma$. Observe that every clique of $G$ that is not in $G'$ is in $\{A\cup\{w\}:A\subseteq\{x,v,y\}\}$. Each such clique has at most four vertices. Thus $C(K_s,G')=C(K_s,G)=q$. Repeating this step, we eventually obtain an irreducible triangulation $G''$ with $C(K_s,G'')=q$. \end{proof} We now prove a precise bound on $C(K_s,\Sigma,n)$, making no effort to optimise the constant 300. \begin{theorem} \label{CopiesKs} For every integer $s\geq 5$ and surface $\Sigma$ of Euler genus $g$ and for all $n$, $$\left(\frac{\sqrt{6 g}}{s}\right)^s \leq C(K_s,\Sigma,n) \leq \left(\frac{300 \sqrt{g} }{s}\right)^s,$$ where the lower bound holds for all $n\geq\sqrt{6g}\geq s$ and the upper bound holds for all $n$. \end{theorem} \begin{proof} For the lower bound, it follows from the Map Colour Theorem~\citep{Ringel74} that $K_p$ embeds in $\Sigma$ where $p:=\ceil{\sqrt{6g}}$. Thus, for $n\geq p\geq s$, $$C(K_s,\Sigma,n)\geq \binom{\sqrt{6g}}{s}\geq\left(\frac{\sqrt{6g}}{s}\right)^s.$$ Now we prove the upper bound. The claim is trivial for $g=0$, so assume that $g\geq 1$. By \cref{ExtremalCompleteGraph}, it suffices to consider an irreducible triangulation $G$ of $\Sigma$. \citet{JoretWood-JCTB10} proved that $n:=|V(G)|\leq 13g$. Let $v_1,\dots,v_n$ be a vertex ordering of $G$, where $v_i$ has minimum degree in $G_i:=G[\{v_1,\dots,v_i\}]$. By Euler's formula, $$i\cdot \deg_{G_i}(v_i) \leq 2|E(G_i)| \leq 6(i+g)\leq 6(n+g)\leq 84 g.$$ Define $m:=\ceil{\sqrt{g}}$. The number of copies of $K_s$ in $G[\{v_1,\dots,v_m\}]$ is at most $$\binom{m}{s}\leq \left(\frac{2e \sqrt{g} }{s}\right)^s\leq \left(\frac{2e}{s}\right)^s g^{s/2}.$$ Charge each remaining copy $X$ of $K_s$ to the rightmost vertex of $X$ (with respect to the ordering $v_1,\dots,v_n$). For $m+1\leq i\leq n$, the number of copies of $K_s$ charged to $v_i$ is at most \begin{equation*} \binom{\deg_{G_i}(v_i)}{s-1} \leq \left( \frac{e\deg_{G_i}(v_i)}{s-1} \right)^{s-1} \leq \left(\frac{84eg}{i(s-1)}\right)^{s-1}. \end{equation*} In total, \begin{equation*} C(K_s,G)\leq \left(\frac{2e}{s}\right)^s g^{s/2} + \left(\frac{84eg}{s-1}\right)^{s-1} \sum_{i\geq m+1}\frac{1}{i^{s-1}}. \end{equation*} By \eqref{NewSumReciprocals}, \begin{equation*} C(K_s,G) \leq \left(\frac{2e}{s}\right)^s g^{s/2} + \left(\frac{84eg}{s-1}\right)^{s-1} \frac{1}{(s-2)m^{s-2}} .
\end{equation*} Since $m\geq\sqrt{g}$, \begin{equation*} C(K_s,G) \leq \left(\frac{2e}{s}\right)^s g^{s/2} + \left(\frac{84eg}{s-1}\right)^{s-1} \!\!\! \frac{1}{(s-2)\,g^{(s-2)/2}} \leq \left(\frac{300\sqrt{g}}{s}\right)^s. \hfill\qedhere \end{equation*} \end{proof} \subsection{Computational Results} For $\Sigma\in\{\mathbb{S}_0,\mathbb{S}_1, \mathbb{S}_2,\mathbb{N}_1, \mathbb{N}_2, \mathbb{N}_3,\mathbb{N}_4\}$, we use \cref{TriangulationK3,TriangulationK4,ExtremalCompleteGraph}, the lists of all irreducible triangulations~\citep{Lavrenchenko,Sulanke-KleinBottle,LawNeg-JCTB97,Sulanke-Generating}, and an elementary computer program to count cliques (a sketch of such a program is given at the end of this subsection) to obtain the exact results for $C(K_s,\Sigma,n)$ shown in \cref{Exact}. \begin{table}[H] \caption{\label{Exact}The maximum number of copies of $K_s$ in an $n$-vertex graph embeddable in surface $\Sigma$.} \medskip \begin{tabular}{c|ccccccccc|c} \hline $\Sigma$ & $s=0$ & $s=1$ & $s=2$ & $s=3$ & $s=4$ & $s=5$ & $s=6$ & $s=7$ & $s=8$ & total \\ \hline $\mathbb{S}_0$ & 1 & $n$ & $3n-6$ & $3n-8$ & $n-3$ & & & & & $8n-16$ \\ $\mathbb{S}_1$ & 1 & $n$ & $3n$ & $3n+14$ & $n+28$ & $21$ & $7$ & $1$ & & $8n+72$\\ $\mathbb{S}_2$ & 1 & $n$ & $3n+6$ & $3n+38$ & $n+68$ & $58$ & $28$ & $8$ & $1$ & $8n+208$ \\ $\mathbb{N}_1$ & 1 & $n$ & $3n-3$ & $3n+2$ & $n+9$ & $6$ & $1$ & & & $8n+16$ \\ $\mathbb{N}_2$ & 1 & $n$ & $3n$ & $3n+12$ & $n+21$ & $12$ & $2$ & & & $8n+48$\\ $\mathbb{N}_3$ & 1 & $n$ & $3n+3$ & $3n+24$ & $n+40$ & $27$ & $8$ & $1$ & & $8n+104$ \\ $\mathbb{N}_4$ & 1 & $n$ & $3n+6$ & $3n+39$ & $n+71$ & $61$ & $29$ & $8$ & $1$ & $8n+216$\\ \hline \end{tabular} \end{table} Let $C(G)$ be the total number of complete subgraphs in a graph $G$; that is, $C(G)=\sum_{s\geq 0}C(K_s,G)$. For a surface $\Sigma$, let $C(\Sigma,n)$ be the maximum of $C(G)$ taken over all $n$-vertex graphs $G$ embeddable in $\Sigma$. \citet{DFJSW} proved that $C(\Sigma,n)-8n$ is bounded for fixed $\Sigma$, which is implied by \cref{CopiesK3,CopiesK4,CopiesKs}. The following conjectures have been verified for each of $\mathbb{S}_0$, $\mathbb{S}_1$, $\mathbb{S}_2$, $\mathbb{N}_1$, $\mathbb{N}_2$, $\mathbb{N}_3$, $\mathbb{N}_4$. \begin{conjecture} For every surface $\Sigma$ and integer $n$, $$C(\Sigma,n)=\sum_{s\geq 0} C(K_s,\Sigma,n).$$ \end{conjecture} \begin{conjecture} If $C(G)=C(\Sigma,n)$ for some $n$-vertex graph $G$ embeddable in a surface $\Sigma$, then for $s\geq 0$, $$C(K_s,G) = C(K_s,\Sigma,n).$$ \end{conjecture} Conversely, we conjecture that maximising the number of triangles is equivalent to maximising the total number of complete subgraphs. More precisely: \begin{conjecture} \label{DeterminedByTriangles} If $C(K_3,G)=C(K_3,\Sigma,n)$ for some $n$-vertex graph $G$ embeddable in a surface $\Sigma$, then $$C(G) = C(\Sigma,n).$$ \end{conjecture} Note that $K_3$ cannot be replaced by some arbitrary complete graph in \cref{DeterminedByTriangles}. For example, every graph embeddable in $\mathbb{N}_3$ contains at most one copy of $K_7$, but there are irreducible triangulations $G$ of $\mathbb{N}_3$ that contain $K_7$ and do not maximise the total number of cliques (that is, $C(G)<8|V(G)|+104$). Similarly, every graph embeddable in $\mathbb{N}_4$ contains at most $8$ copies of $K_7$, but there are irreducible triangulations $G$ of $\mathbb{N}_4$ for which $C(K_7,G)=8$ and $C(G)<8|V(G)|+216$.
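The clique counting referred to above is elementary. As an illustration (this is a minimal sketch, not the program actually used to compile \cref{Exact}), the following Python fragment counts the copies of $K_s$ in a graph given by its edge list, generating each clique exactly once by extending along a fixed vertex ordering; the vertex set is taken to be the set of vertices incident to at least one edge.

\begin{verbatim}
from collections import defaultdict

def clique_counts(edges):
    # Build adjacency sets from the edge list.
    adj = defaultdict(set)
    for v, w in edges:
        adj[v].add(w)
        adj[w].add(v)
    counts = defaultdict(int)
    counts[0] = 1  # the empty clique K_0

    def extend(size, candidates):
        # `candidates` holds the vertices adjacent to every vertex of
        # the current clique and later in the fixed ordering, so each
        # clique is generated exactly once.
        for i, v in enumerate(candidates):
            counts[size + 1] += 1
            extend(size + 1,
                   [w for w in candidates[i + 1:] if w in adj[v]])

    extend(0, sorted(adj))
    return dict(counts)

# Example: the tetrahedron K_4, the unique irreducible
# triangulation of the sphere.
K4 = [(a, b) for a in range(4) for b in range(a + 1, 4)]
print(clique_counts(K4))  # {0: 1, 1: 4, 2: 6, 3: 4, 4: 1}
\end{verbatim}

The sample output $1,4,6,4,1$ for $s=0,\dots,4$ agrees with the $\mathbb{S}_0$ row of \cref{Exact} at $n=4$.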
\section{Minor-Closed Classes} \label{MinorClosedClasses} Consider the following natural open problem extending our results for graphs on surfaces: For graphs $H$ and $X$ and an integer $n$, what is the maximum number of copies of $H$ in an $n$-vertex $X$-minor-free graph? This problem has been extensively studied when $H$ and $X$ are complete graphs~\citep{Wood16,FOT10,LO15,FW17,NSTW06,RW09}. \citet{Eppstein93} proved the following result when $X$ is a complete bipartite graph and $H$ is highly connected. \begin{theorem}[\cite{Eppstein93}] \label{flopnumber0} Fix positive integers $s\leq t$ and a $K_{s,t}$-minor-free graph $H$ with no $(\leq s-1)$-separation. Then every $n$-vertex $K_{s,t}$-minor-free graph contains $O(n)$ copies of $H$. \end{theorem} What happens when $H$ is not highly connected? We have the following lower bound. Fix positive integers $s\leq t$ and a $K_{s,t}$-minor-free graph $H$. If $H$ has no $(\leq s-1)$-separation, then let $k:=1$; otherwise, let $k$ be the maximum number of pairwise independent $(\leq s-1)$-separations in $H$. The construction in \cref{LowerBound} generalises to give $n$-vertex $K_{s,t}$-minor-free graphs containing $\Theta(n^k)$ copies of $H$. The following question naturally arises: Does every $n$-vertex $K_{s,t}$-minor-free graph contain $O(n^k)$ copies of $H$? By \cref{flopnumber0}, the answer is `yes' if $k=1$. The methods presented in this paper show the answer is `yes' if $s\leq 3$. We omit the proof, since it is the same as for graphs embedded on a surface, except that in the $k=1$ case we use \cref{flopnumber0} instead of the additivity of Euler genus (\cref{Additivity}). When $H$ is a tree, this problem specialises as follows: Fix a tree $T$ and positive integers $s\leq t$. Let $\beta(T)$ be the size of the largest independent set of vertices in $T$, each with degree at most $s-1$. The construction in \cref{TreeCopies} generalises to give $n$-vertex $K_{s,t}$-minor-free graphs containing $\Omega(n^{\beta(T)})$ copies of $T$. Does every $n$-vertex $K_{s,t}$-minor-free graph contain $O(n^{\beta(T)})$ copies of $T$? \section{Homomorphism Inequalities} \label{Homomorphism} This section reinterprets the results of this paper in terms of homomorphism inequalities, and presents some open problems that arise from this viewpoint. For two graphs $H$ and $G$, a \emph{homomorphism} from $H$ to $G$ is a function $\phi:V(H)\rightarrow V(G)$ that preserves adjacency; that is, $\phi(v)\phi(w)$ is an edge of $G$ for each edge $vw$ of $H$. Let $\hom(H,G)$ be the number of homomorphisms from $H$ to $G$. For example, $\hom(H,K_t)>0$ if and only if $H$ is $t$-colourable. In the other direction, $\hom(K_1, G)$ is the number of vertices in $G$, and $\hom(K_2, G)$ is twice the number of edges in $G$, and $\hom(K_3,G)$ is 6 times the number of triangles in $G$. Homomorphism inequalities encode bounds on the number of copies of given graphs in a host graph. Much of extremal graph theory can be written in terms of homomorphism inequalities, and a beautiful theory has recently developed that greatly simplifies the task of proving such inequalities; see~\citep{Lovasz12}. Consider the following concrete example. \citet{Mantel07} proved that every $n$-vertex graph with more than $\frac{n^2}{4}$ edges has a triangle, which is tight for the complete bipartite graph $K_{n/2,n/2}$. \citet{Goodman59} strengthened Mantel's Theorem by providing a lower bound of $\frac{m}{3} ( \frac{4m}{n} - n )$ on the number of triangles in an $n$-vertex $m$-edge graph. 
Goodman's Theorem can be rewritten as the following homomorphism inequality: \begin{equation} \label{Goodman} \hom(K_1, G)\hom(K_3, G) \geq \hom(K_2, G)(2\hom(K_2, G) - \hom(K_1, G)^2). \end{equation} (To see this, write $\hom(K_1,G)=n$, $\hom(K_2,G)=2m$ and $\hom(K_3,G)=6t$, where $t$ is the number of triangles in $G$; then \eqref{Goodman} becomes $6tn \geq 2m(4m-n^2)$, which is Goodman's bound multiplied by $6n$.) In a celebrated application of the flag algebra method, \citet{Razborov08} generalised \eqref{Goodman} by determining the minimum number of triangles in an $n$-vertex $m$-edge graph. The minimum number of copies of $K_r$ in an $n$-vertex $m$-edge graph (the natural extension of Tur\'an's Theorem) was a notoriously difficult question~\citep{LovSim76,LovSim83}, recently solved for $r=4$ by \citet{Nikiforov11} and in general by \citet{Reiher16}. All of these results can be written in terms of homomorphism inequalities. The results of this paper show that for every fixed graph $H$ with flap-number $k$, and for every graph $G$ that embeds in a fixed surface $\Sigma$, $$\hom(H,G) \leq c_1 \hom(K_1,G)^k;$$ and if $H$ embeds in $\Sigma$, then $\hom(H,G) \geq c_2 \hom(K_1,G)^k$ for infinitely many graphs $G$ that also embed in $\Sigma$. Here is another example of a homomorphism inequality for graphs on surfaces. Euler's Formula implies\footnote{Let $G$ be a graph with $n$ vertices, $m$ edges and $c$ components. Let $\Sigma$ be a surface with Euler genus $g$. Assume that $G$ embeds in $\Sigma$ with $t$ triangular faces and $f$ non-triangular faces. By Euler's formula, $n-m+t+f=1+c-g$. Double-counting edges, $3t+4f \leq 2m$. Thus $4(m-n-t + 1+c-g) = 4f \leq 2m -3t$ and $t \geq 2m-4n + 4+4c - 4g \geq 2(m-2n + 4 - 2g)$, as claimed.} that the number of triangles in an $n$-vertex $m$-edge graph with Euler genus $g$ is at least $2(m-2n+4-2g)$. This result is an analogue of Goodman's Theorem for graphs $G$ of Euler genus $g$, and can be written as the following homomorphism inequality: \begin{equation*} \hom(K_3,G) \geq 6\hom(K_2,G)-24\hom(K_1,G) + 48 - 24g. \end{equation*} We consider it an interesting line of research to prove similar homomorphism inequalities in other minor-closed classes. The following open problems naturally arise. \begin{itemize} \item Is there a method (akin to flag algebras~\citep{Razborov08} or graph algebras~\citep{Lovasz12}) for systematically proving homomorphism inequalities in minor-closed classes? \item \citet{HN11} proved that it is undecidable to test the validity of a linear homomorphism inequality. In which minor-closed classes is it decidable to test the validity of a linear homomorphism inequality? \end{itemize} These questions are open even for forests; see~\citep{BEMS16,BL16,CSW17} for related results. Closely related to the study of graph homomorphisms is the theory of graph limits and graphons~\citep{Lovasz12}. While this theory focuses on dense graphs, a theory of graph limits for sparse graphs is emerging. For example, results are known for bounded degree graphs~\citep{BCKL13,HLS14}, planar graphs~\citep{IO01,GN13}, and bounded tree-depth graphs~\citep{NO20}. The above questions regarding graph homomorphisms parallel the theory of graph limits in sparse classes. \subsection*{Acknowledgement} Thanks to Casey Tompkins for pointing out reference~\citep{GPSTZc}. \citet{GPSTZc} prove \cref{TreeCopies} in the case $\Sigma=\mathbb{S}_0$, and conjecture that $C(H,\mathbb{S}_0,n)=\Theta(n^k)$ for some integer $k=k(H)$, which is implied by \cref{Main}.
\section{Introduction} It is well known that many interesting automorphic $L$-functions $L(\pi, s)$ have $p$-adic counterparts; and that these can often be extended to multi-variable $p$-adic $L$-functions, in which the automorphic representation $\pi$ itself also varies in a $p$-adic family of some kind. In the literature so far, the $p$-adic families considered have been \emph{Hida families}, or more generally \emph{Coleman families} -- families of automorphic representations which are principal series at $p$, together with the additional data of a ``$p$-refinement'' (a choice of one among the Weyl-group orbit of characters from which $\pi_p$ is induced). In Galois-theoretic terms, this corresponds to a full flag of subspaces in the local Galois representation at $p$ (or in its $(\varphi, \Gamma)$-module, for Coleman families). The parameter spaces for these families are known as \emph{eigenvarieties}. The aim of this note is to give an example of a $p$-adic $L$-function varying in a family of a rather different type: it arises from a family of automorphic representations of $\GL_2\times \GL_2$, but the parameter space for this family (arising from Galois deformation theory) has strictly bigger dimension than the eigenvariety for this group -- it has dimension 4, while the eigenvariety in this case has dimension 3. This corresponds to the fact that a $p$-refinement is a little more data than is actually needed to define a $p$-adic $L$-function: rather than a full flag, it suffices to have a single local subrepresentation of a specific dimension (a Panchishkin subrepresentation), which is a weaker condition and hence permits variation over a larger parameter space. We also sketch some generalisations of the result which can be proved by the same methods. We conclude with some speculative conjectures whose aim is to identify the largest parameter spaces on which $p$-adic $L$-functions and Euler systems can make sense. We conjecture that, given a reductive group $G$ and parabolic subgroup $P$ (and appropriate auxiliary data), there should be two natural $p$-adic formal schemes, the \emph{big} and \emph{small} $P$-nearly-ordinary eigenvarieties. These coincide if $P$ is a Borel subgroup, but not otherwise; if $G = \GL_2$ and $P$ is the whole of $G$, then the big eigenvariety is the 3-dimensional Galois deformation space of a modular mod $p$ representation (with no local conditions at $p$). In general, we expect that the ``natural home'' of $p$-adic $L$-functions -- and also of Euler systems -- should be a big ordinary eigenvariety for an appropriate parabolic subgroup. \subsubsection*{Acknowledgements} It is a pleasure to dedicate this article to Bernadette Perrin-Riou, in honour of her immense and varied contributions to number theory in general, and to $p$-adic $L$-functions in particular, which have been an inspiration to me throughout my career. I would also like to thank Daniel Barrera Salazar, Yiwen Ding, and Chris Williams for informative discussions in connection with this paper, and Sarah Zerbes for her feedback on an earlier draft. \section{Families of Galois representations} \subsection{The Panchishkin condition} Let $L$ be a finite extension of $\mathbf{Q}_p$ and let $V$ be a finite-dimensional $L$-vector space with a continuous linear action of $\Gamma_{\mathbf{Q}} = \Gal(\overline{\mathbf{Q}}/\mathbf{Q})$.
Recall that $V$ is said to be \emph{geometric} if it is unramified at all but finitely many primes and de Rham at $p$; in particular it is Hodge--Tate at $p$, so we may consider its Hodge--Tate weights. (In this paper, we adopt the common, but not entirely universal, convention that the cyclotomic character has Hodge--Tate weight $+1$.) \begin{definition} \label{def:panch} We say $V$ satisfies the \textbf{Panchishkin condition} if it is geometric, and the following conditions hold: \begin{enumerate} \item We have \[ (\text{number of Hodge--Tate weights $\ge 1$ of $V$}) = \dim V^{(c = +1)}\] where $c \in \Gamma_\mathbf{Q}$ is (any) complex conjugation. \item There exists a subspace $V^+ \subseteq V$ stable under $\Gamma_{\mathbf{Q}_p}$ such that $V^+$ has all Hodge--Tate weights $\ge 1$, and $V / V^+$ has all Hodge--Tate weights $\le 0$. \end{enumerate} \end{definition} \begin{remark} \ \begin{enumerate}[(i)] \item Note that $V^+$ is unique if it exists; we call it the \emph{Panchishkin subrepresentation} of $V$ at $p$. \item If $V$ is the $p$-adic realisation of a motive $M$, then condition (1) is equivalent to requiring that $L(M, 0)$ is a critical value of the $L$-function $L(M, s)$ in the sense of \cite{deligne79}. \item The Panchishkin condition is closely related to the concept of \emph{near-ordinarity}: a representation $V$ is said to be \emph{nearly-ordinary} if it is geometric, and there exists a full flag of subspaces of $V$ such that the Hodge--Tate weights of the graded pieces are in increasing order. However, we want to emphasise here that near-ordinarity is an unnecessarily restrictive hypothesis for the study of $p$-adic $L$-functions.\qedhere \end{enumerate} \end{remark} \subsection{Panchishkin families} By a ``Panchishkin family'', we mean a family of $p$-adic Galois representations \emph{equipped with a family of Panchishkin subobjects}. For simplicity, we shall suppose here that $p > 2$, so that the action of complex conjugation is diagonalisable. Let $\mathcal{O}$ be the ring of integers of $L$, and $\mathbf{F}$ its residue field. We let $\underline{\mathrm{CNL}}_{\cO}$ be the category of complete Noetherian local $\mathcal{O}$-algebras with residue field $\mathbf{F}$. \begin{definition} \label{def:panchfam} Let $\mathcal{R}$ be an object of $\underline{\mathrm{CNL}}_{\cO}$. A \emph{Panchishkin family of Galois representations} over $\mathcal{R}$ consists of the following data: \begin{itemize} \item a finite free $\mathcal{R}$-module $\mathcal{V}$ with an $\mathcal{R}$-linear continuous action of $\Gamma_{\mathbf{Q}}$, unramified at almost all primes. \item an $\mathcal{R}$-direct-summand $\mathcal{V}^+ \subseteq \mathcal{V}$ stable under $\Gamma_{\mathbf{Q}_p}$, of $\mathcal{R}$-rank equal to that of $\mathcal{V}^{c = 1}$, \end{itemize} satisfying the following condition: \begin{itemize} \item The set $\Sigma(\mathcal{V}, \mathcal{V}^+)$ of maximal ideals $x$ of $\mathcal{R}[1/p]$ such that $\mathcal{V}_x$ satisfies the Panchishkin condition and $\mathcal{V}^+_x$ is its Panchishkin subrepresentation is dense in $\operatorname{Spec} \mathcal{R}[1/p]$. \end{itemize} \end{definition} \begin{example}[Cyclotomic twists of a fixed representation] The original examples of Panchishkin families are those of the following form. Let $V$ be an $L$-linear representation of $\Gamma_{\mathbf{Q}}$ satisfying the Panchishkin condition, and $V^\circ$ an $\mathcal{O}$-lattice in $V$ stable under $\Gamma_{\mathbf{Q}}$.
We let $\Lambda$ denote the Iwasawa algebra $\mathcal{O}[[\ZZ_p^\times]]$, and $\mathbf{j}$ the canonical character $\mathbf{Z}_p^\times \to \Lambda^\times$. If $\dim V^{c=1} = \dim V^{c=-1}$, then we can take $\mathcal{R}$ to be the localisation of $\Lambda$ at any of its $(p-1)$ maximal ideals, corresponding to characters $\ZZ_p^\times \to \mathbf{F}^\times$; otherwise, we need to assume our maximal ideal corresponds to a character trivial on $-1$. We can then let $\mathcal{V} = V^\circ \otimes (\chi_{\mathrm{cyc}})^{\mathbf{j}}$, and $\mathcal{V}^+ = V^{\circ+} \otimes (\chi_{\mathrm{cyc}})^{\mathbf{j}}$, where $V^{\circ+} = V^\circ \cap V^+$. By construction, $\Sigma(\mathcal{V}, \mathcal{V}^+)$ contains all points of $\operatorname{Spec} \mathcal{R}[1/p]$ corresponding to characters of the form\footnote{We use additive notation for characters, so $j + \chi$ is a shorthand for the character $z \mapsto z^j \chi(z)$.} $j + \chi$, where $\chi$ is of finite order and $j$ is an integer in some interval containing 0 (depending on the gap between the Hodge--Tate weights of $V^+$ and $V/V^+$). In particular, it is Zariski-dense, as required. (For instance, taking $V$ to be the representation attached to a $p$-ordinary modular eigenform, suitably twisted, one recovers the setting of the classical one-variable cyclotomic $p$-adic $L$-function.) \end{example} The following conjecture is due to Coates--Perrin-Riou \cite{coatesperrinriou89} and Panchishkin \cite{panchishkin94} in the case of cyclotomic twists of a fixed representation. The generalisation to families as above is ``folklore''; we have been unable to locate its first appearance, but it is (for instance) a special case of more general conjectures of Fukaya and Kato \cite{fukayakato06} (who have also investigated the case of non-commutative base rings $\mathcal{R}$, which we shall not attempt to consider here). \begin{conjecture} \label{conj:main} For $(\mathcal{V}, \mathcal{V}^+)$ as above, there exists an element $\mathcal{L}(\mathcal{V}, \mathcal{V}^+) \in \operatorname{Frac} \mathcal{R}$ such that for all $x \in \Sigma(\mathcal{V}, \mathcal{V}^+)$ we have \[ \mathcal{L}(\mathcal{V}, \mathcal{V}^+)(x) = (\text{Euler factor}) \cdot \frac{L(M_x, 0)}{(\text{period})},\] where $M_x$ is the motive whose realisation is $\mathcal{V}_x$. \end{conjecture} If $\mathcal{V}_x$ is semistable at $p$, the expected form of the Euler factor is \[ \det\left[ (1 - \varphi) : \mathbf{D}_{\mathrm{cris}}(V^+)\right] \cdot \det\left[ (1 - p^{-1}\varphi^{-1}): \mathbf{D}_{\mathrm{cris}}(V/V^+)\right]. \] We refer to \cite{fukayakato06} for more details of the interpolation factors involved. \subsection{Euler systems} \label{sect:euler} In \cite{LZ20-localconds}, Zerbes and the present author formulate a slightly more general version of the Panchishkin condition, depending on an integer $r$ with $0 \le r \le \dim V^{c = 1}$, which we call the ``$r$-Panchishkin condition''; the usual Panchishkin condition is the case $r = 0$. The definitions of the previous subsection extend naturally to give a notion of an \emph{$r$-Panchishkin family} $(\mathcal{V}, \mathcal{V}^+)$. We conjectured in \emph{op.cit.}~that when $\mathcal{V}$ is the family of cyclotomic twists of a fixed representation, the $r$-Panchishkin condition was the ``correct'' condition for a family of Euler systems of rank $r$ to exist, taking values in the Galois cohomology of the Tate dual $\mathcal{V}^*(1)$ and satisfying a local condition at $p$ determined by $\mathcal{V}^+$. This extends the conjectures formulated by Perrin-Riou in \cite{perrinriou95}, which correspond to taking $r$ to be the maximal value $\dim V^{c = 1}$ (in which case $\{0\}$ is a Panchishkin subrepresentation).
It is also consistent with the above conjectures of Coates--Perrin-Riou and Panchishkin for $r = 0$, if we understand a ``rank 0 Euler system'' to be a $p$-adic $L$-function. It seems natural to expect that an analogue of \cref{conj:main} should hold for arbitrary $r$-Panchishkin families; and, as in the rank 0 case, one can show that this would follow as a consequence of the very general conjectures of \cite{fukayakato06}. \begin{remark} There are a number of (unconditional) results concerning the variation of Euler systems for families of Galois representations arising from Hida families of automorphic representations, which are examples of nearly-ordinary families; see e.g.~\cite{ochiai05} for Kato's Euler system, and \cite{LLZ14} for the $\GL_2 \times \GL_2$ Beilinson--Flach Euler system. However, the above conjecture predicts that Euler systems should vary in more general families, which are not nearly-ordinary but are still $r$-Panchishkin. Some examples of cyclotomic twist type for $r = 1$ are discussed in \cite{LZ20-localconds}. A much more sophisticated example due to Nakamura, in which $\mathcal{R}$ is the universal deformation space of a 2-dimensional modular Galois representation, is discussed in \S \ref{sect:nakamura} below. \end{remark} \section{Examples from \texorpdfstring{$\GL_2$}{GL(2)}} \subsection{The universal deformation ring} Let $\bar{\rho}: \Gamma_{\mathbf{Q}} \to \GL(\overline{V}) \cong \GL_2(\mathbf{F})$ be a 2-dimensional, odd, irreducible (hence, by Khare--Wintenberger, modular) representation. We shall assume $\bar{\rho}$ satisfies the following: \begin{itemize} \item $\bar{\rho}|_{\Gamma_K}$ is irreducible, where $K = \mathbf{Q}(\zeta_p)$ (Taylor--Wiles condition). \item if $\bar{\rho}|_{\Gamma_{\mathbf{Q}_p}}$ is not absolutely irreducible, with semisimplification $\chi_{1, p} \oplus \chi_{2, p}$ (after possibly extending $\mathbf{F}$), then we have $\chi_{1,p} / \chi_{2, p} \notin \{ 1, \varepsilon_p^{\pm 1}\}$ where $\varepsilon_p$ is the mod $p$ cyclotomic character. \item $\bar{\rho}$ is unramified away from $p$. \end{itemize} Note that the first two assumptions are essential to our method (because they are hypotheses for major theorems which we need to quote). On the other hand, the third is imposed solely for convenience and could almost certainly be dispensed with. \begin{definition} Let $\mathcal{R}(\bar{\rho}) \in \underline{\mathrm{CNL}}_{\cO}$ be the universal deformation ring over $\mathcal{O}$ parametrising deformations of $\bar{\rho}$ as a $\Gamma_{\mathbf{Q}, \{p\}}$-representation, and $\rho: \Gamma_{\mathbf{Q}, \{p\}} \to \GL_2(\mathcal{R}(\bar{\rho}))$ the universal deformation. Let $\mathfrak{X}(\bar{\rho}) = \Spf \mathcal{R}(\bar{\rho})$. \end{definition} \begin{theorem}[B\"ockle, Emerton] \ \label{thm:BE} \begin{itemize} \item The ring $\mathcal{R}(\bar{\rho})$ is a reduced complete intersection ring, and is flat over $\mathcal{O}$ of relative dimension 3. \item We have a canonical isomorphism $\mathcal{R}(\bar{\rho}) \cong \mathcal{T}(\bar{\rho})$, where $\mathcal{T}(\bar{\rho})$ is the localisation at the maximal ideal corresponding to $\bar{\rho}$ of the prime-to-$p$ Hecke algebra acting on the space $\mathcal{S}(1, \mathcal{O})$ of cuspidal $p$-adic modular forms of tame level 1. \end{itemize} \end{theorem} \begin{proof} This is proved in \cite{boeckle01} assuming that $\bar{\rho}|_{\Gamma_{\mathbf{Q}_p}}$ has a twist which is either ordinary, or irreducible and flat.
This was extended to the setting described above (allowing irreducible but non-flat $\bar{\rho}$) by Emerton, see \cite[Theorem 1.2.3]{emerton-localglobal}. \end{proof} \begin{remark} If $\bar{\rho}$ is \emph{unobstructed} in the sense that $H^2\left(\Gamma_{\mathbf{Q}, \{p\}},\Ad(\bar{\rho})\right) = 0$, then $\mathcal{R}(\bar{\rho})$ is isomorphic to a power-series ring in 3 variables over $\mathcal{O}$. It is shown in \cite{weston04} that if $f$ is a fixed newform of weight $\ge 3$, then for all but finitely many primes $\mathfrak{p}$ of the coefficient field $\mathbf{Q}(f)$, the mod $\mathfrak{p}$ representation $\bar{\rho}_{f, \mathfrak{p}}$ is unobstructed. \end{remark} \begin{definition} \ \begin{enumerate} \item[(i)] If $f$ is a classical modular newform of $p$-power level (and any weight) such that $\bar{\rho}_{f, p} = \bar{\rho}$, then $\rho_{f, p}$ is a deformation of $\bar{\rho}$ and hence determines a $\overline{\QQ}_p$-point of $\mathfrak{X}(\bar{\rho})$. We shall call these points \emph{classical}. \item[(ii)] More generally, a $\overline{\QQ}_p$-point of $\mathfrak{X}(\bar{\rho})$ will be called \emph{nearly classical} if the corresponding Galois representation $\rho$ has the form $\rho_{f, p} \otimes (\chi_{\mathrm{cyc}})^{-t}$, for some (necessarily unique) newform $f$ and $t \in \mathbf{Z}$. \end{enumerate} \end{definition} In the setting of (ii), if $t \ge 0$, the Galois representation $\rho_{f, p} \otimes (\chi_{\mathrm{cyc}})^{-t}$ corresponds formally to the nearly-overconvergent $p$-adic modular form $\theta^t(f)$, where $\theta = q \frac{\mathrm{d}}{\mathrm{d}q}$ is the Serre--Tate differential operator on $p$-adic modular forms. Slightly abusively, we denote such a point by $\theta^t(f)$, even if $t < 0$ (in which case $\theta^t(f)$ may not actually exist as a $p$-adic modular form). Theorem 1.2.4 of \cite{emerton-localglobal}, combined with Theorem 0.4 of \cite{pillonistroh16} in the case of equal Hodge--Tate weights, shows that any $\overline{\QQ}_p$-point $\rho$ of $\mathfrak{X}(\bar{\rho})$ which is de Rham at $p$ is a nearly-classical point (as predicted by the Fontaine--Mazur conjecture). \begin{proposition} For any weight $k \ge 2$, modular points corresponding to weight $k$ modular forms are dense in $\mathfrak{X}(\bar{\rho})$. \end{proposition} \begin{proof} This is obvious for $\Spf \mathcal{T}(\bar{\rho})$, since $\mathcal{T}(\bar{\rho})$ can be written as an inverse limit of localisations of Hecke algebras associated to the finite-level spaces $S_k(\Gamma_1(p^n), \mathcal{O})$. Since we have $\mathcal{R}(\bar{\rho}) \cong \mathcal{T}(\bar{\rho})$ by \cref{thm:BE}, the result follows. \end{proof} \begin{remark} Note that a crucial step in the proof of \cref{thm:BE} is to establish that the set of \emph{all} modular points (of any weight) is dense in $\mathfrak{X}(\bar{\rho})$. However, once this theorem is established, we can obtain the above much stronger result \emph{a posteriori}. \end{remark} For later constructions we need the fact that there exists a ``universal modular form'' over $\mathfrak{X}(\bar{\rho})$: \begin{definition} \ \begin{enumerate} \item[(i)] Let $\mathbf{k}: \mathbf{Z}_p^\times \to \mathcal{R}(\bar{\rho})^\times$ be the character such that $\det \rho^{\mathrm{univ}} = (\chi_{\mathrm{cyc}})^{(\mathbf{k} - 1)}$. 
\item[(ii)] Let $\mathcal{G}^{[p]}_{\bar{\rho}}$ be the formal power series \[ \mathcal{G}^{[p]}_{\bar{\rho}} = \sum_{p \nmid n} t_n q^n \in \mathcal{R}(\bar{\rho})[[q]], \] where the $t_n$ are determined by the identity of formal Dirichlet series \[ \sum_{p \nmid n} t_n n^{-s} = \prod_{\ell \ne p} \det\left(1 - \ell^{-s} \rho^{\mathrm{univ}}(\Frob_\ell^{-1})\right)^{-1}.\] \end{enumerate} \end{definition} The specialisation of $\mathcal{G}^{[p]}_{\bar{\rho}}$ at a nearly-classical point $\rho_{f, p} \otimes (\chi_{\mathrm{cyc}})^{-t}$ is precisely the ``$p$-depletion'' $\theta^{t}(f^{[p]})$ of $\theta^{t}(f)$. If $t \ge 0$, this $p$-adic modular form is the image under the unit-root splitting of a classical \emph{nearly-holomorphic} cuspform, in the sense of Shimura. \begin{theorem}[Gouvea] The series $\mathcal{G}_{\bar{\rho}}^{[p]}$ is the $q$-expansion of a $p$-adic modular form with coefficients in $\mathcal{R}(\bar{\rho})$, of tame level 1 and weight-character $\mathbf{k}$, which is a normalised eigenform for all Hecke operators. \end{theorem} \begin{proof} This follows readily from the duality between Hecke algebras and spaces of cusp forms. \end{proof} \subsection{The universal ordinary representation} The following definition is standard: \begin{definition} An \emph{ordinary refinement} of $(\bar{\rho}, \overline{V})$ is a choice of 1-dimensional $\mathbf{F}$-subspace $\overline{V}^+ \subseteq \overline{V}$ stable under $\bar{\rho}(\Gamma_{\mathbf{Q}_p})$, such that the inertia subgroup $I_{\mathbf{Q}_p}$ acts trivially on $\overline{V}^+$. \end{definition} Let us fix a choice of ordinary refinement $\overline{V}^+$. Then there is a natural definition of ordinarity for deformations: we say that a deformation $\rho$ of $\bar{\rho}$ (to some ring $A \in \underline{\mathrm{CNL}}_{\cO}$) is ordinary if $\rho|_{\Gamma_{\mathbf{Q}_p}}$ preserves a rank one $A$-summand lifting $\overline{V}^+$, and the action of $I_{\mathbf{Q}_p}$ on this summand is trivial. (Note that this summand is unique if it exists, since our running hypotheses imply that $\overline{V}/\overline{V}^+$ is not isomorphic to $\overline{V}^+$). \begin{theorem} Suppose $\bar{\rho}$ is ordinary. Then there exists a complete local Noetherian $\mathcal{O}$-algebra representing the functor of ordinary deformations. We let $\mathcal{R}^{\mathrm{ord}}(\bar{\rho})$ be this algebra, and $\mathfrak{X}^{\mathrm{ord}}(\bar{\rho}) = \Spf \mathcal{R}^{\mathrm{ord}}(\bar{\rho})$. \end{theorem} On the ``modular'' side, we can consider the ordinary Hecke algebra $\mathcal{T}^{\mathrm{ord}}(\bar{\rho})$, which is the localisation at $\bar{\rho}$ of the algebra of endomorphisms of $e^{\mathrm{ord}} \cdot \mathcal{S}(1, \ZZ_p)$ generated by all of the Hecke operators (including $U_p$). Then we have isomorphisms \[ \mathcal{R}^{\mathrm{ord}}(\bar{\rho}) \cong \mathcal{T}^{\mathrm{ord}}(\bar{\rho}), \] compatible with the isomorphisms of the previous subsection via the natural maps $\mathcal{R}(\bar{\rho}) \to \mathcal{R}^{\mathrm{ord}}(\bar{\rho})$ and $\mathcal{T}(\bar{\rho}) \to \mathcal{T}^{\mathrm{ord}}(\bar{\rho})$. Note that the composite $\ZZ_p^\times \xrightarrow{\mathbf{k}} \mathcal{R}(\bar{\rho})^\times \to \mathcal{R}^{\mathrm{ord}}(\bar{\rho})^\times$ gives $\mathcal{R}^{\mathrm{ord}}(\bar{\rho})$ the structure of a $\Lambda$-algebra, where $\Lambda = \mathcal{O}[[\mathbf{Z}_p^\times]]$.
So we have a map $\mathbf{k}: \mathfrak{X}^{\mathrm{ord}}(\bar{\rho}) \to \mathfrak{X}_{\mathrm{cyc}} = \Spf \Lambda$. \begin{proposition}[Hida] \ \begin{itemize} \item The ring $\mathcal{R}^{\mathrm{ord}}(\bar{\rho})$ is finite and projective as a $\Lambda$-module, and thus has relative dimension 1 over $\mathcal{O}$. \item If $k \ge 2$ is an integer, and $\chi: \ZZ_p^\times \to \mathcal{O}^\times$ is a Dirichlet character of conductor $p^n$, then the fibre of $\mathfrak{X}^{\mathrm{ord}}(\bar{\rho})$ at $\mathbf{k} = k + \chi$ is \'etale over $L = \operatorname{Frac} \mathcal{O}$, and its geometric points biject with the normalised weight $k$ eigenforms of level $\Gamma_1(p^n)$ and character $\chi$ (if $n \ge 1$) or level $\Gamma_0(p)$ (if $n = 0$) which are ordinary and whose mod $p$ Galois representation is $\bar{\rho}$. \end{itemize} \end{proposition} (Note that this fibre is empty if $k + \chi$ does not lie in the component of $\mathfrak{X}_{\mathrm{cyc}}$ determined by $\det \bar{\rho}$.) Much as above, we can define a universal \emph{ordinary} eigenform $\mathcal{G}_{\bar{\rho}}^{\mathrm{ord}}$ with coefficients in $\mathcal{R}^{\mathrm{ord}}(\bar{\rho})$ (whose $p$-depletion is the pullback of $\mathcal{G}_{\bar{\rho}}^{[p]}$ along $\mathfrak{X}^{\mathrm{ord}}(\bar{\rho}) \to \mathfrak{X}(\bar{\rho})$, and whose $U_p$-eigenvalue is the scalar by which $\Frob_p^{-1}$ acts on $\mathcal{V}^+$). We shall use this mainly through the following dual construction, due to Hida \cite{hida88}. The ring $\mathcal{R}^{\mathrm{ord}}(\bar{\rho})$ has finitely many minimal primes, corresponding to irreducible components of $\mathfrak{X}^{\mathrm{ord}}(\bar{\rho})$ (``branches''). If $\mathfrak{a}$ is a minimal prime, and we let $\mathcal{T}_{\mathfrak{a}}$ be the integral closure of $\mathcal{T}^{\mathrm{ord}}(\bar{\rho}) / \mathfrak{a}$, then we can find an invertible ideal $I_{\mathfrak{a}} \triangleleft \mathcal{T}_{\mathfrak{a}}$, and a homomorphism \[ \lambda_{\mathfrak{a}}: \mathcal{S}^{\mathrm{ord}}(1, \Lambda) \otimes_{\mathcal{T}^{\mathrm{ord}}(\bar{\rho})} \mathcal{T}_{\mathfrak{a}} \to I_{\mathfrak{a}}^{-1}, \] characterised by mapping $\mathcal{G}_{\bar{\rho}}^{\mathrm{ord}}$ to 1. \subsection{Nearly ordinary deformations} More generally, we can define a \emph{nearly ordinary} refinement by dropping the requirement that inertia act trivially on $\overline{V}^+$; and there is a corresponding nearly-ordinary deformation functor, represented by a ring $\mathcal{R}^{\mathrm{no}}(\bar{\rho})$. If $(\overline{V}, \overline{V}^+)$ is nearly-ordinary, we can find a unique character $\bar\chi: \Gal(\mathbf{Q}(\zeta_p) /\mathbf{Q}) \to \mathbf{F}^\times$ such that $(\overline{V} \otimes \bar\chi, \overline{V}^+ \otimes \bar\chi)$ is ordinary; and we obtain an identification of $\mathcal{R}^{\mathrm{no}}(\bar{\rho})$ with the tensor product of $\mathcal{R}^{\mathrm{ord}}(\bar{\rho} \otimes \bar\chi)$ and the ring parametrising deformations of $\bar\chi$ to a character of $\Gal(\mathbf{Q}(\zeta_{p^\infty}) / \mathbf{Q})$, which is isomorphic to $\mathcal{O}[[X]]$. Thus $\mathcal{R}^{\mathrm{no}}(\bar{\rho})$ is flat over $\mathcal{O}$ of relative dimension 2.
\begin{example}[Ordinary families of modular forms] \label{ex1} Suppose $\overline{V}$ is a modular mod $p$ representation with a nearly-ordinary refinement $\overline{V}^+$. Then the universal family $\mathcal{V}^{\mathrm{no}}$ of Galois representations over $\mathcal{R}^{\mathrm{no}}(\bar{\rho})$, together with its universal nearly-ordinary refinement $\mathcal{V}^{\mathrm{no}, +}$, is an example of a Panchishkin family. In this case, Hida theory shows that $\Sigma(\mathcal{V}, \mathcal{V}^+)$ consists precisely of the $\overline{\QQ}_p$ points of $\mathfrak{X}^{\mathrm{no}}(\bar{\rho})$ of the form $\theta^{-s}(f)$, where $f$ has weight $k \ge 2$ and $1 \le s \le k-1$. These are manifestly Zariski-dense. \cref{conj:main} is known for this family, by work of Mazur and Kitagawa \cite{kitagawa94}. \end{example} We are principally interested in examples which (unlike \cref{ex1}) are \emph{not} nearly-ordinary. Our first examples of such representations come from tensor products: \begin{example}[Half-ordinary Rankin--Selberg convolutions]\label{ex2} Let $\overline{V}_1$ and $\overline{V}_2$ be two mod $p$ representations satisfying our running hypotheses, and suppose $\overline{V}_1$ admits a nearly-ordinary refinement $\overline{V}_1^+$. Twisting $\overline{V}_1$ by a character and $\overline{V}_2$ by the inverse of this character, we can suppose that $(\overline{V}_1, \overline{V}_1^+)$ is actually ordinary (not just nearly-so). Then we consider the triple $(\mathcal{R}, \mathcal{V}, \mathcal{V}^+)$ given by \[ \begin{aligned} \mathcal{R}&= \mathcal{R}^{\mathrm{ord}}(\bar{\rho}_1) \operatorname{\hat{\otimes}} \mathcal{R}(\bar{\rho}_2), &\mathcal{V} &= \mathcal{V}_1^{\mathrm{ord}} \operatorname{\hat{\otimes}} \mathcal{V}_2,& \mathcal{V}^+ &= \mathcal{V}_1^{\mathrm{ord}, +} \operatorname{\hat{\otimes}} \mathcal{V}_2. \end{aligned} \] where $(\mathcal{V}_1^{\mathrm{ord}}, \mathcal{V}_1^{\mathrm{ord}, +})$ is the universal ordinary deformation of $(\overline{V}_1, \overline{V}_1^+)$, and $\mathcal{V}_2$ the universal deformation of $\overline{V}_2$ (with no ordinarity condition). Note that $\mathcal{R}$ has relative dimension 4 over $\mathcal{O}$. The set $\Sigma(\mathcal{V}, \mathcal{V}^+)$ is the set of points of the form $\left(f, \theta^{-s}(g)\right)$, where $f$ is a classical point of weight $k \ge 2$, and $\theta^{-s}(g)$ is a nearly-classical point such that $g$ has weight $\ell < k$ and $s$ lies in the range of critical values of the Rankin--Selberg $L$-function, namely \[ \ell \le s \le k-1.\] This set $\Sigma(\mathcal{V}, \mathcal{V}^+)$ is Zariski-dense; even the specialisations with $(k, \ell, s) = (3, 2, 2)$ are dense. We shall verify \cref{conj:main} for this family below. \end{example} \begin{remark} A generalisation of the above two examples would be to consider tensor products of universal representations over product spaces of the form \[ \mathfrak{X} = \mathfrak{X}^{\mathrm{no}}(\bar{\rho}_1) \times \mathfrak{X}(\bar{\rho}_2) \times \dots \times \mathfrak{X}(\bar{\rho}_n)\] for general $n$, where $\bar{\rho}_1, \dots, \bar{\rho}_n$ are irreducible modular representations mod $p$ with $\bar{\rho}_1$ nearly ordinary. This space has dimension $3n-1$; but there are $n-1$ ``redundant'' dimensions, since the tensor product is not affected by twisting $\rho_1$ by a character and one of $\rho_2, \dots, \rho_n$ by the inverse of this character. Quotienting out by this action gives a Panchishkin family over a $2n$-dimensional base. 
\end{remark} \begin{example}[General tensor products] Let $L = \operatorname{Frac} \mathcal{O}$ and let $V_1$ be any $L$-linear representation of $\Gamma_{\mathbf{Q}}$ (not necessarily 2-dimensional) which is geometric, satisfies the Panchishkin condition, and has $\dim V_1^{c = 1} = \dim V_1^{c = -1}$. Let $V_1^\circ$ be a $\Gamma_{\mathbf{Q}}$-stable $\mathcal{O}$-lattice in $V_1$ (which always exists). Then, for any modular mod $p$ representation $\overline{V}_2$, we obtain a Panchishkin family by letting \[ \begin{aligned} \mathcal{R}&= \mathcal{R}(\bar{\rho}_2), &\mathcal{V} &= V_1^\circ \otimes \mathcal{V}_2,& \mathcal{V}^+ &= (V_1^\circ \cap V_1^+) \otimes \mathcal{V}_2. \end{aligned} \] In particular, we can take $V_1$ to be the Galois representation arising from a cohomological automorphic representation of $\operatorname{GSp}_4$ which is Klingen-ordinary at $p$. \end{example} Note that in the last two examples the subspace $\mathcal{V}^+$ will \emph{not}, in general, extend to a full flag of $\Gamma_{\mathbf{Q}_p}$-stable subspaces, so $\mathcal{V}$ is not nearly ordinary. \subsection{Families of Euler systems} \label{sect:nakamura} The canonical 2-dimensional family $\mathcal{V}$ over $\mathcal{R}(\bar{\rho})$ will not, in general, satisfy the Panchishkin condition. However, it automatically satisfies the more general ``$r$-Panchishkin condition'' described above if we take $r = 1$, since $\mathcal{V}^+ = \{0\}$ satisfies the conditions of a 1-Panchishkin submodule (with $\Sigma(\mathcal{V}, \mathcal{V}^+)$ being the set of nearly-classical specialisations $\theta^t(f)$ with $t \ge 0$). So the more general conjecture sketched in \S \ref{sect:euler} predicts that there should exist a family of Euler systems taking values in $\mathcal{V}^*(1)$, interpolating Kato's Euler systems for each modular form $f$ lifting $\bar{\rho}$. Such a family of Euler systems has recently been constructed by Nakamura \cite{nakamura20}. \section{$p$-adic $L$-functions for half-ordinary Rankin convolutions} \label{sect:rankin} Let us choose two mod $p$ representations $\bar{\rho}_1$, $\bar{\rho}_2$ satisfying the conditions above, with $\bar{\rho}_1$ ordinary (but no ordinarity assumption on $\bar{\rho}_2$). Choose a branch $\mathfrak{a}$ of $\mathfrak{X}^{\mathrm{ord}}(\bar{\rho}_1)$ as before, and let $\mathcal{A}$ denote the ring \( \mathcal{T}_{\mathfrak{a}} \mathop{\hat\otimes}_{\ZZ_p} \mathcal{T}(\bar{\rho}_2) \), and $\mathfrak{X} = \mathfrak{X}_{\mathfrak{a}} \times \mathfrak{X}(\bar{\rho}_2)$ its formal spectrum, where $\mathfrak{X}_{\mathfrak{a}} := \Spf \mathcal{T}_{\mathfrak{a}}$. This has relative dimension 4 over $\ZZ_p$. We let $\mathcal{V}$ denote the $\mathcal{A}$-linear representation $\rho_{1}^{\mathrm{ord}} \otimes (\rho_2)^*(1)$, and $\mathcal{V}^+ = (\rho_{1}^{\mathrm{ord}})^+ \otimes (\rho_2)^*(1)$ where $(\rho_{1}^{\mathrm{ord}})^+$ is the 1-dimensional unramified subrepresentation of $\rho_{1}^{\mathrm{ord}} |_{\Gamma_{\mathbf{Q}_p}}$. Thus $\mathcal{V}$ is a family of 4-dimensional $\Gamma_{\mathbf{Q}}$-representations over $\mathfrak{X}$ unramified outside $p$, and $\mathcal{V}^+$ a 2-dimensional local subrepresentation of $\mathcal{V}$. \begin{remark} This differs from the $(\mathcal{V}, \mathcal{V}^+)$ of \cref{ex2} by an automorphism of the base ring $\mathcal{R}$, so \cref{conj:main} for either one of these examples is equivalent to the other. The present setup is slightly more convenient for the proofs.
\end{remark} The set $\Sigma(\mathcal{V}, \mathcal{V}^+)$ contains all points $(f, \theta^t(g))$ where $f$ has weight $k \ge 2$, $g$ has weight $\ell \ge 1$, and $t$ is an integer with $0 \le t \le k-\ell-1$. Our goal is to define a $p$-adic $L$-function associated to $(\mathcal{V}, \mathcal{V}^+)$, with an interpolating property at the points in $\Sigma(\mathcal{V}, \mathcal{V}^+)$. The ring $\mathcal{A}$ is endowed with two canonical characters $\mathbf{k}_1, \mathbf{k}_2: \ZZ_p^\times \to \mathcal{A}^\times$, the former factoring through $\mathcal{T}_{\mathfrak{a}}$ and the latter through $\mathcal{T}(\bar{\rho}_2)$. We can regard $\mathcal{G}_{\bar{\rho}_2}^{[p]}$ as a $p$-adic eigenform with coefficients in $\mathcal{A}$, of weight $\mathbf{k}_2$, by base extension. \begin{definition} Let $\Xi$ denote the $p$-adic modular form \[ e^{\mathrm{ord}} \left(\mathcal{G}_{\bar{\rho}_2}^{[p]} \cdot \mathcal{E}_{\mathbf{k}_1 - \mathbf{k}_2}^{[p]}\right) \in \mathcal{S}_{\mathbf{k}_1}^{\mathrm{ord}}(1, \mathcal{A}), \] where $\mathcal{E}_{\mathbf{k}}^{[p]} = \sum_{\substack{n \ge 1 \\ p \nmid n}} (\sum_{d \mid n} d^{\mathbf{k} - 1}) q^n \in \mathcal{S}_{\mathbf{k}}(1, \Lambda)$ denotes the $p$-depleted Eisenstein series of weight $\mathbf{k}$ and tame level 1. Let \[ \mathcal{L} \coloneqq \lambda_{\mathfrak{a}}\left(\Xi\right) \in I_{\mathfrak{a}}^{-1} \otimes_{\mathcal{T}_{\mathfrak{a}}}\mathcal{A}.\] \end{definition} This is a meromorphic formal-analytic function on the 4-dimensional space $\mathfrak{X}_{\mathfrak{a}} \times \mathfrak{X}(\bar{\rho}_2)$, regular along any 3-dimensional slice $\{f\} \times \mathfrak{X}(\bar{\rho}_2)$ with $f$ classical. We now show that the values of $\mathcal{L}$ at points in $\Sigma(\mathcal{V}, \mathcal{V}^+)$ interpolate values of Rankin $L$-functions. Let $(f, \theta^t(g))$ be such a point, with $f, g$ newforms of $p$-power levels, and let $k, \ell$ be the weights of $f, g$. Let $\alpha$ be the eigenvalue of geometric Frobenius on the unramified subrepresentation of $\rho_{f, p} |_{\Gamma_{\mathbf{Q}_p}}$, and let $f_\alpha$ be the $p$-stabilisation of $f$ of $U_p$-eigenvalue $\alpha$. \begin{remark} If $f$ has non-trivial level, then $f_\alpha = f$, and $\alpha$ is just the $U_p$-eigenvalue of $f$. If $f$ has level one, then $\alpha$ is the unique unit root of the polynomial $X^2 - a_p(f) X + p^{k-1}$, and $f_\alpha$ is the level $p$ eigenvector $f_\alpha(\tau) = f(\tau) - \frac{p^{k-1}}{\alpha} f(p\tau)$. \end{remark} We define $\lambda_{f, \alpha}$ to be the unique linear functional on $\mathcal{S}_k^{\mathrm{ord}}(1, L)$ which factors through projection to the $f_\alpha$ eigenspace, and satisfies $\lambda_{f, \alpha}(f_\alpha) = 1$. By definition, we have \[ \mathcal{L}(f, \theta^t(g)) =\lambda_{f, \alpha}\left(\theta^t(g^{[p]})\cdot \mathcal{E}^{[p]}_{k - \ell - 2t}\right).
\] \begin{definition} For $f$, $g$ newforms as above, we write $L^{(p)}(f \times g, s)$ for the Rankin--Selberg $L$-function of $f$ and $g$ without its Euler factor at $p$, \begin{align*} L^{(p)}(f \times g, s) &:= L^{(p)}(\chi_f\chi_g, 2s+2-k-\ell) \sum_{\substack{n \ge 1 \\ p\nmid n}} a_n(f) a_n(g) n^{-s}\\ &= \prod_{q \ne p} \det\left( 1 - q^{-s} \Frob_q^{-1} : V_p(f) \otimes V_p(g)\right)^{-1}, \end{align*} with the product running over primes $q \ne p$, and let \[ \Lambda^{(p)}(f \times g, s) \coloneqq \Gamma_{\mathbb{C}}(s) \Gamma_{\mathbb{C}}(s - \ell + 1) L^{(p)}(f \times g, s).\] \end{definition} \begin{theorem} We have \[ \mathcal{L}(f, \theta^t(g)) = 2^{1-k} (-1)^{t}i^{k+\ell} \left( \frac{p^{(t + 1)}}{\alpha} \right)^b \lambda_{p^b}(g)\frac{P_p(g, p^t \alpha^{-1})} {P_p(g^*, p^{-(\ell + t)} \alpha)} \frac{\Lambda^{(p)}(f \times g^*, \ell + t)}{\mathcal{E}_p^{\mathrm{ad}}(f) \langle f, f \rangle}, \] where $b$ is the level at which $g$ is new. Here $P_p(g, X)$ is the polynomial such that \[ P_p(g, X)^{-1} = \sum_{r \ge 0} a_{p^r}(g) X^r, \] and \[ \mathcal{E}_p^{\mathrm{ad}}(f) = \begin{cases} \left(1 - \frac{p^{k-1}}{\alpha^2}\right)\left(1 - \frac{p^{k-2}}{\alpha^2}\right) & \text{$f$ crystalline at $p$}, \\[2mm] -\left( \frac{p^{k-1}}{\alpha^2} \right) & \text{$f$ semistable non-crystalline at $p$},\\[2mm] \left( \frac{p^{k-1}}{\alpha^2} \right)^a G(\chi_f) & \text{$f$ non-semistable at $p$, new of level $p^a$.} \end{cases} \] \end{theorem} \begin{proof} This follows from the Rankin--Selberg integral formula. The computations are virtually identical to the case of finite-slope forms treated in \cite{loeffler18}, so we shall not reproduce them in detail here. \end{proof} \begin{remark} Note that the factor $\frac{P_p(g, p^t \alpha^{-1})} {P_p(g^*, p^{-(\ell + t)} \alpha)}$ can be written as \[ \det\left[ (1 - \varphi)^{-1}(1 - p^{-1} \varphi^{-1}) : \mathbf{D}_{\mathrm{cris}}(V^+)\right]\] where $V^+ = (\rho_{f, p})^+ \otimes \rho_{g, p}^*(1+t)$ is the fibre of $\mathcal{V}^+$ at $(f, \theta^t(g))$. On the other hand, the factor $\left( \frac{p^{(t + 1)}}{\alpha} \right)^b \lambda_{p^b}(g)$ is essentially the local $\varepsilon$-factor of this representation. \end{remark} \section{Other cases} We briefly comment on some other cases which can be treated by the same methods as above. \subsection{Relaxing the tame levels} Firstly, the assumption that the levels of our families be 1 should be easy to remove; the only price that must be paid is a little more careful book-keeping about the local Euler factors at the bad primes. \subsection{The case of GSp(4) \texorpdfstring{$\times$}{x} GL(2)} \label{sect:gsp4gl2} A more ambitious case which can be treated by the same methods is the following. Let $\Pi$ be a cohomological automorphic representation of $\operatorname{GSp}_4$ which is globally generic, unramified and Klingen-ordinary at $p$, and contributes to cohomology with coefficients in the algebraic representation of weight $(r_1, r_2)$, for some $r_1 \ge r_2 \ge 0$. (Classically, these correspond to holomorphic vector-valued Siegel modular forms taking values in the representation $\operatorname{Sym}^{r_1 - r_2} \otimes \det^{r_2 + 3}$ of $\GL_2$.) For technical reasons we assume $r_2 > 0$. In \cite{LPSZ} we constructed a cyclotomic $p$-adic $L$-function interpolating the critical values of $L(\Pi \otimes \sigma, s)$ where $\sigma$ is an automorphic representation of $\GL_2$ generated by a holomorphic form of weight $\ell \le r_1 - r_2 + 1$.
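Both this construction and that of \S\ref{sect:rankin} manipulate explicit $q$-expansions, and the elementary operations involved (forming a $p$-depleted Eisenstein series, and $p$-depleting a given eigenform) are easy to make concrete. The following sketch is our own illustration in plain Python; the function names are hypothetical.

\begin{verbatim}
def divisors(n):
    """All positive divisors of n."""
    return [d for d in range(1, n + 1) if n % d == 0]

def eisenstein_depleted(k, p, nterms):
    """Coefficients a_1, ..., a_nterms of the p-depleted Eisenstein
    series E_k^[p] = sum over n >= 1, p not dividing n, of
    sigma_{k-1}(n) q^n (tame level 1, integer weight k)."""
    return [0 if n % p == 0 else sum(d**(k - 1) for d in divisors(n))
            for n in range(1, nterms + 1)]

def deplete(a, p):
    """p-depletion g -> g^[p] of a q-expansion a = [a_1, a_2, ...]:
    kills every coefficient a_n with p | n."""
    return [0 if (i + 1) % p == 0 else c for i, c in enumerate(a)]
\end{verbatim}

Multiplying two such $q$-expansions (a convolution on coefficients) and applying the ordinary projector then realises products such as the $\Xi$ of \S\ref{sect:rankin}.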
The $p$-adic $L$-function of \cite{LPSZ} is constructed by applying a ``push-forward'' map to the product of the $p$-depleted newform $g^{[p]} \in \sigma$ with an auxiliary $p$-adic Eisenstein series, and pairing this with a coherent $H^2$ eigenclass coming from $\Pi$. This construction is closely parallel to the construction of the $p$-adic Rankin--Selberg $L$-function for $\GL_2 \times \GL_2$, and it generalises to universal-deformation families in the same way, since the pushforward map of \cite{LPSZ} can be applied to any family of $p$-adic modular forms (over any base). If we assume for simplicity that $\Pi$ is unramified at all finite places, and replace $g$ with a universal deformation family $\mathcal{G}^{[p]}_{\bar{\rho}}$ as above, then we obtain an element of $\mathcal{R}(\bar{\rho})$ interpolating these $p$-adic $L$-functions, with $\Pi$ fixed and $\sigma$ varying through the small-weight specialisations of a 3-dimensional universal-deformation family. We can also add a fourth variable, in which we vary $\Pi$ through a 1-dimensional family of Klingen-ordinary representations, with $r_1$ varying but $r_2$ fixed. \subsection{Self-dual triple products} If we are given three mod $p$ modular representations $\bar{\rho}_1, \bar{\rho}_2, \bar{\rho}_3$ with $\bar{\rho}_1$ nearly-ordinary and $\det(\bar{\rho}_1) \cdot \det(\bar{\rho}_2) \cdot \det(\bar{\rho}_3) = \bar{\chi}_{\mathrm{cyc}}$, then the space \[ \left\{ (\rho_1, \rho_2, \rho_3) \in \mathfrak{X}^{\mathrm{no}}(\bar{\rho}_1) \times \mathfrak{X}(\bar{\rho}_2) \times \mathfrak{X}(\bar{\rho}_3) : \det(\rho_{1}) \cdot \det(\rho_{2}) \cdot \det(\rho_{3}) = \chi_{\mathrm{cyc}} \right\} \] carries a natural 8-dimensional Panchishkin family $\mathcal{V}$, given by the tensor product of the three universal deformations $\mathcal{V}_i$, with the Panchishkin submodule given by $\mathcal{V}_1^+ \otimes \mathcal{V}_2 \otimes \mathcal{V}_3$. The base space is \emph{a priori} 7-dimensional, but it has two ``redundant'' dimensions (since we can twist either $\rho_2$ or $\rho_3$ by a character, and $\rho_1$ by the inverse of that character, without changing the tensor product representation), so we obtain a Panchishkin family over a 5-dimensional base $\mathfrak{X}$, satisfying the self-duality condition $\mathcal{V} \cong \mathcal{V}^*(1)$. The set $\Sigma(\mathcal{V}, \mathcal{V}^+)$ corresponds to triples of classical modular forms $(f_1, f_2, f_3)$ which are ``$f_1$-dominant'' -- i.e.~their weights $(k_1, k_2, k_3)$ satisfy $k_1 \ge k_2 + k_3$. Feeding the universal eigenforms $\mathcal{G}^{[p]}_{\bar{\rho}_2}$ and $\mathcal{G}^{[p]}_{\bar{\rho}_3}$ into the construction of \cite{DR14} gives a $p$-adic $L$-function over this 5-dimensional base space, extending the construction in \emph{op.cit.}~of a $p$-adic $L$-function over the 3-dimensional subspace of $\mathfrak{X}$ where $\rho_2$ and $\rho_3$ are nearly-ordinary. (Note that this is actually a refinement of \cref{conj:main}, since the resulting $p$-adic $L$-function interpolates the square-roots of central $L$-values.) \subsection{The Bertolini--Darmon--Prasanna case} Let $\bar{\rho}$ be a modular mod $p$ representation of $\Gamma_{\mathbf{Q}, \{p\}}$, with universal deformation space $\mathfrak{X}(\bar{\rho})$. We shall suppose that $\det \bar{\rho} = \bar{\chi}_{\mathrm{cyc}}$, and we let $\mathfrak{X}^{0}(\bar{\rho}) \subseteq \mathfrak{X}(\bar{\rho})$ denote the subspace parametrising deformations whose determinant is $\chi_{\mathrm{cyc}}$; this is flat over $\mathcal{O}$ of relative dimension 2, and is formally smooth if $\bar{\rho}$ is unobstructed.
Meanwhile, we choose an imaginary quadratic field $K$ in which $p = \mathfrak{p}_1 \mathfrak{p}_2$ is split, and we let $\mathfrak{X}_K^{\mathrm{ac}} \cong \operatorname{Spf} \mathcal{O}[[X]]$ be the character space of the anticyclotomic $\ZZ_p$-extension of $K$. Let $\mathfrak{X}$ denote the product $\mathfrak{X}^{\mathrm{ac}}_K \times \mathfrak{X}^{0}(\bar{\rho})$. This is $\mathcal{O}$-flat of relative dimension 3, and it carries a family of 4-dimensional Galois representations $\mathcal{V}$, given by tensoring the universal deformation $\rho^{\mathrm{univ}}$ of $\bar{\rho}$ with the induction to $\Gamma_{\mathbf{Q}}$ of the universal character of $\Gamma_K$ parametrised by $\mathfrak{X}^{\mathrm{ac}}_K$. Note that $\mathcal{V}$ satisfies the ``self-duality'' condition $\mathcal{V}^\vee(1) \cong \mathcal{V}$. Locally at $p$, $\mathcal{V}$ is the direct sum of two twists of the universal deformation of $\bar{\rho}$, corresponding to the two primes above $p$; and we can define a Panchishkin submodule $\mathcal{V}^+$ by taking the direct summand corresponding to one of these primes. Note that $\Sigma(\mathcal{V}, \mathcal{V}^+)$ consists of pairs $(\psi, f)$ where $f$ is a modular form and $\psi$ an anticyclotomic algebraic Hecke character of weight $(n, -n)$, where $n$ is large compared to the weight of $f$. Plugging the universal family $\mathcal{G}^{[p]}_{\bar{\rho}}$ (more precisely, its pullback to $\mathfrak{X}^{0}(\bar{\rho})$) into the constructions of \cite{BDP13}, we obtain a $p$-adic analytic function on the 3-dimensional space $\mathfrak{X}^{\mathrm{ac}}_K \times \mathfrak{X}^{0}(\bar{\rho})$ interpolating the square-roots of central $L$-values at specialisations in $\Sigma(\mathcal{V}, \mathcal{V}^+)$. This refines the construction due to Castella \cite[\S 2]{castella19} of a BDP-type $L$-function over the 2-dimensional space $\mathfrak{X}^{\mathrm{ac}}_K \times \mathfrak{X}^{\mathrm{ord}}(\bar{\rho})$ when $\bar{\rho}$ is ordinary.\footnote{This is slightly imprecise since $\mathfrak{X}^{\mathrm{ord}}(\bar{\rho})$ is not contained in $\mathfrak{X}^{0}(\bar{\rho})$; more precisely, the correspondence between the two constructions is given by identifying $\mathfrak{X}^{\mathrm{ord}}(\bar{\rho})$ with $\mathfrak{X}^{\mathrm{no}}(\bar{\rho}) \cap \mathfrak{X}^{0}(\bar{\rho})$, via twisting by a suitable character of $\Gamma_{\mathbf{Q}, \{p\}}^{\mathrm{ab}}$.} \subsection{A finite-slope analogue?} One can easily formulate a ``finite-slope'' analogue of \cref{conj:main}, where the submodule $\mathcal{V}^+ \subseteq \mathcal{V}$ is replaced by a submodule of the Robba-ring $(\varphi, \Gamma)$-module of $\mathcal{V}|_{\Gamma_{\mathbf{Q}_p}}$. The analogue of Hida's ordinary deformation space $\mathfrak{X}^{\mathrm{ord}}(\bar{\rho})$ is now the $\bar{\rho}$-isotypic component $\mathcal{E}(\bar{\rho})$ of the Coleman--Mazur Eigencurve \cite{CMeigen}. However, proving a finite-slope version of the results of \cref{sect:rankin}, or of the generalisations sketched in the above paragraphs, appears to be much more difficult than the ordinary case. All of the above constructions rely on the existence of the universal eigenform $\mathcal{G}^{[p]}_{\bar{\rho}}$ as a family of $p$-adic modular forms over $\mathfrak{X}(\bar{\rho})$. However, in the finite-slope case, we need to pay attention to overconvergence conditions, since the finite-slope analogues of the projectors $\lambda_{\mathfrak{a}}$ are only defined on overconvergent spaces.
Clearly $\mathcal{G}^{[p]}_{\bar{\rho}}$ is not overconvergent (as a family), since it has specialisations which are nearly-classical rather than classical. So we need to work in an appropriate theory of nearly-overconvergent families. Such a theory has recently been introduced by Andreatta and Iovita \cite{AI17}. We might make the following optimistic conjecture: \begin{conjecture} Let $f$ be a nearly-classical point of $\mathfrak{X}(\bar{\rho})$, corresponding to a modular form $f$ of prime-to-$p$ level. Then there is an affinoid neighbourhood $X_f = \operatorname{Max} A_f$ of $f$ in $\mathfrak{X}(\bar{\rho})^{\mathrm{an}}$ over which the universal eigenform $\mathcal{G}_{\bar{\rho}}^{[p]}$ is a family of nearly-overconvergent forms in the sense of \cite{AI17}. \end{conjecture} If this conjecture holds, one might realistically hope to define (for instance) a $p$-adic Rankin--Selberg $L$-function over neighbourhoods of crystalline classical points in $\mathcal{E}(\bar{\rho}_1) \times \mathfrak{X}(\bar{\rho}_2)^{\mathrm{an}}$. \section{Conjectures on $P$-nearly-ordinary families} In this section, we shall use Galois deformation theory to define universal parameter spaces for Galois representations valued in reductive groups, which satisfy a Panchishkin-type condition relative to a parabolic subgroup; and we formulate a ``parabolic $\mathcal{R} = \mathcal{T}$'' conjecture, predicting that these should have an alternative, purely automorphic description. We expect that these parameter spaces should be the natural base spaces for families of $p$-adic $L$-functions, and of Euler systems. \subsection{P-nearly-ordinary deformations} Let $G$ be a reductive group scheme over $\mathcal{O}$ and $P$ a parabolic subgroup. In \cite[\S 7]{boeckle07}, B\"ockle defines a homomorphism $\rho: \Gamma_{\mathbf{Q}, S} \to G(A)$, for $A \in \underline{\mathrm{CNL}}_{\cO}$, to be \emph{$P$-nearly ordinary} if $\rho|_{\Gamma_{\mathbf{Q}_p}}$ lands in a conjugate of $P(A)$. Theorem 7.6 of \emph{op.cit.} shows that under some mild hypotheses, the functor of $P$-nearly-ordinary deformations of a given $P$-nearly-ordinary residual representation is representable. The notion of a \emph{Panchishkin family} introduced in Definition \ref{def:panchfam} corresponds to taking $G = \GL_n$ and $P$ to be the parabolic subgroup with blocks of sizes $\dim \overline{V}^{c=1}$ and $\dim \overline{V}^{c=-1}$. However, the geometry of deformation spaces for $\GL_n$ is rather mysterious when $n > 2$, and it is not expected that these spaces will have a Zariski-dense set of classical points. On the other hand, the geometry of deformation spaces is much simpler and better understood for Galois representations arising from Shimura varieties (or, more generally, from automorphic representations that are discrete-series at $\infty$). This suggests concentrating on the following setting. Let $G$ be a reductive group over $\mathbf{Q}$; for simplicity, we assume here $G$ is split. We also suppose $G$ has a ``twisting element'' in the sense of \cite{buzzardgee}, and fix a choice of such an element\footnote{Alternatively, one could replace $G^\vee$ by the connected component of the ``$C$-group'' of \emph{op.cit.}, which is the quotient of $G^\vee \times \mathbf{G}_m$ by a central element of order 2. We can also allow non-split $G$, by considering representations into a larger, non-connected quotient of the $C$-group.}.
Then Conjecture 5.3.4 of \emph{op.cit.} predicts that cohomological automorphic representations $\Pi$ of $G$ give rise to Galois representations $\rho_{\Pi, p}: \Gamma_{\mathbf{Q}} \to G^\vee(\overline{\QQ}_p)$, where $G^\vee$ is the Langlands dual of $G$. There is a canonical bijection $P \leftrightarrow P^\vee$ between conjugacy classes of parabolics in $G$ and parabolics in $G^\vee$, and one expects that if $\Pi$ is nearly-ordinary for $P$ (in the sense that the Hecke operators associated to $P$ have unit eigenvalues), then $\rho_{\Pi, p}$ should be a $P^\vee$-nearly-ordinary representation. In particular, families of $P$-nearly-ordinary cohomological automorphic representations of $G$ should give rise to families of $P^\vee$-nearly-ordinary Galois representations into $G^\vee$. If we also choose a linear representation $\xi: G^\vee \to \GL_n$, then for suitably chosen $P$, the resulting families of $n$-dimensional Galois representations will be Panchishkin families. The example of \S\ref{sect:rankin} is of this type, taking $G = \GL_2 \times \GL_2$ and $P = B_2 \times \GL_2$, where $B_2$ is the Borel subgroup of $\GL_2$, and $\xi$ the 4-dimensional tensor product representation of $G$. Similarly, the self-dual triple-product setting considered above corresponds to taking $G = (\GL_2 \times \GL_2 \times \GL_2) / \GL_1$, and $P$ the image of $B_2 \times \GL_2 \times \GL_2$. \subsection{Big and small Galois eigenvarieties} In the above setting, we define the \emph{big $P$-nearly-ordinary Galois eigenvariety} for $G$ to be the following space. Suppose $G^\vee$ and $P^\vee$ have smooth models over $\mathcal{O}$, and fix some choice of $\bar{\rho}: \Gamma_{\mathbf{Q}, S} \to G^\vee(\mathbf{F})$ which is $P^\vee$-nearly-ordinary. Then -- assuming the hypotheses of B\"ockle's construction are satisfied -- we obtain a universal deformation ring $\mathcal{R}^{P^\vee-\mathrm{no}}(\bar{\rho})$ for $P^\vee$-nearly-ordinary liftings of $\bar{\rho}$. We define the big $P$-nearly-ordinary Galois eigenvariety $\mathfrak{X}_P(\bar{\rho})$ to be the formal spectrum of this ring $\mathcal{R}^{P^\vee-\mathrm{no}}(\bar{\rho})$. The methods of \cite{boeckle07} give a formula for the dimension of this space. Suppose $\bar{\rho}$ satisfies the ``oddness'' condition that $\dim \mathfrak{g}_{\FF}^{\bar{\rho}(c) = 1} = \dim(G / B_G)$, where $\mathfrak{g}_{\FF}$ is the Lie algebra of $G^\vee / \mathbf{F}$, $c$ is complex conjugation and $B_G$ is a Borel subgroup of $G$. (This condition is expected to hold for representations arising from Shimura varieties; see \cite[Introduction]{CHT08}.) Then $\mathcal{R}^{P^\vee-\mathrm{no}}(\bar{\rho})$ has a presentation as a quotient of a power series ring in $d_1$ variables by an ideal with $d_2$ generators, where \[ d_1 - d_2 = \dim P - \dim(G/B_G) = \dim B_M,\] where $M$ is the Levi factor of $P$ and $B_M \subseteq M$ is a Borel subgroup of $M$. It seems reasonable to conjecture that $\mathfrak{X}_P(\bar{\rho})$ is in fact flat over $\mathcal{O}$, and its relative dimension is $\dim B_M$. The term \emph{big} is intended to contrast with the following alternative construction (which is perhaps less immediately natural; we introduce it because it is the Galois counterpart of an existing construction on the automorphic side, as we shall recall below).
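(As a concrete check of the dimension formula: for $G = \GL_2 \times \GL_2$ and $P = B_2 \times \GL_2$ as in \S\ref{sect:rankin}, we have $d_1 - d_2 = 7 - 2 = 5 = \dim B_M$, with $M \cong \GL_1 \times \GL_1 \times \GL_2$. This agrees with the $2 + 3 = 5$ dimensions of $\mathfrak{X}^{\mathrm{no}}(\bar{\rho}_1) \times \mathfrak{X}(\bar{\rho}_2)$; the base used in \S\ref{sect:rankin} is one dimension smaller because imposing ordinarity, rather than near-ordinarity, on the first factor removes a twist variable.)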
Let $\overline{M^\vee} = M^\vee / Z(M^\vee)$ (the Langlands dual of $M^{\mathrm{der}}$), and fix a \emph{Hodge type} $\mathbf{v}$ and an \emph{inertial type} $\tau$ for $\overline{M^\vee}$-valued representations of $\Gamma_{\mathbf{Q}_p}$, in the sense of \cite{bellovingee19}. Then we say a lifting $\rho$ of $\bar{\rho}$ to $\overline{\QQ}_p$ is \emph{$P^\vee$-nearly-ordinary of type $(\tau, \mathbf{v})$} if it is $P^\vee$-nearly-ordinary, and the composition $\Gamma_{\mathbf{Q}_p} \xrightarrow{\rho} P^\vee(\overline{\QQ}_p) \to \overline{M^\vee}(\overline{\QQ}_p)$ has the given Hodge and inertial types. We define the \emph{small $P$-nearly-ordinary Galois eigenvariety} to be the universal deformation space $\mathfrak{X}_P(\bar{\rho}; \tau, \mathbf{v})$ for deformations that are $P^\vee$-nearly-ordinary of the specified type. Using the formulae of \cite{bellovingee19} applied to $\overline{M^\vee}$ to compute the dimension of the local lifting rings, and assuming that $\bar{\rho}$ is odd and $\mathbf{v}$ is sufficiently regular, we compute that the expected dimension of $\mathfrak{X}_P(\bar{\rho}; \tau, \mathbf{v})$ is now given by $\dim Z_{M^\vee} = \dim Z_{M}$. \begin{remark} Note that the big and small Galois eigenvarieties coincide if $P$ is a Borel subgroup; but the dimension of the big eigenvariety \emph{grows} with $P$, while the dimension of the small eigenvariety \emph{shrinks} as $P$ grows. For instance, if $G = \GL_2$ and $P = G$, then $\mathfrak{X}_P(\bar{\rho})$ is just the unrestricted deformation space, which is 3-dimensional over $\mathcal{O}$ as we have seen; but $\mathfrak{X}_P(\bar{\rho}; \tau, \mathbf{v})$ has dimension 1, since for any $(\tau, \mathbf{v})$ there are only finitely many deformations of that type up to twisting by characters. \end{remark} \subsection{Big and small automorphic eigenvarieties} We can now ask if the above Galois-theoretic spaces have automorphic counterparts. \subsubsection{The big eigenvariety} Seeking an automorphic counterpart of the big Galois eigenvariety leads to the following question:\medskip \noindent\textbf{Question}. If $G$ is reductive over $\mathbf{Q}$, and $P$ is a parabolic in $G / \mathbf{Q}_p$ as above, is there a natural purely automorphic construction of a parameter space $\mathfrak{E}_P$ for systems of Hecke eigenvalues arising from cohomological automorphic representations for $G$ that are nearly ordinary for the parabolic $P$? \medskip We call this conjectural object $\mathfrak{E}_P$ the \emph{big $P$-nearly-ordinary automorphic eigenvariety}. We expect its dimension to be the same as its Galois analogue; in particular, if $G$ has discrete series its dimension should be $\dim B_M$, where $B_M$ is a Borel subgroup of the Levi of $P$ as before. The case when $P = B$ is a Borel subgroup is relatively well-understood; this is the setting of Hida theory. However, the case of non-Borel parabolics is much more mysterious. In this case, one can give a candidate for this space $\mathfrak{E}_P$ as follows. For any open compact $K \subset G(\mathbf{A}_{\mathrm{f}})$, we can form the Betti cohomology $H^*(K, \mathcal{O})$ of the symmetric space for $G$ of level $K$, which is a finitely-generated graded $\mathcal{O}$-module. This has an action of Hecke operators, and the subalgebra of its endomorphisms generated by Hecke operators at primes where $K$ is unramified, the \emph{spherical Hecke algebra of level $K$}, is commutative.
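The nearly-ordinary part of this cohomology is cut out by Hida's ordinary projector $e_P = \lim_{r \to \infty} U_P^{r!}$, recalled in the next paragraph. The following toy computation (our own illustration: a $2 \times 2$ integer matrix stands in for $U_P$, with $p = 5$, an assumption purely for the sake of the example) shows how the limit stabilises $p$-adically to an idempotent projecting onto the part where the operator acts with unit eigenvalues.

\begin{verbatim}
from math import factorial

def mat_mult(A, B, mod):
    """2x2 matrix product modulo mod."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % mod
             for j in range(2)] for i in range(2)]

def mat_pow(A, e, mod):
    """A**e mod `mod` by repeated squaring."""
    R = [[1, 0], [0, 1]]
    while e:
        if e & 1:
            R = mat_mult(R, A, mod)
        A = mat_mult(A, A, mod)
        e >>= 1
    return R

p, N = 5, 4
mod = p ** N                 # work modulo p^N
U = [[2, 1], [0, 5]]         # eigenvalues 2 (a p-adic unit) and 5
prev, e_P = None, None
for r in range(1, 40):
    e_P = mat_pow(U, factorial(r), mod)
    if e_P == prev:          # U^{r!} has stabilised modulo p^N
        break
    prev = e_P

assert mat_mult(e_P, e_P, mod) == e_P      # idempotent
assert (e_P[0][0] + e_P[1][1]) % mod == 1  # rank 1: the unit-root line
\end{verbatim}

On unit eigenvalues $\lambda^{r!} \to 1$, while on eigenvalues divisible by $p$ one has $\lambda^{r!} \to 0$; and since $e_P$ is a limit of powers of $U_P$, it commutes with everything that commutes with $U_P$, in particular with the spherical Hecke algebra.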
We fix an open compact subgroup $K^p \subset G(\mathbf{A}_\mathrm{f}^p)$, and let $K_{p, n} = \{ g \in G(\ZZ_p): g \bmod p^n \in N_P(\mathbf{Z}/p^n)\}$, where $N_P$ is the unipotent radical of $P$. Then, for any $n \ge 1$, $H^*(K^p K_{p, n}, \mathcal{O})$ has a canonical idempotent endomorphism $e_P$ (the Hida ordinary projector associated to $P$), defined by $\lim_{r \to \infty} U_P^{r!}$ where $U_P$ is a suitable Hecke operator; this commutes with the spherical Hecke algebra. \begin{definition} With the above notations, let $\mathcal{T}^{P-\mathrm{no}}_n(K^p)$ be the quotient of the spherical Hecke algebra acting faithfully on $e_P H^*(K^p K_{p, n}, \mathcal{O})$; and define $\mathcal{T}^{P-\mathrm{no}}(K^p) = \varprojlim_n \mathcal{T}^{P-\mathrm{no}}_n(K^p)$. \end{definition} We conjecture that the formal spectrum of $\mathcal{T}^{P-\mathrm{no}}(K^p)$ should be the big $P$-nearly-ordinary eigenvariety. However, from this definition alone it is rather difficult to obtain much information about the properties of the resulting space (for instance, it is not clear whether $\mathcal{T}^{P-\mathrm{no}}(K^p)$ is Noetherian). As far as the author is aware, the only non-Borel cases where this construction is well-understood are the following: \begin{itemize} \item $G = \GL_2$ and $P = G$, as in Theorem \ref{thm:BE}. \item $G = \operatorname{Res}_{F^+ / \mathbf{Q}}(U)$, where $U$ is a totally definite unitary group for some CM extension $F / F^+$, with $p$ split in $F$ and $F / F^+$ unramified at all finite places; and $P$ is a parabolic subgroup of $G(\mathbf{Q}_p) \cong \GL_n(\mathbf{Q}_p)^{[F^+: \mathbf{Q}]}$ whose Levi subgroup is a product of copies of $\GL_1$ and $\GL_2$. This case has been studied extensively by Yiwen Ding \cite{ding19}. \end{itemize} In the definite unitary case, Ding proves that the localisation of $\mathcal{T}^{P-\mathrm{no}}(K^p)$ at the maximal ideal corresponding to an irreducible $\bar{\rho}$ is a quotient of the global Galois deformation ring $\mathcal{R}^{P^\vee-\mathrm{no}}(\bar{\rho})$, and is therefore Noetherian; and he gives a lower bound for the relative dimension of $\mathcal{T}^{P-\mathrm{no}}(K^p)$ over $\mathcal{O}$ (localised at the maximal ideal corresponding to some $\bar{\rho}$). This lower bound is exactly $\dim B_M$, the dimension conjectured for the Galois eigenvariety above. \begin{remark} Note that Ding's construction uses the $p$-adic local Langlands correspondence for $\GL_2(\mathbf{Q}_p)$ in an essential way, so this approach will be much harder to generalise to cases where the Levi of $P$ is not a product of tori and copies of $\GL_2(\mathbf{Q}_p)$. \end{remark} \subsubsection{The small eigenvariety} In contrast to the rather disappointing situation described above, there does seem to be a well-established theory for the ``little brother'' of this space -- the \emph{small $P$-nearly-ordinary automorphic eigenvariety}. This would be a parameter space for $P$-nearly-ordinary cohomological automorphic representations satisfying two additional conditions: \begin{itemize} \item the highest weight $\lambda$ of the algebraic representation of $G$ to whose cohomology $\Pi$ contributes should lie in a fixed equivalence class modulo characters of $M / M^{\mathrm{der}}$; \item the ordinary part $J_P(\Pi_p)^{\mathrm{no}}$ of $J_P(\Pi_p)$, which is an irreducible smooth representation of $M(\mathbf{Q}_p)$, should satisfy $e \cdot J_P(\Pi_p)^{\mathrm{no}} \ne 0$ where $e$ is some fixed idempotent in the Hecke algebra of $M^{\mathrm{der}}(\mathbf{Q}_p)$.
\end{itemize} Note that both conditions are vacuous if $P$ is a Borel. These conditions are the automorphic counterparts of the fixed Hodge and inertial types up to twisting used to define the small $P$-nearly-ordinary Galois eigenvariety. See e.g. Mauger \cite{mauger05} for the construction of the small $P$-nearly-ordinary automorphic eigenvariety, and \cite{hillloeffler11} for a ``$P$-finite-slope'' analogue. \begin{remark} The most obvious choice of $e$ would be the idempotent projecting to the invariants for some choice of open compact subgroup of $M^{\mathrm{der}}(\mathbf{Q}_p)$. For instance, Mauger's theory applies to $\Pi$ such that $J_P(\Pi_p)^{\mathrm{no}}$ has non-zero invariants under $M^{\mathrm{der}}(\ZZ_p)$, although it can be extended without difficulty to allow more general idempotents. However, a craftier choice would be to take $e$ to be a \emph{special idempotent} in the sense of \cite{bushnellkutzko98}, corresponding to a choice of Bernstein component for $M^{\mathrm{der}}(\mathbf{Q}_p)$; these Bernstein components are expected to biject with inertial types on the Galois side (the inertial local Langlands correspondence for $M^{\mathrm{der}}(\mathbf{Q}_p)$), while the highest weights $\lambda$ biject with Hodge types, so we obtain a natural dictionary between the defining data at $p$ for the Galois and automorphic versions of the small $P$-nearly-ordinary eigenvariety. \end{remark} \subsubsection{$R = T$ theorems} Both big and small automorphic eigenvarieties should, clearly, decompose into disjoint unions of pieces indexed by mod $p$ Hecke eigenvalue systems. We can then formulate the (\emph{extremely} speculative) ``parabolic $R = T$'' conjecture that each of these pieces should correspond to one of the big or small Galois eigenvarieties of the previous section, for a mod $p$ Galois representation $\bar{\rho}$ determined by the mod $p$ Hecke eigensystem. In the case when $G$ is a definite unitary group, results of this kind have been proven by Geraghty \cite{geraghty} when $P$ is a Borel subgroup; and when the Levi of $P$ is a product of $\GL_1$'s and $\GL_2$'s, Ding proves in \cite{ding19} the slightly weaker result that the map from $\mathcal{R}^{P^\vee-\mathrm{no}}(\bar{\rho})$ to the $\bar{\rho}$-localisation of $\mathcal{T}^{P-\mathrm{no}}(K^p)$ is surjective with nilpotent kernel, after possibly extending the totally real field $F^+$ (an ``$R^{\mathrm{red}} = T^{\mathrm{red}}$'' theorem). \subsection{Miscellaneous remarks} \begin{remark} The 4-dimensional parameter space for $\operatorname{GSp}_4 \times \GL_2$ mentioned at the end of \S\ref{sect:gsp4gl2} is a slightly artificial hybrid: it is the product of the \emph{big} automorphic (or Galois) eigenvariety for $P = G = \GL_2$ with the \emph{small} automorphic eigenvariety for the Klingen parabolic of $\operatorname{GSp}_4$. Of course, we expect that the ``correct'' parameter space for this construction is the product of the big eigenvarieties for the two groups, which would have dimension 7 (or 6 if we factor out a redundant twist, which corresponds to working with the group $\operatorname{GSp}_4 \times_{\GL_1} \GL_2$). However, we do not know how to construct $p$-adic $L$-functions on this eigenvariety at present. \end{remark} \begin{remark} The small $P$-nearly-ordinary eigenvariety is finite over the ``weight space'' parametrising characters of $(M / M^{\mathrm{der}})(\ZZ_p)$.
Moreover, in Shimura-variety settings it is flat over this space (up to a minor grain of salt if $Z_G$ has infinite arithmetic subgroups). It is natural to ask if there is an analogous, purely locally defined ``big $P$-weight space'' over which the big eigenvariety $\mathfrak{E}_P$ is finite; the results of \cite{ding19} suggest that a candidate could be a universal deformation space for $p$-adic Banach representations of $M(\mathbf{Q}_p)$ on the automorphic side, or $M^\vee$-valued representations of $\Gamma_{\mathbf{Q}_p}$ on the Galois side. However, these spaces will in general have much larger dimension than the eigenvariety, so there does not seem to be a natural choice of local parameter space over which $\mathfrak{E}_P$ is finite and flat. \end{remark}
\section{Introduction} \label{sec:introduction} The standard $\Lambda$ Cold Dark Matter ($\Lambda$CDM) model \citep[cf.,][]{Komatsu08} explains the formation of cosmological structure in the non-linear regime in a hierarchical way, i.e. big structures are not formed monolithically but by successive merging of small structures \citep[e.g.,][]{Davis85}. Recent cosmological simulations also support this idea of hierarchical structure formation in MOND gravity \citep{Llinares08} (but see also the analytical models of \citet{Sanders08} and \citet{Zhao08b}). The hierarchical merging scenario naturally promotes the picture that we should observe collisions of (clusters of) galaxies. Observationally there is evidence that some of these impacts actually occur with speeds that are not readily reproduced by simulations of $\Lambda$CDM structure formation, in the sense that the relative speed of the merging dark halos is rarely much higher than the internal velocity dispersion of each halo \citep{Hayashi06,Knebe08wdm, Llinares08, Angus08}. There is, for instance, the famous ``Bullet cluster'', an extremely high velocity merger between two galaxy clusters, with an inferred shock velocity of $\sim4700$~km/sec. While relative encounters with comparable velocities are rather rare in pure dark matter simulations \citep[e.g.][]{Hayashi06, Knebe08wdm}, they may nevertheless be accommodated when considering explicit hydrodynamical modelling of the phenomenon \citep{Springel07}. One unavoidable consequence of any high speed collision of mass concentrations seems to be the decoupling or offsetting of the baryonic component from the dark component. Aside from the aforementioned Bullet cluster -- whose offset has been measured to be approximately $\sim100$~kpc \citep[e.g. ][]{Clowe06} -- more examples are given in \citet{Jee05a}, \citet{Jee05b}, and \citet{Bradac08}. The latter authors actually present data for a particular cluster (i.e. MACS J0025.4-1222) with an even greater separation of $\sim200$~kpc between the peaks of the baryonic and the dark matter. This line of work culminates in a recent \textit{Letter} by \citet{Shan09}, where a sample of 38 galaxy clusters has been studied utilizing both X-ray and strong lensing observations. They show that such offsets are a common phenomenon in galaxy clusters: they found at least 13 objects with a separation greater than 50~kpc, with 3 clusters exhibiting a separation between the baryonic and the (hypothetical) dark component in excess of 200~kpc. All these papers analyse combined X-ray and lensing observations to decipher (and actually measure) the offset between baryonic and dark matter. But how certain are we that the lensing signal is caused by ``real'' dark matter particles? What if the gravitational potential is not generated by Newtonian physics yet interpreted in that way? One theory capable of producing potentials akin to dark matter particles is modified Newtonian dynamics \citep[MOND, ][]{Milgrom83}; (pedagogical) reviews of the concepts and successes can be found in, for instance, \citet{Sanders02} and \citet{Milgrom08}.
While MOND was originally proposed as an alternative to Newtonian gravity designed solely to explain galactic dynamics without the need for dark matter, the theory has gained substantial momentum during the past decade: although current cosmological observations point to the existence of vast amounts of non-baryonic dark matter in the Universe \citep[e.g.][]{Komatsu08}, not all of the features of CDM models appear to match observational data (e.g., the ``missing satellite problem'' \citep{Klypin99, Moore99} and the so-called ``cusp-core crisis'' \citep[e.g.][]{deBlok03,Swaters03}). Just as CDM, the MOND theory successfully matches observations on a wide range of scales and for different types of galaxies, including dwarfs and giants, spirals and ellipticals \citep{Famaey05,Gentile07b,Milgrom07a,Milgrom07b,Sanders07,McGaugh08,Angus08b}. However, one of MOND's major setbacks for a long time was the lack of a covariant formulation of the theory. This has been remedied by \citet{Bekenstein04} who was the first to cast MOND into a more universal form compliant with general relativity. This in turn spawned further investigations in the same direction, leaving us nowadays with various relativistic formulations of the MOND theory \citep[e.g.][]{Bekenstein04,Sanders05,Bruneton07,Zhao07,Zlosnik07,Zhao08,Skordis08,Blanchet09}; a recent review of both MOND and its relativistic offspring (in particular the \textit{TeVeS} formulation of \citet{Bekenstein04}) can be found in \citet{Milgrom08} and \citet{Skordis09}. We need to acknowledge, though, that despite the original idea of abandoning the need for dark matter, even MOND cannot do without it completely. A recent study utilizing a combination of strong and weak lensing by galaxy clusters indicates the necessity for neutrinos of mass $5$--$7$~eV \citep{Natarajan08}. And to be consistent with dark matter estimates of galaxy clusters and observations of the CMB anisotropies, \citet{Angus09a,Angus09b} argue for $11$~eV neutrinos. One theory capable of accommodating both these requirements is that of a mass-varying neutrino by \citet{Zhao08}. In summary, we are eventually left with a situation where the development of several frameworks for a relativistic formulation of MOND enabled the study of the cosmic microwave background \citep{Skordis05,Li08mond}, cosmological structure formation \citep{Halle08,Skordis08}, strong gravitational lensing of galaxies \citep{Zhao06, Chen06,Shan08}, and weak lensing of clusters \citep{Angus07,Famaey08}. The MOND theory has matured and become a credible competitor to the commonly accepted CDM model. In that regard, however, it appears important to look for even more tests that are capable of discriminating between MOND and Newtonian gravity, especially in the context of cosmology. We therefore raise the question whether the kinds of offsets alluded to above can be explained by simply interpreting the MONDian potential in a Newtonian way. This concept of ``phantom dark matter'' was introduced already in the early days of MOND by \citet{Milgrom86} and has recently been discussed by \citet{Milgrom08}, \citet{Wu08}, and \citet{Bienayme09}. It is based upon the idea of using the MONDian potential in a dark matter context: given the MONDian potential, one can use the Newtonian Poisson's equation to derive the density of matter that would be needed to generate it in the Newtonian context.
Then, subtracting the visible (baryonic) matter one obtains the ``virtual'' dark matter or, in other words, ``phantom dark matter'' distribution predicted by MOND. And in a Newtonian interpretation this phantom dark matter would be responsible for the gravitational lensing signal alluded to above. We need to acknowledge, though, that \citet{Brownstein07} already pointed out the possibility that the observed offset in the (alleged) dark matter and baryonic density peaks of the Bullet cluster system can be explained by extending the equations for gravitational lensing to modified gravity, without the need for a dominant dark matter component. Further, as MOND is a non-linear theory it is not clear whether the (baryonic) matter will be distributed ab initio in the same way as phantom dark matter. We therefore set out to answer the question whether or not these two density fields share peaks at the same locations and are distributed in comparable ways. Can an offset between the dark and baryonic matter be explained by the non-linearity of the MONDian Poisson's equation and the existence of this putative phantom matter? We need to close with a cautionary note: this work does \textit{not} deal with (collisions of) galaxies or galaxy clusters; we are solely focusing on the properties of the matter density fields and their respective peaks. The primary question we set out to answer is whether or not MOND will produce offsets between the actual (baryonic) matter component and the phantom matter field, even though this work is motivated by observations of such offsets in galaxy clusters. \section{The Non-Cosmological Framework} \label{sec:noncosmology} Before investigating phantom dark matter in a cosmological environment we start off by phrasing the question about shifts in the respective density peaks for MONDian systems in a non-cosmological context. This will provide us with a gauge of whether or not we should actually expect to find the reported offsets. \subsection{Phantom Dark Matter} The MONDian Poisson's equation embedded within an external field reads as follows \begin{equation} \label{efpoisson} -\nabla \cdot \left[ \mu \left({|\textbf{g}|\over a_0}\right) {\bf g} \right]=4\pi G\rho,\qquad {\bf g}={\bf g}_{\rm ext} - {\bf \nabla} \Phi_{\rm int} , \end{equation} where $\rho$ is the baryonic matter, ${\bf g}_{\rm ext}$ an external field, and $\Phi_{\rm int}$ the (internal) potential of the system; $\mu(x)$ is the MOND interpolating function with $\mu\rightarrow1$ for $x\gg 1$ (Newtonian limit) and $\mu\rightarrow x$ for $x\ll 1$ (deep MOND limit).\footnote{Please note that we used $\mu(x) = x (1+x^2)^{-1/2}$ as originally suggested by \cite{Milgrom83} throughout our tests.} We now take the liberty of interpreting this internal potential $\Phi_{\rm int}$ within the context of Newtonian gravity \begin{equation} \label{pdm} \nabla^2 \Phi_{\rm int}=4\pi G(\rho+\rho_{\rm ph}), \end{equation} where $(\rho+\rho_{\rm ph})$ is the total dynamical mass of the system, and $\rho_{\rm ph}$ is the so-called ``phantom dark matter'', which can be held responsible for the ``extra gravity beyond the baryonic matter''\footnote{Or in other words ``dark matter''.} in the linear Poisson's equation \Eq{pdm}. But as opposed to the dark matter theory, MOND immediately predicts the distribution of the dynamical mass as soon as the baryons $\rho$ and the gravity of the environment ${\bf g}_{\rm ext}$ are specified.
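To illustrate the bookkeeping behind \Eq{pdm}, the following sketch (our own illustration, not the Bologna solver used below) computes the phantom density for the simplest possible configuration: a spherically symmetric Plummer sphere with \emph{no} external field, where the MONDian gravity $g$ follows algebraically from the Newtonian one via $\mu(g/a_0)\,g = g_N$, with $\mu(x) = x(1+x^2)^{-1/2}$ as in the footnote above.

\begin{verbatim}
import numpy as np

G  = 4.301e-6        # gravitational constant [kpc (km/s)^2 / M_sun]
a0 = 3.7e3           # a_0 = 1.2e-8 cm/s^2 in [(km/s)^2 / kpc]
M, b = 1.0e11, 1.0   # Plummer mass [M_sun] and core radius [kpc]

r   = np.linspace(0.05, 50.0, 2000)                # radial grid [kpc]
rho = 3*M/(4*np.pi*b**3) * (1 + (r/b)**2)**-2.5    # Plummer density
Mr  = M * r**3 / (r**2 + b**2)**1.5                # enclosed mass
gN  = G * Mr / r**2                                # Newtonian gravity

# invert mu(g/a0) g = gN for mu(x) = x / sqrt(1 + x^2):
y = gN / a0
g = a0 * np.sqrt(0.5 * (y**2 + y*np.sqrt(y**2 + 4.0)))

# Newtonian reading of the MOND field, in spherical symmetry:
# 4 pi G rho_dyn = (1/r^2) d(r^2 g)/dr
rho_dyn = np.gradient(r**2 * g, r) / (4*np.pi*G*r**2)
rho_ph  = rho_dyn - rho                            # phantom dark matter
\end{verbatim}

In this symmetric case the phantom density is positive everywhere and falls off as $r^{-2}$ at large radii; the negative densities and secondary peaks discussed next only appear once an external field breaks the symmetry.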
However, due to the external field ${\bf g}_{\rm ext}$ the boundary condition of the (internal) system changes, not necessarily preserving spherical symmetry: the distribution of the dynamical mass is somewhat different from that of CDM halos \citep[see][]{Wu07, Wu08}. Further, there exist negative solutions of the phantom dark matter, and the peaks of dynamical mass can in fact be offset from the baryonic peaks! These effects are most significant at places where the external and internal fields are comparable and will be quantified more carefully in the following subsection. \subsection{The Simulations} For the non-cosmological settings studied in this Section we use the MONDian Poisson solver developed by the Bologna group \citep{Ciotti06, Nipoti07} to solve \Eq{efpoisson} and hence derive the internal potential $\Phi_{\rm int}$ of the systems under investigation. The Poisson solver is a spherical grid code, and our choice for the grid parameters is $n_r \times n_\theta \times n_\phi = 256 \times 64 \times 128$ with a radial grid spacing given by $r_i=r_0 \tan \left[(i+0.5){0.5\pi /(n_r+1)}\right]$~kpc. We further utilize \Eq{pdm} to derive the dynamical mass and the phantom matter, respectively. All our (baryonic) galaxies have a Plummer density profile \begin{equation} \rho(r)=\left({3M\over 4\pi b^3}\right)\left(1+{r^2\over b^2}\right)^{-5/2}, \end{equation} with a core radius $b$ of 1.0~kpc. \\ We investigated several scenarios (cf.\ the discussion in \Sec{sec:peak_offsets} below) but decided to present only the results for one representative configuration where a single galaxy G is embedded within a strong constant external field EF. A summary of the actual parameters of the model can be found in Table~\ref{models}. \begin{table} \makeatletter\def\@captype{table}\makeatother \caption{Parameters of our non-cosmological test model. The mass is in units of $10^{10}\,M_\odot$, positions and $r_0$ are in units of kpc, and the external gravitational acceleration is in units of $a_0$.} \begin{center} \begin{tabular}{lcccc} \hline Model & Mass & Centre & $g_{\rm ext}$ & $r_0$ \\ \hline G+EF & 10.0 & (0,0,0) & 1.0 & 2.0 \\ \hline \end{tabular}\label{models} \end{center} \end{table} \subsection{Density Peak Offsets} \label{sec:peak_offsets} \Fig{efeqden1} shows the distribution of the dynamical mass for model G+EF, i.e. a single galaxy embedded within an external field of strength $|{\bf g}_{\rm ext}|=a_0$ along the $x$-axis. We notice that where the internal and external fields are of the same order of magnitude, i.e. at $x\approx15$~kpc, there are some noticeable effects: first, we obtain negative dynamical mass in the cones perpendicular to the direction of the external field, as also mentioned by \citet{Milgrom86} and \citet{Wu08}; second, there exists an additional peak on the $x$-axis, right where the external and internal fields cancel each other! However, this additional peak is four orders of magnitude smaller than the baryonic peak. \begin{figure}{} \makeatletter\def\@captype{figure}\makeatother \resizebox{8cm}{!}{\includegraphics{f1.eps}} \caption{Isodensity contours of the dynamical mass, i.e. baryons + phantom dark matter density, in the $x$--$z$ plane for a galaxy embedded within an external field along the $x$-axis.}\label{efeqden1} \end{figure} We have also run idealised simulations with (un-)equal mass galaxies with and without external fields (not shown, though).
In summary, we have seen that in most situations the interpretation of the MONDian potential in a Newtonian sense will lead to the prediction of additional peaks in the distribution of the dynamical mass when compared to the actual (underlying) baryonic matter distribution. However, the strength of these extra peaks varies with the actual setup of the system, ranging from four orders of magnitude below the baryonic peak to as much as 1\% of it for the cases considered here. Encouraged by the observation that we actually recover offsets in our controlled experiments, we may now rightfully ask the question {\it whether these additional phantom peaks occur in realistic cosmological simulations}, i.e., whether a self-consistent cosmological simulation will provide a suitable variety of configurations so that we will in fact be able to observe (and quantify) the offset between the baryonic and phantom matter peaks. \section{The Cosmological Framework} \label{sec:cosmology} \subsection{Phantom Dark Matter} \label{sec:phantomdarkmatter} The equation for the (MONDian) gravitational potential in a cosmological setting is somewhat different to \Eq{efpoisson} and reads \begin{equation} \label{eq:PoissonMOND} \nabla\cdot\left[ \mu\left(\frac{|\nabla\Phi_{M}|}{a \gamma(a)}\right)\nabla\Phi_{M}\right]=\frac{4\pi G}{a} \left(\rho-\bar\rho\right) \ , \end{equation} \noindent where $\bar\rho$ denotes the mean (baryonic) density. We further took the liberty of encoding the MONDian acceleration scale $\gamma(a)$ as a (possible) function of the cosmic expansion factor $a$. The most naive choice would be $\gamma(a) = g_0 = 1.2\times 10^{-8}{\rm cm}/{\rm sec}^2$ whereas other theories may lead to different dependencies; for instance, in \citet{Zhao08} $\gamma(a)$ is given as $\gamma(a)=a^{1/2}g_0$. For more details and a derivation of this equation we refer the reader to \citet{Llinares08} where it has been justified and implemented into the cosmological $N$-body code \texttt{MLAPM} \citep{Knebe01}. Given the MONDian potential $\Phi_M$ we may now apply the same logic as in \Sec{sec:noncosmology} and use it with the Newtonian Poisson's equation, whose right-hand side will no longer be the (baryonic) density field $\rho$ alone but rather reads as follows \begin{equation}\label{eq:phantomdensity} \displaystyle {\nabla} \cdot \left[ \nabla\Phi_M \right] = \frac{4 \pi G}{a} \left[ (\rho+\rho_{\rm ph}) - \overline{(\rho+\rho_{\rm ph})} \right] \ . \end{equation} \noindent This is the defining equation for the phantom matter density field $\rho_{\rm ph}$ used throughout this section. \subsection{The Simulation} \label{sec:simulation} The analysis presented here is based upon a particular simulation published in \citet{Llinares08}, i.e. the OCBMond2 model. This simulation has been run in a cosmological volume with a side length of $32h^{-1}$Mpc and utilized $128^3$ particles. It employed the MONDification of the $N$-body code \texttt{MLAPM} \citep{Knebe01, Llinares08}. We chose to simulate an open universe with neither dark matter nor dark energy but characterized by $\Omega_{b}=0.04$. The simulation was started at redshift $z=50$ and used a Hubble parameter of $h=0.7$. We further need to mention that there are two values of $\sigma_8$ in a MOND simulation, one characterising the amplitude of fluctuations of the initial conditions and one measuring the strength of fluctuations at the present time. This comes about because of the faster growth of structures in MOND \citep[cf.][]{Sanders01, Knebe04b, Llinares08}, i.e.
in order to arrive at a comparable evolutionary stage to a $\Lambda$CDM model at redshift $z=0$ with $\sigma_8 \sim 0.9$ we had to lower the magnitude of the fluctuations to $\sigma_8=0.4$ during the process of generating the initial conditions. We acknowledge that this value is incompatible with CMB constraints, at least in the dark matter explanation for cosmic structure formation. MOND, however, is a highly non-linear theory and the simulation presented and used here should be considered a first toy model for trying to understand structure formation using modified gravity. For more details and a more elaborate study of the simulation we refer the reader again to \citet{Llinares08}. In order to perform the analysis presented here, we further modified our potential solver: using the MONDian potential $\Phi_M$ obtained by solving \Eq{eq:PoissonMOND} for the final output at redshift $z=0$, together with our knowledge of the (baryonic) matter density $\rho$ and \Eq{eq:phantomdensity}, we obtain the resulting phantom density $\rho_{\rm ph}$. \subsection{Locating Density Peaks} \label{sec:peaks} \begin{figure} \begin{center} \begin{minipage}{0.49\textwidth} \epsfig{file=f2.eps, width=0.9\textwidth, angle=0} \end{minipage} \end{center} \caption{Sketch illustrating the definition of the (baryonic) matter density centres $x^b_i$ on various refinement patches and the corresponding phantom matter peaks $x^{\rm ph}_i$. Note that the boundary of each patch is an isodensity contour $\rho^{iso}_i$ in the (baryonic) matter distribution. Due to the nature of the mass assignment of the (baryonic) particles onto each refinement patch we are left with a density field smoothed on approximately the scale of the respective grid spacing $\epsilon_i$.\label{fig:PhDM}} \end{figure} Given both the (baryonic) matter density $\rho$ and the phantom matter density $\rho_{\rm ph}$ we determine peaks in both fields. To this end we smooth the fields on various scales and study spatial offsets in corresponding peaks in relation to the smoothing scale. This is accomplished by exploiting the adaptive mesh nature of the simulation code \texttt{MLAPM}\ used to generate the simulation in the first place \citep{Knebe01, Llinares08}: the (baryonic) matter is represented by discrete particles whose mass is assigned to a regular grid covering the whole computational volume. This grid is then recursively refined in regions of high (baryonic) density according to a pre-selected refinement criterion of 4, 8, or 16 particles per cell. This leaves us with a hierarchy of (nested) refinement patches where the boundary of each such patch defines a unique (baryonic) isodensity contour. The situation is illustrated in \Fig{fig:PhDM} where we show an example of two nested refinements embedded within a regular domain grid covering the whole computational volume of side length $B$. For each isolated patch we calculate $x^b_i$ and $x^{\rm ph}_i$ by first finding the position of the cell containing the maximum in $\rho$ ($\rho_{\rm ph}$); we then use the density (phantom density) weighted average of the 27 neighbouring cells to define $x^b_i$ ($x^{\rm ph}_i$), as sketched at the end of this subsection. The relevant quantity to be studied below is the difference between the two peaks in the density field \begin{equation} \label{eq:offset} D = | x^b_i - x^{\rm ph}_i | \ , \end{equation} \noindent which obviously is a function of both the smoothing scale $\epsilon_i$ and the isodensity contour $\rho^{\rm iso}_i$.
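A minimal sketch of this peak-location step (our own illustration; the actual analysis operates on the native \texttt{MLAPM} patch structure) for a single patch stored as a 3D \texttt{numpy} array reads:

\begin{verbatim}
import numpy as np

def peak_position(field, spacing):
    """Density-weighted peak position on one refinement patch.

    field   : 3D array of (baryonic or phantom) density on the patch
    spacing : grid spacing of the patch, i.e. the smoothing scale eps_i

    Assumes the maximum does not sit on the patch boundary."""
    i, j, k = np.unravel_index(np.argmax(field), field.shape)
    w   = field[i-1:i+2, j-1:j+2, k-1:k+2]   # the 27 neighbouring cells
    idx = np.mgrid[i-1:i+2, j-1:j+2, k-1:k+2]
    return (w * idx).sum(axis=(1, 2, 3)) / w.sum() * spacing

# offset between baryonic and phantom peaks on the same patch:
# D = np.linalg.norm(peak_position(rho, eps) - peak_position(rho_ph, eps))
\end{verbatim}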
\subsection{Density Peak Offsets} \label{sec:offsets} \begin{table} \begin{center} \caption{Smoothing scales in {\ifmmode{h^{-1}{\rm kpc}}\else{$h^{-1}$kpc}\fi}.} \begin{tabular}{rr} \hline $L$ & $\epsilon_{\rm L}$ [{\ifmmode{h^{-1}{\rm kpc}}\else{$h^{-1}$kpc}\fi}] \\ \hline 16384 & 1.95\\ 8192 & 3.91\\ 4096 & 7.81\\ 2048 & 15.63\\ 1024 & 31.25\\ 512 & 62.50\\ \end{tabular} \label{tab:smoothingscales} \end{center} \end{table} \begin{figure} \begin{center} \begin{minipage}{0.47\textwidth} \epsfig{file=f3.eps, width=0.9\textwidth, angle=0} \end{minipage} \end{center} \caption{The cumulative number distribution $N(<D)$ of the offset $D$ between (baryonic) matter density and phantom density. The offset has been normalized to the respective smoothing scale of the refinement patch it is based upon. \label{fig:NdistNorm}} \end{figure} We are primarily interested in the question whether or not there are any (substantial) offsets between the peaks of the (baryonic) matter density field and the corresponding phantom field defined via \Eq{eq:phantomdensity}. For the time being we therefore ignore any relation this offset has with the refinement level it is based upon (cf. \Eq{eq:offset}). In \Fig{fig:NdistNorm} we simply plot the cumulative distance distribution $N(<D)$ normalized to the total number of refinement patches on all levels; we further chose to normalize the distance to the respective smoothing scale $\epsilon_i$ as we consider distances smaller than this scale to be below the resolution limit and hence not credible. The physical values of $\epsilon_i$ for the grid levels used in our calculations are summarized in \Tab{tab:smoothingscales}. While most of the offsets between (baryonic) matter and phantom matter are in fact smaller than the resolution limit, of order 1\% of the instances nevertheless show larger (and hence physical) differences! We would like to caution the reader that the same matter peak enters multiple times (at most six times) into \Fig{fig:NdistNorm} (and all subsequent plots below). This is due to the fact that we smooth the same peak using the various smoothing scales $\epsilon_i$ listed in \Tab{tab:smoothingscales}. However, as we are not interested in the change in offset for a given peak when altering the smoothing scale we can treat them independently. \begin{figure} \begin{center} \begin{minipage}{0.47\textwidth} \epsfig{file=f4.eps, width=0.9\textwidth, angle=0} \end{minipage} \end{center} \caption{The relation between the (baryonic) isodensity level of the refinement patch and the (normalized) offset between matter and phantom density on it. \label{fig:DistOvdensNorm}} \end{figure} Even though we just found that there is a small yet measurable probability of finding an offset between baryonic and phantom density, it still remains unclear how this can be interpreted in terms of astrophysical objects. Observationally the edge of an object is primarily defined by a given threshold in (over-)density. This, however, is a natural by-product of our method for calculating $D$ (cf. \Sec{sec:peaks}). We therefore plot in \Fig{fig:DistOvdensNorm} the dependence of the (normalized) distance $D$ on the overdensity of the corresponding refinement patch. Recall that the usual overdensity limit for virialized objects in a $\Lambda$CDM\ cosmology at redshift $z=0$ is $\approx340$ and approximately coincides with our coarsest refinement level.
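The normalised cumulative distribution of \Fig{fig:NdistNorm} is straightforward to reproduce from the raw measurements; a sketch (our own, with hypothetical input arrays \texttt{D} of offsets and \texttt{eps} of the matching smoothing scales):

\begin{verbatim}
import numpy as np

def cumulative_offset_distribution(D, eps):
    """Cumulative fraction N(<D/eps)/N_total of the peak offsets D,
    each normalised by the smoothing scale eps of its patch."""
    x = np.sort(np.asarray(D) / np.asarray(eps))
    return x, np.arange(1, x.size + 1) / x.size
\end{verbatim}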
As the only credible differences in the positions of (baryonic) matter and phantom matter peaks are apparent on the lower isodensity levels (i.e. the physically larger refinement patches), one may raise the question whether we calculated the offset for corresponding peaks. A large refinement patch will certainly host several peaks both in baryonic and phantom matter, so how can we be sure to take the difference between matching peaks? Maybe the maximum baryonic density is not at the same position as the maximum phantom density (cf. \Sec{sec:peaks})? This concern is readily eliminated as the maximum offset observed is no larger than two times the actual smoothing scale, i.e. the peaks lie in two neighbouring grid cells. \begin{figure} \begin{center} \begin{minipage}{0.47\textwidth} \epsfig{file=f5.eps, width=0.9\textwidth, angle=0} \end{minipage} \end{center} \caption{Cumulative distribution of isolated patches with an offset $D>\epsilon_L$ (normalized to the total number of patches). Only patches on grids $L\leq2048$ fulfill this criterion and the contributions of these grids to the distribution are marked by the respective $L$ values given in the plot. \label{fig:Ndist-kpc}} \end{figure} So far, we always normalized the offset $D$ to the respective smoothing scale $\epsilon_i$. However, to gain a better and more quantitative feeling for the relevance of our results it appears obligatory to also consider the distance in physical units {\ifmmode{h^{-1}{\rm kpc}}\else{$h^{-1}$kpc}\fi}. To this end we plot in \Fig{fig:Ndist-kpc} the cumulative distribution of all offsets $D>\epsilon_L$ larger than the respective smoothing scale in physical {\ifmmode{h^{-1}{\rm kpc}}\else{$h^{-1}$kpc}\fi}.\footnote{Note that this figure represents a zoom of \Fig{fig:NdistNorm} into the region right of the vertical line and in physical units on the $x$-axis now.} Note that only the three coarsest grids (i.e. $512^3$, $1024^3$, and $2048^3$) lead to offsets that are larger than the smoothing scale; for all finer grids the distance between the (baryonic) matter and the phantom matter peaks is below the credibility level given by the smoothing scale. We further observe that the reasonable offsets lie in the range between 15--80{\ifmmode{h^{-1}{\rm kpc}}\else{$h^{-1}$kpc}\fi}. We need to acknowledge, though, that the absolute fraction of isolated patches fulfilling this credibility criterion is below 0.6\%. Our simulation therefore has difficulties accommodating offsets as large as the ones observed. \section{Summary and Conclusions} \label{sec:summary} Driven by the observation of offsets between the baryonic and gravitational matter distributions in collisions of galaxy clusters \citep[e.g. ][]{Shan09} we explore such phenomena in the context of phantom dark matter. This is an interpretation of the MONDian potential (generated purely by baryons) in a Newtonian context, i.e. the MOND potential is used with the standard (Newtonian) Poisson's equation and the resulting right-hand-side source term is understood as a combination of the baryonic matter and some phantom dark matter. An initial study of (interacting) galaxies in isolation as well as galaxies embedded within external fields indicated that we should expect to find additional peaks in the distribution of the dynamical mass as opposed to the baryonic mass distribution. However, the strength of these extra peaks varied and the contrast may in some cases be even too low to be observed.
Utilizing a MONDification of the $N$-body code \texttt{MLAPM} \citep{Knebe01}, we put ourselves in a position to calculate both the baryonic and phantom density distributions in a fully self-consistent MONDian cosmological simulation on adaptive refinement patches. We then quantified differences in the peaks of both these fields, concluding that the (theoretically predicted) offsets are too small to be consistent with the observed offsets, at least in the presented incarnation of phantom matter and our MONDian cosmological simulation. One possible drawback of the applied method is the fact that the isodensity levels that define isolated refinement patches are based upon $\rho_b$ only. However, we compensated for this quibble by adjusting the refinement criterion and subsequently modifying the size of the isolated patches; we could not, though, detect any systematics. We conclude that our results give support to the idea that neutrino-like non-collisional matter might be responsible for the observed offsets of lensing and X-ray peaks. There are in fact indications by several authors that non-classical neutrinos are required to explain phenomena such as cluster lensing \citep{Natarajan08} or CMB anisotropies \citep{Angus09b} within the context of (relativistic) MOND. One theory capable of accommodating both these requirements is that of a mass-varying neutrino by \citet{Zhao08}, to be studied in more detail in future work. \acknowledgments AK is supported by the MICINN through the Ramon y Cajal programme. CL and AK further acknowledge funding by the DFG under grant KN 755/2. This work was also carried out under the HPC-EUROPA++ project (project number: 211437), with the support of the European Community - Research Infrastructure Action of the FP7 ``Coordination and support action'' Programme. XW acknowledges the support of the SUPA studentship. \bibliographystyle{apj}
\section{Introduction} Stellar shells observed in some elliptical galaxies are thought to be by-products of galaxy mergers, predominantly of those involving a giant elliptical with a~much smaller galaxy, e.g.,~a spiral or a dwarf elliptical. The most regular shell systems, Type\,I shell galaxies, are believed to result from a nearly radial merger. Stars of the secondary galaxy oscillate in the potential of the primary and accumulate near the turning points of their orbits. This can be observed as shell-like enhancements of surface brightness if viewed along a line of sight nearly perpendicular to the merger axis. While the mechanism of shell formation was explained nearly three decades ago \citep{quinn84,dupraz86,hernquist88}, recent discoveries -- e.g.,~a regular shell system in a quasar host galaxy \citep{canalizo07,bennert08}, shells found in M31 \citep{fardal07,fardal08} \mbox{--} bring fresh momentum to this field. On top of the new data, the shells attract interest due to the (so far theore\-tical) possibility of using them to probe the dark matter distribution of the host galaxy. While \citet{dupraz87} showed that using shell spa\-cing from photometry to constrain the matter distribution is hopeless due to the effects of dynamical friction, \citet[][hereafter MK98]{merrifield98} proposed a way to use spectroscopy to reach the same goal via studying the profiles of stellar absorption lines. Here, we extend their analysis beyond monoenergetic shells and show that line profiles from more realistic shells are more complex. \section{Monoenergetic Shells: Double-peaked LOSVDs} MK98 studied the kinematics of a monoenergetic shell -- a~sphe\-ri\-cal system of stars oscillating on radial orbits of the same amplitude in a spherical potential. The amplitude of the oscillations corresponds to the shell-edge radius $R_{\mathrm{shell}}$. They derived an analytic approximation for the line-of-sight ve\-lo\-ci\-ty distribution (LOSVD) in the vicinity of the shell-edge, predicting a~double-peaked profile (Fig.\,\ref{mono}a). The separation of the peaks is related to the gravitational potential of the primary galaxy. For a general gravitational potential and a general projected radius, the LOSVD has no analytical form. We computed the LOSVD numerically as a~ge\-ne\-ra\-li\-za\-tion of the MK98 approach for various gravitational potentials (Plummer, isochrone, de~Vaucouleurs). An example for the Plummer sphere, and two different projected radii, is presented in Fig.\,\ref{mono}a. \begin{figure} \plottwo{jilkovalf1.eps}{jilkovalf2.eps} \caption{ (a) LOSVDs for a monoenergetic shell ($R_{\mathrm{shell}}$\,$=$\,20\,kpc) in the Plummer potential (mass of $3.2\cdot10^{11}$\,M$_{\odot}$, scaling length of 5\,kpc) at projected radii of 0.9\,$R_{\mathrm{shell}}$ and 0.8\,$R_{\mathrm{shell}}$ (red and blue solid lines). The dashed lines show the MK98 approximation. (b)~$v_{\mathrm{los,\,max}}$ for the monoenergetic, i.e.,~stationary, shell (red line), and the uniformly expanding shell (blue lines) in the Plummer potential as in Fig.\,\ref{mono}a. At the given instant, $R_{\mathrm{shell}}$ is the same for both cases. The green line shows the MK98 approximation. }\label{mono} \end{figure} \section{Traveling Shells: Splitting of LOSVD Peaks} Real shells are not stationary features: the stars of the infalling galaxy have a con\-ti\-nuous energy distribution, and therefore the shell edge is successively formed by stars of different energies, which appears as the shell edge traveling outwards from the primary-galaxy center.
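As a schematic illustration of the kinematics described above (not the numerical code used for our figures), the maximum line-of-sight velocity of a monoenergetic shell in a Plummer potential can be estimated with a few lines of Python, assuming the parameters quoted in Fig.\,\ref{mono}a:

\begin{verbatim}
import numpy as np

G = 4.301e-6                    # kpc (km/s)^2 / M_sun
M, b = 3.2e11, 5.0              # Plummer mass and scale length
R_shell = 20.0                  # shell-edge radius [kpc]

def phi(r):
    """Plummer potential."""
    return -G * M / np.sqrt(r**2 + b**2)

def vlos_max(R):
    """Maximum line-of-sight velocity at projected radius R for
    stars on radial orbits of amplitude R_shell (energy conservation
    gives the radial speed, projection the line-of-sight component)."""
    r = np.linspace(R, R_shell, 2000)[1:-1]
    v_r = np.sqrt(2.0 * (phi(R_shell) - phi(r)))
    return np.max(v_r * np.sqrt(1.0 - (R / r)**2))

for f in (0.8, 0.9):
    print(f"v_los,max at {f} R_shell: {vlos_max(f * R_shell):.1f} km/s")
\end{verbatim}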
We studied numerically the line-of-sight velocity $v_{\mathrm{los}}$ of particles in a uniformly expanding spherical shell. The LOSVD contains signatures of stars returning from a radius where the shell-edge was at some past time, and of those traveling to a~position which the shell-edge will reach at a future time. This leads to a splitting of both $v_{\mathrm{los}}$ maxima ($v_{\mathrm{los,\,max}}$) at a given projected radius (Fig.\,\ref{mono}b). The stars traveling to their apocenters have higher energies and higher $v_{\mathrm{los,\,max}}$ than the falling stars. \section{LOSVDs from N-body Simulations} \begin{figure}[t] \plotone{jilkovalf3.eps} \caption{Position-velocity maps of particles originally belonging to the secondary galaxy, at two different times. The surface density of particles from the vicinity of the merger axis per $v_{\mathrm{los}}$ is mapped. The ``wedges'' correspond to stars traveling to the shell-edge future position and returning from the past one -- notice the splitting similar to Fig.\,\ref{mono}b. Panels on the right represent the LOSVDs (cuts parallel to the velocity axis). The green line corresponds to the outermost (oldest) shell at both times, the blue line to the cut of a younger shell. All cuts are made at the same relative radius 0.8\,$R_{\mathrm{shell}}$.} \label{mapvel} \end{figure} To study the LOSVDs in more detail, we carried out a restricted N-body simulation of shells resulting from a radial merger of a giant elliptical galaxy with a dwarf elliptical (Figs.\,\ref{mapvel} and \ref{profiles}). The primary was represented by a~two-component potential: stars and the dark matter halo. For simplicity the Plummer profile was assumed for both components (with masses of $2\cdot10^{11}$\,M$_{\odot}$ and $1.2\cdot10^{13}$\,M$_{\odot}$, and scaling lengths of 5\,kpc and 100\,kpc for the stars and the dark matter halo, respectively). The dwarf elliptical was simulated as a single Plummer sphere (mass of $2\cdot10^{10}$\,M$_{\odot}$, scaling length of 2\,kpc). The right panels in Fig.\,\ref{mapvel} show the LOSVDs for different times of the simulation. For the outermost shell, we can see a narrowing of the line profile with time, i.e.,~with increasing shell-edge radius, due to the spatial change of the primary's gra\-vi\-ta\-tio\-nal potential (see MK98). The bottom right panel in Fig.\,\ref{mapvel} also shows the inner shell profile, which is more complicated, as it also contains signatures of particles belonging to the outer shell. In Fig.\,\ref{profiles}a, the LOSVD from the top panel of Fig.\,\ref{mapvel} is decomposed according to the sense of the particles' motion. \begin{figure}[t] \plottwo{jilkovalf4.eps}{jilkovalf5.eps} \caption{(a) Decomposition of the LOSVD (green) into the con\-tri\-bu\-tions produced by stars moving radially outward (red) and inward (blue) with respect to the primary-galaxy center. The same LOSVD of the outermost shell as in the top panel of Fig.\,\ref{mapvel} was used. (b)~A~prediction of observed line profiles: the green line shows the simulated LOSVDs (same as in Fig.\,\ref{profiles}a), brown and pink lines show convolutions with different Gaussians representing instrumental dispersions of FWHM 30 and 100\,km$/$s.} \label{profiles} \end{figure} \section{Conclusions} Theoretical studies of line profiles are needed and timely, since obtaining high-S$/$N and high spectral resolution spectra of the faint external parts of ellipticals is coming within the reach of current large telescopes.
We predict the shape of the spectral lines for Type\,I shell galaxies: a quadruple-peaked profile. The con\-nec\-tion of this shape with the shell galaxy's gravitational potential is not as straightforward as previously predicted. We also show that a relatively high spectral resolution is necessary for observing the line profiles (Fig.\,\ref{profiles}b). To make our study still more realistic, better models for the galaxy potentials and dynamical friction need to be applied (see \citeauthor{ebrova09}, these proceedings). \acknowledgements This project is supported by the Institutional Research Plan No.~AV0Z10030501 of the Academy of Sciences of the Czech Republic, by the Doctoral Grant No.~205/08/H005 of the Czech Science Foundation and by the grant LC06014 (Center for Theoretical Astrophysics) of the Czech Ministry of Education.
\section{Introduction} It is known that there are two types of $\gamma-$ray emitting active galactic nuclei (AGN): blazars and radiogalaxies. Their spectral energy distribution (SED) typically has two broad peaks: one, at low frequencies, is due to the radiation emitted by synchrotron processes; the second, at high frequencies, is thought to be due to inverse-Compton scattering (IC) of high-energy electrons off ambient seed photons. The underlying physical mechanism generating such a type of SED is supposed to be the same: a relativistic jet observed with different viewing angles, very small in the case of blazars and larger for radiogalaxies (see Fossati et al. 1998 and Donato et al. 2001 for blazars; see Ghisellini et al. 2005, Tavecchio \& Ghisellini 2008 for radiogalaxies). The SEDs of blazars seem to have different shapes depending on the emitted power and are organized in the so-called ``blazar sequence'' (Fossati et al. 1998). In particular, high-luminosity blazars have the peaks at low frequencies (``red'' blazars), while as the luminosities of the objects decrease, the peaks shift to higher frequencies, so that the lowest luminosity blazars are detected even at TeV energies (``blue'' blazars). The blazar sequence can be interpreted in terms of changes of the seed photons for the IC processes (Ghisellini et al. 1998). Blue blazars have no or weak emission lines (equivalent width $EW < 5$~\AA) and the IC seed photons are those from the synchrotron radiation (e.g. Ghisellini et al. 1985, Band \& Grindlay 1985). Instead, red blazars have strong ($EW > 5$~\AA) emission lines and the seed photons are from the broad-line region (BLR) or accretion disk or even from the molecular torus (e.g. Dermer et al. 1992, Sikora et al. 1994, B{\l}a\.zejowski et al. 2000). It is worth noting that all the permitted emission lines, whether weak or strong, are broad, i.e. with $FWHM > 2000$~km/s (see, e.g., Wills \& Browne 1986, Wang et al. 2009). This was, roughly speaking, the scenario before the launch of \emph{Fermi}. \section{The discovery of gamma-rays from PMN~J0948+0022} The surprise came with the detection by the Large Area Telescope (LAT, Atwood et al. 2009), onboard the \emph{Fermi} satellite, of a bright $\gamma-$ray source associated with the quasar PMN J0948+0022 (Abdo et al. 2009a,b,c). This quasar is known to be a radio-loud narrow-line Seyfert 1, with narrow permitted lines, an FeII bump, and a flux ratio between [OIII] and $H\beta$ smaller than 3 (Zhou et al. 2003, Komossa et al. 2006, Yuan et al. 2008). In particular, the FWHM of $H\beta$ is about $1500$~km/s (Zhou et al. 2003, Yuan et al. 2008), the narrowest permitted line ever detected in a $\gamma-$ray emitting AGN. Two observations at different epochs (28~February and 27~March~2000) were available from the \emph{Sloan Digital Sky Survey} (SDSS\footnote{\texttt{http://www.sdss.org/}}), indicating a change in the intensity (EW) of the $H\beta$ line from $16$~\AA \, to $21$~\AA \, within about one month. This ``surprise'' was somewhat expected, since radio observations of PMN J0948+0022 had shown a compact source, with a flat spectrum and a high brightness temperature, suggesting the presence of a relativistic jet (Doi et al. 2006, Yuan et al. 2008).
The $\gamma-$ray detection by \emph{Fermi} confirmed this suggestion and allowed us to build a complete SED: it was found that PMN J0948+0022 has the characteristics of flat-spectrum radio quasars (FSRQ), but with low power, a relatively small mass ($1.5\times 10^{8}M_{\odot}$) and high accretion ($40$\% of the Eddington value). More details can be found in Abdo et al. (2009c). We take the opportunity of this work to discuss the classification in more detail. Indeed, this would be the first source of a new population of $\gamma-$ray emitting AGN and, therefore, it is necessary to carefully address all the known possible issues and doubts. \begin{figure}[!t] \centering \includegraphics[scale=0.5,clip,trim = 0 50 0 40]{foschinil_f1.ps} \caption{Comparison of the SED of PMN J0948+0022 (from Abdo et al. 2009c) with the spectral sequence of blazars and with the SEDs of some well-known powerful radiogalaxies. Adapted from Ghisellini et al. (2005).} \label{fig:SED} \end{figure} \begin{figure}[!h] \centering \includegraphics[angle=270,scale=0.36]{foschinil_f2.ps} \caption{X-ray image ($0.2-10$~keV) of PMN J0948+0022 obtained by integrating $12$ Swift/XRT observations performed between $2008$ and $2009$, for a total exposure of $54.4$~ks. Radio observations at $1.4$~GHz from the NVSS (dashed yellow lines) and FIRST (continuous white lines) are superimposed. The epoch of the coordinates is J2000. The color bar indicates X-ray counts.} \label{fig:XRTFIRST} \end{figure} \section{Differences and similarities with blazars and radiogalaxies} Fig.~\ref{fig:SED} shows the SED of PMN J0948+0022 compared with the blazar sequence (continuous lines of different colors) and a few of the most powerful radiogalaxies (Cen~A, M~87, NGC~6251). It is immediately evident that PMN J0948+0022 is in the region of blazars, with the observed emitted power well above the radiogalaxy region. This is observational evidence, which does not call for any further explanation. The study of the morphology of the source at radio frequencies shows a very compact source from $1.7$ to $15.4$~GHz (see Doi et al. 2006), except for the data at $1.4$~GHz from the NVSS (Fig.~\ref{fig:XRTFIRST}). In this case, there seems to be an extended structure, but it is likely to be an artifact of the low angular resolution of the NVSS ($FWHM=45''$, Condon et al. 1998). Indeed, images at higher resolution ($FWHM=5''$, Becker et al. 1995) from the FIRST survey ($1.4$~GHz) indicated the presence of two resolved sources, $1'.2$ apart, one of which is the core of PMN~J0948+0022 (with flux $107.5\pm 0.1$~mJy), while the second one is unknown and has a radio flux of $8.0\pm 0.1$~mJy. No optical data are available for the latter in any public catalog. \emph{Swift}/UVOT observations found no source in any filter, with these upper limits ($3\sigma$, in units of $10^{-13}$~erg~cm$^{-2}$~s$^{-1}$): $V < 3.7$ , $B < 2.5$, $U < 1.5$, $UVW1 < 0.9$, $UVM2 < 0.9$, $UVW2 < 0.6$. No X-ray source was found at this position by integrating all the available $12$ Swift/XRT observations (total exposure $54.4$~ks), with an upper limit ($3\sigma$) of $1.4\times 10^{-14}$~erg~cm$^{-2}$~s$^{-1}$ in the $0.2-10$~keV energy band. A search for detections at lower frequencies did not yield any useful data (A. Capetti, private communication). No detection was found, either for PMN J0948+0022 or for the unknown nearby source, in the Very Large Array Low-frequency Sky Survey (VLSS) at $74$~MHz (Cohen et al. 2007), with an upper limit of $300$~mJy, which is not very constraining.
It is therefore unlikely that there is any link between PMN J0948+0022 and the unknown source. We can also add that, if we considered the unknown source as extended emission of the RL-NLS1, the ``extended''/core flux ratio of 0.13 would be in the range of lobe-dominated sources (cf.\ Ghisellini et al. 1993), which would in turn imply strong X-ray emission from the unknown source (not observed) and very low Doppler factors ($\delta \approx 1$), at odds with the $\gamma-$ray emission and the observed variability. The most striking differences with blazars and radiogalaxies emerge in the optical spectrum. Again, since radiogalaxies have optical spectra similar to Seyferts or Low-Ionization Nuclear Emission-Line Regions (LINERs), it is tempting to try to recover this option. However, this is not the case. LINERs are easily discarded because of the high power of the source (cf.\ Fig.~\ref{fig:SED}) and the absence of other low-ionization lines, such as [OI] (for a review, see Ho 2008). With respect to the differences with Seyferts, NLS1 are indeed an AGN class separate from Seyfert 1 and 2 (for a review, see Pogge 2000). As is known, it is possible to observe broad (permitted) and narrow (permitted and forbidden) emission lines in Seyfert 1, while only narrow lines can be observed in Seyfert 2, since a molecular torus on the line of sight hampers the viewing of the broad-line region (BLR) and allows only the narrow-line region (NLR) to be seen. In the case of NLS1, we are observing lines from the BLR, which are narrower than usual, but which indeed come from the BLR, not from the NLR (see also Rodr\'iguez-Ardila et al. 2000). There is no obscuration hampering the viewing of the BLR, as in Seyfert 2; otherwise this would result in [OIII]/H$\beta > 3$ (as in Seyfert 2) and in the absence of the FeII bump, which is observed in Seyfert 1, but not in Seyfert 2. The narrowness of the permitted lines is an indicator of physical conditions that are really different from those in other Seyferts. Decarli et al. (2008) suggested that the observed spectrum of NLS1 is due to the fact that these sources have a disk-like BLR and we are observing it pole-on. Therefore, no component of the circular motion in the disk is directed toward the observer that could cause Doppler broadening. On the other hand, Marconi et al. (2008) explain the narrowness of the BLR lines as due to the radiation pressure of an accretion disk close to the Eddington limit, which pushes the BLR farther from the central spacetime singularity. \section{The morphology of the host galaxy} Due to the high redshift of PMN J0948+0022 ($z=0.585$), no direct information on the morphology of its host galaxy is available. However, since Seyferts are generally hosted by spiral galaxies and radio-loud AGN by ellipticals, the possibility of finding a relativistic jet in a spiral galaxy is surely intriguing. There are some studies on the NLS1 morphology suggesting a bulge-dominated structure (Zhou et al. 2006), even with some starburst contribution (Ant\'on et al. 2008; Sani et al. 2009). Work is in progress to study the specific case of PMN J0948+0022. \section{Conclusions} We can conclude that PMN J0948+0022 is a $\gamma-$ray emitting AGN really different from blazars and radiogalaxies and, therefore, \emph{it represents the first detection of a likely emerging new population of $\gamma-$ray AGN}. It is not yet clear what the impact of these newly discovered sources is on the unified scheme of AGN and, specifically, on that of radio-loud sources.
For example, such a type of source was not predicted by the well-known unified scheme of radio-loud AGN proposed by Urry \& Padovani (1995). Another example is in Fig.~7 of Boroson (2002), where this population would hardly find its place: it should be in the right part of the figure, in the radio-loud region, perhaps at the border with radio-quiet objects and close to the BAL QSO (Broad-Absorption Line Quasi-Stellar Objects) region; but if a spiral host galaxy is confirmed for RL-NLS1, this would be at odds with the whole diagram. So, we are not yet able to give an answer to these questions and issues. A multiwavelength campaign on PMN J0948+0022 was performed from 26 March to 5 July 2009 and will surely give some additional hints for understanding the nature of RL-NLS1. More information will likely come from the increasing number of sources of this type that will be detected at $\gamma-$rays by \emph{Fermi}/LAT. \acknowledgements The \emph{Fermi} LAT Collaboration acknowledges support from a number of agencies and institutes for both development and the operation of the LAT as well as scientific data analysis. These include NASA and DOE in the United States, CEA/Irfu and IN2P3/CNRS in France, ASI and INFN in Italy, MEXT, KEK, and JAXA in Japan, and the K.~A.~Wallenberg Foundation, the Swedish Research Council and the National Space Board in Sweden. Additional support from INAF in Italy for science analysis during the operations phase is also gratefully acknowledged. This research has made use of data obtained from HEASARC, provided by NASA/GSFC, and of the NASA/IPAC Extragalactic Database (NED) which is operated by the JPL, Caltech, under contract with the NASA. EA acknowledges the use of the $100$-m telescope of the MPIfR (Max-Planck-Institut f\"ur Radioastronomie) at Effelsberg.
\section{Introduction} Solar flares and coronal mass ejections (CMEs) are explosive processes that are able to generate large-scale wave-like disturbances in the solar atmosphere \citep[e.g.][]{warmuth07}. Signatures of such disturbances were first imaged in the hydrogen H$\alpha$ spectral line and called Moreton waves after \citet[][see also Moreton \& Ramsey, 1960]{moreton_orig60}. Typically, Moreton waves appear as propagating dark and bright fronts in H$\alpha$ filtergrams and dopplergrams, respectively, which can be attributed to a compression and relaxation of the chromospheric plasma. The disturbance propagates with a speed of the order of 1000~km~s$^{-1}$ \citep[e.g.][]{moreton60,zhang01,warmuth04a,veronig06}, which led to the conclusion that such a phenomenon cannot be of chromospheric origin, but is the surface track of a coronal disturbance compressing the underlying chromosphere \citep[sweeping-skirt hypothesis; see][]{uchida68}. Moreton waves are generally observed to be closely associated with the flare impulsive phase \citep{warmuth04a}, which often coincides also with the acceleration phase of the associated CME \citep[cf.][]{zhang01,vrsnak04b,maricic07,temmer08}. Moreton waves are observed to propagate perpendicular to the magnetic field, and the initial magnetosonic Mach numbers are estimated to lie in the range of \mbox{M$_{\rm ms}\sim$1.4--4}, suggesting that they are at least initially shocked fast-mode waves \citep{narukage02,narukage04,warmuth04b}. In their late propagation phase the wave perturbations undergo a broadening, weakening, and deceleration until \mbox{M$_{\rm ms}\sim$1} is reached. These results indicate that Moreton waves are a consequence of shocks formed from large-amplitude waves that decay to ordinary fast magnetosonic waves, which is in line with the flare-initiated ``blast wave'' scenario \citep[e.g.,][]{warmuth01,khan02,narukage02,vrsnak02a,hudson03,narukage04}. Further evidence for the close association with shocks is the quasi-simultaneous appearance of Moreton waves and radio type II bursts, which are one of the best indicators of coronal shocks \citep[e.g.,][]{khan02,pohjolainen01,pohjolainen08,warmuth04b,vrsnak05b,vrsnak08}. Wave-like disturbances were imaged directly in the corona for the first time by the EIT instrument aboard the Solar and Heliospheric Observatory (SoHO), and thereafter called EIT waves \citep[][]{moses97,thompson98}. They were considered to be the coronal manifestation of the Moreton wave \citep{thompson99}, but statistical studies revealed discrepancies in their velocities. EIT waves were found to be two to three times slower than Moreton waves \citep{klassen00}. Today, their relation to Moreton waves and the generation mechanism of EIT waves are very much debated \citep[e.g.,][]{delannee99,wills-davey99,chen00,biesecker02,cliver04,warmuth04b,cliver05,vrsnak05b,chen06,attrill07,veronig08}. In the present paper, we solely focus on Moreton waves, which are generally accepted to be a chromospheric response to coronal shock waves. In particular, we study their generation mechanism and address the issue of whether they are flare-ignited or CME-driven, or a combination of both, which is still a matter of debate. To this aim, we developed a simple analytical model which describes the launch and propagation of Moreton waves. (Note that the presented model does not intend to evaluate generation mechanisms which may cause EIT waves.)
We use different input parameters for the model, acting as the source that drives the wave: first, parameters derived from CME observations (assuming that the upward moving CME drives the wave), and second, synthetically generated scenarios (to emulate alternative driving mechanisms). By confronting the results derived from the model with observations we aim to find constraints on the possible drivers of the wave. For this we use the outstanding observations of the Moreton wave associated with the X3.8/3B flare-CME event from January 17, 2005. We emphasize that the event was characterized by a very distinct and fast Moreton signature, indicating that it was caused by a coronal fast-mode shock \citep[cf.][]{warmuth04b}. The observations of the Moreton wave and the associated CME and flare under study are presented in Sect.~2. The model is described in Sect.~\ref{model}. The results are given in Sect.~4. A discussion of the results, constraints on the model input parameters from observations, and final conclusions are presented in Sect.~5. \section{Observations} Associated with the January 17, 2005 3B/X3.8 flare event, a fast Moreton wave starting at $\sim$09:44~UT was observed with high time cadence ($\lesssim$1~min) in full-disk H$\alpha$ filtergrams at Kanzelh\"ohe Observatory. The wave propagated at a mean velocity of 930~km~s$^{-1}$ up to a distance of 500~Mm from its source location \citep[for more details on the wave measurements and its propagation characteristics we refer to][]{veronig06}. The flare and its associated coronal mass ejection (CME) occurred at [N15,W25]. From this region two fast CMEs were actually launched within a short time, and in our study we focus on the second event. The Large Angle and Spectrometric Coronagraph \citep[LASCO;][]{brueckner95} instrument C2 aboard the Solar and Heliospheric Observatory (SoHO) imaged the first CME at 09:30~UT and the second CME at 09:54~UT. The linear plane-of-sky speed of the first CME was $\sim$2100~km~s$^{-1}$ and that of the second CME $\sim$2500~km~s$^{-1}$, as observed with LASCO C2 and C3 \citep[LASCO catalogue;][]{yashiro04}. The study is performed over the time range 09:30--09:54~UT; hence, in the interval of interest we assume that the possible merging process with the previous event has no impact on the CME kinematics. The early CME evolution could be observed with the GOES12 Soft X-ray Imager \citep[SXI;][]{hill05}. Rising CME loops could be identified in 9~SXI frames with high time cadence \citep[$\sim$2--4~min; see][]{temmer08}. After co-aligning the GOES/SXI and H$\alpha$ observations, the distances of the CME leading edge as well as of the Moreton wave fronts were measured using as null-point the wave ``radiant point''\footnote{Note that in \cite{temmer08} the Sun-center was used as null-point for the distance measurements of the CME.}, which was derived from circular fits to the earliest observed wavefronts \citep[for details see][]{veronig06}. From running ratio SXI images the height-time profile of the erupting CME structure was measured. In Fig.~\ref{cme-kin} we show the propagation of the Moreton wave together with the associated CME during its initial phase up to $\sim$1~R$_{\odot}$, and the flare hard X-ray (HXR) flux measured with the Ramaty High-Energy Solar Spectroscopic Imager \citep[RHESSI;][]{lin02} in the non-thermal energy range 30--100~keV. From the second derivative of the height-time measurements we determined the onset of the CME fast acceleration phase, i.e. the launch time of the CME, at $\sim$09:40--09:42~UT.
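As an aside, the determination of the acceleration onset from discretely sampled height-time measurements can be illustrated with a short Python sketch. The numbers below are a toy profile, not the measured SXI data; in practice the measurements would also be smoothed before differentiation:

\begin{verbatim}
import numpy as np

def velocity_acceleration(t, h):
    """First and second derivatives of height-time data via finite
    differences (np.gradient also handles uneven sampling)."""
    v = np.gradient(h, t)
    return v, np.gradient(v, t)

# Toy profile: constant height until t0 = 300 s, then a constant
# acceleration of 4.4 km s^-2 (values chosen for illustration only).
t = np.arange(0.0, 1400.0, 120.0)                      # ~2 min cadence
h = 1.0e5 + 0.5 * 4.4 * np.maximum(t - 300.0, 0.0)**2  # height [km]
v, a = velocity_acceleration(t, h)
onset = t[np.argmax(a > 0.5 * a.max())]
print(f"acceleration onset near t = {onset:.0f} s")
\end{verbatim}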
The back-extrapolated Moreton wave as well as the first HXR burst started at $\sim$9:42~UT \citep[see][]{veronig06}. The CME acceleration reached its peak of $4.4\pm0.3$~km~s$^{-2}$ at $\sim$09:46~UT and ended at $\sim$10:06~UT \citep[cf.][]{temmer08}. For the full CME kinematics up to 30~R$_{\odot}$ we refer to \cite{vrsnak07} and \cite{temmer08}. A composite dynamic radio spectrum for that day over the frequency range 600~MHz--20~kHz, combining Artemis, DAM and WAVES measurements, can be found under \url{http://secchirh.obspm.fr/select.php}. The radio signatures show a rather complex situation, most probably due to the launch of two CMEs, for which a detailed study is given by \cite{bouratzis09}. Associated with the event under study was a metric type II radio burst at 09:43--09:46~UT reported from San Vito, Italy (SVTO; spectral range 70--25~MHz) and also from Learmonth, Australia at 09:44--09:47~UT (LEAR; spectral range 65--25~MHz), as reported in the Solar Geophysical Data (SGD) under {\it Solar Radio Spectral Observations} (\url{ftp://ftp.ngdc.noaa.gov/STP/SGD/}). Both stations report shock velocities of 1500~km~s$^{-1}$ using a one-fold Newkirk model, which is consistent with an MHD shock moving through the solar corona. A group of type III bursts occurred at 09:41--09:47~UT, matching the main RHESSI peak. In Fig.~\ref{wave-kin} the distance-time and velocity-time profiles of the observed Moreton wave are shown. The velocity increases from an initial speed of 400~km~s$^{-1}$ until it reaches a maximum of 1100~km~s$^{-1}$ at $\sim$09:47~UT; afterwards the velocity decreases. This temporal behavior can be interpreted as the nonlinear evolution of the wavefront. First, the wavefront steepens until a discontinuity appears, i.e. the shock formation starts. Then follows a phase of shock amplitude growth, which is reflected in shock acceleration and intensification \citep[see Figures 4 and 5 in][]{vrsnak00a}. Finally, after the shock amplitude attains its maximum, the wave gradually decays to an ordinary fast-mode wave \citep[cf.][]{zic08}. Fig.~\ref{mdi} shows the derived Moreton wave fronts with respect to the photospheric magnetic field. The first wave appearance is clearly located outside the active region. Since the wave propagated well outside the active region, Alfv\'en speeds for the corona can be considered to lie in the range of 300--600~km~s$^{-1}$ \cite[e.g.][]{narukage02,warmuth05}. The high velocity of the wave within a low Alfv\'en speed environment as well as the associated metric type II radio burst suggest that the wave is at least initially shocked \citep[e.g.][]{gopalswamy98,klassen99}. The main criteria derived from the observations that our model results have to meet are 1) the general kinematics of the wave, 2) the velocity evolution, and 3) the timing of the shock formation. \section{The model}\label{model} We would like to emphasize that the following analytical model is kept as simple as possible and can thus only reproduce the general characteristics of the propagation of the disturbance. The model simulates the Moreton wave by applying a driver which is a circular source region that may expand and move translationally at the same time. Three types of source expansion are applied, following the terminology of \cite{vrsnak05a}: 1) The radius of the source is kept constant, i.e.\ there is no expansion of the source in time during its upward motion. Accordingly, plasma can flow behind the driver and the source acts as a blunt body driving a bow shock.
2) The source radius expands with a constant radius-to-height ratio, $r(t)/h(t)$, acting as a combined bow-shock/piston driver. 3) The source expands only in the lateral direction without upward motion and plasma cannot flow behind the contact surface, according to which the driver acts as a piston. Our first intention is to investigate whether the Moreton wave could be produced by the upward moving CME, using the height-time measurements derived from the CME observations as input for the expanding source. We consider this model input for scenarios where the source acts as a bow-shock or combined bow/piston driver for the wave (different strengths and proportions between the upward motion and the lateral expansion of the driver are applied). Our second intention is to emulate an expanding flare region or the lateral expansion of the CME flanks, for which we use synthetic expansion profiles. This kind of model input is considered for a source that acts as a piston driver for the wave. The results from the model will be compared to the kinematics of the January 17, 2005 Moreton wave to estimate what kind of source expansion best reproduces the general characteristics of the observed wave kinematics. We suppose that the source accelerates to a high velocity, which causes a large-amplitude coronal disturbance that is capable of compressing the underlying chromosphere to produce the Moreton wave. The term large-amplitude wave should emphasize that the wave evolution cannot be described through linearized equations. For more details on the terminology of large-scale waves we refer to \cite{vrsnak05a} and \cite{warmuth07}. In the case of a large-amplitude wave, the rest frame velocity $w$ of a given wavefront element (hereinafter called ``signal'') depends on two quantities. First, it depends on the local magnetosonic speed $v_{\rm ms}$, which is larger than in the unperturbed plasma due to the plasma compression, and is thus related to the perturbation amplitude. Second, it must be taken into account that a given signal propagates through a moving plasma, since the plasma flow velocity $u$ associated with the perturbation amplitude is not negligible (see Fig.~\ref{sig}a). Consequently, the rest frame velocity of the signal equals $w=v_{\rm ms}+u$ \citep[see][]{landau87}, i.e., elements of larger amplitude propagate faster. Due to the nonlinear evolution of the wave front, its profile steepens and after a certain time/distance a discontinuity forms, marking the onset of shock formation \citep[][]{landau87,mann95_s,vrsnak00a,zic08}. Generally, the dependence of $v_{\rm ms}$ on the perturbation amplitude cannot be expressed straightforwardly. However, in the case of a low plasma-to-magnetic pressure ratio $\beta$, which is assumed here, the relationship simplifies, since the Alfv\'en velocity $v_{\rm A}$ is much larger than the sound speed, and under the frozen-in condition in the case of perpendicular wave propagation, the plasma density $\rho$ is proportional to the magnetic field strength $B$, i.e.\ $v_{\rm ms}\approx v_{\rm A}\propto \sqrt{\rho}$. \cite{vrsnak00a} have shown that in such a situation the relationship between the local propagation speed and the amplitude becomes very simple: the local value of $v_{\rm A}$ can be expressed as $v_{\rm A}=v_{\rm A0}+u/2$, where $v_{\rm A0}$ is the local Alfv\'en velocity in the unperturbed plasma.
Bearing in mind that $w=v_{\rm A}+u$, one finally finds that the wave element propagates at the rest frame speed $w=v_{\rm A0}+3u/2$ \citep{vrsnak00a}. Since the phase velocity of the signal depends on its amplitude $u$ and the ambient Alfv\'en velocity $v_{\rm A0}$, the evolution of the wavefront depends on the spatial distribution of $v_{\rm A0}$ and on the evolution of the amplitude. The simplest possible situation is the propagation of the wave in a medium where $v_{\rm A0}$ is uniform. In such a case, the phase velocity changes only due to the amplitude evolution, which is governed by energy conservation. For example, in the case of a spherically symmetric source, creating a spherically symmetric wavefront (Fig.~\ref{sig}b), the amplitude is inversely proportional to the distance $d$, i.e., decreases as $d^{-1}$, whereas in cylindrical symmetry it decreases as $d^{-1/2}$ \citep{landau87}. Note that in the case of freely propagating shock waves (blasts), the amplitude decreases also because the leading edge of the perturbation (having the highest velocity) propagates faster than the low-amplitude segments in the trailing edge. This causes perturbation profile broadening, which must be compensated by an amplitude decrease \citep{landau87}.\footnote{Note that in a medium where the Alfv\'en velocity decreases steeply enough with distance, the leading edge might be slower than the trailing edge. In such a case, the wavefront slows down, whereas the amplitude increases.} Of course, in the solar corona the Alfv\'en velocity is far from uniform. Even if coronal structural inhomogeneities are neglected, it changes with height and depends on the distance from active regions \cite[e.g.,][]{warmuth05}. In such a situation, where the spatial distribution of $v_{\rm A0}$ is generally unknown, one has to investigate the wavefront kinematics by calculating the amplitude evolution for various reasonable spatial distributions of $v_{\rm A0}$. However, instead of this, we apply an analogous procedure, where we take $v_{\rm A0}$ uniform, and describe the signal amplitude and the phase-velocity evolution by different functional forms. In other words, instead of presuming a function that describes the change of $v_{\rm A0}$ with distance $d$ from the wave source, we directly presume a function that describes the wave evolution. In particular, we use the power-law function \begin{equation}\label{pl} f(d)= d^{-\alpha}\, \end{equation} and the exponential function \begin{equation}\label{expon} f(d)= {\rm e}^{-d/p}\,. \end{equation} Applying different attenuation strengths (set by the exponent $\alpha$ in the power-law function and by the decay length $p$ in the exponential) we can reproduce a weak or strong attenuation of the signal. Note that $f=1$ would represent a plane wave without decay, as achieved for $p\rightarrow\infty$ and $\alpha\rightarrow0$. On the other hand, a large $\alpha$ or a small $p$ represents a strong attenuation. Besides the power-law and exponential functions, we also employ, as a kind of reference, the functions: \begin{equation}\label{cylin_dec} f(d)= \frac{1}{\sqrt{d}}\, \end{equation} and \begin{equation}\label{spher_dec} f(d)= \frac{1}{d}\,, \end{equation} which describe the amplitude decrease of cylindrically and spherically symmetric sound waves, respectively. The initial amplitude of a given signal is determined by the velocity of the source surface $v_{\rm s}$.
At the starting time $t_{0}$ when the signal is launched, $u(t_0)=v_{\rm s}(t_0)$, since the flow velocity has to be equal to the contact-surface velocity. The geometry of the source is considered as a radially expanding surface of cylindrical (arcade expansion) or spherical (volume expansion) shape with a radius $r(t)$ centered at the height $h(t)$. Applying the Huygens-Fresnel principle, one finds that due to the presumed symmetry of the source and the presumed homogeneity of the ambient plasma, the wavefront elements are concentric with the source surface (cf. Figs.~\ref{sig}b,~\ref{f1} and~\ref{f1b}). We follow the signals which are emitted continuously from the source surface from the time $t_{0}$ until a certain time $t_{\rm c}$, at each small time step $\Delta t=t_{i}-t_{i-1}$. The distance $x$ traveled by the signal from the time $t_0$, when it was emitted, until the time $t_i$ is calculated iteratively. Using the expression \begin{equation}\label{radius} x(t_{i}) = x(t_{i-1}) + \left( v_{\rm A0} + v_{\rm s}(t_{i-1})~\frac{3}{2}~f(t_{i-1}) \right) \Delta t \, , \end{equation} we obtain the distance from the source region center, $d(t_{i})=r(t_0)+x(t_{i})$, where $r(t_{0})$ is the radius of the source surface at the time $t_0$, when the signal was emitted (Fig.~\ref{sig}b). Note that $x(t_0)=0$ and $d(t_{0})=r(t_{0})$, and that Eq.~\ref{radius} has to be integrated from $t_{0}$ to $t_{\rm c}$. Considering the mimicked Moreton wave as the extension of the outermost signal measured at the solar surface (cf.\ arrows in Figs.~\ref{f1} and~\ref{f1b}), we derive for each time step $\Delta t$ the propagation of the wave as the distance $d_{\rm M}(t)$. Hereinafter, this outermost signal that is considered to mimic the Moreton wave will be denoted as the \textit{ground track signal} (GTS). \section{Implementation and interpretation of the model}\label{implement} In the following, distance-time plots and velocity profiles are shown for the propagated GTS resulting from our model. The results are confronted with the observed Moreton wave kinematics. Due to the huge spectrum of possibilities obtained by varying and combining the different model parameters, we show here only representative model results, i.e.\ those which match the observational criteria of the Moreton wave best. A successful model has to reproduce the general characteristics of the observed Moreton wave in terms of 1) the kinematics, 2) the velocity evolution (increasing velocity until $\sim$09:47~UT followed by decreasing velocity), and 3) the shock formation around the onset of the type II burst ($\sim$09:43~UT), i.e., before or close in time to the first appearance of the Moreton wave ($\sim$09:44~UT). The wave-like disturbance that generates the Moreton wave is assumed to propagate approximately at the coronal base. Under this assumption, the value of $v_{A0}$ lies in the range of $\sim$300--600~km~s$^{-1}$ \cite[][]{warmuth05}. To ease the comparison between the model results and the observations (bearing in mind also other aspects of the CME/flare event), we use for the model the absolute time in UT. The parameter $t_{0}$ varies around $\sim$09:42~UT, which is close to the onset of the fast acceleration stage of the CME and to the flare onset in H$\alpha$ and HXRs. The parameter $t_{\rm c}$ is the time at which the Moreton wave was last observed \cite[$\sim$9:54~UT; see][]{veronig06}.
The time range $t_{0}$--$t_{\rm c}$ is subdivided into time steps of $\Delta t$=10~s, i.e.\ every 10 seconds the position of the wavefront and of the GTS is calculated. \subsection{Model results based on observed CME kinematics} In Fig.~\ref{f1} we give a snapshot of the propagated signals (circles) that were emitted during the upward motion (along the $y$-axis) of an expanding source. The kinematics of the upward moving source is taken from the CME observations, and the source expansion acts as a combined bow-shock/piston driver for the emitted signals with $r(t)/h(t)$=0.2, i.e.\ the source size is proportional to the height at each time $t$. The decay of the signal is based on a cylindrical geometry of the source (see Eq.~\ref{cylin_dec}). The first signals are emitted at $t_{0}$=9:41:52~UT, when the CME had a height of $h(t_0)$=105~Mm and an initial size of $r(t_0)$=21~Mm. The Alfv\'en speed of the surrounding unperturbed plasma is chosen as $v_{\rm A0}$=400~km~s$^{-1}$. From $t_{0}$ on, we follow the signals every 10~s, until they have reached a certain extension at $t_{\rm c}$=9:53:52~UT (Fig.~\ref{f1}). Note that signals which are launched right after $t_0$ have the longest time to evolve, and signals launched close to $t_{\rm c}$ the shortest. At 9:53:52~UT the CME has a height of $h(t_{\rm c})$=1570~Mm and a size of $r(t_{\rm c})$=314~Mm. The arrow in Fig.~\ref{f1} indicates the propagated distance $d_{\rm M}(t_{\rm c})$=881~Mm of the GTS, i.e.\ the mimicked Moreton wave at 9:53:52~UT. Fig.~\ref{cme-real} shows the calculated GTS distance versus time using the observed CME kinematics as input for the upward moving source, for two different types of source expansion. The top panel of Fig.~\ref{cme-real} is supposed to mimic a combination of a bow-shock and a piston-driven scenario; the source expanded self-similarly during its upward motion with a constant ratio of $r(t)/h(t)=0.6$. The bottom panel of Fig.~\ref{cme-real} supposes the source to act as a rigid-body driver, i.e. the radius was kept constant during the upward movement at $r(t)$=140~Mm, imitating a bow-shock scenario. The derived kinematics of the GTS shows a distinct ``knee'' feature, as indicated in the top panel of Fig.~\ref{cme-real}. The feature occurs when a later emitted signal passes the preceding one\footnote{In the specific case of our model the overtaking signal was launched when the source speed changed from subsonic to supersonic.}, i.e.\ the knee marks the time of the shock formation \citep{vrsnak00a}. From Fig.~\ref{cme-real} it can be seen that the first phase of the observed Moreton wave could be partly mimicked but not its later evolution. The knee, which represents the time of the shock formation, occurs $\sim$4--6 minutes after the first Moreton wave front was observed. In Fig.~\ref{cme-real-vel} the corresponding velocity profiles are plotted for the scenarios presented in Fig.~\ref{cme-real}. For both scenarios, the CME acting as a combined bow/piston and as a bow driver, the GTS velocity decreases until $\sim$09:47~UT, and the velocity of the GTS at $\sim$09:51~UT (the last observational data point) is about 1.5 times as high as that of the observed Moreton wave. Hence, the CME is too fast a driver and generates a GTS that is too fast at large distances. Although various kinds of parameter values were applied, it was not possible to reproduce the general observational characteristics of the Moreton wave.
From this we conclude that, using a fast upward moving driver for the model like the observed CME, all generated GTS profiles reveal 1) increasing velocity after $\sim$09:47~UT and 2) a shock formation several minutes after the first observed front of the Moreton wave (cf.~Fig.~\ref{wave-kin}), which is not consistent with the observations. \subsection{Model results based on a synthetic kinematical profile of the source}\label{synth} From the calculated GTS kinematics using real CME observations, it became clear that the radially upward moving CME, imitating a bow or combined bow/piston scenario, cannot reproduce the observed Moreton wave characteristics. In order to investigate alternative driving mechanisms, we use as input parameters the synthetic kinematics of an expanding source acting as a piston. As the simplest approach, we assume that during the radial expansion the center of the source is fixed at the surface, i.e.\ $h(t)$=0, in order to imitate a spherical or cylindrical piston. The synthetic kinematics consists of an acceleration phase of duration $t_{\rm a}$ with constant acceleration $a$, until a certain velocity is reached, with which the source expands further. This enables us to study the evolution of signals emitted from very differently expanding driving sources, ranging from suddenly impulsive to gradually accelerating ones. In Figs.~\ref{f1b} and~\ref{f5} a relatively gradual expansion of a spherical piston is represented. We use as input an initial source size of $r(t_{0})$=140~Mm accelerating over a time span of $t_{a}$=400~s with $a$=2.8~km~s$^{-2}$ (final velocity 1120~km~s$^{-1}$). The arrow in Fig.~\ref{f1b} indicates the propagated distance $d_{\rm M}(t_{\rm c})$ of the GTS, i.e.\ the mimicked Moreton wave. The shock formation time was obtained as 09:48:32~UT, hence several minutes after the first Moreton wave front was observed. The corresponding velocity profile (shown in Fig.~\ref{f7} as the dashed line) reveals an increase of the GTS velocity in the late propagation phase after 09:47~UT, although a strong (exponential) decay was applied to the signal kinematics. Similar to what we obtained applying the observed CME kinematics, such an acceleration behavior of the source cannot mimic the observed Moreton wave. To compensate for the delayed timing of the shock formation, a shorter and more impulsive acceleration of the source expansion would be required to adequately reproduce the Moreton wave propagation. The top panel of Fig.~\ref{f6} shows the expansion of a smaller source of $r(t_{0})$=110~Mm with a shorter and stronger acceleration ($t_{a}$=160~s; $a$=4.8~km~s$^{-2}$) in comparison to the previous scenario. The GTS calculated for this case shows a very good match with the observed Moreton wave kinematics as well as with its velocity profile (dotted line in Fig.~\ref{f7}). The timing of the shock formation at 9:44:32~UT is close to the first detected Moreton wave front ($\sim$9:44:30~UT). Since after the shock formation the GTS propagates faster than the later emitted signals, we assume that the source acts only temporarily as a piston. The time range during which the wavefront evolves independently of the driver is indicated as a dashed gray line in Fig.~\ref{f6}. From this we derive that the source surface would need to expand from the initial size of 110~Mm up to 170~Mm to mimic the resulting Moreton wave (solid gray line in Fig.~\ref{f6}). The initial source size of $\sim$110~Mm would roughly correspond to the diameter of the active region (cf.~Fig.~\ref{mdi}).
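To make the procedure concrete, the following Python sketch propagates signals according to Eq.~\ref{radius} for such an impulsive 3-D piston. It is a schematic illustration only: we read the amplitude entering Eq.~\ref{radius} as the source-surface speed at the emission time, attenuated with distance, and the decay length $p$ of the exponential attenuation is a hypothetical value, not a fitted one:

\begin{verbatim}
import numpy as np

v_A0, dt = 400.0, 10.0            # ambient Alfven speed [km/s], step [s]
r0, acc, t_a = 110e3, 4.8, 160.0  # initial radius [km], accel. [km/s^2], [s]
p = 3.0e5                         # assumed decay length [km]

def source_state(t):
    """Radius and expansion speed of the piston at time t
    (accelerate for t_a, then expand at constant speed)."""
    te = min(t, t_a)
    r = r0 + 0.5 * acc * te**2 + acc * t_a * max(t - t_a, 0.0)
    return r, acc * te

def signal_distance(t0, t_c):
    """Distance from the source center at t_c of the signal emitted
    at t0, with speed v_A0 + (3/2) u(t0) f(d) and f(d) = exp(-d/p)."""
    d, u0 = source_state(t0)
    for _ in np.arange(t0, t_c, dt):
        d += (v_A0 + 1.5 * u0 * np.exp(-d / p)) * dt
    return d

# The ground track signal (GTS) is the outermost of all emitted signals:
t_c = 600.0
d_M = max(signal_distance(t0, t_c) for t0 in np.arange(0.0, t_c, dt))
print(f"GTS distance after {t_c/60:.0f} min: {d_M/1e3:.0f} Mm")
\end{verbatim}

A later emitted signal overtaking an earlier one in such a scan marks the knee, i.e. the onset of shock formation discussed above.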
A further scenario is presented in the bottom panel of Fig.~\ref{f6}, with source parameters comprising an initial size of $r(t_{0})$=50~Mm and a very impulsive expansion (short and strong acceleration) with $t_{a}$=80~s and $a$=8~km~s$^{-2}$. The synthetic kinematics of the calculated GTS matches the observed Moreton wave reasonably well, and the shock formation for this scenario takes place at 9:42:52~UT. The source surface, acting as a temporary piston, would need to expand from its initial size of 50~Mm up to 75~Mm. Considering the velocity profile (dashed-dotted line in Fig.~\ref{f7}), the GTS reaches its peak velocity before 09:47~UT but then decreases very rapidly. Compared to the earlier scenario (source parameters: $r(t_{0})$=110~Mm; $t_{a}$=160~s; $a$=4.8~km~s$^{-2}$; marked with the dotted line in Fig.~\ref{f7}) the match is worse, but still reasonable within the limits of such a simple model. In Fig.~\ref{f7} we show the velocity profiles from the simulated wave kinematics as given in Figs.~\ref{f5} and~\ref{f6}, and compare them to the velocity profile derived from the observed Moreton wave (solid line). We obtain the best match for a wave which is assumed to be driven by a briefly but strongly accelerating source (dotted line); a more impulsive expansion of the source generates a profile of comparable velocity at the last point of observation close to 09:51~UT, but one that peaks earlier (dashed-dotted line). Such source behavior could be interpreted as the expanding flanks of a CME or the volume expansion of a flare. On the other hand, a weak and long acceleration similar to that of the upward moving CME (dashed line) reveals substantial inconsistencies with the observed wave profile (late peak, final velocity too high). \section{Discussion and Conclusion} The analytical model presented here is based on tracing the evolution of a large-amplitude wave. This is justified since Moreton waves are caused by a strong compression of the chromosphere (otherwise the wave would not be seen in H$\alpha$). There are several unknown factors whose implementation would be beyond the scope of this model. For example, we considered a homogeneous corona where the density and the Alfv\'en velocity change neither in the vertical nor in the horizontal direction, taking $v_{\rm A0}$ in the range 300--600~km~s$^{-1}$. Recent observational studies showed that the magnetosonic speed $v_{\rm ms}$ (we assume $v_{\rm ms}\approx v_{\rm A0}$) can drop to a local minimum of 300--500~km~s$^{-1}$ around the height of $\sim$2~R$_{\odot}$ but then rises steadily up to a local maximum of $\sim$1000~km~s$^{-1}$ at a height between 3 and 4~R$_{\odot}$ \citep{mann03, warmuth05}. \cite{vrsnak04b} obtained from observations of type II bursts that on average the magnetosonic speed attains a local minimum of $v_{\rm ms}\approx$400~km~s$^{-1}$ around 3~R$_{\odot}$ and a broad local maximum of $v_{\rm ms}\approx$500~km~s$^{-1}$ in the range of 4--6~R$_{\odot}$. Besides, the previous CME event, which started about 40~min earlier \citep[LASCO catalogue;][]{yashiro04} from the same active region, might also affect the actual value of the Alfv\'en velocity. Furthermore, we did not take into account the accurate relation between the plasma flow velocity $u$ and the source velocity, i.e.\ the CME velocity, but simply used a one-to-one relation.
We approximated $u$ by the CME speed, which is appropriate for the upper part of the moving and expanding CME but does not hold for the lateral direction, from which the GTS kinematics is determined. We tried to account for this by reducing the CME speed by $\sim$60\%, thus maintaining the CME kinematical profile as model input but with a lower speed. However, that option also did not result in a better match between the generated GTS and the observed Moreton wave. An important factor for the derived model results is the decay factor used to attenuate the signal. Since in the corona the distributions of the density $\rho(r)$, the magnetic field $B(r)$, and the Alfv\'en speed $v_{A0}(r)$ are unknown, we use different ``decay functions'' (see Eqs.~\ref{pl}--\ref{spher_dec}). The decay function had to satisfy two criteria: it should be strong enough to decelerate the signal in its late propagation phase but should not, due to its strength, delay the timing of the shock formation. We used geometry-dependent factors adapted from sound waves (cylindrical and spherical), i.e., without implementing a magnetic field \citep[for details see][]{zic08}. Formal decay factors, like the power-law and exponential functions, were used to push the decay to its limits, from no attenuation to very strong attenuation, and to account for the unknown distribution of $v_{\rm ms}$. \cite{pagano07} investigated the role of magnetic fields for an expanding and upward traveling CME and showed that a spherical cloud without a magnetic field drives a wave that propagates to longer distances than one with a weak open field \citep[see Fig.~7 in][]{pagano07}. This implies that the presence of a magnetic field would result in a stronger signal decay than obtained from our simple approaches. Since we were not able to reproduce the wave using the limits for the decay factor (strong versus no attenuation), we suppose that even utilizing more sophisticated decay factors, the disturbance generated by the CME front would not be able to reproduce the observed Moreton wave. Using the observed CME kinematics as input parameters, the model could not reproduce the general characteristics of the observed Moreton wave. The timing of the shock formation (``knee'') was not appropriate: the knee occurred later than the first observed Moreton wave front. The velocity profile did not conform, and the final velocity was too high in comparison to the observed Moreton wave. By varying the initial source size as well as the behavior during the source evolution (bow, piston or combined bow/piston driver), the GTS kinematics was shifted to a larger or smaller propagation distance; however, the shock formation always appeared too late \citep[see also][]{zic08}. Similar results are obtained by applying different start times $t_0$ for the signal and different local Alfv\'en velocities $v_{\rm A0}$. Thus, experimenting with all these different parameters demonstrated that the Moreton wave could not be reproduced when taking the kinematics of the radial outward movement of the CME as input for the model. This finally pushed us to use synthetic kinematics in order to imitate other possible drivers of the signal. So far it was clearly derived from the model that the source expansion needs to be more impulsive (early shock formation). For synthetic kinematics with a stronger and shorter acceleration of the source surface expansion (3-D piston type), we found a good match between the model-generated signal and the observed Moreton wave.
The timing of the shock formation is, when using these kinematical profiles, in good agreement with the appearance of the first Moreton wave front. Using an exponential attenuation factor (see Eq.~\ref{expon}) with short signal decay lengths, the best match to the observed Moreton wave could be found. On average the Alfv\'en Mach numbers $M_{\rm A}$ from such synthetic kinematics lie within the range $M_{\rm A}\approx$1.5--3, which agrees with observed Alfv\'en Mach numbers for Moreton waves \citep[e.g.][]{narukage02,warmuth04a}. The initial source size and the expansion necessary to mimic the observed Moreton wave can be interpreted as the laterally expanding CME flanks or the volume expansion of the flare. \cite{pomoell08} concluded from 2D magnetohydrodynamic simulations that the driver of a Moreton wave requires a high acceleration during a short time interval. This was interpreted to require a strong lateral expansion, either the lift-off of an over-pressured flux rope or a thermal-explosion-like energy release. Likewise, \cite{zic08} obtained from a 3D analytical model that a short acceleration phase up to high velocities ($\sim$1000~km~s$^{-1}$) within a low Alfv\'en velocity environment is necessary to create a shock that is capable of causing type II bursts in the dm/m wavelength range and H$\alpha$ Moreton waves. In conclusion, for the January 17, 2005 event under study it is unlikely that the bow shock of the CME generated the observed Moreton wave. The CME accelerates too gradually in the lift-off phase and is too fast in the later evolution phase to cause the observed Moreton wave kinematics. An impulsively accelerated expansion of a source surface acting as a temporary piston would be a more appropriate mechanism to generate the observed Moreton wave. Possible driving mechanisms would be the laterally expanding CME flanks or the impulsive volume expansion of the flare. The latter scenario would be in accordance with the flare-initiated ``blast wave'' scenario proposed from observational results for the kinematics of Moreton waves \citep[see][]{warmuth01,warmuth04a,vrsnak02a}, but in contrast to the numerical model by \cite{chen02}, who claimed that Moreton waves correspond to the piston-driven shock over the CME. For the future it would be important to have more such complete data sets, including both observations of the early CME evolution (upward moving front as well as expanding flanks) and detailed observations of Moreton waves, in order to validate the presented results. \acknowledgements M.T. is supported by the Austrian {\em Fonds zur F\"orderung der wissen\-schaftlichen Forschung} (FWF Erwin-Schr\"odinger grant J2512-N02). T.{\v{Z}}. and B.V. acknowledge funding by the Croatian Ministry of Science, Education and Sports under the project 007-0000000-1362. A.V. gratefully acknowledges the Austrian {\em Fonds zur F\"orderung der wissen\-schaftlichen Forschung} (P20867-N16). \bibliographystyle{apj}
\subsection{INTRODUCTION} Discovery of topological insulators with Dirac fermions at the surface~\cite{Hasan2010} initiated an intense search for topological phases having Dirac~\cite{Young2012, Wang2012a, Wang2013a, Borisenko2014, Neupane2014a, Liu2014a, Thirupathaiah2018a} and Weyl fermions in the bulk~\cite{Huang2015,Zhang2015a,Ghimire2015,Shekhar2015, Xu2016, Tamai2016, Deng2016, Huang2016, Liang2016,Wang2016a, Jiang2017, Thirupathaiah2017, Thirupathaiah2017a}. All these systems have potential applications in topological quantum computation~\cite{Nayak2008}, spintronics~\cite{Datta1990, Ziutifmmodecuteclseci2004}, and topotronics~\cite{Hesjedal2017}. The Dirac fermions emerging in three-dimensional band crossings are usually protected by both the time-reversal and crystal symmetries. These bulk Dirac fermions are characterized by a four-fold degeneracy at such a crossing point~\cite{Burkov2011, Young2012, Wang2012a}. On the other hand, the Weyl fermions, formed by breaking either the time-reversal or the inversion symmetry of the crystal, are characterized by a two-fold degeneracy~\cite{Wan2011, Borisenko2015}. When the symmetry protection of the band crossings disappears, a gap opens and the system becomes topologically trivial. Materials of the type AMnB$_2$ (A=Ca, Sr, Ba, Eu, Yb; B=Bi and Sb) are widely known for possessing Dirac fermions near the Fermi level~\cite{Park2011a, Wang2011b, Wang2011a, Wang2012c, Lee2013, Farhan2014, Feng2014, Guo2014, Jo2014, May2014, Jia2014, Borisenko2015, Zhang2016, Li2016, Liu2016, He2017}. The origin of these Dirac states was understood by considering the main structural constituent of these materials -- a square net of Bi atoms. The strong hybridization results in a large bandwidth with large portions of linear dispersions. If there is a reason for folding of such an electronic structure, e.g., a doubling of the unit cell in real space, the linear dispersions start to cross, resulting in a multitude of Dirac crossings in the Brillouin zone~\cite{Borisenko2015}. However, in many cases such crossings are gapped out by the spin-orbit interaction~\cite{Park2011a, Lee2013, Feng2014, Guo2014, Li2016, Masuda2016, Liu2016, Wang2016d, Liu2017a}. In some materials non-trivial phases can still be observed despite the gapped Dirac states~\cite{Wang2016e, Huang2017}. Furthermore, these compounds show unsaturated linear magnetoresistance under external magnetic fields, which is attributed to the Dirac fermions~\cite{He2012, Wang2016c} or to the proximity of long-range magnetic ordering~\cite{Zhang2016, Kormondy2018}. Interestingly, BaZnBi$_2$, which is isostructural to AMnBi$_2$, is a nonmagnetic compound and yet shows linear magnetoresistance~\cite{Wang2017}. Despite this property, the presence of Dirac states near the Fermi level in BaZnBi$_2$ is not yet established: while some studies find no evidence of Dirac states~\cite{Wang2017, Ren2018}, others report their existence~\cite{Zhao2018}. Since BaZnBi$_2$ is nonmagnetic and yet shows linear magnetoresistance, it is an ideal system for examining the relation between the linear magnetoresistance, the magnetism, and the possible existence of the Dirac states. \begin{figure}[htbp] \centering \includegraphics[width=0.49\textwidth]{Fig1.pdf} \caption{(a) Tetragonal crystal structure of BaZnBi$_2$. (b) Calculated bulk band structure without SOC (top panel) and with SOC (bottom panel).
(c) 3D view of the Brillouin zone with the projected surface 2D Brillouin zone on top. (d) 3D Fermi surface obtained with SOC. (e) Experimental measuring geometry in which the $s$ and $p$ polarized lights are defined with respect to the scattering plane (SP) and the analyzer entrance slit (ES).} \label{1} \end{figure} \begin{figure*}[htbp] \centering \includegraphics[width=0.85\textwidth]{Fig2.pdf} \caption{Slab calculations performed on BaZnBi$_2$ including SOC. (a) Supercell of BaZnBi$_2$ in the $b-c$ plane (left panel); termination 1 (T1) leaves the Bi$_{2}^{1}$ layer on top of the sample surface (middle panel), termination 2 (T2) leaves the Bi$_{2}^{2}$ layer on top of the sample surface (middle panel), and termination 3 (T3) leaves the Bi$_{1}$ layer on top of the sample surface (right panel). (b) Surface states obtained after T1 cleavage. (c) Surface states obtained after T2 cleavage. (d) Surface states obtained after T3 cleavage. (e) Orbital-resolved surface states along the $\overline{X}$-$\overline{M}$ high-symmetry line after T2 cleavage.} \label{2} \end{figure*} In this paper, we report the low-energy electronic structure of BaZnBi$_2$ studied using high-resolution angle-resolved photoemission spectroscopy (ARPES) and density functional theory calculations. Our experimental results show no evidence of bulk or surface Dirac states near the Fermi level in this system. However, we do observe several linearly dispersive band crossing points near the Fermi level when calculating without the spin-orbit interaction. These band crossing points are gapped out as soon as the spin-orbit interaction is turned on. Thus, our results suggest that the Dirac states in BaZnBi$_2$ are trivial and massive, analogous to the Dirac states in graphene~\cite{Novoselov2005, Brey2006, Varykhalov2008}. Nevertheless, despite the gap opening at the nodes, there exist several linearly dispersive bands crossing the Fermi level, as seen from our experimental data. \subsection{EXPERIMENTAL DETAILS} Single crystals of BaZnBi$_2$ were synthesized by the self-flux method. The elements Ba, Zn, and Bi were mixed in the composition BaZnBi$_6$ and then placed in an alumina crucible, which in turn was sealed inside an evacuated quartz tube. The whole setup was heated to 1000$^\circ$C, held at that temperature for 10 hours, and then slowly cooled to 400$^\circ$C at a rate of 2$^\circ$C/h. Excess flux was removed by centrifugation at this temperature. The remaining Bi flux residue was removed by cleaving the crystal. ARPES measurements were performed at the Diamond Light Source on beamline I05, which is equipped with a SCIENTA R4000 analyzer. During the measurements the sample temperature was kept at 5 K. The energy resolution was set between 10 and 20 meV depending on the incident photon energy, and the angle resolution was set at 0.3$^\circ$. \subsection{BAND STRUCTURE CALCULATIONS} Bulk band structure calculations were performed using density functional theory (DFT) within the generalized gradient approximation (GGA) with the Perdew-Burke-Ernzerhof (PBE) exchange-correlation potential~\cite{Perdew1996}, as implemented in the Quantum Espresso simulation package~\cite{QE-2009}. Norm-conserving scalar-relativistic and fully relativistic pseudopotentials were used to perform the calculations without spin-orbit coupling (SOC) and with SOC, respectively.
The electronic wave function is expanded in plane waves up to a cutoff energy of 50 Ry (680 eV). Brillouin zone sampling was done over a 24 $\times$ 24 $\times$ 10 Monkhorst-Pack k-grid. The slab calculations were performed using DFT within the fully relativistic GGA as implemented in the Full Potential Local Orbital band structure package (FPLO)~\cite{Koepernik1999}. To obtain the surface states, we projected the Bloch wave functions onto atomic-like Wannier functions and constructed the tight-binding model Hamiltonian. The tight-binding model was then mapped onto a slab geometry. \begin{figure} [htbp] \centering \includegraphics[width=0.49\textwidth]{Fig3.pdf} \caption{ARPES data of BaZnBi$_2$. (a) Fermi surface maps taken with different photon energies (70 and 100 eV) and polarizations ($s$ and $p$). (b) and (c) are the energy distribution maps (EDMs) taken along cuts~1 and 2 (left panel) and the corresponding second derivatives (right panel) using $p$- and $s$-polarized photons, respectively, along the $\Gamma - X$ high symmetry line. (d) EDMs taken along cuts~3, 4, and 5, from left to right, respectively. (e) Respective second derivatives of (d).} \label{3} \end{figure} \subsection{RESULTS AND DISCUSSIONS} Figure~\ref{1} shows the calculated bulk band structure of BaZnBi$_2$. The top panel in Fig.~\ref{1}(b) shows the electronic structure calculated without spin-orbit interaction, and the bottom panel in Fig.~\ref{1}(b) shows the electronic structure calculated with SOC. Fig.~\ref{1}(d) depicts the 3D view of the Fermi surface obtained with SOC. As can be noticed in Fig.~\ref{1}(b), there exist several band crossing points (Dirac points) in the vicinity of the Fermi level, as highlighted by the red circles in the electronic structure calculated without SOC. However, all of them are gapped out as soon as the SOC is turned on in the calculations, as shown in the bottom panel of Fig.~\ref{1}(b). It is important to notice here that our DFT calculations show no bulk Dirac states at the $X$ point. We will discuss this point further at a later stage. To examine whether there exist any Dirac fermions originating from the surface, we performed slab calculations on three different top Bi layers, as demonstrated in Fig.~\ref{2}(a). In this compound it is possible to have different Bi layers on the sample surface with different cleavages. As shown in Fig.~\ref{2}(a), termination 1 (T1) leaves the Bi$_{2}^{1}$ layer on top of the sample surface, bonded with the Zn layer below; termination 2 (T2) leaves the Bi$_{2}^{2}$ layer on top of the sample surface after breaking the bonds with the Zn layer; and termination 3 (T3) leaves the Bi$_{1}$ layer, which is an isolated layer, on top of the sample surface. Thus, the top Bi surface layers produced by the T1, T2, and T3 cleavages differ from each other. Specifically, the T2 cleavage produces a polar surface due to Bi-Zn bond breaking; the other two cleavages, T1 and T3, produce neutral surfaces. Since each of these three surface Bi layers is equally likely to be obtained during the experiment, we performed slab calculations for all of the Bi$_{2}^{1}$, Bi$_{2}^{2}$, and Bi$_{1}$ surface layers. The corresponding slab calculations are shown in Figs.~\ref{2}(b), (c), and (d) for the T1, T2, and T3 cleavages, respectively. Here the black bands represent the bulk and the red bands the surface states. Fig.~\ref{2}(e) shows the orbital-resolved surface states for the T2 cleavage.
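As an aside on methodology, the final slab-mapping step can be illustrated with a toy model. The following Python sketch is our own illustration, not the FPLO workflow itself; it takes a generic two-orbital, one-dimensional tight-binding Hamiltonian (the hoppings $t_1$, $t_2$ and the 40-cell slab thickness are placeholder choices), terminates it into a finite slab, diagonalizes it, and flags eigenstates localized on the two surfaces.

\begin{verbatim}
import numpy as np

t1, t2, ncell = 0.6, 1.0, 40       # placeholder hoppings, slab thickness
n = 2 * ncell                      # two orbitals per unit cell

# Real-space slab Hamiltonian built from the bulk tight-binding model
H = np.zeros((n, n))
for i in range(ncell):
    H[2*i, 2*i + 1] = H[2*i + 1, 2*i] = t1          # intracell hopping
for i in range(ncell - 1):
    H[2*i + 1, 2*i + 2] = H[2*i + 2, 2*i + 1] = t2  # intercell hopping

E, V = np.linalg.eigh(H)
# Weight of each eigenstate on the four outermost sites (two per surface)
edge_weight = (np.abs(V[:4, :])**2 + np.abs(V[-4:, :])**2).sum(axis=0)
for e, w in zip(E, edge_weight):
    if w > 0.5:
        print(f"surface-like state: E = {e:+.3f}, edge weight = {w:.2f}")
\end{verbatim}

With $t_1 < t_2$ two states sit near zero energy with almost all of their weight on the end cells, while every bulk-derived state spreads over the whole slab; a localization diagnostic of this kind is what distinguishes the red (surface) from the black (bulk) bands in Figs.~\ref{2}(b)-(d).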
From Figs.~\ref{2}(b)-(d) it is evident that the different terminations (T1, T2, \& T3) lead to entirely different sets of surface states; more specifically, the difference in the surface states is substantial near the $\overline{M}$ point. Furthermore, one can notice from Fig.~\ref{2}(e) that all the surface states near the Fermi level mainly have Bi $p$-orbital character. Most importantly, our slab calculations also do not predict any noticeable surface Dirac states at the $X$ point. \begin{figure} [htbp] \centering \includegraphics[width=0.49\textwidth]{Fig4.pdf} \caption{ (a) Fermi surface map taken along the $X-M$ orientation in the $k_y-k_z$ plane. (b) Photon energy dependent momentum distribution curves (MDCs) extracted from (a) around the Fermi level with an energy integration of 15 meV. (c) Photon energy dependent EDMs. In the figure, the red dashed lines represent linearly dispersive bands, while the green dashed curves represent the quadratic bands.} \label{4} \end{figure} Figure~\ref{3} shows the ARPES data of BaZnBi$_2$. Fig.~\ref{3}(a) depicts Fermi surface maps measured with a photon energy of 100 eV using $p$-polarized light (left panel) and $s$-polarized light (middle panel). Similarly, the rightmost panel in Fig.~\ref{3}(a) shows the FS map measured with a photon energy of 70 eV using $s$-polarized light. Energy distribution maps (EDMs) measured along cuts 1 and 2 [as shown on the FS maps in Fig.~\ref{3}(a)] and their corresponding second derivatives are shown in Figs.~\ref{3}(b) and (c), respectively. Similarly, the EDMs along cuts 3-5 and their corresponding second derivatives are shown in Fig.~\ref{3}(d). As can be seen from Fig.~\ref{3}(a), the Fermi surface consists of several square-shaped outer Fermi sheets and circle-shaped inner Fermi sheets around the $\Gamma$ point. From the EDM cut taken along the $\Gamma-X$ orientation, as shown in Figs.~\ref{3}(b) and (c), we could resolve three hole-like linear band dispersions, $h_1$, $h_2$, and $h_3$, crossing the Fermi level at the $\Gamma$ point and two electron-like linear dispersions, $e_1$ and $e_2$, crossing the Fermi level at the $X$ point. Similarly, from EDM cut 3 [left panel in Fig.~\ref{3}(d)], we could resolve one linearly dispersive hole-like band, $h_3$, whose band top just touches the Fermi level. In the EDM cut taken along the $X-M$ orientation [middle panel in Fig.~\ref{3}(d)], the two electron-like linear band dispersions, $e_1$ and $e_2$, are noticed at the $X$ point. In the EDM cut taken along the $\Gamma-M$ orientation [rightmost panel in Fig.~\ref{3}(d)], we could observe the upper cone of the gapped Dirac states, as shown by the white dashed lines. As can be seen in the figure [rightmost panel in Fig.~\ref{3}(d)], it is very clear that there is a gap in place of the lower part of the Dirac cone, consistent with our DFT calculations performed with SOC. We further estimated the mass enhancement factor of the upper cone of the gapped Dirac states to be $m_{SO}^{*}/m = 1.4$ using our calculated band structure. Thus, the Dirac states acquire mass under the spin-orbit interaction. Although experimentally we did not find any evidence for the Dirac states in this system, Ref.~\onlinecite{Zhao2018} showed Dirac states near the $X$ point as well as along the $\Gamma-M$ orientation in their ARPES data.
In order to verify this more carefully and to further elucidate the nature of the states near the $X$ point, we performed photon energy dependent measurements in the range of 65 eV to 100 eV, taken in steps of 3 eV using $p$-polarized light along the $\Gamma-X$ high symmetry line, as shown in Fig.~\ref{4}. Fig.~\ref{4}(a) depicts the Fermi surface map measured in the $k_y-k_z$ plane. From Fig.~\ref{4}(a) we can resolve two Fermi sheets, shown by the red dashed lines on the FS map. Photon energy dependent momentum distribution curves are shown in Fig.~\ref{4}(b). To further elucidate the $k_z$-dependent band structure at the $X$ point, we show EDMs as a function of photon energy in Fig.~\ref{4}(c). From Figs.~\ref{4}(a)-(c) it is clear that the outer electron pocket shows no $k_z$ dispersion, while the inner electron pocket shows a significant change in the momentum vector in going from 65 eV to 100 eV photon energy. Our careful analysis of the band structure as a function of photon energy shows no evidence of Dirac states in the $k_z$ direction at the $X$ point. Comparing our experimental data more rigorously with those of Ref.~\onlinecite{Zhao2018} in order to resolve the discrepancy concerning the presence of Dirac states near the $X$ point: in Ref.~\onlinecite{Zhao2018} the Dirac states were inferred from finding one upper and one lower band with a speculated Dirac node at around 200 meV. However, from our high resolution ARPES data we clearly see two linearly dispersive upper states [see Fig.~\ref{3}(d) and Fig.~\ref{4}], $e_1$ and $e_2$, whose band bottoms are at approximately 200 meV and 900 meV below the Fermi level, respectively. Further, we found two lower bands whose band tops are at 700 meV and 1 eV, respectively. Importantly, one of the two lower bands, whose band top is at 700 meV, is a parabolic band. Thus, our experimental data completely rule out the existence of Dirac states near the $X$ point in this system. It is also worth mentioning that our DFT calculations did not predict them either. We further clearly noticed the gapped-out Dirac states in the EDM measured along the $\Gamma-M$ direction [see the white dashed lines in the rightmost panel of Fig.~\ref{3}(d)]; again, in contrast to the observations made in Ref.~\onlinecite{Zhao2018}, we did not find any clear evidence for the presence of gapless Dirac states. Thus, our experimental observation of the absence of gapless Dirac states in this compound is in very good agreement with previous reports on this system~\cite{Wang2017, Ren2018}. As we systematically demonstrated above, our experimental results show no evidence of gapless Dirac states in this compound originating either from the bulk or from the surface. However, we noticed several linearly dispersive bands crossing the Fermi level near the $\Gamma$ and $X$ high symmetry points. This observation is also consistent with previous reports on AMnBi$_2$-type compounds~\cite{Feng2014, Borisenko2015, Kealhofer2018}, in which linearly dispersive bands crossing the Fermi level have been reported by both ARPES studies and DFT calculations. Moreover, the Dirac states are gapped out in AMnBi$_2$-type compounds under the spin-orbit interaction, much as in BaZnBi$_2$~\cite{Park2011a, Farhan2014, Feng2014}. This large overlap of the band structures of BaZnBi$_2$ and AMnBi$_2$ suggests that both share a common mechanism for the Dirac states predicted by the DFT calculations, namely the Bi square net present in both systems.
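A one-dimensional caricature of this square-net mechanism can be written down explicitly. The Python sketch below is our own illustration (the hopping $t$ and the SOC-like mass $\Delta$ are placeholder values, and a chain stands in for the actual Bi net): doubling the unit cell folds the linear band back onto itself, producing a crossing at the zone boundary, and a mass term mimicking the spin-orbit interaction gaps it.

\begin{verbatim}
import numpy as np

t, delta = 1.0, 0.15        # hopping and SOC-like mass (placeholders)
ks = np.linspace(-np.pi, np.pi, 401)

def bands(k, m):
    """Two-band Bloch Hamiltonian of a chain with a doubled unit cell."""
    h = np.array([[m, t * (1 + np.exp(-1j * k))],
                  [t * (1 + np.exp(1j * k)), -m]])
    return np.linalg.eigvalsh(h)

i = np.argmin(np.abs(ks - np.pi))           # folded crossing sits at k = pi
e0 = bands(ks[i], 0.0)                      # massless case
e1 = bands(ks[i], delta)                    # mass switched on
print("gap without mass:", e0[1] - e0[0])   # ~0: Dirac-like crossing
print("gap with mass:   ", e1[1] - e1[0])   # = 2*delta
\end{verbatim}

The folded bands $\pm 2t|\cos(k/2)|$ touch linearly at $k=\pi$, and switching on $\Delta$ opens a gap of $2\Delta$ there, which is qualitatively the same effect the SOC has on the square-net Dirac crossings in Fig.~\ref{1}(b).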
At the same time, these Dirac states are gapped out at the Dirac node in both compounds by the spin-orbit interaction. Thus, in both the magnetic AMnBi$_2$ compounds and nonmagnetic BaZnBi$_2$, the electronic structure is largely governed by the crystal structure rather than by the magnetic ordering, although there is a non-negligible magnetic effect on the band structure~\cite{May2014, Guo2014, Zhang2016}. Next, BaZnBi$_2$ is known to show a large linear magnetoresistance under external magnetic fields~\cite{Wang2017, Ren2018, Zhao2018}. This property has been attributed to electron-hole compensation~\cite{Wang2017} or to linearly dispersive Dirac states~\cite{Zhao2018}. The charge compensation theory is widely applied for understanding the quadratic field-dependent magnetoresistance in bulk crystals~\cite{Ali2014}. On the other hand, from our experimental data we found no evidence of massless Dirac states in this compound. Hence, the Dirac states cannot be the reason behind the observed linear magnetoresistance in this compound. As noticed from our experimental data, and as well from the slab calculations, there exist several linearly dispersive bulk and surface states near the Fermi level, dispersing over a wide range of binding energy (see Figs.~\ref{3} and~\ref{4}). Perhaps these linearly dispersive bulk and surface states in the vicinity of the Fermi level could be a reason behind the observed linear magnetoresistance~\cite{Thirupathaiah2018, Abrikosov1998} in this compound. The same explanation may also hold for the linear magnetoresistance recorded in the AMnBi$_2$-type compounds~\cite{Wang2011a, Wang2012c, Li2016, Wang2016d}. In conclusion, we examined the low-energy electronic structure of BaZnBi$_2$ by means of ARPES and DFT calculations. Our experimental results show no evidence of bulk or surface massless Dirac states near the Fermi level in this system. However, we do notice several linearly dispersive bands crossing the Fermi level at both the $\Gamma$ and $X$ high symmetry points. The bulk band structure obtained from DFT calculations without SOC shows several linearly dispersive Dirac states. However, all these Dirac states are gapped out and acquire a significant mass as soon as the SOC is turned on. Thus, our results suggest that the Dirac states predicted in this system are trivial and massive. Since experimentally we could not find massless Dirac states in this compound, the observed linear magnetoresistance may have an origin other than the proposed Dirac states. This work was supported under DFG Grant No. BO 1912/7-1. S.T. acknowledges support by the Department of Science and Technology, India through the INSPIRE-Faculty program (Grant No. IFA14 PH-86). The authors thank K. K\"opernik for useful discussions and U. Nitzsche for technical support. The authors also thank G. Shipunov and S. M\"uller-Litvanyi. D.E. and I.M. acknowledge the support by RSCF and DFG through the grant RSF-DFG 16-42-01100. We acknowledge the Diamond Light Source for the time on Beamline I05 under the Proposal SI18586-1.
\section{Introduction} \label{sec:introduction} Quantum computation appears straightforward at small scales of two or three qubits, but attempts to scale it up have not been successful. Shor showed in 1994 that large-scale quantum computers could have significant impact, such as in factoring the large integers that form the basis of RSA cryptography~\cite{Shor}, but this would require maintaining coherence among thousands of qubits. In 1998, Jones, Mosca and Hansen reported a quantum computer with two qubits~\cite{JMH98}, while Chuang, Gershenfeld, Kubinec and Leung demonstrated a cascade of three~\cite{CGKL98}. In 2001, Vandersypen, Steffen, Breyta, Yannoni, Sherwood and Chuang reported a quantum computer that could factor 15~\cite{Vanderspeisen+2001}. In 2002, the Los Alamos quantum information science and technology roadmap aimed at having functioning quantum computation testbeds by 2012~\cite{LAR}. See Chen et al.~\cite{Chen+} for an extensive survey of the technology. Yet despite the investment of tremendous funding resources worldwide, we don't have working testbeds; we're still stuck at factoring 15 using a three-qubit algorithm~\cite{Lucero+2012}. It is time to wonder whether there might be something we missed, such as theoretical limits on entanglement and coherence. Doubts about the feasibility of quantum computers go back to 1995, when Unruh warned that maintaining coherence might be hard~\cite{Unruh95}; researchers in this field still see the problem in terms of reducing sources of noise (for example, by using lower temperatures), increasing the signal (for example, by bringing the particles closer together), and using error-correcting codes~\cite{Chen+,Zurek2003}. Researchers are now starting to wonder whether geometry affects entanglement and coherence; the first workshop on this topic was held last year~\cite{GQE12}. However, experiments elsewhere in physics suggest a type of limit that has not so far been considered. \section{Guiding waves} \label{sec:possible-limit-qubits} In recent experiments by Couder and colleagues~\cite{Couder-nature, Couder-2006a, Couder-2006b, Couder-2009, Couder-2010}, a small liquid drop is kept bouncing on the surface of a bath of the same liquid by oscillating the substrate vertically. The bouncing induces waves in the surface which, in certain regimes, guide the motion of the droplet. As shown schematically in Figure \ref{fig:DropletPhaseLockedWithSurfaceWaves}, in this regime the droplet moves along the surface at the same velocity as the peaks and troughs of the waves in the vicinity. By measuring the statistical motion of the droplet, the experiments show clear phenomena corresponding to those of quantum mechanics, including single-slit diffraction, double-slit diffraction, quantised energy levels and tunnelling through a barrier. A video shows clearly how quantum-mechanical phenomena can arise in a completely classical system~\cite{Couder2011}. \begin{figure}[h] \includegraphics[width=0.49\textwidth]{lose-phase.pdf} \caption{Schematic of droplet phase locked with surface waves} \label{fig:DropletPhaseLockedWithSurfaceWaves} \end{figure} In this two-dimensional analogue there is a limit to the number of qubits in a coherent system. It is easy to get phase coherence with waves associated with one other particle and possible to get coherence with two -- one coherence per dimension. (In a three-dimensional system, a further coherence could be added.)
Kuramoto and others have developed extensive mathematical models of coupled oscillators; for a review, see Acebr\'on et al.~\cite{Acebron+2007}. It is the Dangelmayr--Knobloch radial standing-wave solutions that appear to be of most interest here~\cite{DK87}. Even so, a single coherence across an ensemble of particles is more likely, so that they will act as a single ensemble, as when the many electrons in a Josephson junction act as a single qubit. (Coupled-oscillator models have already helped explain other aspects of Josephson junction behaviour.) Couder's experimental measurements are also evocative of the de Broglie--Bohm model of quantum mechanics~\cite{Albert, Bohm, Bell}, which is equivalent to the traditional Copenhagen interpretation. In this model, a small particle interacts with waves in three dimensions which obey the same equations as the quantum mechanical wavefunction. The motion of the particle is given by \begin{equation} v = \frac{\hbar}{m}\, \mathrm{Im} \left( \frac{\nabla \psi}{\psi} \right) \label{eq:dbb} \end{equation} and the resulting observables are the same as those of the Copenhagen interpretation; in fact equation (\ref{eq:dbb}) is merely the equation that is required for this to happen (it is derived from the usual quantum mechanical wavefunction plus a continuity condition). The models are also equivalent for a quantum mechanical system with entangled states. (A short numerical illustration of this guidance law is given below.) Indeed Nikoli\'c has argued that had the Bohm interpretation come along first, no-one would have needed the Copenhagen interpretation~\cite{Nikolic2007}. But the de Broglie--Bohm model may give more insight into what happens when a system loses coherence. If two particles are entangled, then the guiding wave $\psi$ of one particle must be correlated with that of the other. Since quantum wavefunctions are considered to be nonlocal, this caused difficulty for some writers: Bell, for example, argued that the nonlocal nature of the wavefunction of two spin-$1/2$ entangled particles meant that a geometrical interpretation of the guiding wave was impossible~\cite{Bell}. The textbook approach is that in such circumstances the guiding wave is in six-dimensional configuration space, for which a geometric interpretation in physical space is not obvious. Yet Bell also warned that impossibility proofs mostly represented a failure of imagination, and he himself had demolished previous arguments against a local-realist interpretation of quantum mechanics. We will argue, first, that the loss of phase coherence may provide a better model for the behaviour observed in quantum decoherence experiments; and second, that this hypothesis might be tested by decoherence experiments that measure the physical geometry associated with entanglement and decoherence. Before that, we will discuss how soliton models might provide some insight into possible underlying mechanisms, in order to tackle the imagination failure. By presenting a local-realist model that is consistent with de Broglie--Bohm and with observed empirical results, we challenge the argument of impossibility. \section{Soliton models} Solitons are persistent, localised solutions of the wave equation (with additional nonlinear terms, which are usually small). They arise in fluid and other media, having first been observed and described on a canal in the mid-19th century~\cite{solitons}, and were applied to particle physics following Skyrme's 1961 proposal of a model of the atomic nucleus, later developed and popularised by Witten~\cite{Skyrme,Witten}.
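The numerical illustration promised above is the following minimal Python sketch. It is our own construction, not part of any cited model: it assumes a freely spreading one-dimensional Gaussian packet (an exact solution of the free Schr\"odinger equation) and integrates equation (\ref{eq:dbb}) for an ensemble of trajectories whose starting points are drawn from $|\psi|^2$.

\begin{verbatim}
import numpy as np

hbar = m = sigma0 = 1.0                  # convenient units
x = np.linspace(-15, 15, 3001)
dx = x[1] - x[0]

def psi(t):
    """Freely spreading Gaussian packet (exact free-particle solution)."""
    s = sigma0**2 + 1j * hbar * t / (2 * m)   # complex squared width
    return ((2 * np.pi)**(-0.25) * np.sqrt(sigma0) / np.sqrt(s)
            * np.exp(-x**2 / (4 * s)))

rng = np.random.default_rng(0)
xp = rng.normal(0.0, sigma0, 2000)       # start points drawn from |psi|^2
t, dt = 0.0, 0.01
for _ in range(500):
    p = psi(t)
    v = (hbar / m) * np.imag(np.gradient(p, dx) / p)   # equation (1)
    xp += np.interp(xp, x, v) * dt       # Euler step along the flow
    t += dt

print("ensemble spread :", xp.std())
print("quantum sigma(t):",
      np.sqrt(sigma0**2 + (hbar * t / (2 * m * sigma0))**2))
\end{verbatim}

The two printed numbers agree to within the sampling and Euler discretisation error: an ensemble started from $|\psi|^2$ keeps tracking $|\psi|^2$, which is the equivariance property that makes the guiding-wave model observationally equivalent to the Copenhagen interpretation.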
Many other soliton models have been proposed in various branches of physics. More recently, for example, Volovik has found that quasiparticles in liquid helium exhibit many of the properties described by the Copenhagen model and relativity (albeit with $c$ being the speed of sound in the fluid)~\cite{Volovik}, and raised the question of whether fluid models could be applied to all elementary particles. In the field of analogue gravity, Unruh and others have explored fluid models of black holes~\cite{Unruh81}, and this led to a thriving research programme exploring many provocative analogies between fluid flow and general relativity~\cite{Barcel11}. In particular, an event horizon corresponds to the start of supersonic flow; Lahav and colleagues have observed this experimentally in a Bose-Einstein condensate~\cite{Lahav+2010}. In short, over the past thirty years, fluid models have developed to express most of the properties of elementary particles, from the basic Copenhagen model to (in aggregate) general relativity. In a companion paper, Brady has proposed a soliton model for the electron~\cite{Brady}, which we will now summarise. It provides a fluid-model analogue of the Coulomb force, and is thus of relevance at least to decoherence in quantum computers relying on electron behaviour (such as qubits based on Josephson junctions). The key insight is that Euler's equation for a compressible fluid possesses quasiparticle solutions with chirality. These may be visualised as smoke rings but with a twist, in that the line of greatest pressure circulates not merely around the ring's long diameter but around its short one too. Consider a compressible inviscid fluid with pressure $P$, density $\rho$ and velocity ${\bf u}$ that obeys Euler's equation: \begin{equation} \frac{\partial {\bf u}}{\partial t} + ({\bf u}\cdot \nabla){\bf u} = - \frac{1}{\rho}\nabla P \label{eq:euler} \end{equation} where $\partial \rho / \partial t = -\nabla \cdot (\rho {\bf u})$. At low amplitude, this gives the wave equation \begin{equation} \frac{\partial^2 \rho}{\partial t^2} = c^2\nabla^2 \rho \label{eq:wave} \end{equation} The wave equation has linear solutions, and also eddy-like solutions like smoke rings. There the line of greatest density rotates round the ring's small axis, as in Figure \ref{fig:two-sonons}a. However, there are also chiral solutions where the line of greatest density rotates around both axes, as in Figure \ref{fig:two-sonons}b. The general solutions are referred to as sonons. These solutions of the wave equation can be written \begin{equation} \xi_{mn} = \psi_o R_{mn} \label{sonon} \end{equation} where \begin{equation} \psi_o = Ae^{-i\omega_0t} \label{eq:psi-o} \end{equation} \begin{equation} R_{mn} = \int_0^{2 \pi} e^{-i (m \theta' - n \phi)} j_m (k_r \sigma) k_r R_o d\phi \label{eq:rmn} \end{equation} \begin{figure}[htb] \centering \includegraphics[width=1.00\columnwidth]{two-sonons} \caption{Sonons (a) without chirality (b) with chirality} \label{fig:two-sonons} \end{figure} Figure \ref{fig:two-sonons}a shows the $R_{10}$ sonon. The red line is the line of maximum density, rotating at angular speed $\omega_0$. Figure \ref{fig:two-sonons}b shows the $R_{11}$ sonon, which models the electron. In such particles, the chirality, spin direction, $m$ and $n$ are preserved by continuous transformations, so they are persistent and quantised.
At low amplitude they are Lorentz covariant because they obey the wave equation \eqref{eq:wave}, which is itself Lorentz covariant, and it turns out that the perturbations at finite amplitude average to zero over a cycle. Classical dynamics follow in the approximation of constant $R_{mn}$ and small $v/c$. Meanwhile, at a large distance from the sonon, $\chi$ may be approximated up to a phase factor as \begin{equation} \chi = \frac{1}{r} \sin k_r r \label{eq:chi-large-r} \end{equation} (We refer the reader to~\cite{Brady} for the details.) The important point for this paper is that $\chi$ behaves like a carrier wave and $\psi$ like its modulation, which is a complex function since its phase is important. This provides a physical model of the de Broglie--Bohm view that a particle moves through space surrounded by waves that obey the usual quantum equations. Extending equation \eqref{eq:psi-o} into a Lorentz covariant form leads directly to the Klein--Gordon equation \begin{equation} \frac{\partial^2 \psi}{\partial t^2} -c^2 \nabla^2 \psi = - \omega_0^2 \psi \end{equation} (the relativistic form of Schr\"odinger's equation); with a little more work we find that the $R_{11}$ sonon is governed by the Dirac equation, which describes the behaviour of the electron in detail~\cite{Brady}. It follows that provided a system remains coherent, the usual predictions of quantum mechanics will apply. (The analogue gravity community has found numerous cases of quantised behaviour of sound waves in fluids and applied them as analogies to other problems in quantum physics; see the survey by Barcel\'o, Liberati and Visser~\cite{Barcel11}.) The more detailed equations (4--7) enable us to make a number of predictions about decoherence. For example, as the carrier wave $\chi$ decays as $1/r$, the system will be more prone to decoherence with distance. In the absence of decoherence, the equations of motion are time-reversal symmetric, as Euler's equation is. The state of the system at any one time determines its state at any other time, whether in the future or in the past. Thus it might not be surprising if we see behaviour that appears to violate microcausality~\cite{Bennett87}; entropy kicks in once phase coherence is lost. The big question is whether we can have a local-realist model of quantum systems without violation of macrocausality. This leads us to Bell's theorem. \section{Local realism and quantum cryptography} If the soliton model of the electron (or perhaps another coupled-oscillator theory) is correct, then two of the possibilities are as follows. \begin{description} \item [Weak (transactional) soliton hypothesis:] the elementary particles are solitons in an inviscid fluid, but time-reversal symmetry in entangled states means that there may be violations of microcausality. We still get quantum electrodynamics with advanced and retarded waves following the exposition of Mead~\cite{Mead2000}, and relativity works because all particles are solutions to the wave equation and thus Lorentz covariant. \item [Strong (causal) soliton hypothesis:] the elementary particles are solitons in an inviscid fluid; relativity emerges from the fact that they satisfy the wave equation; and quantum mechanics from the nature of the solutions. So Euler's equation explains not just the motion of matter, but also electricity, light and atomic forces. \end{description} These two interpretations give quite different views of reality. The first is analogous to Cramer's transactional interpretation of quantum mechanics~\cite{Cramer86}.
The second is a classical view of the world; Newton's laws determine everything, including the very large and the very small. Initially one might think that Bell's theorem, and the entanglement experiments inspired by it, compel us to favour the former. But a closer examination suggests that this is not necessarily so, because the experiments are designed to interact with the propagating waves, not, on this hypothesis, with the carrier waves which might themselves carry information about spin correlations. If an experimenter creates a pair of entangled particles, sends one of them round an optical fibre, waveguide or tunnel of length $D$, and then performs measurements on the two particles with equipment spaced a distance $d$ apart, then although the $\psi$ waves of the soliton may have travelled a spacelike separation $D$, this does not necessarily hold for the $\chi$ waves whose phase coherence creates the entanglement in the soliton model. The $\chi$ waves are broadcast in all directions from a sonon, and thus the distance that matters in proving impossibility results about coherence is $d$. If this is not spacelike then no violation of locality (or relativity or causality) has been proved. In 1982, Aspect, Dalibard and Roger tried to exhibit a spacelike separation by using polarisers that switched in 10 ns while the length $L$ of the path traversed by the photons had $L/c = 40$ ns~\cite{Aspect1982b}. Yet they used a single receiver for coincidence monitoring, so $d = 0$. In 1998, Tittel, Brendel, Zbinden and Gisin demonstrated coherence in photons sent round a 10.9 km optical fibre in a direct attempt to probe the tension between quantum non-locality and relativity; yet the same issue arises with this experiment~\cite{Tittel+98}. The source, located in Geneva, was 4.5 km from the first analyser in Bellevue and 7.3 km from the second in Bernex, with connecting fibres of 8.1 and 9.3 km. However, entangled states were studied only when both photons went either through the short arms or through the long arms. In the same year, Weihs, Jennewein, Simon, Weinfurter and Zeilinger performed an experiment with what they believed was a proper spacelike separation: photon pairs were sent from a source to two detectors 400 m apart and were found to be coherent on arrival~\cite{Weihs+98}. However, this does not establish that information was transmitted faster than light by the $\psi$ wavefunction, as coherence is maintained by the $\chi$ wave, which travels at the speed of light just like the photons but in a straight line. In 2008 Salart, Baas, van Houwelingen, Gisin and Zbinden performed a fibre-loop experiment over a distance of 18 km (from Geneva to Satigny and Jussy) and actuated a piezoelectric crystal which moved a mirror, ensuring that coherence was lost~\cite{Salart+2008}; yet the same applies here as in Weihs' experiment. In short, experimenters have sought to close one loophole after another in the Bell test experiments over the last thirty years. But the soliton model of the electron creates another major hole, as the experimenter must consider not just the propagation of the quantum-mechanical wavefunction $\psi$ but also that of the density waves $\chi$ on which it is modulated. The consequences for quantum cryptography are notable. As the experiments done to test the Bell inequalities have failed to rule out a classical hidden-variable theory of quantum mechanics such as the soliton model, the security case for quantum cryptography based on EPR pairs has not been made.
We propose that experimenters test explicitly whether entanglement is a function of physical geometry in the way predicted by the soliton model, or more generally by the results of Kuramoto theory. First, one might fabricate a series of three-qubit quantum computers with the coherent elements in a triangle whose largest angle was $90^\circ$, $100^\circ$, \ldots, $180^\circ$. We predict that three distinct qubits will not be measured when the elements are collinear, and perhaps also when they are nearly collinear. One might also make a four-qubit machine in three dimensions, and similarly measure the correlation with geometry. Second, more general entanglement experiments might attempt to identify behaviour consistent with Kuramoto theory, such as finite-size effects on decoherence, relationships with the order parameter, and whether bifurcation points can explain the circumstances in which systems become coherent. Third, we suggest close scrutiny of claims that computation can be sustained without decoherence. If the strong soliton hypothesis is correct, we would expect that a single physical qubit cannot be recycled in the same coherent computation; thus if a computation requires $k$ steps on $n$ qubits it would need at least a $k$-by-$n$ array of qubits, not a single $n$-qubit register plus some CNOT gates. If quantum mechanics is really just a convenient calculus for dealing with coupled oscillators, then reality is classical, and quantum computers are just classical computers. They cannot then provide a way to beat the Bremermann limit of $mc^2/h$ computations per second for a computer of mass $m$~\cite{Bremer65}. \section{Conclusions} \label{sec:conclusion} One of the big puzzles that straddles the boundary between physics and computation is why quantum computers have got stuck at three qubits. We have shown that a local-realist version of the de Broglie--Bohm interpretation of quantum mechanics provides a good explanation: entangled particles are precisely those whose guiding waves are phase coherent. It follows that we can expect two entangled qubits to be possible on a line, three in a plane and four in a three-dimensional structure. In fact, it may be more helpful to model qubits as coupled oscillators, following Mead's model of quantum electrodynamics and Kuramoto theory, than using Hilbert space. We propose experiments to verify this directly.
We also challenge experimentalists who believe that entangled states violate locality to devise an experiment where locality fails in the soliton model. In fact, since quantum mechanics and relativity can both be derived from this local and causal model, it will be surprising if anyone can use Bell's theorem to prove an incompatibility between quantum mechanics, relativity, locality and causality, regardless of whether the soliton model turns out in the end to be the right one. More generally, we invite experimentalists to investigate the physical geometry of entanglement and coherence. The real prize is not the ability to build better quantum machines, but the far greater one of understanding the most fundamental questions. Do soliton models provide a better explanation of the world than string theories? If so, which soliton models are supported? And in the absence of evidence, we need not accept that physics really requires us to abandon the concept of a single objective universe where action is both local and causal. \small
\section{Introduction}\label{sec_Int} Bose-Einstein condensates (BECs) confined in toroidal traps have been the subject of many experimental studies recently~\cite{Ryu07,Ramanathan11,Moulder12,Wright12,Marti12,Beattie13}. This research covers topics such as the observation of persistent currents~\cite{Ryu07}, phase slips across a stationary barrier~\cite{Ramanathan11}, stochastic~\cite{Moulder12} and deterministic~\cite{Wright12} phase slips between vortex states, the use of toroidal condensates in interferometry~\cite{Marti12}, and the stability of superfluid flow in a spinor condensate~\cite{Beattie13}. These experiments have given rise to theoretical studies discussing, e.g., the excitation spectrum and critical velocity of a superfluid BEC~\cite{Dubessy12} and the simulation of the experiment of Ref.~\cite{Ramanathan11} using the Gross-Pitaevskii equation~\cite{Mathey12,Piazza12} and the truncated Wigner approximation~\cite{Mathey12}. Most of the experimental and theoretical studies concentrate on the properties of persistent currents. The phase of a toroidal BEC changes by $2\pi k$ as the toroid is encircled, the integer $k$ being the winding number of the vortex. In a singly connected geometry a vortex with $|k|>1$ is typically unstable against splitting into vortices with smaller $k$. In a multiply connected geometry this process is suppressed for energetic reasons. In Ref.~\cite{Moulder12} it was shown experimentally that a vortex with winding number three can persist in a toroidal single-component BEC for up to a minute. In other words, toroidal geometry makes it possible to avoid the fast vortex splitting taking place in a singly connected BEC and to study the properties of vortices with large winding number. Instead of using a toroidal trap, a multiply connected geometry that stabilizes vortices can also be created by applying a Gaussian potential along the vortex core~\cite{Kuopanportti10}. In this paper, we calculate the Bogoliubov spectrum of a toroidal quasi-one-dimensional (1D) spin-1 BEC. Motivated by the experimental results of Refs.~\cite{Moulder12,Beattie13}, we assume that the splitting of vortices occurs on a very long time scale in a spinor condensate where only one spin component is populated. The dominant instabilities can then be assumed to arise from the spin-spin interaction. For related theoretical studies on toroidal two-component condensates, see, for example, Refs.~\cite{Smyrnakis09,Anoshkin12}. In our analysis, the population of the $m_F=0$ spin component is taken to be zero initially, making it possible to calculate the excitation spectrum analytically. This type of state can be prepared straightforwardly in experiments. The growth of the instabilities can be observed by measuring the densities of the spin components. This paper is organized as follows. In Sec.~\ref{sec_Ham} we define the Hamiltonian, briefly describe the calculation of the excitation spectrum, and show that the spectrum can be divided into magnetization and spin modes. In Sec.~\ref{sec_Mag} we analyze the properties of the magnetization modes and illustrate how the presence of unstable modes can be seen experimentally. We also compare the analytical results with numerical calculations. In Sec.~\ref{sec_Spin} we study the spin modes and their experimental observability analytically and numerically and show that a rotonlike spectrum can be realized in both rubidium and sodium condensates.
In Sec.~\ref{sec_Exp} we discuss two recent experiments on toroidal BECs and show examples of the instabilities that can be realized in these systems. Finally, in Sec.~\ref{sec_Con} we summarize our results. \section{Energy and Hamiltonian} \label{sec_Ham} The order parameter of a spin-$1$ Bose-Einstein condensate reads $\psi=(\psi_{1},\psi_{0},\psi_{-1})^T$, where $T$ denotes the transpose. It fulfills the identity $\psi^\dag \psi =n_{3D}$, where $n_{3D}$ is the total particle density. We assume that the system is exposed to a homogeneous magnetic field oriented along the $z$ axis. The energy functional then becomes \begin{align} \nonumber &E[\psi]=\int d\mathbf{r} \left(\psi^\dag(\mathbf{r})\hat{H}_0(\mathbf{r})\psi(\mathbf{r})\right.\\ &\left. +\frac{1}{2}\left\{ g_0 n_{3D}^2(\mathbf{r}) + g_2 [\psi^\dag(\mathbf{r})\hat{\mathbf{F}}\psi(\mathbf{r})]^2\right\}\right), \label{eq_E} \end{align} where the single-particle Hamiltonian $\hat{H}_0$ is defined as \begin{align} \hat{H}_0(\mathbf{r})=-\frac{\hbar^2\nabla^2}{2m}+U(\mathbf{r})-\mu_{3D}-p\hat{F}_z+q\hat{F}_z^2, \end{align} and $\hat{\mathbf{F}}=(\hat{F}_x,\hat{F}_y,\hat{F}_z)$ is the (dimensionless) spin operator of a spin-1 particle, $U$ is the trapping potential, and $\mu_{3D}$ is the chemical potential. The magnetic field introduces the linear and quadratic Zeeman terms, given by $p$ and $q$, respectively. The sign of $q$ can be controlled experimentally by using a linearly polarized microwave field \cite{Gerbier06}. The strength of the atom-atom interaction is characterized by $g_0=4\pi \hbar^2(a_0+2a_2)/3m$ and $g_2=4\pi \hbar^2(a_2-a_0)/3m$, where $a_F$ is the $s$-wave scattering length for two atoms colliding with total angular momentum $F$. The scattering lengths of ${}^{87}$Rb used here are $a_0=101.8a_B$ and $a_2=100.4 a_B$ \cite{vanKempen02}, measured in units of the Bohr radius $a_B$. For ${}^{23}$Na the corresponding values are $a_0=50.0a_B$ and $a_2=55.1a_B$ \cite{Crubellier99}. The condensate is confined in a toroidal trap given in cylindrical coordinates as $U(r,z,\varphi)=m \left[\omega_r^2 (R-r)^2+\omega_z^2 z^2 \right]/2$, where $R$ is the radius of the torus and $\omega_r,\omega_z$ are the trapping frequencies in the radial and axial directions, respectively. We assume that the condensate is quasi-1D, so that the order parameter factorizes as $\psi(r,z,\varphi;t)=\psi_{r;z}(r,z)\psi_\varphi(\varphi;t)$, where $\psi_{r;z}$ is complex valued and time independent. The normalization of $\psi_{r;z}$ is chosen such that $\int\int r dr dz |\psi_{r;z}(r,z)|^2=N/2\pi$, where $N$ is the total number of particles. This means that \begin{align} \|\psi_\varphi(t)\|\equiv \sqrt{\int_{0}^{2\pi}d\varphi\ \psi_\varphi^\dag(\varphi;t)\psi_\varphi(\varphi;t)} \end{align} has to be equal to $\sqrt{2\pi}$ for any $t$. By integrating over $r$ and $z$ in Eq. \eqref{eq_E} we obtain \begin{align} \nonumber & E_{1\textrm{D}}[\psi_\varphi] = \\ \nonumber &\int_{0}^{2\pi} d\varphi \left( \psi_\varphi^\dag(\varphi)\left(-\epsilon \frac{\partial^2}{\partial \varphi^2} -\mu-p\hat{F}_z+q\hat{F}_z^2\right)\psi_\varphi(\varphi)\right.\\ &\left.
+\frac{n}{2}\left\{ g_0 \left[\psi_\varphi^\dag(\varphi)\psi_\varphi(\varphi)\right]^2 + g_2 \left[\psi_\varphi^\dag(\varphi)\hat{\mathbf{F}}\psi_\varphi(\varphi)\right]^2\right\}\right), \label{eq_E1D} \end{align} where \begin{align} \label{eq_epsilon} \epsilon & =\frac{2\pi}{N}\frac{\hbar^2}{2m}\int_{0}^{\infty} rdr\int_{-\infty}^{\infty} dz\, \frac{1}{r^2} |\psi_{r;z}(r,z)|^2 \end{align} and \begin{align} \label{eq_n} n =\frac{2\pi}{N}\int_{0}^{\infty} rdr\int_{-\infty}^{\infty} dz\,|\psi_{r;z}(r,z)|^4. \end{align} In Eq. \eqref{eq_E1D} we have omitted an overall factor $N/2\pi$ multiplying the right-hand side of this equation. The chemical potential $\mu$ contains the original chemical potential $\mu_{3D}$ and terms coming from the integration of the kinetic and potential energies. The magnetization in the $z$ direction, \begin{align} f_z=\frac{1}{2\pi} \int_{0}^{2\pi} d\varphi\,\psi_\varphi^\dag (\varphi;t)\hat{F}_z\psi_\varphi (\varphi;t), \end{align} is a conserved quantity; the corresponding Lagrange multiplier can be included in $p$. In the following we drop the subscript $\varphi$ of $\psi_\varphi$. We assume that in the initial state the spin is parallel to the magnetic field. In \cite{Makela11} it was argued that in a homogeneous system the most unstable states are almost always of this form. This state can be written as \begin{align} \label{psipara} \psi_\parallel(\varphi) = \frac{1}{\sqrt{2}} \begin{pmatrix} e^{i k_1\varphi}\sqrt{1+f_z}\\ 0 \\ e^{i\theta} e^{i k_{-1}\varphi}\sqrt{1-f_z} \end{pmatrix}, \end{align} where $\theta$ is the relative phase and the integer $k_{\pm 1}$ is the winding number of the $m_F=\pm 1$ component. The energy and stability of $\psi_\parallel$ are independent of $\theta$, and therefore we set $\theta=0$ in the rest of this article. If $k_1=1$ and $k_{-1}=0$, $\psi_\parallel$ describes a half-quantum vortex (Alice string); see, e.g., Refs. \cite{Leonhardt00,Isoshima01,Hoshi08}. The populations of $\psi_\parallel$ are time independent, and the Hamiltonian giving the time evolution of $\psi_\parallel$ reads \begin{align} \label{Hparallel} \hat{H}_\parallel= \left(g_0 n - \mu\right)\hat{\mathbb{I}} +(g_2 n f_z -p_{\textrm{eff}})\hat{F}_z + q_{\textrm{eff}}\hat{F}_z^2, \end{align} where \begin{align} p_{\textrm{eff}}=& p-\frac{\epsilon}{2} (k_{1}^2-k_{-1}^2),\\ q_{\textrm{eff}}=& q+\frac{\epsilon}{2}(k_1^2+k_{-1}^2). \end{align} The time evolution operator of $\psi_\parallel$ is $\hat{U}_\parallel(t)=e^{-it \hat{H}_\parallel/\hbar}$. We calculate the linear excitation spectrum in a basis where $\psi_\parallel$ is stationary \cite{Makela11,Makela12} using the Bogoliubov approach; that is, we define $\psi(\varphi;t)=\psi_\parallel(\varphi) +\delta\psi(\varphi;t)$ and expand the time evolution equations to first order in $\delta\psi$. We write $\delta\psi=(\delta\psi_1,\delta\psi_0,\delta\psi_{-1})^T$ as \begin{align} \label{eq_Bogoliubov} \delta\psi_j(\varphi;t) \equiv e^{ik_j\varphi}\sum_{s=0}^{\infty} u_{j;s}(t)\,e^{i s \varphi}- v^*_{j;s}(t)\,e^{-i s\varphi}, \end{align} where $j=0,\pm 1$ and $k_0\equiv 0$. Due to the toroidal geometry, $\delta\psi_j(\varphi+2\pi;t)=\delta\psi_j(\varphi;t)$ has to hold. As a consequence, $s$ needs to be an integer. In the next two sections we analyze the excitation spectrum in detail; the actual calculation of the spectrum can be found in the appendix.
The normalized wave function reads \begin{align} \label{eq_psi} \tilde{\psi}(\varphi;t) = c(t)[\psi_\parallel(\varphi)+\delta\psi(\varphi;t)], \end{align} where $c(t)$ is determined by the condition $\|\tilde{\psi}(t)\|=\sqrt{2\pi}$. To characterize the eigenmodes we define \begin{align} \label{eq_exp_Fz} \langle\hat{F}_z\rangle (\varphi;t) \equiv \tilde{\psi}^\dag(\varphi;t)\hat{F}_z\tilde{\psi}(\varphi;t), \end{align} so that $f_z=(1/2\pi) \int_0^{2\pi}d\varphi\ \langle\hat{F}_z\rangle(\varphi;t)$ for any $t$. Furthermore, we denote the population of the $m_F=0$ spin component by $\rho_0$, $\rho_0(\varphi;t)=|\tilde{\psi}_0(\varphi;t)|^2$. Note that here $\langle\hat{F}_z\rangle$ and $\rho_0$ are calculated in the basis where $\psi_\parallel$ is a stationary state. This basis and the original basis are related by a basis transformation that only affects the phases of the $m_F=\pm 1$ components. The densities of the spin components are thus identical in the original and new basis. The numerical calculations are done in the original basis. The excitation spectrum can be divided into spin and magnetization modes. The spin modes keep the value of $\langle\hat{F}_z\rangle$ unchanged in time, $\langle\hat{F}_z\rangle(\varphi;t)=\langle\hat{F}_z\rangle(\varphi;0)\approx f_z$, but rotate the spin vector by making $\rho_0$ nonzero. The magnetization modes, on the other hand, lead to $\varphi$-dependent $\langle\hat{F}_z\rangle(\varphi;t)$, but leave $\rho_0$ unaffected. There are in total six eigenmodes. We denote them by $\hbar\omega_j$, where $j=1,2,3,4$ labels the magnetization modes and $j=5,6$ the spin modes. We denote the real and imaginary parts of $\omega_{l}$ by $\omega^{\textrm{r}}_{l}$ and $\omega^{\textrm{i}}_{l}$, respectively. The mode labeled by $l$ is unstable if $\omega^{\textrm{i}}_l$ is positive. We first discuss the magnetization modes. \section{Magnetization modes} \label{sec_Mag} \subsection{Eigenmodes} We characterize the eigenmodes by the quantities \begin{align} k_\pm =\frac{1}{2}\left(k_{1}\pm k_{-1}\right). \end{align} Note that the value of $k_\pm$ can be a half-integer. The magnetization modes are independent of $q$ and can be written as \begin{align} \hbar\omega_l(s)=2\epsilon s k_{+}+\hbar\tilde{\omega}_l(s), \end{align} where $l=1,2,3,4$. The expression for $\tilde{\omega}_l$ is too long to be shown here. The value of $\tilde{\omega}_l$ depends on $k_{-}$ but is independent of $k_{+}$. Consequently, modes with differing $k_{+}$ but equal $k_{-}$ have identical stability. If $f_z=0$, the eigenvalues simplify and read \begin{align} \nonumber &\hbar\omega_{1,2,3,4}(s)\big|_{f_z=0} = 2\epsilon s k_{+}\\ &\pm \sqrt{\epsilon s^2\left[4\epsilon k_{-}^2+ w \pm \sqrt{16\epsilon k_{-}^2w +(g_0 -g_2)^2 n^2}\right]}, \label{o1234fz0} \end{align} where \begin{align} w=\epsilon s^2+(g_0+ g_2)n. \end{align} The signs are defined such that $++,-+,+-$, and $--$ correspond to $\omega_1,\omega_2,\omega_3$, and $\omega_4$, respectively. Unstable modes appear when the term inside the square brackets becomes negative. For rubidium and sodium $g_0+g_2 > 0$, which guarantees that $\omega_1$ and $\omega_2$ are real. Only $\omega_3$ can have a positive imaginary part. \begin{figure}[t] \begin{center} \includegraphics[scale=.92]{fig_wi.pdf} \end{center} \caption{(Color online) The amplitudes of the unstable spin and magnetization modes for rubidium and sodium. Here $\epsilon=0.75|g_2|n$, $q=2.5 |g_2|n$, $f_z=0$, and the unit of $\omega^{\textrm{i}}_{3,5}$ is $|g_2|n/\hbar$.
The lines have been drawn by treating $s$ as a continuous parameter; dots indicate the actual allowed nonvanishing values of $\omega^{\textrm{i}}_{3,5}$. In (c) and (d) the curves are reflection symmetric with respect to $s=k_{+}=(k_{1}+k_{-1})/2$. \label{fig_wi}} \end{figure} As can be seen from Figs.~\ref{fig_wi}(a) and \ref{fig_wi}(b), the value of $\omega^{\textrm{i}}_3(s)$ grows as $|k_{-}|$ increases. The allowed values of $s$ are non-negative integers. The modes corresponding to $s=0$ are always stable, but unstable modes are present for $s= 1,2,\ldots, \lfloor\sqrt{4k_{-}^2-2 g_2 n/\epsilon}\rfloor$, where $\lfloor\cdots\rfloor$ is the floor function. Therefore, if there are $j$ unstable modes, they have to be the ones corresponding to $s=1,2,\ldots ,j$. At least one unstable mode exists if $\epsilon (4k_{-}^2-1)\geq 2g_2 n$. In the case of a sodium BEC ($g_2 >0$) this means that the magnetization modes corresponding to $k_{-}=0$ and $|k_{-}|=1/2$ are always stable. This is visualized in Fig.~\ref{fig_wi}(b), where $\omega^{\textrm{i}}_3(s)$ corresponding to $(k_1,k_{-1})=(0,0)$ and $(k_1,k_{-1})=(2,1)$ is seen to vanish for every $s$. In a rubidium condensate $(g_2<0)$ with $k_{-}=0$, unstable modes exist if $\epsilon\leq 2|g_2|n$; if $|k_{-}|>0$, instabilities are present regardless of the value of $\epsilon$. For both rubidium and sodium the wave number $s$ of the fastest-growing instability is approximately given by the integer closest to $\sqrt{2/3}\sqrt{4k_{-}^2-2 g_2 n/\epsilon}$. \subsection{Experimental observability} The properties of unstable magnetization modes can be studied experimentally by measuring $\langle\hat{F}_z\rangle$. We assume that there is one dominant unstable mode and that $f_z=0$. The initial time evolution of $\langle \hat{F}_z\rangle$ then reads (see the appendix) \begin{align} \label{eq_FzexpApprox} \nonumber &\langle \hat{F}_z\rangle (\varphi;t) \approx c^2(t)\left\{ A e^{\omega^{\textrm{i}}_3 t} \cos\left[\theta+s\left(\varphi-\frac{2\epsilon k_{+} t}{\hbar}\right)\right]\right.\\ &\left.+ B e^{2\omega^{\textrm{i}}_3 t} \cos\left[2\theta+2s\left(\varphi-\frac{2\epsilon k_{+} t}{\hbar}\right)\right]\right\}, \end{align} where $c$ is the normalization factor appearing in Eq.~\eqref{eq_psi} and $A,B,$ and $\theta$ are defined in Eqs.~\eqref{eq_A}, \eqref{eq_B}, and \eqref{eq_theta}, respectively. Because typically $B\ll A$, the first term on the right-hand side of Eq.~\eqref{eq_FzexpApprox} dominates over the second term during the initial time evolution. This leads to $\langle\hat{F}_z\rangle$ having $s$ maxima and minima. If $k_{+}\not =0$, these maximum and minimum regions rotate around the torus as time evolves, indicating that the behavior of $\langle\hat{F}_z\rangle$ depends on $k_{+}$, even though the growth rate of the instabilities $\omega^{\textrm{i}}_3$ is independent of $k_{+}$. We study the validity of Eq.~\eqref{eq_FzexpApprox} by considering a rubidium condensate with $\epsilon=0.75|g_2|n$, $q=2.5|g_2|n$, $k_{1}=2$, and $k_{-1}=1$, corresponding to the blue dash-dotted line in Figs.~\ref{fig_wi}(a) and~\ref{fig_wi}(c). Analytical results predict that the only unstable mode of this system is a magnetization mode corresponding to $s=1$. The numerically calculated time evolution of $\langle\hat{F}_z\rangle$ is shown in Figs.~\ref{fig_num_mag}(a) and \ref{fig_num_mag}(b).
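Before turning to that comparison, the analytical prediction itself is straightforward to check numerically. The following Python sketch is our own illustration (energies in units of $|g_2|n$ with $\hbar=1$; the ${}^{87}$Rb scattering lengths quoted in Sec.~\ref{sec_Ham} fix the ratio $g_0/|g_2|$): it evaluates the $\omega_3$ branch of Eq.~\eqref{o1234fz0} for the parameters used above and lists the unstable integer wave numbers.

\begin{verbatim}
import numpy as np

# Units of |g2| n, so g2 = -1 for 87Rb; g0/|g2| = (a0 + 2 a2)/|a2 - a0|.
a0, a2 = 101.8, 100.4                 # 87Rb scattering lengths (Bohr radii)
g2 = np.sign(a2 - a0)                 # -1 for rubidium
g0 = (a0 + 2 * a2) / abs(a2 - a0)

eps, k1, km1 = 0.75, 2, 1             # epsilon/(|g2| n), winding numbers
kp, km = (k1 + km1) / 2, (k1 - km1) / 2

def omega3(s):
    """Branch omega_3 of Eq. (o1234fz0) at f_z = 0 (hbar = 1)."""
    w = eps * s**2 + (g0 + g2)
    inner = (4 * eps * km**2 + w
             - np.sqrt(16 * eps * km**2 * w + (g0 - g2)**2))
    return 2 * eps * s * kp + np.emath.sqrt(eps * s**2 * inner)

for s in range(6):
    if omega3(s).imag > 0:
        print(f"s = {s} is unstable: Im(omega_3) = {omega3(s).imag:.2f}")
\end{verbatim}

For these parameters only $s=1$ comes out unstable, in agreement with the bound $s\leq\lfloor\sqrt{4k_{-}^2-2 g_2 n/\epsilon}\rfloor$ and with the numerical time evolution shown in Fig.~\ref{fig_num_mag}.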
\begin{figure}[t] \centering \includegraphics[scale=0.85,clip]{fig_mag_k1_2_km1_1.pdf} \vspace{6mm} \caption{(Color online) (a) Numerically calculated $\langle\hat{F}_z\rangle$ for the parameters corresponding to the blue dash-dotted line in Fig.~\ref{fig_wi}(a), that is, a ${}^{87}$Rb condensate with $\epsilon =0.75 |g_2|n, q=2.5|g_2|n, f_z=0, k_{1}=2,$ and $k_{-1}=1$. (b) Magnification of the region bounded by the dashed vertical lines in (a). Here we plot $|\langle\hat{F}_z\rangle|$ instead of $\langle\hat{F}_z\rangle$ and use a logarithmic scale to make the initial growth of $|\langle\hat{F}_z\rangle|$ visible. (c) Analytically calculated $\langle\hat{F}_z\rangle$, see Eq.~\eqref{eq_FzexpApprox}. \label{fig_num_mag} } \end{figure} The $s=1$ magnetization mode can be seen to be unstable. The rotation of the minimum and maximum of $\langle\hat{F}_z\rangle$ around the torus is clearly visible in Fig.~\ref{fig_num_mag}. The analytically obtained behavior of $\langle\hat{F}_z\rangle$ is shown in Fig. \ref{fig_num_mag}(c). By comparing Figs.~\ref{fig_num_mag}(b) and \ref{fig_num_mag}(c), we see that Eq.~\eqref{eq_FzexpApprox} describes the time evolution of $\langle\hat{F}_z\rangle$ very precisely up to $t\approx 10\hbar/|g_2|n$. The only parameters in Eq.~\eqref{eq_FzexpApprox} that are not fixed by the parameters used in the numerical calculation are the initial global phase and length $\|\delta\psi(t=0)\|$ of $\delta\psi(t=0)$. In Fig.~\ref{fig_num_mag}(c) we have chosen the values of these variables in such a way that the match between the numerical and analytical results is the best possible. \section{Spin modes} \label{sec_Spin} \subsection{Eigenmodes} We now turn to the spin modes. As shown in the appendix, the spin modes read \begin{align} \label{o56} &\hbar\omega_{5,6}(s) =2\epsilon k_{+}(s-k_{+})\\ \nonumber &\pm \sqrt{\left\{\epsilon [(s-k_+)^2-k_{-}^2]+g_2 n-q\right\}^2-(1-f_z^2)(g_2 n)^2}, \end{align} where $+$ ($-$) corresponds to $\omega_5$ ($\omega_6$). If $k_{+}=0$, the effect of vortices can be taken into account by scaling $q\rightarrow q+\epsilon k_{-}^2$, {\it i.e.}, the spin modes of a system with $(k_1,k_{-1})=(k,-k)$ and $q=\tilde{q}$ are equal to the spin modes of a vortex-free condensate with $q=\tilde{q}+\epsilon k^2$. Spin modes are unstable if and only if the term inside the square root is negative. Now only $\omega_5$ can have a positive imaginary part. The fastest-growing unstable mode is obtained at $\epsilon [(s-k_+)^2-k_{-}^2]+g_2 n-q=0$ and has the amplitude $\hbar\omega^{\textrm{i}}_5(s)=|g_2|n\sqrt{1-f_z^2}$. Unlike in the case of the magnetization modes, the maximal amplitude is bounded from above and is independent of the winding numbers [see Figs.~\ref{fig_wi}(c) and \ref{fig_wi}(d)]. By adjusting the strength of the magnetic field, the fastest-growing unstable mode can be chosen to be located at a specific value of $s$, showing that it is easy to adjust the stability properties experimentally. At $f_z=0$ the width of the region on the $s$-axis giving positive $\omega^{\textrm{i}}_5$ is $|\sqrt{k_{-}^2+q/\epsilon}-\sqrt{k_{-}^2+q/\epsilon- 2g_2 n/\epsilon}|$. This region can thus be made narrower by increasing $\epsilon,k_{-}$, or $q$. Since the magnetization modes are insensitive to the magnetic field, the properties of the spin and magnetization modes can be tuned independently. The winding number dependence of unstable spin modes is illustrated in Figs. \ref{fig_wi}(c) and \ref{fig_wi}(d). 
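To illustrate the saturation condition with concrete numbers (a worked check using the parameter values employed later in this section for a sodium condensate): for $\epsilon=0.75 g_2 n$, $q=2.5 g_2 n$, and $(k_1,k_{-1})=(2,1)$, i.e., $k_+=3/2$ and $k_-=1/2$, the condition $\epsilon [(s-k_+)^2-k_{-}^2]+g_2 n-q=0$ becomes
\begin{align*}
\left(s-\tfrac{3}{2}\right)^2 = k_{-}^2+\frac{q-g_2 n}{\epsilon}=\frac{1}{4}+2=\frac{9}{4},
\end{align*}
so the integer $s=3$ saturates the maximal growth rate $\hbar\omega^{\textrm{i}}_5=|g_2|n$ exactly.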
\subsection{Rotonlike spectrum} Interestingly, by tuning $\epsilon$ and $q$, a rotonlike spectrum can be realized (see the solid and dotted blue lines in Fig. \ref{fig_roton}). \begin{figure}[t] \begin{center} \includegraphics[scale=.90]{fig_roton.pdf} \end{center} \caption{(Color online) The real ($\omega^{\textrm{r}}_5$) and imaginary ($\omega^{\textrm{i}}_5$) component of the spin mode $\omega_5$ for rubidium and sodium. Here $\epsilon=0.2|g_2|n, f_z=0, k_{1}=-k_{-1}$, and $k_1$ is an arbitrary integer. For the blue solid and blue dotted lines $q+\epsilon k_{-}^2 =2.8|g_2|n$ and for the orange dashed line $q+\epsilon k_{-}^2 =-2|g_2|n$. The unit of $\omega_5^{\textrm{r},\textrm{i}}$ is $|g_2|n/\hbar$. The lines have been drawn by treating $s$ as a continuous parameter; dots (open circles) indicate the actual allowed nonvanishing values of $\omega^{\textrm{r}}_5$ ($\omega^{\textrm{i}}_5$). \label{fig_roton}} \end{figure} Now the phonon part of the spectrum is missing, but the roton-maxon feature is present. For $f_z=k_{+}=0$, the roton spectrum exists if $q\geq \max\{0,2g_2 n\}$. Because only integer values of $s$ are allowed, it may happen that $\omega^{\textrm{i}}_{5}$ is nonzero only in some interval of the $s$ axis that does not contain integers [see Figs. \ref{fig_wi}(c) and \ref{fig_wi}(d) for examples of this in the context of magnetization modes]. In this case the rotonic excitations are stable. Alternatively, there can be unstable modes close to the roton minimum (see Fig.~\ref{fig_roton} and Ref.~\cite{Matuszewski10}). As evidenced by the orange dashed lines in Fig. \ref{fig_roton}, the roton spectrum can be made to vanish simply by decreasing $q$. Also the values of $s$ leading to unstable modes can be controlled by varying $q$. For example, using the parameter values corresponding to the blue solid line in Fig. \ref{fig_roton}, we find that by decreasing (increasing) the value of $q+\epsilon k_{-}^2$ from $2.8|g_2| n$ to $|g_2| n$ ($4|g_2| n$), the $s=3$ ($s=5$) mode can be made unstable in a rubidium condensate. This opens the way for quench experiments of the type described in Refs.~\cite{Sadler06,Bookjans11}. Instead of altering $q$, instabilities can also be induced by making $\epsilon$ smaller by changing the trapping frequencies. It is known that a rotonlike spectrum can exist in various types of BECs, such as in a dipolar condensate (see, e.g., Refs.~\cite{Odell03,Santos03,Cherng09}), in a Rydberg-excited condensate~\cite{Henkel10}, or in a spin-1 sodium condensate prepared in a specific state~\cite{Matuszewski10}. In the present case the rotonlike spectrum exists in both sodium and rubidium BECs, and the state [Eq.~\eqref{psipara}] giving rise to it is easy to prepare experimentally. Note that the roton-maxon feature exists also in a vortex-free condensate and for any $|f_z|<1$. These results suggest that the roton-maxon character of the spectrum is the rule rather than the exception in spinor BECs. \subsection{Experimental observability} The properties of unstable spin modes can be studied experimentally by measuring $\rho_0$. Assuming that there is one dominant unstable spin mode located at wave number $s$, we find that (see the Appendix) \begin{align} &\delta\psi_0(\varphi;t)\propto e^{i k_+\varphi + \omega^{\textrm{i}}_5 t} \sin\left[\left(s-k_{+}\right)\left(\varphi-\frac{2\epsilon k_{+} t}{\hbar}\right)+\frac{\tilde{\theta}}{2}\right]. \label{eq_deltapsi0} \end{align} The phase $\tilde{\theta}$ is defined in Eq.~\eqref{eq_thetaSpin}.
The sign of $\delta\psi_0$ changes at every point where the density $\rho_0\propto |\delta\psi_0|^2$ vanishes. This is similar to the behavior of the phase of a dark soliton \cite{Frantzeskakis10}. The number of nodes in $\rho_0$ is $2|s-k_{+}|$, that is, if $2k_{+}$ is even (odd), $\rho_0$ has an even (odd) number of nodes. The density peaks resulting from the instability rotate around the torus if $k_{+}(s-k_{+})$ is nonzero. In the special case $s=k_{+}$ the density $\rho_0(\varphi;t)$ is independent of $\varphi$. A numerically obtained example of this is shown in Fig.~\ref{fig_Ramanathan}(a). In Fig.~\ref{fig_num_spin} we compare numerical calculations to analytical results. \begin{figure}[ht] \centering \includegraphics[scale=0.85,clip]{fig_rho0.pdf} \vspace{5mm} \caption{(Color online) (a) Numerically calculated $\rho_0$ for a ${}^{23}$Na condensate with $\epsilon =0.75 g_2n, q=2.5g_2n, f_z=0, k_{1}=2,$ and $k_{-1}=1$, corresponding to the blue dash-dotted line in Fig.~\ref{fig_wi}(d). (b) A magnification of the region bounded by the dashed vertical lines in (a). (c) Analytically calculated $\rho_0$. In (b) and (c) a logarithmic scale has been used. \label{fig_num_spin} } \end{figure} We consider a sodium condensate with $\epsilon=0.75 g_2 n,q=2.5g_2n,k_{1}=2$, and $k_{-1}=1$. For these values the $s=3$ spin mode is the only unstable mode [see the blue dash-dotted line in Figs.~\ref{fig_wi}(b) and \ref{fig_wi}(d)]. Numerical calculations give the same result. By comparing Figs.~\ref{fig_num_spin}(b) and \ref{fig_num_spin}(c) we see that the analytical expression for $\rho_0$ approximates the actual dynamics very precisely up to $t\approx 15\hbar/g_2n$. As in the case of the magnetization modes, we choose the initial length and overall phase of $\delta\psi(t=0)$ in such a way that the agreement between the numerical and analytical results is the best possible. \section{Experiments} \label{sec_Exp} In this section we calculate the ratio $\epsilon/|g_2|n$ corresponding to two recent experiments. To obtain an analytical estimate for $\epsilon$, we assume that the particle density $|\psi_{r;z}(r,z)|^2$ is peaked around $R$ and approximate $1/r^2\approx 1/R^2$ in Eq.~\eqref{eq_epsilon}. This gives $\epsilon\approx \hbar^2/2mR^2$. Approximating $\psi_{r;z}$ by the Thomas-Fermi (TF) wavefunction yields \begin{align} n &\approx\sqrt{\frac{2 m N\omega_r\omega_z}{9\pi^2g_0 R}}. \end{align} We see that $\epsilon/|g_2|n \propto (\omega_r\omega_z N R^3)^{-1/2}$, so that the properties of the excitation spectrum can be controlled by adjusting the trapping frequencies, the number of particles, and the radius of the toroid. Using the parameter values of the sodium experiment \cite{Ramanathan11} we get $\epsilon\approx 0.04 g_2 n$. We study numerically the cases $(k_{1},k_{-1})=(0,0)$ and $(k_{1},k_{-1})=(1,0)$. With the help of Eqs.~\eqref{o1234fz0} and \eqref{o56} we find that magnetization modes are stable, but spin modes are unstable in both cases. If $0< q \leq 0.04 g_2 n$, $f_z=0$, and $(k_{1},k_{-1})=(0,0)$, the unstable spin mode leads to a position-independent (homogeneous) increase in $\rho_0$. If $(k_{1},k_{-1})=(1,0)$, we get $\rho_0(\varphi;t)\sim e^{2\omega^{\textrm{i}}_5 t}\sin^2[(\epsilon t+\varphi)/2]$. The $1$D numerical calculations shown in Fig. \ref{fig_Ramanathan} confirm the validity of these analytical predictions. This example illustrates that even a small $\epsilon$ can lead to a strongly winding-number-dependent behavior of $\rho_0$.
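For completeness, we sketch how the TF estimate for $n$ quoted above can be obtained (assuming a locally harmonic cross section and interpreting $n$ as the density-weighted average, which reproduces the quoted coefficient). Writing the TF density of the cross section as $n_{\textrm{TF}}(x,z)=[\mu-\frac{m}{2}(\omega_r^2x^2+\omega_z^2z^2)]/g_0$ with $x=r-R$, the normalization $2\pi R\int dx\, dz\, n_{\textrm{TF}}=N$ gives $\mu=\sqrt{m\omega_r\omega_z g_0N/(2\pi^2R)}$, and therefore
\begin{align*}
n=\frac{\int dx\,dz\,n_{\textrm{TF}}^2}{\int dx\,dz\,n_{\textrm{TF}}}=\frac{2\mu}{3g_0}
=\sqrt{\frac{2mN\omega_r\omega_z}{9\pi^2 g_0R}}.
\end{align*}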
\begin{figure} \center \includegraphics[scale=0.85,clip]{fig_Ramanathan.pdf} \vspace{5mm} \caption{(Color online) Numerically calculated $\rho_0$ for a ${}^{23}$Na condensate with $\epsilon=q=0.04|g_2|n$ and $f_z=0$. In (a) $k_{1}=k_{-1}=0$ and in (b) $k_{1}=1,k_{-1}=0$. The value of $\epsilon$ corresponds to that of \cite{Ramanathan11}. \label{fig_Ramanathan}} \end{figure} The first experimental realization of a toroidal spin-1 BEC was reported recently~\cite{Beattie13}. The stability of a rubidium BEC with a vortex of winding number three in the $m_F=1$ and $m_F=0$ components was found to depend strongly on the population difference of the two components, the most unstable situation corresponding to equal populations. Although not directly comparable, our analysis agrees qualitatively with this result: The growth rate of unstable spin and magnetization modes increases as the population difference of the $m_F=1$ and $m_F=-1$ components goes to zero. The parameter values of this experiment yield $\epsilon\approx 0.20 |g_2|n$. The $s=1,2,$ and $s=3$ magnetization modes are unstable regardless of the values of the winding numbers. If $k_{+}=0$ and $q+\epsilon k_{-}^2 = 2.8|g_2|n$, the spin modes have a rotonlike spectrum (see the left panel of Fig.~\ref{fig_roton}). The $s=4$ mode can be seen to be the only unstable spin mode. This is confirmed by the numerical results shown in Fig.~\ref{fig_Beattie}(a). In this figure we have chosen $k_{1}=-k_{-1}=1$ and $q=2.6|g_2|n$, so that $q+\epsilon k_{-}^2 =2.8|g_2|n$. Because $k_{+}=0$, Eqs.~\eqref{eq_FzexpApprox} and \eqref{eq_deltapsi0} predict that the nodes of $\rho_0$ and $\langle\hat{F}_z\rangle$ do not rotate around the torus as time evolves. This is clearly the case in Fig.~\ref{fig_Beattie}. The $s=3$ magnetization mode can be seen to be the fastest-growing unstable mode. However, around $t\approx 12\hbar/|g_2| n$, the $s=2$ mode becomes the dominant unstable mode. These observations agree with analytical predictions: Using Eq.~\eqref{o1234fz0} we find that $\hbar\omega^{\textrm{i}}_3(s)/|g_2|n=0.72, 1.26$, and $1.34$ for $s=1,2$, and $s=3$, respectively. For other values of $s$ we get $\omega^{\textrm{i}}_3(s)=0$. \begin{figure} \center \includegraphics[scale=0.85,clip]{fig_Beattie.pdf} \vspace{5mm} \caption{(Color online) Numerically calculated (a) $\rho_0$ and (b) $\langle\hat{F}_z\rangle$ for a ${}^{87}$Rb condensate with $\epsilon=0.2|g_2|n,q=2.6|g_2|n,f_z=0$, and $k_{1}=-k_{-1}=1$. The value of $\epsilon$ corresponds to that of Ref.~\cite{Beattie13}. \label{fig_Beattie}} \end{figure} \section{Conclusions} \label{sec_Con} We have calculated analytically the Bogoliubov spectrum of a toroidal spin-1 BEC that has vortices in the $m_F=\pm 1$ spin components and is subjected to a homogeneous magnetic field. We treated the strength of the magnetic field and the winding numbers of the vortices as free parameters and assumed that the population of the $m_F=0$ component vanishes. We assumed also that the system is quasi-one-dimensional. We found that the spectrum can be divided into spin and magnetization modes. Spin modes change the particle density of the $m_F=0$ component but leave the particle density difference of the $m_F=1$ and $m_F=-1$ components unchanged. The magnetization modes do the opposite. An important parameter characterizing the spectrum is the ratio of the kinetic to interaction energy, $\epsilon/|g_2|n$.
The properties of magnetization modes can be tuned by adjusting this ratio, whereas in the case of spin modes the strength of the magnetic field can also be used to control the spectrum. For example, a spin mode spectrum with a roton-maxon structure can be realized in both rubidium and sodium condensates by making the magnetic field strong enough. Furthermore, by changing the strength of the magnetic field or the ratio $\epsilon/|g_2|n$, an initially stable condensate can be made unstable. We also showed that some unstable spin modes lead to a transient dark solitonlike wave function of the $m_F=0$ spin component. Finally, we discussed briefly two recent experiments on toroidal BECs and showed examples of the instabilities that can be realized in these systems. We studied the validity of the analytical results by numerical one-dimensional simulations, finding that the former give a very good description of the stability of the condensate and of the initial time evolution of the instabilities. \begin{acknowledgments} This research has been supported by the Alfred \mbox{Kordelin} Foundation and the Academy of Finland through its Centres of Excellence Program (Project No. 251748). \end{acknowledgments}
\section{Introduction} A new boson with a mass of about 126 GeV has been discovered at the LHC~\cite{Higgs_ATLAS,Higgs_CMS}. The particle is likely to be the Higgs boson. However, this does not necessarily mean that the Higgs sector in the standard model (SM) is correct, because a scalar boson with properties similar to those of the SM Higgs boson can also be predicted in Higgs sectors extended from the SM one. On the other hand, new physics models beyond the SM have often been considered by introducing extended Higgs sectors to explain new phenomena such as the neutrino oscillation~\cite{typeII,zee,zee-2loop,ma,krauss,aks}, the existence of dark matter~\cite{DM} and the baryon asymmetry of the Universe~\cite{ewbg-thdm}, all of which cannot be explained in the SM. Therefore, determining the Higgs sector is of paramount importance for knowing what kind of new physics exists at the TeV scale. The electroweak rho parameter is important to determine the structure of the Higgs sector. The experimental value of the rho parameter is close to unity~\cite{PDG}, which suggests that there is a global SU(2) symmetry, the so-called custodial symmetry, in the Higgs sector. The rho parameter strongly depends on the property of the Higgs sector; i.e., the number of Higgs multiplets and their hypercharges. In a Higgs sector composed of only SU(2) doublets and/or singlets, the rho parameter is unity at the tree level because of the custodial symmetry. Thus, these Higgs sectors can be regarded as natural extensions of the SM Higgs sector. On the other hand, the rho parameter deviates from unity at the tree level for Higgs sectors with exotic representation fields such as triplets. In such a model, a vacuum expectation value (VEV) of such an exotic field violates the custodial symmetry, so that the VEV is severely constrained by the rho parameter data. There is another extended Higgs sector in which an alignment of the triplet VEVs makes the rho parameter unity at the tree level, known as the Georgi-Machacek (GM) model~\cite{GM,Gunion-Vega-Wudka,Dicus,Gunion-Vega-Wudka2,AK_GM,Haber_Logan}. Furthermore, it is known that the addition of the isospin septet field with the hypercharge $Y=2$\footnote{Throughout the paper, we use the notation $Q=T_3+Y$, with $Q$ and $T_3$ being the electromagnetic charge and the third component of the isospin, respectively. } does not change the rho parameter from unity at the tree level. In order to discriminate these exotic Higgs sectors, we need to measure other observables which are sensitive to the structure of the Higgs sector. As a striking feature of exotic Higgs sectors, there appears the $H^\pm W^\mp Z$ vertex at the tree level~\cite{Grifols}, where $H^\pm$ are physical singly-charged Higgs bosons. In the multi-doublet model, this vertex is induced at the one-loop level, so that the magnitude of the $H^\pm W^\mp Z$ vertex tends to be smaller than that in exotic Higgs sectors~\cite{HWZ_doublet}. Therefore, a precise measurement of the $H^\pm W^\mp Z$ vertex can be used to discriminate exotic Higgs sectors such as the GM model and the Higgs sector with a septet field. The feasibility of measuring this vertex has been discussed at LEPII~\cite{HWZ-LEP}, at the Tevatron~\cite{HWZ-Tevatron}, at the LHC~\cite{HWZ-LHC} and at the International Linear Collider (ILC)~\cite{HWZ-ILC}. In this paper, we discuss another method to probe or constrain exotic Higgs sectors by focusing on the SM-like Higgs boson couplings with the gauge bosons $hVV$ ($V=W$ and $Z$).
At present, this approach is quite timely, because a Higgs-boson-like particle has already been found. The current accuracy of the measurement of the $hWW$ and $hZZ$ couplings at the LHC has been analysed in Ref.~\cite{Plehn1}, where the data collected in 2011 and 2012 are used. The Higgs boson couplings will be measured at future colliders as precisely as possible. For example, the $hVV$ couplings are expected to be measured with $\mathcal{O}(10)\%$ accuracy at the High-Luminosity LHC with a collision energy of 14 TeV and an integrated luminosity of 3000~fb$^{-1}$~\cite{Plehn2}. In addition, they can be measured with $\mathcal{O}(1)\%$ accuracy at the ILC with a collision energy of $500$ GeV and an integrated luminosity of $500$ fb$^{-1}$~\cite{Peskin,Plehn2}. In models with a multi-doublet structure, the magnitude of the $hVV$ vertices is smaller than that of the corresponding SM vertices. On the other hand, they can be larger than the SM prediction in exotic Higgs sectors. Thus, measuring the $hVV$ vertices can be another important tool to constrain exotic Higgs sectors, in addition to measuring the rho parameter and the $H^\pm W^\mp Z$ vertex. We discuss the deviation in the Yukawa coupling $hf\bar{f}$ as well. We first derive the formula for the $hVV$ vertex in the general Higgs sector. We then discuss the possible deviations in the $hVV$ and $hf\bar{f}$ vertices in several concrete extended Higgs models. We consider the model with a real or complex triplet field, the GM model and the model with a septet field. In addition, we evaluate the deviation in the event rate of the signal for $h\to WW^*$, $h\to ZZ^*$, $h\to \gamma\gamma$, $h\to \tau\tau$ and $h\to b\bar{b}$ in these models at the LHC. We will find that the deviation in the $hVV$ coupling can be as large as $\mathcal{O}(0.1\%)$, $\mathcal{O}(30\%)$ and $\mathcal{O}(10\%)$ in the model with a real or complex triplet field, the GM model and the model with a septet field, respectively, in the parameter regions allowed by the current electroweak precision data. \section{The $hVV$ vertex} We consider an extended Higgs sector which contains $N$ Higgs multiplets $\Phi_i$ ($i=1,\dots , N$) with the isospin $T_i$ and the hypercharge $Y_i$. We assume CP conservation of the Higgs sector. The kinetic term in the general Higgs sector is given by \begin{align} \mathcal{L}_{\text{kin}}=\sum_ic_i|D_i^\mu\Phi_i|^2, \label{cov} \end{align} with $c_i=1~(1/2)$ for a complex (real; i.e., $Y=0$) Higgs field. The $W$ and $Z$ boson masses are calculated as \begin{align} m_W^2=\frac{g^2}{2}\sum_iv_i^2[T_i(T_i+1)-Y_i^2],\quad m_Z^2=g_Z^2\sum_iv_i^2Y_i^2, \text{ with }g_Z=\frac{g}{\cos\theta_W}, \end{align} where $v_i\equiv \sqrt{2c_i}\langle \Phi_i^0 \rangle$ is the VEV of the $i$-th Higgs field. The VEV $v$ ($=(\sqrt{2}G_F)^{-1/2}\simeq$ 246 GeV) can be expressed as \begin{align} &v^2= 2\sum_i C_i'v_i^2,\text{ with }C_i' =T_i(T_i+1)-Y_i^2. \end{align} The electroweak rho parameter can then be calculated at the tree level as~\cite{rho_tree} \begin{align} \rho_{\text{tree}}&=\frac{m_W^2}{m_Z^2\cos^2\theta_W} =\frac{\sum_i v_i^2[T_i(T_i+1)-Y_i^2]}{2\sum_i v_i^2Y_i^2}.\label{rho} \end{align} The SM-like Higgs boson $h$ can be defined by \begin{align} \tilde{h}_i = R_{ih}h, \end{align} where $R_{ih}$ is the element of the orthogonal matrix connecting the weak eigenbasis of CP-even scalar states $\tilde{h}_i$ and the mass eigenbasis. In this notation, $\Phi_{i=h}$ should be the isospin doublet field with $Y=1/2$.
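As a quick check of Eq.~(\ref{rho}): a doublet with $(T,Y)=(1/2,1/2)$ contributes $T(T+1)-Y^2=1/2$ to the numerator and $2Y^2=1/2$ to the denominator, so a sector containing only doublets (and singlets, which contribute to neither sum) gives $\rho_{\text{tree}}=1$. A complex triplet with $(T,Y)=(1,1)$ contributes $1$ and $2$, respectively, pushing $\rho_{\text{tree}}$ below unity, whereas the septet with $(T,Y)=(3,2)$ satisfies $T(T+1)-Y^2=8=2Y^2$, so that its VEV leaves $\rho_{\text{tree}}=1$, as mentioned in the Introduction.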
If there is no mixing among CP-even scalar states, $\tilde{h}_h$ can be regarded as the SM-like Higgs boson $h$. The $hZZ$ and $hWW$ couplings are calculated by \begin{align} g_{hVV} = g_{hVV}^{\text{SM}}\times \sum_i c_{hVV}^i = g_{hVV}^{\text{SM}}c_{hVV} ,\quad \text{with }V=W,~Z, \label{g_hVV} \end{align} where $g_{hVV}^{\text{SM}}$ is the $hVV$ coupling in the SM, and the factor $c_{hVV}^i$ is expressed by \begin{align} c_{hWW}^i = \frac{\sqrt{2}v_i[T_i(T_i+1)-Y_i^2]R_{ih}}{\sqrt{\sum_j v_j^2 [T_j(T_j+1)-Y_j^2]}},\quad c_{hZZ}^i = \frac{2Y_i^2 v_i R_{ih}}{\sqrt{\sum_j Y_j^2 v_j^2}}. \label{c_hVV} \end{align} In the general Higgs sector, the charged (neutral) Nambu-Goldstone (NG) bosons can be separated from the physical singly-charged Higgs bosons (CP-odd Higgs bosons) by using the elements of the orthogonal matrices: \begin{align} w_i^\pm = R_{iG^+}G^\pm,\quad z_i^0 = R_{iG^0}G^0, \text{ with }\sum_i R_{iG^+}^2=\sum_iR_{iG^0}^2=1,\label{RG} \end{align} where $w_i^\pm$ ($z_i$) is the singly-charged (CP-odd) scalar state in the weak eigenbasis. From the NG theorem, $R_{iG^+}$ and $R_{iG^0}$ satisfy the following relations: \begin{align} \frac{g}{2}\sum_i \sqrt{c_i}C_i v_i R_{iG^+}=m_W,\quad g_Z\sum_i Y_i v_i R_{iG^0}=m_Z, \end{align} where \begin{align} C_i =\sqrt{T_i(T_i+1)-Y_i^2+Y_i}. \end{align} In the Higgs sector with one pair of a physical singly-charged Higgs boson and a physical CP-odd Higgs boson, the elements given in Eq.~(\ref{RG}) are expressed by \begin{align} R_{iG^+}=\frac{2v_i}{v}\frac{\sqrt{c_i}C_i'}{C_i},\quad R_{iG^0}=\frac{Y_iv_i}{\sqrt{\sum_i Y_i^2v_i^2}}. \end{align} The Yukawa coupling of $h$ can be obtained simply in a Higgs sector where only one doublet Higgs field couples to the fermions. In this case, the $hf\bar{f}$ coupling $g_{hff}$ is expressed as \begin{align} g_{hff} = g_{hff}^{\text{SM}}\times c_{hff},\text{ with}~c_{hff}=\frac{v}{v_i}R_{ih}. \label{c_hff} \end{align} In the model with multi-doublets, a discrete symmetry is necessary to realize such a situation, as we discuss in the next section for the two Higgs doublet model (THDM). \section{Examples} \begin{table}[t] \begin{center} {\renewcommand\arraystretch{1.3} \begin{tabular}{|c||c|c|c|c|}\hline Model & $\tan\beta$ &$\tan\beta'$& $c_{hWW}$ & $c_{hZZ}$ \\\hline\hline $\phi_1+\phi_2$ (THDM) &$v_{\phi_2}/v_{\phi_1}$&$v_{\phi_2}/v_{\phi_1}$ &$\sin(\beta-\alpha)$ & $\sin(\beta-\alpha)$ \\\hline $\phi+\chi$ (cHTM) &$\sqrt{2}v_\chi/v_\phi$&$2v_\chi/v_\phi$& $\cos\beta \cos\alpha + \sqrt{2}\sin\beta\sin\alpha$ & $\cos\beta' \cos\alpha + 2\sin\beta'\sin\alpha$ \\\hline $\phi+\xi$ (rHTM) &$2v_\xi/v_\phi$&-& $\cos\beta \cos\alpha + 2\sin\beta\sin\alpha$ & $\cos\alpha$ \\\hline $\phi+\chi+\xi$ (GM model) &$2\sqrt{2}v_\Delta/v_\phi$& $2\sqrt{2}v_\Delta/v_\phi$ &$\cos\beta \cos\alpha +\frac{2\sqrt{6}}{3}\sin\beta \sin\alpha$ &$\cos\beta \cos\alpha +\frac{2\sqrt{6}}{3}\sin\beta \sin\alpha$ \\\hline $\phi+\varphi_7$ &$4v_{\varphi_7}/v_\phi$& $4v_{\varphi_7}/v_\phi$ &$\cos\beta \cos\alpha +4\sin\beta \sin\alpha$ &$\cos\beta \cos\alpha +4\sin\beta \sin\alpha$ \\\hline \end{tabular}} \caption{The deviations in the Higgs boson couplings from the SM values in various extended Higgs sectors. $\phi$, $\chi$, $\xi$ and $\varphi_7$ denote Higgs fields with ($T,Y$)=($1/2,1/2$), ($1,1$), ($1,0$) and ($3,2$), respectively. In the second and third columns, $v_X$ is the VEV of the Higgs field $X$, and $v_\Delta$ is defined in Eq.~(\ref{VEV_align}).
The mixing angle $\alpha$ is defined for each extended Higgs sector in the main text. } \label{models} \end{center} \end{table} We discuss the Higgs boson couplings in several concrete Higgs sectors. As examples, we consider the THDM, the model with a complex triplet field (cHTM), that with a real triplet field (rHTM), the GM model~\cite{GM} and the model with a septet Higgs field. The Higgs field content of each model is listed in Table~\ref{models}, where $\phi$ ($\phi_1$ and $\phi_2$ have the same quantum numbers as $\phi$), $\chi$, $\xi$ and $\varphi_7$ respectively denote Higgs fields with ($T,Y$)=($1/2,1/2$), ($1,1$), ($1,0$) and ($3,2$). In this table, $\beta$ ($\beta'$) is the mixing angle which separates the charged (CP-odd) NG boson from the physical singly-charged (CP-odd) Higgs bosons. The mixing angle among the CP-even scalar states is denoted by $\alpha$; its definition is given for each extended Higgs sector in the corresponding paragraph below. First, in the THDM, $c_{hVV}$ is calculated as $\sin(\beta-\alpha)$, where $R_{1h}=-\sin\alpha$ and $R_{2h}=\cos\alpha$. Thus, the $hVV$ vertex cannot be larger than that in the SM. The Yukawa couplings of $h$ depend on the type of the THDM. When a softly-broken $Z_2$ symmetry is imposed on the model in order to avoid tree-level flavour-changing neutral currents, there are four types of the Yukawa couplings depending on the assignment of the $Z_2$ charges~\cite{type_THDM}. The expression of the Yukawa couplings in each type is given in Ref.~\cite{AKTY}. In the following four extended Higgs sectors, the cHTM, the rHTM, the GM model and the model with $\varphi_7$, the deviation in the Yukawa coupling can be expressed by $c_{hff}=\cos\alpha/\cos\beta$. Second, in both the cHTM and the rHTM, $c_{hWW}$ can be larger than unity as listed in Table~\ref{models}, where the mixing angle $\alpha$ is defined by $R_{1h}=\cos\alpha$ and $R_{2h}=\sin\alpha$. Because there is no additional CP-odd scalar state in the rHTM, the mixing angle $\beta'$ cannot be defined, so that $c_{hZZ}$ is smaller than 1 for non-zero values of the mixing angle $\alpha$. In the cHTM, $c_{hZZ}$ can also be larger than 1, but the pattern of the deviation is different from that of $c_{hWW}$. In both models, the rho parameter deviates from unity because of the non-zero VEV of the triplet field. The magnitude of the VEV of the complex (real) triplet field $v_\chi$ ($v_\xi$) is constrained to be less than about 8 GeV in the cHTM (about 6 GeV in the rHTM) by the experimental value of the rho parameter $\rho^{\text{exp}}=1.0008^{+0.0017}_{-0.0007}$~\cite{PDG}. \begin{figure}[t] \begin{center} \includegraphics[width=80mm]{hVV.eps}\hspace{3mm} \includegraphics[width=80mm]{hVV2.eps} \caption{$c_{hWW}$, $c_{hZZ}$ and $c_{hff}$ as a function of $\sin\alpha$ in the cHTM (left panel) and in the rHTM (right panel) for the case of $v_\chi =v_\xi= 5$ GeV. } \label{FIG:hVV} \end{center} \end{figure} In Fig.~\ref{FIG:hVV}, the deviations in the $hWW$, $hZZ$ and $hf\bar{f}$ couplings are shown as a function of $\sin\alpha$ in the cHTM (left panel) and the rHTM (right panel) for $v_\chi=v_\xi=5$ GeV. It is seen that there are regions where both $c_{hWW}$ and $c_{hZZ}$ are larger than 1 in the cHTM\footnote{Even when effects of the radiative corrections to the $hVV$ coupling are taken into account, the results of $c_{hVV}>1$ can be obtained~\cite{AKKY}. }, while only $c_{hWW}$ can be larger than 1 in the rHTM, as mentioned above.
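To see explicitly how the entries of Table~\ref{models} follow from Eq.~(\ref{c_hVV}), consider $c_{hWW}$ in the cHTM as a worked example. With $C_\phi'=1/2$, $C_\chi'=1$, $R_{1h}=\cos\alpha$ and $R_{2h}=\sin\alpha$, Eq.~(\ref{c_hVV}) gives
\begin{align*}
c_{hWW}=\frac{\sqrt{2}\left(\frac{1}{2}v_\phi\cos\alpha+v_\chi\sin\alpha\right)}{\sqrt{\frac{1}{2}v_\phi^2+v_\chi^2}}
=\frac{v_\phi\cos\alpha+2v_\chi\sin\alpha}{\sqrt{v_\phi^2+2v_\chi^2}}
=\cos\beta\cos\alpha+\sqrt{2}\sin\beta\sin\alpha,
\end{align*}
where $\tan\beta=\sqrt{2}v_\chi/v_\phi$ is used in the last step; this reproduces the corresponding entry of Table~\ref{models}.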
The maximal allowed values for $c_{hWW}$ and $c_{hZZ}$ in the cHTM can be estimated in the case with $\sin\beta\ll 1$, $\sin\beta' \ll 1$ and $\sin\alpha \ll 1$ by \begin{align} c_{hWW} &=-\frac{1}{2}\left(\alpha-\sqrt{2}\beta\right)^2+1+\frac{\beta^2}{2} +\mathcal{O}(\beta^2\alpha^2),\label{hWW_ap} \\ c_{hZZ} &=-\frac{1}{2}\left(\alpha-2\beta^\prime\right)^2+1+\frac{3\beta^{\prime2}}{2} +\mathcal{O}(\beta^{\prime2}\alpha^2). \label{hZZ_ap} \end{align} From the above equations, it can be seen that the $hWW$ ($hZZ$) coupling takes its maximal value in the case of $\alpha = \sqrt{2}\beta$ ($\alpha = 2\beta'$). When we take $v_\chi =5$ GeV, we obtain $c_{hWW}-1\simeq 4.2\times 10^{-4}$ for $\alpha= \sqrt{2}\beta$ and $c_{hZZ}-1\simeq 2.5\times 10^{-3}$ for $\alpha= 2\beta^\prime$, which are consistent with the results in Fig.~\ref{FIG:hVV}. Third, we discuss the GM model, which contains a complex triplet field and a real triplet field in addition to the doublet field. The doublet Higgs field and the two triplet fields can be respectively represented by the $2\times 2$ and $3\times 3$ matrix forms which are transformed under the global $SU(2)_L\times SU(2)_R$ symmetry as \begin{align} \phi=\left( \begin{array}{cc} \phi^{0*} & \phi^+ \\ \phi^- & \phi^0 \end{array}\right),\quad \Delta=\left( \begin{array}{ccc} \chi^{0*} & \xi^+ & \chi^{++} \\ \chi^- & \xi^0 & \chi^{+} \\ \chi^{--} & \xi^- & \chi^{0} \end{array}\right). \end{align} In this model, there are three CP-even scalar states, so that in general there are three mixing angles which diagonalize the mass matrix for the CP-even scalar states. However, when the Higgs potential is constructed in a custodial $SU(2)_V$ symmetric way, the mass matrix for the CP-even states can be diagonalized by the single mixing angle $\alpha$ as \begin{align} \left( \begin{array}{c} h_\phi \\ h_\xi \\ h_\chi \end{array}\right) = \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & \frac{1}{\sqrt{3}} & -\sqrt{\frac{2}{3}} \\ 0 & \sqrt{\frac{2}{3}} & \frac{1}{\sqrt{3}} \end{array}\right) \left( \begin{array}{ccc} \cos\alpha & -\sin\alpha & 0 \\ \sin\alpha & \cos\alpha & 0 \\ 0 & 0 & 1 \end{array}\right) \left( \begin{array}{c} h \\ H_1 \\ H_5 \end{array}\right), \label{alpha_GM} \end{align} where $H_5$ ($H_1$) is the neutral component of the $SU(2)_V$ five-plet (singlet) Higgs boson, and $h_\phi=\sqrt{2}\text{Re}(\phi^0)$, $h_\xi=\xi^0$ and $h_\chi=\sqrt{2}\text{Re}(\chi^0)$. When we assume that the two triplet VEVs are aligned as \begin{align} v_\Delta^2\equiv v_\chi^2=v_\xi^2, \label{VEV_align} \end{align} with $v_\chi = \langle \chi^0\rangle$ and $v_\xi = \langle \xi^0\rangle$, the $SU(2)_L\times SU(2)_R$ symmetry reduces to the custodial $SU(2)_V$ symmetry. Therefore, the rho parameter is predicted to be unity at the tree level; this prediction does not depend on the magnitude of $v_\Delta$ as long as we assume the alignment of the triplet VEVs. The expressions for $c_{hVV}$ (both $c_{hWW}$ and $c_{hZZ}$ are the same in this model) are listed in Table~\ref{models}. At the one-loop level, the modified $hVV$ couplings and the existence of extra Higgs bosons can affect the oblique corrections to the gauge bosons, namely the Peskin-Takeuchi $S$, $T$ and $U$ parameters~\cite{Peskin_Takeuchi}.
They can be expressed as \begin{align} S&=\frac{4s_W^2c_W^2}{\alpha_{\text{em}}}\left[\frac{\Pi^{\text{1PI}}_{\gamma\gamma}(m_Z^2)-\Pi^{\text{1PI}}_{\gamma\gamma}(0)}{m_Z^2} +\frac{c_W^2-s_W^2}{c_Ws_W}\frac{\Pi^{\text{1PI}}_{Z\gamma}(m_Z^2)-\Pi^{\text{1PI}}_{Z\gamma}(0)}{m_Z^2} -\frac{\Pi^{\text{1PI}}_{ZZ}(m_Z^2)-\Pi^{\text{1PI}}_{ZZ}(0)}{m_Z^2} \right],\\ T&=\frac{1}{\alpha_{\text{em}}}\left[\frac{\Pi_{ZZ}^{\text{1PI}}(0)}{m_Z^2}-\frac{\Pi_{WW}^{\text{1PI}}(0)}{m_W^2}+\frac{2s_W}{c_W}\frac{\Pi_{Z\gamma}^{\text{1PI}}(0)}{m_Z^2}+\frac{s_W^2}{c_W^2}\frac{\Pi_{\gamma\gamma}^{\text{1PI}}(0)}{m_Z^2}\right] +\delta T ,\label{Tpara}\\ U&=\frac{4s_W^2}{\alpha_{\text{em}}} \Big[s_W^2\frac{\Pi^{\text{1PI}}_{\gamma\gamma}(m_Z^2)-\Pi^{\text{1PI}}_{\gamma\gamma}(0)}{m_Z^2} +2s_Wc_W\frac{\Pi^{\text{1PI}}_{Z\gamma}(m_Z^2)-\Pi^{\text{1PI}}_{Z\gamma}(0)}{m_Z^2} +c_W^2\frac{\Pi^{\text{1PI}}_{ZZ}(m_Z^2)-\Pi^{\text{1PI}}_{ZZ}(0)}{m_Z^2}\notag\\ &-\frac{\Pi^{\text{1PI}}_{WW}(m_W^2)-\Pi_{WW}^{\text{1PI}}(0)}{m_W^2}\Big], \end{align} where $\Pi_{XY}^{\text{1PI}}(p^2)$ are the 1PI diagram contributions to the gauge boson two-point functions at the one-loop level, whose analytic expressions are given in Appendix~B. In Eq.~(\ref{Tpara}), $\delta T$ is the counter term for the $T$ parameter, which does not appear in models whose kinetic terms respect the custodial symmetry without imposing any VEV alignment, such as multi-doublet models and the model with the septet Higgs field. On the other hand, in the GM model, we need an alignment of the triplet VEVs to maintain the custodial symmetry at the tree level, as in Eq.~(\ref{VEV_align}). Thus, there appear contributions violating the alignment at the one-loop level, which contain ultraviolet divergences, as has already been pointed out in Ref.~\cite{Gunion-Vega-Wudka2}. Therefore, the counter term $\delta T$ exists, associated with the parameter describing the violation of the VEV alignment, and the divergence can be absorbed by imposing an additional renormalization condition. In Ref.~\cite{Englert}, $T=0$ is imposed by using this additional renormalization condition, and we apply the same condition in our analysis. The experimental values for the $S$ and $T$ parameters, obtained by fixing $U=0$, are given as~\cite{ST_126} \begin{align} S=0.05\pm 0.09,\quad T=0.08\pm0.07, \label{ST_exp} \end{align} where the correlation coefficient is +0.91, and the reference value of the mass of the SM Higgs boson is set to be 126 GeV. If we further fix $T=0$, the 95\% confidence level region for $S$ is given by $-0.11<S<0.019$. In Fig.~\ref{S_GM}, the $S$ parameter is shown as a function of $\sin\alpha$, which is defined in Eq.~(\ref{alpha_GM}), for the cases with $v_\Delta = 30$ GeV, 50 GeV and 70 GeV. All the masses of the extra Higgs bosons are taken to be 500 GeV in this analysis. When $v_\Delta$ is taken to be 30 GeV and 50 GeV, the ranges $-0.99<\sin\alpha<-0.31$ and $-0.72<\sin\alpha<-0.38$ are respectively excluded by the $S$ parameter at the 95\% confidence level, while for $v_\Delta=70$ GeV only the range $-0.93<\sin\alpha<0.46$ is allowed. \begin{figure}[t] \begin{center} \includegraphics[width=100mm]{S_GM_v2.eps} \caption{The value for $S$ as a function of $\sin\alpha$ in the GM model for the cases with $v_\Delta =30$, 50 and 70 GeV. The upper and lower limits at the 95\% confidence level for the $S$ parameter are shown by the dashed curves.
} \label{S_GM} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=100mm]{c_GM_v2.eps} \caption{$c_{hWW}$, $c_{hZZ}$ and $c_{hff}$ as a function of $\sin\alpha$ in the GM model for the cases with $v_\Delta =30$, 50 and 70 GeV. } \label{FIG:hVV_GM} \end{center} \end{figure} In Fig.~\ref{FIG:hVV_GM}, the deviations in the $hWW$, $hZZ$ and $hf\bar{f}$ couplings are shown as a function of $\sin\alpha$ in the GM model for the cases with $v_\Delta=30$, 50 and 70 GeV. The deviations in the $hWW$ and $hZZ$ couplings are the same. The allowed maximal values for $c_{hWW}~(=c_{hZZ})$ are about 1.1, 1.3 and 1.2 for the cases of $v_\Delta=30$, 50 and 70 GeV, respectively. When $c_{hWW}$ and $c_{hZZ}$ reach their maximal values, $c_{hff}$ is smaller than 1. \begin{figure}[t] \begin{center} \includegraphics[width=100mm]{ST_7plet_v2.eps} \caption{Prediction of the $S$ and $T$ parameters in the model with the septet Higgs field. The regions within the black (blue) solid ellipse are allowed at the 68\% (95\%) confidence level. Each dashed (dotted) curve shows the results in the cases with $v_7=0$, 5, 10, 15 and 20 GeV, where the value of $\sin\alpha$ is taken to be from 0 to 1 (from 0 to $-1$). The left and right points where the dashed and dotted curves cross correspond to the cases with $\sin\alpha=0$ and $\sin\alpha=\pm1$. When $\sin\alpha$ shifts from 0 to positive (negative) values, the predictions move to the upper (lower) region along the dashed (dotted) curves. } \label{ST_7plet} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[width=100mm]{c_7_v2.eps} \caption{$c_{hWW}$, $c_{hZZ}$ and $c_{hff}$ as a function of $\sin\alpha$ in the model with the septet Higgs field $\varphi_7$ for the cases with $v_7 =5$, 10 and 15 GeV. } \label{FIG:hVV_7} \end{center} \end{figure} Finally, we discuss the model with the septet Higgs field $\varphi_7$ with $Y=2$. The expressions for $c_{hVV}$ are listed in Table~\ref{models}. Similar to the GM model, $c_{hWW}$ and $c_{hZZ}$ coincide with each other. Detailed calculations of the Higgs potential in this model are given in Appendix~A. Although in the GM model we need the alignment of the two triplet VEVs in order to keep $\rho_{\text{tree}}=1$, it can be confirmed directly from Eq.~(\ref{rho}) that in the model with $\varphi_7$ the VEV of $\varphi_7$ does not change $\rho_{\text{tree}}=1$. However, non-zero values of $v_7$ and of the mixing angle $\alpha$ between the CP-even Higgs bosons, which is defined in Eq.~(\ref{alpha_7}), can be constrained by the $S$ and $T$ parameters. Unlike the GM model, this model respects the custodial symmetry without any VEV alignment, so the counter term $\delta T$ in the $T$ parameter does not exist. Thus, constraints from the $S$ and $T$ parameters can be applied to this model in the same way as in the SM. The analytic expressions for the $\Pi_{XY}^{\text{1PI}}(p^2)$ functions are given in Appendix~B. In Fig.~\ref{ST_7plet}, the predictions of the $S$ and $T$ parameters are plotted for several fixed values of $v_7$ in the model with the septet Higgs field. All the masses of the extra Higgs bosons are taken to be 500 GeV. The dashed (dotted) curves show the prediction when the value of $\sin\alpha$ is changed from 0 to $1$ (0 to $-1$). It can be seen that the case with $v_7>20$ GeV is highly constrained by the $S$ parameter.
The allowed maximal (minimal) values for $\sin\alpha$ at the 95\% confidence level can be obtained as $0.73(-0.73)$, 0.65($-0.098$), 0.27($-0.11$), 0.13$(-0.13)$ and 0.042($-0.15$) in the cases with $v_7=0$, 5 GeV, 10 GeV, 15 GeV and 20 GeV, respectively. In Fig.~\ref{FIG:hVV_7}, we show $c_{hVV}$ and $c_{hff}$ as a function of $\sin\alpha$ in the model with $\varphi_7$ for the cases of $v_7 = 5$ GeV, 10 GeV and 15 GeV. The value of $\sin\alpha$ is taken to be positive. Only the parameter regions allowed by the $S$ and $T$ parameters at the 95\% confidence level are shown in this plot. The allowed maximal values of $c_{hVV}$ are about 1.05, 1.12 and 1.09 in the cases with $v_7=5$ GeV, 10 GeV and 15 GeV, respectively. \section{The event rates} The production cross section and the decay branching fractions of the SM-like Higgs boson $h$ can be modified by the deviations of $c_{hWW}$, $c_{hZZ}$ and $c_{hff}$ from unity. In order to clarify the deviations in the event rates of various modes for $h$ from the SM predictions, we define the ratio of the event rates: \begin{align} R_X = \frac{\sigma_{h}\times \text{BR}(h\to X)}{\sigma_{h}^{\text{SM}}\times \text{BR}(h\to X)^{\text{SM}}}, \end{align} where $\sigma_{h}^{\text{SM}}$ and $\text{BR}(h\to X)^{\text{SM}}$ [$\sigma_{h}$ and $\text{BR}(h\to X)$] are the production cross section of $h$ and the branching fraction of the $h\to X$ decay mode in the SM [in extended Higgs sectors]. So far, the Higgs boson search has been performed in the following five channels at the LHC~\cite{Higgs_ATLAS,Higgs_CMS}: $pp\to h \to \gamma\gamma$, $pp\to h\to ZZ^{*}$, $pp\to h\to WW^*$, $pp\to h\to \tau\tau$ and $q\bar{q}' \to Vh \to Vb\bar{b}$, where $pp\to h$ is the inclusive Higgs boson production and $q\bar{q}' \to Vh$ is the weak vector boson associated production. The inclusive production cross section is dominated by the gluon fusion process $gg\to h$, so that the modified cross section can be approximately expressed as $\sigma_h^{\text{SM}}(gg\to h)\times c_{hff}^2$. The cross section for the gauge boson associated production process is modified as $\sigma_h^{\text{SM}}(q\bar{q}'\to Vh)\times c_{hVV}^2$. In general, there are charged Higgs bosons in extended Higgs sectors, and they can contribute to the $h\to\gamma\gamma$ mode in addition to the $W$ boson and top quark loop contributions. In the following analysis, we ignore these contributions in order to focus on the effects of the modified $hWW$ and $hf\bar{f}$ couplings on the $h\to \gamma\gamma$ mode. \begin{figure}[t] \begin{center} \includegraphics[width=75mm]{R_HTM.eps}\hspace{5mm} \includegraphics[width=75mm]{R_rHTM.eps} \caption{$R_{X}$ for the various modes as a function of $\sin\alpha$ in the cHTM (left panel) and in the rHTM (right panel) for the case of $v_\chi=v_\xi=5$ GeV. } \label{FIG:R1} \end{center} \end{figure} In Fig.~\ref{FIG:R1}, the $R_X$ values are plotted as a function of $\sin\alpha$ in the cHTM (left panel) and in the rHTM (right panel) for the case of $v_\chi=v_\xi=5$ GeV. In both the cHTM and the rHTM, $R_{\tau\tau}$ is larger than 1 in the region of $0\leq\sin\alpha\leq 0.1$, because both the production cross section and the decay rate of $h\to \tau\tau$ are enhanced by the factor $c_{hff}^2\simeq 1/\cos^2\beta$ for the case of $\sin\alpha\sim 0$. For larger values of $\sin\alpha$, $R_{\tau\tau}$ becomes smaller due to the $\cos^4\alpha$ suppression, while the other $R_X$ values monotonically increase in the cHTM.
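Before turning to the rHTM, we note how these $R_X$ values are assembled in practice. The following minimal numerical sketch recomputes the branching fractions from rescaled partial widths under the approximations described above. The SM branching ratios in it are rough illustrative numbers for $m_h\simeq 126$ GeV (assumptions for illustration, not values used in our analysis), and the $h\to\gamma\gamma$ width is crudely approximated as scaling with $c_{hWW}^2$ ($W$-loop dominance), whereas the analysis in the text keeps both the modified $hWW$ and $hf\bar{f}$ loop contributions.
\begin{verbatim}
# Minimal sketch: event-rate ratios R_X from coupling scaling factors.
# The SM branching ratios are rough illustrative values for m_h ~ 126 GeV.
BR_SM = {"bb": 0.56, "WW": 0.23, "gg": 0.085, "tautau": 0.062,
         "cc": 0.028, "ZZ": 0.029, "gamgam": 0.0023}

def R_X(c_hff, c_hWW, c_hZZ, production="inclusive"):
    # Squared scaling factor of each partial width: fermionic widths and the
    # top-loop-induced gg width scale as c_hff^2; gamma gamma is approximated
    # here by W-loop dominance (c_hWW^2), ignoring charged-scalar loops.
    c2 = {"bb": c_hff**2, "tautau": c_hff**2, "cc": c_hff**2,
          "gg": c_hff**2, "WW": c_hWW**2, "ZZ": c_hZZ**2,
          "gamgam": c_hWW**2}
    # The total width rescales as the c^2-weighted sum of SM branching ratios.
    total = sum(BR_SM[m] * c2[m] for m in BR_SM)
    # Inclusive production (~ gluon fusion) scales as c_hff^2; Vh as c_hVV^2.
    c_prod2 = c_hff**2 if production == "inclusive" else c_hWW**2
    return {m: c_prod2 * c2[m] / total for m in BR_SM}

# Example: a GM-like point with enhanced hVV and suppressed hff couplings.
rates = R_X(c_hff=0.9, c_hWW=1.2, c_hZZ=1.2)
for m in ("WW", "ZZ", "gamgam", "tautau", "bb"):
    print(m, round(rates[m], 2))
\end{verbatim}
For the $q\bar{q}' \to Vh \to Vb\bar{b}$ channel, the production factor $c_{hff}^2$ is replaced by $c_{hVV}^2$, as described above.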
In the rHTM, $R_{ZZ}$ shows a similar behavior to that of $R_{\tau\tau}$. \begin{figure}[t] \begin{center} \includegraphics[width=50mm]{R_vt30.eps}\hspace{3mm} \includegraphics[width=50mm]{R_vt50.eps}\hspace{3mm} \includegraphics[width=50mm]{R_vt70_v2.eps} \caption{$R_{X}$ for the various modes as a function of $\sin\alpha$ in the GM model. The left, center and right panels show the case of $v_\Delta = 30$ GeV, 50 GeV and 70 GeV, respectively. In the right panel, $R_X$ values indicated by dotted curves are excluded by the $S$ and $T$ parameters at the 95\% confidence level. } \label{FIG:R2} \end{center} \end{figure} In Fig.~\ref{FIG:R2}, the $R_X$ values are shown as a function of $\sin\alpha$ in the GM model for the cases of $v_\Delta=30$, 50 and 70 GeV. In the case of $v_\Delta=70$ GeV, the region of $\sin\alpha \gtrsim 0.48$ is excluded by the constraint from the $S$ parameter at the 95\% confidence level. Similar to the cases in the cHTM and rHTM, $R_{\tau\tau}$ is larger than 1 in the regions of small $\sin\alpha$. The $R_{VV}$ and $R_{b\bar{b}}$ values are larger than 1 when $\sin\alpha\gtrsim 0.07$, $0.15$ and 0.24 in the cases of $v_\Delta=30$, 50 and 70 GeV, respectively. The $\sin\alpha$ dependence of $R_{\gamma\gamma}$ is similar to that of $R_{VV}$ and $R_{b\bar{b}}$, but the maximal allowed values of $R_{\gamma\gamma}$ are larger than those of $R_{VV}$ and $R_{b\bar{b}}$. The allowed maximal values for $R_{VV}$ ($R_{\gamma\gamma}$) are about 1.1 (1.2) for $v_\Delta=30$ GeV, 1.3 (1.4) for $v_\Delta=50$ GeV and 1.5 (1.3) for $v_\Delta=70$ GeV. \begin{figure}[t] \begin{center} \includegraphics[width=50mm]{R_v7_5.eps}\hspace{3mm} \includegraphics[width=50mm]{R_v7_10.eps}\hspace{3mm} \includegraphics[width=50mm]{R_v7_15.eps} \caption{$R_{X}$ for the various modes as a function of $\sin\alpha$ in the model with the septet Higgs field $\varphi_7$. The left, center and right panels show the cases of $v_7 = 5$ GeV, 10 GeV and 15 GeV, respectively. In all the figures, $R_X$ values indicated by dotted curves are excluded by the $S$ and $T$ parameters at the 95\% confidence level.} \label{FIG:R3} \end{center} \end{figure} Finally, we show the $\sin\alpha$ dependence of $R_X$ in the model with $\varphi_7$ in Fig.~\ref{FIG:R3}. We take the septet VEV $v_7$ to be 5 GeV (left panel), 10 GeV (center panel) and 15 GeV (right panel). The values of $R_X$ excluded by the constraint from the $S$ and $T$ parameters at the 95\% confidence level are shown as the dotted curves. The maximal allowed values of $R_{VV}$ ($R_{\gamma\gamma}$) are about 1.05 (1.10) for $v_7=5$ GeV, about 1.16 (1.25) for $v_7=10$ GeV and about 1.14 (1.20) for $v_7=15$ GeV. \section{Conclusions} We have calculated the Higgs boson couplings with the gauge bosons, $hZZ$ and $hWW$, as well as with the fermions, $hf\bar{f}$, in the general Higgs sector at the tree level. We have found that the $hZZ$ and $hWW$ couplings in Higgs sectors with exotic representation fields can be larger than those in the SM. We also have studied the ratio of the event rates $R_X$ for $X=WW^*$, $ZZ^*$, $\gamma\gamma$, $b\bar{b}$ and $\tau\tau$ in various Higgs sectors. We have numerically evaluated the deviations in the Higgs boson couplings $c_{hVV}$ and $c_{hff}$ and the values of $R_X$ in the cHTM, the rHTM, the GM model and the model with the septet scalar field.
We have found that the possible allowed magnitude of the deviation in the $hVV$ coupling can be as large as $\mathcal{O}(0.1)\%$ in the cHTM and rHTM, $\mathcal{O}(30)\%$ in the GM model and $\mathcal{O}(10)\%$ in the model with the septet field in the parameter regions allowed by the $S$ and $T$ parameters. By measuring the Higgs boson couplings precisely, we can obtain useful information to determine the structure of the Higgs sector. \\\\ \noindent $Acknowledgments$ The authors would like to thank Lu-Hsing Tsai for useful discussions. S.K. was supported in part by Grant-in-Aid for Scientific Research, Nos. 22244031, 23104006 and 24340046. K.Y. was supported in part by the National Science Council of R.O.C. under Grant No. NSC-101-2811-M-008-014. \begin{appendix} \section{Model with a septet Higgs field} We discuss the Higgs sector with the doublet field $\phi$ and the septet field $\varphi_7$ with $Y=2$. These two Higgs fields can be expressed in the tensor forms $\phi_a$ and $(\varphi_7)_{ijklmn}$, the latter being totally symmetric under interchanges of the subscripts ($i,j,k,l,m,n$). The component scalar fields can be specified as \begin{align} \phi_1 = \phi^+,~\phi_2=\phi^0=\frac{1}{\sqrt{2}}(h_\phi+v_\phi+iz_\phi), \end{align} and \begin{align} &(\varphi_7)_{111111}=\varphi_7^{5+},~ (\varphi_7)_{211111}=\frac{\varphi_7^{4+}}{\sqrt{6}},~ (\varphi_7)_{221111}=\frac{\varphi_7^{3+}}{\sqrt{15}},~ (\varphi_7)_{222111}=\frac{\varphi_7^{++}}{\sqrt{20}},\notag\\ &(\varphi_7)_{222211}=\frac{\varphi_7^+}{\sqrt{15}},~ (\varphi_7)_{222221}=\frac{\varphi_7^0}{\sqrt{6}}=\frac{1}{\sqrt{12}}(h_7+v_7+iz_7),~ (\varphi_7)_{222222}=\bar{\varphi}_7^-. \end{align} The most general Higgs potential is given by \begin{align} V(\phi,\varphi_7)&=m^2|\phi|^2+m_7^2 |\varphi_7|^2+\lambda|\phi|^4+\lambda_1(|\varphi_7|^4)_1 +\lambda_2(|\varphi_7|^4)_2+\lambda_3(|\varphi_7|^4)_3+\lambda_4(|\varphi_7|^4)_4\notag\\ &+\kappa_1 (|\phi|^2|\varphi_7|^2)_1+\kappa_2 (|\phi|^2|\varphi_7|^2)_2, \end{align} where the $|\varphi_7|^2$ term can be expanded as \begin{align} |\varphi_7|^2&=(\varphi_7^*)_{ijklmn}(\varphi_7)_{ijklmn}\notag\\ &=\varphi_7^{5+}\varphi_7^{5-}+\varphi_7^{4+}\varphi_7^{4-} +\varphi_7^{3+}\varphi_7^{3-}+\varphi_7^{++}\varphi_7^{--}+\varphi_7^{+}\varphi_7^{-} +\varphi_7^{0*}\varphi_7^{0}+\bar{\varphi}_7^{+}\bar{\varphi}_7^{-}. \end{align} There are four independent $(|\varphi_7|^4)_\alpha$ ($\alpha=1,\dots,4$) terms, which can be written explicitly as \begin{align} (|\varphi_7|^4)_1 &= (\varphi_7^*)_{ijklmn} (\varphi_7^*)_{abcdef} (\varphi_7)_{ijklmn} (\varphi_7)_{abcdef}, \\ (|\varphi_7|^4)_2 &= (\varphi_7^*)_{ijklmn} (\varphi_7^*)_{abcdef} (\varphi_7)_{ijklmf} (\varphi_7)_{abcden}, \\ (|\varphi_7|^4)_3 &= (\varphi_7^*)_{ijklmn} (\varphi_7^*)_{abcdef} (\varphi_7)_{ijklef} (\varphi_7)_{abcdmn}, \\ (|\varphi_7|^4)_4 &= (\varphi_7^*)_{ijklmn} (\varphi_7^*)_{abcdef} (\varphi_7)_{ijkdef} (\varphi_7)_{abclmn}. \end{align} In addition to the $(|\varphi_7|^4)_\alpha$ terms, there are two independent $(|\phi|^2|\varphi_7|^2)_\beta$ ($\beta=1,2$) terms, which can be written explicitly as \begin{align} (|\phi|^2|\varphi_7|^2)_1 &= (\phi^*)_a(\phi)_a(\varphi_7^*)_{ijklmn} (\varphi_7)_{ijklmn} , \\ (|\phi|^2|\varphi_7|^2)_2 &= (\phi^*)_a(\phi)_b(\varphi_7^*)_{ijklmb} (\varphi_7)_{ijklma}. \end{align} The other combination $(\phi^*)_a(\epsilon)_{ab}(\varphi_7^*)_{ijklmb}(\phi)_c (\epsilon)_{cd}(\varphi_7)_{ijklmd}$ is the same as $(|\phi|^2|\varphi_7|^2)_1-(|\phi|^2|\varphi_7|^2)_2$.
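As a short check of the normalization of these contractions (using only the component assignments above), one can evaluate the part of $(|\phi|^2|\varphi_7|^2)_2$ containing only the neutral fields $\phi^0$ and $\varphi_7^0$. The doublet indices are then forced to $a=b=2$, and the five arrangements of $\{ijklm\}$ containing a single index 1 each contribute $|\varphi_7^0/\sqrt{6}|^2$, so that
\begin{align*}
(|\phi|^2|\varphi_7|^2)_2\Big|_{\phi^0,\varphi_7^0} = \frac{5}{6}\,|\phi^0|^2|\varphi_7^0|^2.
\end{align*}
Together with $(|\phi|^2|\varphi_7|^2)_1\big|_{\phi^0,\varphi_7^0}=|\phi^0|^2|\varphi_7^0|^2$, this is the origin of the combination $6\kappa_1+5\kappa_2$ appearing in the tadpole conditions below.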
There is an accidental global $U(1)$ symmetry in the Higgs potential; i.e., the potential is invariant under a phase transformation of $\varphi_7$~\cite{Logan}. This $U(1)$ symmetry is spontaneously broken due to the non-zero VEV of $\varphi_7$. Thus, there appears a massless NG boson in addition to the usual NG bosons absorbed by the longitudinal components of the $W$ and $Z$ bosons. There are several ways to avoid the appearance of the additional NG boson. For example, by extending this global symmetry to a gauge symmetry, the NG boson can be absorbed by the additional neutral gauge boson via the Higgs mechanism. By introducing terms which break the $U(1)$ symmetry explicitly, we can also avoid such a massless scalar boson\footnote{ Quite recently, the introduction of higher-dimensional operators which break the $U(1)$ symmetry explicitly has been considered in the Higgs sector with the septet~\cite{Hisano_Tsumura}. }. From the tadpole conditions, we obtain \begin{align} m^2 &=-v_\phi^2\lambda -\frac{v_7^2}{12} (6\kappa_1+5\kappa_2),\\ m_7^2 &= -\frac{v_\phi^2 }{12}(6\kappa_1+5\kappa_2)-\frac{v_7^2}{18}(18\lambda_1+13\lambda_2+10\lambda_3+9\lambda_4). \end{align} The mass matrix for the CP-even Higgs states in the basis of ($h_\phi,h_7$) is given by \begin{align} M_{\text{CP-even}}^2 = \left( \begin{array}{cc} 2v_\phi^2\lambda & v_\phi v_7 \left(\kappa_1+\frac{5}{6}\kappa_2\right)\\ v_\phi v_7 \left(\kappa_1+\frac{5}{6}\kappa_2\right) & \frac{v_7^2}{9}(18\lambda_1+13\lambda_2+10\lambda_3+9\lambda_4) \end{array}\right). \end{align} That for the singly-charged Higgs states in the basis of ($\phi^\pm,\varphi_7^\pm,\bar{\varphi}_7^\pm$) is calculated by \begin{align} M_+^2 = \left( \begin{array}{ccc} -\frac{v_7^2}{3}\kappa_2 & \frac{\sqrt{5}}{6\sqrt{2}}v_\phi v_7 \kappa_2 & \frac{v_\phi v_7}{2\sqrt{6}}\kappa_2 \\ \frac{\sqrt{5}}{6\sqrt{2}}v_\phi v_7 \kappa_2 & -\frac{v_\phi^2}{12}\kappa_2+\frac{v_7^2}{30}\bar{\lambda}& \frac{v_7^2}{6\sqrt{15}}\bar{\lambda} \\ \frac{v_\phi v_7}{2\sqrt{6}}\kappa_2 & \frac{v_7^2}{6\sqrt{15}}\bar{\lambda} & \frac{v_\phi^2}{12}\kappa_2+\frac{v_7^2}{18}\bar{\lambda} \end{array}\right), \end{align} where \begin{align} \bar{\lambda}=5\lambda_2+8\lambda_3+9\lambda_4.
\end{align} The mass eigenstates of the CP-even Higgs bosons as well as the singly-charged Higgs bosons can be defined by introducing the following orthogonal matrices: \begin{align} \left( \begin{array}{c} h_\phi\\ h_7 \end{array}\right) =O_{\text{CP-even}} \left( \begin{array}{c} h \\ H \end{array}\right),~ \left( \begin{array}{c} \phi^\pm\\ \varphi_7^\pm \\ \bar{\varphi}_7^\pm \end{array}\right) =O_+ \left( \begin{array}{c} G^\pm \\ H^\pm \\ \bar{H}^\pm \end{array}\right), \end{align} where \begin{align} &O_{\text{CP-even}} = \left( \begin{array}{cc} \cos\alpha & -\sin\alpha \\ \sin\alpha & \cos\alpha \end{array}\right), \label{alpha_7} \\ &O_+ = R_\theta R_{G^+}= \left( \begin{array}{ccc} 1 & 0 & 0 \\ 0 & \cos\theta & -\sin\theta \\ 0 & \sin\theta & \cos\theta \end{array}\right) \left( \begin{array}{ccc} -\frac{v_\phi}{\sqrt{v_\phi^2+16v_7^2}} & \frac{\sqrt{10}v_7}{\sqrt{v_\phi^2+10v_7^2}} & \frac{v_\phi}{\sqrt{v_\phi^2+16v_7^2}}\frac{\sqrt{6} v_7}{\sqrt{v_\phi^2+10v_7^2}} \\ -\frac{\sqrt{10}v_7}{\sqrt{v_\phi^2+16v_7^2}} &-\frac{v_\phi}{\sqrt{v_\phi^2+10v_7^2}} & \frac{\sqrt{6}v_7}{\sqrt{v_\phi^2+16v_7^2}}\frac{\sqrt{10}v_7}{\sqrt{v_\phi^2+10v_7^2}} \\ \frac{\sqrt{6}v_7}{\sqrt{v_\phi^2+16v_7^2}} & 0 & \frac{\sqrt{v_\phi^2+10v_7^2}}{\sqrt{v_\phi^2+16v_7^2}} \end{array}\right). \label{theta7} \end{align} The mass matrix is then transformed as \begin{align} O_+^T M_+^2 O_+ = R_\theta^T \left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & (M_+^2)_{11} & (M_+^2)_{12} \\ 0 & (M_+^2)_{12} & (M_+^2)_{22} \end{array}\right)R_\theta =\left( \begin{array}{ccc} 0 & 0 & 0 \\ 0 & m_{H^+}^2 & 0 \\ 0 & 0 & m_{\bar{H}^+}^2 \end{array}\right), \end{align} with \begin{align} (M_+^2)_{11} &= \frac{1}{v_\phi^2+10v_7^2}\left[-\frac{1}{12}\left(v_\phi^4+20v_\phi^2v_7^2+40v_7^4\right)\kappa_2+\frac{v_7^2v_\phi^2}{30}\bar{\lambda}\right],\\ (M_+^2)_{22} &= \frac{v_\phi^2+16v_7^2}{36(v_\phi^2+10v_7^2)}\left[3v_\phi^2\kappa_2+2v_7^2\bar{\lambda}\right], \\ (M_+^2)_{12} &= \frac{v_7^2v_\phi\sqrt{v_\phi^2+16v_7^2}}{6\sqrt{15}(v_\phi^2+10v_7^2)} \left(15\kappa_2-\bar{\lambda}\right). \end{align} The other multi-charged Higgs boson masses are calculated as \begin{align} m_{\varphi_7^{++}}^2 &= -\frac{1}{6}\left(\kappa_2v_\phi^2+\frac{4}{3}v_7^2\lambda_2\right)-\frac{4}{45}v_7^2\lambda_3,\\ m_{\varphi_7^{3+}}^2 &= -\frac{1}{4}\left(\kappa_2v_\phi^2+\frac{4}{3}v_7^2\lambda_2\right)-\frac{3v_7^2}{9}\lambda_3-\frac{3v_7^2}{10}\lambda_4,\\ m_{\varphi_7^{4+}}^2 &= -\frac{1}{3}\left(\kappa_2v_\phi^2+\frac{4}{3}v_7^2\lambda_2\right)-\frac{4v_7^2}{9}\lambda_3-\frac{v_7^2}{2}\lambda_4,\\ m_{\varphi_7^{5+}}^2 &= -\frac{5}{12}\left(\kappa_2v_\phi^2+\frac{4}{3}v_7^2\lambda_2\right)-\frac{5v_7^2}{9}\lambda_3-\frac{v_7^2}{2}\lambda_4. \end{align} \section{Gauge boson two-point functions} In this appendix, we give the analytic expressions for the 1PI diagram contributions to the gauge boson two-point functions $\Pi_{XY}^{\text{1PI}}(p^2)$ in terms of the Passarino-Veltman functions~\cite{PV}, which are used to calculate the $S$ and $T$ parameters. Calculations are performed in the 't Hooft-Feynman gauge, so that the masses of the NG bosons $m_{G^+}$ and $m_{G^0}$ should be replaced by those of the corresponding gauge bosons; i.e., $m_{G^+}=m_W$ and $m_{G^0}=m_Z$. In the following expressions, the contributions which appear in the SM have been subtracted. We first show the formulae in the GM model.
In this model, when the Higgs potential is constructed under the custodial $SU(2)_V$ symmetric way, the physical Higgs bosons can be classified into the $SU(2)_V$ 5-plet ($H_5^{\pm\pm},H_5^\pm,H_5^0$), 3-plet ($H_3^\pm,H_3^0$) and singlet Higgs bosons $H_1^0$ and $h$. The masses of the Higgs bosons belonging to the same $SU(2)_V$ multi-plet are the same; namely, $m_{H_5^{++}}=m_{H_5^+}=m_{H_5^0}$ and $m_{H_3^+}=m_{H_3^0}$. Detailed expressions for the relations between the mass eigenstates and the weak eigenstates of the scalar bosons and those masses are given in Refs.~\cite{Gunion-Vega-Wudka,AK_GM,Haber_Logan}. The $\Pi_{XY}^{\text{1PI}}(p^2)$ functions are then expressed by \begin{align} \Pi_{WW}^{\text{1PI}}(p^2) &=\frac{g^2}{16\pi^2}\Big[ \frac{1}{2}B_5(p^2,m_{H_5^{++}},m_{H_5^{+}}) +\frac{c_\beta^2}{2}B_5(p^2,m_{H_5^{++}},m_{H_3^{+}}) +\frac{s_\beta^2}{2}B_5(p^2,m_{H_5^{++}},m_{G^{+}})\notag\\ &+\frac{3}{4}B_5(p^2,m_{H_5^{+}},m_{H_5^{0}}) +\frac{c_\beta^2}{4}B_5(p^2,m_{H_5^{+}},m_{H_3^{0}}) +\frac{s_\beta^2}{4}B_5(p^2,m_{H_5^{+}},m_{G^{0}}) \notag\\ &+\frac{c_\beta^2}{12}B_5(p^2,m_{H_3^{+}},m_{H_5^{0}}) +\frac{s_\beta^2}{12}B_5(p^2,m_{G^{+}},m_{H_5^{0}}) +\frac{1}{4}B_5(p^2,m_{H_3^{+}},m_{H_3^{0}})\notag\\ &+\frac{1}{4}\left(\frac{2}{3}\sqrt{6}c_\alpha c_\beta+s_\alpha s_\beta\right)^2B_5(p^2,m_{H_3^{+}},m_{H_1^0}) +\frac{1}{4}\left(-\frac{2}{3}\sqrt{6}s_\alpha c_\beta+c_\alpha s_\beta\right)^2B_5(p^2,m_{H_3^{+}},m_{h}) \notag\\ &+\frac{1}{4}\left(-\frac{2}{3}\sqrt{6}c_\alpha s_\beta+s_\alpha c_\beta\right)^2B_5(p^2,m_{G^{+}},m_{H_1^0}) +\frac{1}{4}\left[\Big(c_\alpha c_\beta+\frac{2}{3}\sqrt{6}s_\alpha s_\beta\Big)^2-1\right]B_5(p^2,m_{G^{+}},m_{h})\Big]\notag\\ &+\frac{g^2m_W^2}{16\pi^2}\Big[2s_\beta^2B_0(p^2,m_{H_5^{++}},m_W)+\frac{s_\beta^2}{c_W^2}B_0(p^2,m_{H_5^{+}},m_Z) +\frac{s_\beta^2}{3}B_0(p^2,m_{H_5^{0}},m_W)\notag\\ &+\left(-s_\alpha c_\beta+\frac{2}{3}\sqrt{6}c_\alpha s_\beta\right)^2B_0(p^2,m_{H_1^{0}},m_W) +\left[\Big(c_\alpha c_\beta+\frac{2}{3}\sqrt{6}s_\alpha s_\beta\Big)^2-1\right]B_0(p^2,m_h,m_W) \Big],\\ \Pi_{ZZ}^{\text{1PI}}(p^2) &=\frac{g_Z^2}{16\pi^2}\Big[c_{2W}^2B_5(p^2,m_{H_5^{++}},m_{H_5^{++}}) +\frac{c_{2W}^2}{4}B_5(p^2,m_{H_5^{+}},m_{H_5^{+}}) +\frac{c_{2W}^2}{4}B_5(p^2,m_{H_3^{+}},m_{H_3^{+}})\notag\\ &+\frac{c_\beta^2}{2}B_5(p^2,m_{H_5^{+}},m_{H_3^{+}}) +\frac{s_\beta^2}{2}B_5(p^2,m_{H_5^{+}},m_{G^{+}})\notag\\ &+\frac{c_\beta^2}{3}B_5(p^2,m_{H_5^{0}},m_{H_3^{0}}) +\frac{s_\beta^2}{3}B_5(p^2,m_{H_5^{0}},m_{G^{0}})\notag\\ &+\frac{1}{4}\left(\frac{2}{3}\sqrt{6}c_\alpha c_\beta+s_\alpha s_\beta\right)^2B_5(p^2,m_{H_3^{0}},m_{H_1^{0}}) +\frac{1}{4}\left(-\frac{2}{3}\sqrt{6}s_\alpha c_\beta+c_\alpha s_\beta\right)^2B_5(p^2,m_{H_3^{0}},m_h)\notag\\ &+\frac{1}{4}\left(-s_\alpha c_\beta+\frac{2}{3}\sqrt{6}c_\alpha s_\beta\right)^2B_5(p^2,m_{H_1^0},m_{G^{0}}) +\frac{1}{4}\left[\Big(c_\alpha c_\beta+\frac{2}{3}\sqrt{6}s_\alpha s_\beta\Big)^2-1\right]B_5(p^2,m_{h},m_{G^{0}}) \Big]\notag\\ &+\frac{g_Z^2m_Z^2}{16\pi^2}\Big[ \frac{4}{3}s_\beta^2B_0(p^2,m_{H_5^0},m_{Z})+2s_\beta^2c_W^2B_0(p^2,m_{H_5^+},m_W)\notag\\ &+\left(-s_\alpha c_\beta+\frac{2}{3}\sqrt{6}c_\alpha s_\beta\right)^2B_0(p^2,m_{H_1^{0}},m_Z) +\left[\Big(c_\alpha c_\beta+\frac{2}{3}\sqrt{6}s_\alpha s_\beta\Big)^2-1\right]B_0(p^2,m_h,m_Z)\Big],\\ \Pi_{\gamma\gamma}^{\text{1PI}}(p^2) &=\frac{e^2}{16\pi^2}\Big[4B_5(p^2,m_{H_5^{++}},m_{H_5^{++}}) +B_5(p^2,m_{H_5^{+}},m_{H_5^{+}}) +B_5(p^2,m_{H_3^{+}},m_{H_3^{+}})\Big],\\ \Pi_{Z\gamma}^{\text{1PI}}(p^2) 
&=\frac{eg_Z}{16\pi^2}\Big[2c_{2W}B_5(p^2,m_{H_5^{++}},m_{H_5^{++}}) +\frac{c_{2W}}{2}B_5(p^2,m_{H_5^{+}},m_{H_5^{+}})+\frac{c_{2W}}{2}B_5(p^2,m_{H_3^{+}},m_{H_3^{+}}) \Big], \end{align} where $B_5(p^2,m_1,m_2)=A(m_1)+A(m_2)-4B_{22}(p^2,m_1,m_2)$~\cite{HHKM}. Next, the $\Pi_{XY}^{\text{1PI}}(p^2)$ functions are calculated in the model with the septet Higgs field, in the case of $\theta=0$ with $\theta$ defined in Eq.~(\ref{theta7}), as \begin{align} \Pi_{WW}^{\text{1PI}}(p^2) &=\frac{g^2}{16\pi^2}\Big[ 3B_5(p^2,m_{\varphi_7^{5+}},m_{\varphi_7^{4+}}) +5B_5(p^2,m_{\varphi_7^{4+}},m_{\varphi_7^{3+}}) +6B_5(p^2,m_{\varphi_7^{3+}},m_{\varphi_7^{2+}})\notag\\ &+\frac{48c_\beta^2}{5+3c_\beta^2}B_5(p^2,m_{\varphi_7^{2+}},m_{H^+}) +\frac{45s_\beta^4}{4(5+3c_\beta^2)}B_5(p^2,m_{\varphi_7^{2+}},m_{\bar{H}^+}) +\frac{15s_\beta^2}{4}B_5(p^2,m_{\varphi_7^{2+}},m_{G^+})\notag\\ &+\frac{5(-4s_\alpha c_\beta+c_\alpha s_\beta)^2}{4(5+3c_\beta^2)}B_5(p^2,m_{H^+},m_{h}) +\frac{5(4c_\alpha c_\beta+s_\alpha s_\beta)^2}{4(5+3c_\beta^2)}B_5(p^2,m_{H^+},m_{H})\notag\\ &+\frac{5(5+3c_{2\beta})^2}{16(5+3c_\beta^2)}B_5(p^2,m_{H^+},m_{A}) +\frac{45s_\beta^2c_\beta^2}{4(5+3c_\beta^2)}B_5(p^2,m_{H^+},m_{G^0})\notag\\ &+\frac{3c_\beta^2(4s_\alpha c_\beta-c_\alpha s_\beta)^2}{4(5+3c_\beta^2)}B_5(p^2,m_{\bar{H}^+},m_{h}) +\frac{3c_\beta^2(4c_\alpha c_\beta+s_\alpha s_\beta)^2}{4(5+3c_\beta^2)}B_5(p^2,m_{\bar{H}^+},m_{H})\notag\\ &+\frac{12c_\beta^2}{5+3c_\beta^2}B_5(p^2,m_{\bar{H}^+},m_{A}) +\frac{75s_\beta^2}{4(5+3c_\beta^2)}B_5(p^2,m_{\bar{H}^+},m_{G^0})\notag\\ &+\frac{1}{4}(s_\alpha c_\beta-4c_\alpha s_\beta)^2B_5(p^2,m_{G^+},m_{H}) +\frac{1}{4}\left[(c_\alpha c_\beta+4s_\alpha s_\beta)^2-1\right]B_5(p^2,m_{G^+},m_{h}) \Big]\notag\\ &+\frac{g^2m_W^2}{16\pi^2}\Big[15s_\beta^2B_0(p^2,m_{\varphi_7^{2+}},m_W) +\frac{45c_\beta^2s_\beta^2}{(5+3c^2_\beta)c_W^2}B_0(p^2,m_{H^+},m_Z)\notag\\ &+\frac{75s_\beta^2}{(5+3c_\beta^2)c_W^2}B_0(p^2,m_{\bar{H}^+},m_Z) \notag\\ &+(-s_\alpha c_\beta+4c_\alpha s_\beta)^2B_0(p^2,m_{H},m_W) +\left[(c_\alpha c_\beta+4s_\alpha s_\beta)^2-1\right]B_0(p^2,m_h,m_W)\Big],\\ \Pi_{ZZ}^{\text{1PI}}(p^2) &=\frac{g_Z^2}{16\pi^2}\Big[(5c_W^2-2)^2B_5(p^2,m_{\varphi_7^{5+}},m_{\varphi_7^{5+}}) +(4c_W^2-2)^2B_5(p^2,m_{\varphi_7^{4+}},m_{\varphi_7^{4+}})\notag\\ &+(3c_W^2-2)^2B_5(p^2,m_{\varphi_7^{3+}},m_{\varphi_7^{3+}}) +(2c_W^2-2)^2B_5(p^2,m_{\varphi_7^{2+}},m_{\varphi_7^{2+}})\notag\\ &+\left[\frac{c_{2W}(3c_\beta^2+5)-24c_\beta^2}{10+6c_\beta^2}\right]^2B_5(p^2,m_{H^+},m_{H^+})\notag\\ &+\left(c_W^2+\frac{9}{2}-\frac{20}{5+3c_\beta^2}\right)^2B_5(p^2,m_{\bar{H}^+},m_{\bar{H}^+}) +\frac{135s_\beta^4c_\beta^2}{2(5+3c_\beta^2)^2}B_5(p^2,m_{H^+},m_{\bar{H}^+})\notag\\ &+\frac{45s_\beta^2c_\beta^2}{2(5+3c_\beta^2)}B_5(p^2,m_{H^+},m_{G^+}) +\frac{75s^2_\beta}{2(5+3c_\beta^2)}B_5(p^2,m_{\bar{H}^+},m_{G^+}) \notag\\ &+\frac{1}{4}(-4s_\alpha c_\beta +c_\alpha s_\beta)^2B_5(p^2,m_{A},m_{h}) +\frac{1}{4}(4c_\alpha c_\beta +s_\alpha s_\beta)^2B_5(p^2,m_{A},m_{H})\notag\\ &+\frac{1}{4}(s_\alpha c_\beta -4c_\alpha s_\beta)^2B_5(p^2,m_{G^0},m_{H}) +\frac{1}{4}\left[(c_\alpha c_\beta +4s_\alpha s_\beta)^2-1\right]B_5(p^2,m_{G^0},m_{h})\Big]\notag\\ &+\frac{g_Z^2m_Z^2}{16\pi^2}\Big[ \frac{90c_\beta^2s_\beta^2c_W^2}{5+3c^2_\beta}B_0(p^2,m_{H^+},m_W) +\frac{150s_\beta^2c_W^2}{5+3c_\beta^2}B_0(p^2,m_{\bar{H}^+},m_W) \notag\\ &+\left(-s_\alpha c_\beta+4c_\alpha s_\beta\right)^2B_0(p^2,m_{H},m_Z) +\left[(c_\alpha c_\beta+4s_\alpha s_\beta)^2-1\right]B_0(p^2,m_h,m_Z)\Big], \end{align} \begin{align} \Pi_{\gamma\gamma}^{\text{1PI}}(p^2) 
&=\frac{e^2}{16\pi^2}\Big[25B_5(p^2,m_{\varphi_7^{5+}},m_{\varphi_7^{5+}}) +16B_5(p^2,m_{\varphi_7^{4+}},m_{\varphi_7^{4+}}) +9B_5(p^2,m_{\varphi_7^{3+}},m_{\varphi_7^{3+}})\notag\\ &+4B_5(p^2,m_{\varphi_7^{++}},m_{\varphi_7^{++}})+B_5(p^2,m_{H^{+}},m_{H^{+}}) +B_5(p^2,m_{\bar{H}^{+}},m_{\bar{H}^{+}})\Big],\\ \Pi_{Z\gamma}^{\text{1PI}}(p^2) &=\frac{eg_Z}{16\pi^2}\Big[5(5c_W^2-2)B_5(p^2,m_{\varphi_7^{5+}},m_{\varphi_7^{5+}}) +4(4c_W^2-2)B_5(p^2,m_{\varphi_7^{4+}},m_{\varphi_7^{4+}})\notag\\ &+3(3c_W^2-2)B_5(p^2,m_{\varphi_7^{3+}},m_{\varphi_7^{3+}}) +2(2c_W^2-2)B_5(p^2,m_{\varphi_7^{++}},m_{\varphi_7^{++}})\notag\\ &+\frac{c_{2W}(3c_\beta^2+5)-24c_\beta^2}{10+6c_\beta^2}B_5(p^2,m_{H^{+}},m_{H^{+}}) +\left(c_W^2+\frac{9}{2}-\frac{20}{5+3c_\beta^2}\right)B_5(p^2,m_{\bar{H}^{+}},m_{\bar{H}^{+}}) \Big]. \end{align} \end{appendix}
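As a quick numerical cross-check of Eq.~(\ref{theta7}) (our addition, not part of the original derivation), the orthogonality of the matrix $R_{G^+}$, and hence of $O_+ = R_\theta R_{G^+}$, can be verified directly for arbitrary vacuum expectation values. The following \texttt{numpy} sketch does so for randomly chosen $v_\phi$ and $v_7$; all variable names are ours.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
v_phi, v7 = rng.uniform(1.0, 250.0, size=2)

n16 = np.sqrt(v_phi**2 + 16 * v7**2)
n10 = np.sqrt(v_phi**2 + 10 * v7**2)

# R_{G^+} as written in Eq. (theta7), mixing (phi^+, varphi_7^+,
# bar-varphi_7^+) into (G^+, H^+, bar-H^+).
R_G = np.array([
    [-v_phi / n16, np.sqrt(10) * v7 / n10,
     v_phi * np.sqrt(6) * v7 / (n16 * n10)],
    [-np.sqrt(10) * v7 / n16, -v_phi / n10,
     np.sqrt(60) * v7**2 / (n16 * n10)],
    [np.sqrt(6) * v7 / n16, 0.0, n10 / n16],
])

# R_G^T R_G should be the 3x3 identity matrix.
print(np.abs(R_G.T @ R_G - np.eye(3)).max())  # ~ 1e-16
\end{verbatim}
Since $R_\theta$ is manifestly a rotation, the same check confirms that $O_+$ is orthogonal for any angle $\theta$.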
\section{Introduction} Our main purpose in this article is to derive the Schr\"odinger maps system at the level of the differentiated, gauged system using a variational approach. In part this is to provide a basis for resolving a certain tension: Schr\"odinger maps are usually introduced as constrained geometric evolution equations, whereas state-of-the-art results on Schr\"odinger maps are proved at the level of the gauged system, with little if any reference to the underlying map. These two formulations are related in a simple way. The gauged system is obtained by representing the differentiated Schr\"odinger maps system with respect to a space and time dependent orthonormal frame. Using the Frobenius theorem, one recovers the Schr\"odinger map system from the gauged system. In spite of this close relationship, certain gaps have persisted in what might be called the dictionary that translates between these two formulations. In particular, at the level of maps, the equation is easily seen to be Hamiltonian, though the variational formulation is not entirely satisfactory thanks to topological obstructions. At the level of the differentiated system, topological obstructions cease to exist, though different difficulties emerge, and only partial Hamiltonian and variational descriptions were known. In this article we fill in this gap, providing a natural variational formulation and Hamiltonian. In particular we study the energy-critical Schr\"odinger map system with target $\mathbb{S}^2$ or target $\H^2$. Our first result is a natural variational formulation of the differentiated, gauged system. That is, in \S \ref{Sec:Lagrangian} we introduce the action. Next, in \S \ref{Sec:ConservationLaws}, we introduce a natural stress-energy tensor and derive conservation laws. It is here that we introduce the Hamiltonian, as it may be rewritten in a simple way in terms of the stress-energy tensor. In \S \ref{Sec:CSS}, we take up comparing Schr\"odinger maps with the Chern-Simons-Schr\"odinger system, which is suggested in part by a shared Chern-Simons term in their actions. Finally, we consider in the Appendix gradient flow and solitons from the gauged point of view. These objects are not only interesting in their own right but also are important because they are needed to construct the caloric gauge. \subsection{Geometric map equations} Suppose we have $\phi : \mathbb{R}^d \to M$, where $\mathbb{R}^d$ is Euclidean space, $M$ is a Riemannian manifold with metric $h$, and $\phi$ is a smooth map. Consider the Lagrangian \begin{equation} \label{Lag} \frac12 \int_{\mathbb{R}^d} \langle \partial_j \phi, \partial_j \phi \rangle_{h(\phi(x))} dx \end{equation} where here and throughout we sum repeated Latin indices over all spatial variables. The associated Euler-Lagrange equation is \begin{equation} \label{HM} (\phi^* \nabla)_j \partial_j \phi = 0 \end{equation} the solutions of which are called \emph{harmonic maps}. Here $\nabla$ denotes the Levi-Civita connection on $M$ and $\phi^* \nabla$ denotes the pullback of this connection to $\mathbb{R}^d$. The downward gradient flow associated to \eqref{Lag} generates the \emph{harmonic map heat flow} equation \begin{equation} \label{HMHF} \partial_t \phi = (\phi^* \nabla)_j \partial_j \phi \end{equation} If the target manifold $M$ is K\"ahler with complex structure $J$, then to derive a Schr\"odinger evolution variationally we need to introduce in the action a suitable term. As this term ought only to carry one derivative, the natural pairing is with a 1-form. 
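To make the flow \eqref{HMHF} concrete before continuing, we record a minimal numerical sketch (ours, and purely illustrative; the grid, step size, and initial map are arbitrary choices). For the round sphere $M = \mathbb{S}^2 \hookrightarrow \mathbb{R}^3$, equation \eqref{HMHF} takes the extrinsic form $\partial_t \phi = \Delta \phi + |\nabla \phi|^2 \phi$, which can be approximated by explicit Euler steps followed by reprojection onto the sphere:
\begin{verbatim}
import numpy as np

# Harmonic map heat flow into S^2, extrinsic form:
#   phi_t = Lap(phi) + |grad phi|^2 phi,   phi : [0,2pi)^2 -> S^2 in R^3.
N = 64
h, dt, steps = 2 * np.pi / N, 1e-3, 2000

x = np.arange(N) * h
X, Y = np.meshgrid(x, x, indexing="ij")

# Smooth initial data, normalized pointwise to take values in S^2.
phi = np.stack([np.cos(X) * np.sin(Y), np.sin(X) * np.sin(Y),
                np.cos(Y) + 2.0])
phi /= np.linalg.norm(phi, axis=0)

def lap(u):  # five-point Laplacian with periodic boundary conditions
    return (np.roll(u, 1, -1) + np.roll(u, -1, -1)
            + np.roll(u, 1, -2) + np.roll(u, -1, -2) - 4 * u) / h**2

def grad_sq(u):  # |grad u|^2, summed over the three ambient components
    gx = (np.roll(u, -1, -2) - np.roll(u, 1, -2)) / (2 * h)
    gy = (np.roll(u, -1, -1) - np.roll(u, 1, -1)) / (2 * h)
    return (gx**2 + gy**2).sum(axis=0)

energy = lambda u: 0.5 * grad_sq(u).sum() * h * h

e0 = energy(phi)
for _ in range(steps):
    phi = phi + dt * (lap(phi) + grad_sq(phi) * phi)
    phi /= np.linalg.norm(phi, axis=0)  # reproject onto the sphere

print(e0, "->", energy(phi))  # the Dirichlet energy decreases
\end{verbatim}
The printed energy decreases along the flow, reflecting the fact that \eqref{HMHF} is the downward gradient flow of \eqref{Lag}.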
A drawback of this Lagrangian formulation is that there can be topological obstructions to global nonvanishing 1-forms, such as is the case with $\mathbb{S}^2$. This particular case may be handled by first stereographically projecting to $\mathbb{C}$ and then on that level writing down a suitable action \cite{MaPaSo94, GrSt02}, though this procedure does not genuinely circumvent the fundamental topological issue. In any case, at the level of maps we are led to the \emph{Schr\"odinger map} equation \begin{equation} \label{SM} \partial_t \phi = J(\phi) (\phi^* \nabla)_j \partial_j \phi \end{equation} Equation \eqref{SM} arises in ferromagnetism as a Heisenberg model for a ferromagnetic spin system and describes the classical spin dynamics \cite{La67, PaTo91, MaPaSo94, ChShUh00, NaStUh03}. Solutions of \eqref{HMHF} and \eqref{SM} are preserved by the rescalings \[ \phi(t, x) \mapsto \phi(\lambda^2 t, \lambda x) \quad \quad \lambda > 0 \] and solutions of \eqref{HM} are preserved by such scalings in the spatial variable. For each of these equations, the natural energy is given by \eqref{Lag}, which also obeys a scaling law: \[ E(\phi) := \frac12 \int_{\mathbb{R}^d} \langle \partial_j \phi, \partial_j \phi \rangle_{h(\phi(x))} dx, \quad \quad E(\phi(\lambda x)) = \lambda^{2-d} E(\phi(x)) \] Energy is formally conserved by \eqref{SM}, and as noted the flow of \eqref{HMHF} is the downward gradient flow associated to the energy. In dimension $d = 2$, both the energy and the equations are preserved by rescalings, and for this reason this is called the energy-critical setting. From now on we assume $d = 2$. \subsection{Gauges} One theme unifying the study of equations \eqref{HM}--\eqref{SM} is the use of \emph{gauges} or \emph{moving frames}: for each point in the domain, e.g.~each $(t, x) \in I \times \mathbb{R}^2$ in cases \eqref{HMHF} and \eqref{SM}, we choose an orthonormal basis of $TM_{\phi(t, x)}$. Frames have been used extensively in studying harmonic maps \cite{He91}, and their use in the setting of Schr\"odinger maps in proving wellposedness was initiated in \cite{ChShUh00}. Our notation and perspective follow closely that in \cite[Chapter 6]{Tao06}. In the energy-critical case with a surface as the target, we have one degree of freedom in our choice of orthonormal frame for each $(t, x)$. For maps from $\mathbb{R}^2$ into $M \in \{\mathbb{S}^2, \H^2\}$ evolving on some time interval $I$, a gauge choice may be represented by the diagram \[ \begin{CD} \mathbb{R}^2 \times \mathbb{R}^2 @>e>> \phi^* TM @>>> TM \\ @AA\psi_\alpha A @AA\partial_\alpha \phi A @VV\pi V\\ I \times \mathbb{R}^2 @>id>> I \times \mathbb{R}^2 @>\phi >> M \end{CD} \] Here $\psi_\alpha = e^* \partial_\alpha \phi$ denotes the vector $\partial_\alpha \phi$ written with respect to the choice of orthonormal frame, represented by the map $e(t, x)$. The Levi-Civita connection pulls back to the covariant derivatives $D_\alpha := \partial_\alpha + i A_\alpha$, which generate curvatures $F_{\alpha \beta} := \partial_\alpha A_\beta - \partial_\beta A_\alpha$. Orthonormality of the frame ensures $A_\alpha \in \mathbb{R}$. The zero-torsion property of the connection enforces the compatibility condition $D_\alpha \psi_\beta = D_\beta \psi_\alpha$. Using the fact that $\mathbb{S}^2$ has constant curvature $+1$, one may calculate directly that $F_{\alpha \beta} = \Im(\bar{\psi}_\beta \psi_\alpha)$. Similarly, the constant $-1$ curvature of $\H^2$ leads to $F_{\alpha \beta} = -\Im(\bar{\psi}_\beta \psi_\alpha)$. 
So that we can consider both cases simultaneously, we write $F_{\alpha \beta} = \mu \Im(\bar{\psi}_\beta \psi_\alpha)$, taking $\mu = +1$ for the sphere and $\mu = -1$ for the hyperbolic plane. Thus for any map $\phi$ and any choice of frame $e(t, x)$, it holds that \[ F_{\alpha \beta} = \mu \Im(\bar{\psi}_\beta \psi_\alpha) \quad \quad \text{and} \quad \quad D_\alpha \psi_\beta = D_\beta \psi_\alpha \] These relations are all preserved by the transformations \begin{equation}\label{gauge-freedom} \psi_\alpha \mapsto e^{-i \theta} \psi_\alpha \quad \quad A \mapsto A + d \theta \end{equation} where $\theta(t, x)$ is a compactly supported real-valued function (we only use time-independent functions in the case of \eqref{HM-gauge}). This gauge invariance corresponds to the freedom we have in the choice of frame $e(t, x)$. Here and throughout we use $\partial_0$ and $\partial_t$ interchangeably. We also adopt the convention that Greek indices are allowed to assume values from the set $\{0, 1, 2\}$, whereas Roman indices are restricted to $\{1, 2\}$, meaning that Roman indices indicate only spatial variables. At the gauge field level, the energy-critical harmonic maps equation \eqref{HM} assumes the form \begin{equation} \label{HM-gauge} \begin{cases} 0 &= D_j \psi_j \\ F_{12} &= \mu \Im(\bar{\psi}_2 \psi_1) \\ D_1 \psi_2 &= D_2 \psi_1 \end{cases} \end{equation} The procedure for obtaining gauge field representations of evolution equations is slightly less straightforward. For the harmonic map heat flow, for instance, we begin by pulling back the left and right hand sides of equation \eqref{HMHF}: \begin{equation} \label{HMHF-gauge0} \psi_t = D_j \psi_j \end{equation} To obtain an evolution equation from \eqref{HMHF-gauge0}, we covariantly differentiate in a spatial direction by applying $D_k$ and then invoke the compatibility condition $D_k \psi_t = D_t \psi_k$: \[ D_t \psi_k = D_k D_j \psi_j \] By using the curvature relation to commute $D_k$ and $D_j$ and then applying the compatibility condition $D_j \psi_k = D_k \psi_j$, we obtain a covariant heat equation for $\psi_k$. All told, we arrive at the system \begin{equation} \label{HMHF-gauge} \begin{cases} D_t \psi_k &= D_j D_j \psi_k - i F_{jk} \psi_j \\ F_{01} &= \mu \Im(\bar{\psi}_1 D_j \psi_j) \\ F_{02} &= \mu \Im(\bar{\psi}_2 D_j \psi_j) \\ F_{12} &= \mu \Im(\bar{\psi}_2 \psi_1) \\ D_1 \psi_2 &= D_2 \psi_1 \end{cases} \end{equation} Note that we have eliminated the field $\psi_t$. The gauge field equations for Schr\"odinger maps are similarly derived. For \eqref{SM}, the analogue of \eqref{HMHF-gauge0} is \[ \psi_t = i D_j \psi_j \] and we arrive at the system \begin{equation} \label{SM-gauge} \begin{cases} D_t \psi_k &= i D_j D_j \psi_k + F_{jk} \psi_j \\ F_{01} &= \mu \Re(\bar{\psi}_1 D_j \psi_j) \\ F_{02} &= \mu \Re(\bar{\psi}_2 D_j \psi_j) \\ F_{12} &= \mu \Im(\bar{\psi}_2 \psi_1) \\ D_1 \psi_2 &= D_2 \psi_1 \end{cases} \end{equation} \begin{rem} All three of the above systems, i.e., \eqref{HM-gauge}, \eqref{HMHF-gauge}, and \eqref{SM-gauge}, are preserved by gauge transformations \eqref{gauge-freedom}. In order to obtain well-defined flows, one must eliminate the gauge freedom by making a gauge choice. See \cite[Chapter 6]{Tao06} for a survey of various gauge choices. It appears that the best gauge for handling arbitrary Schr\"odinger maps (i.e., maps without any symmetry assumption) is the caloric gauge, which was introduced in \cite{Tao04} in the context of wave maps and first applied to Schr\"odinger maps in \cite{BeIoKeTa11a}. 
The preferred gauge for studying Schr\"odinger maps with equivariant symmetry and harmonic maps is the Coulomb gauge. \end{rem} \begin{rem} \label{Return} It is natural to ask whether solutions of \eqref{HM-gauge}, \eqref{HMHF-gauge}, or \eqref{SM-gauge} must arise from an underlying map. Assuming sufficient decay and regularity, this is indeed the case. We demonstrate this for Schr\"odinger maps into $\mathbb{S}^2 \hookrightarrow \mathbb{R}^3$, where the embedding is the usual one. Let $\phi$ be a Schr\"odinger map and let $e_1, e_2$ denote the two vectors of the orthonormal frame $e$. Define for $\alpha = 0, 1, 2$, \[ \Phi = \begin{bmatrix} e_1 & e_2 & \phi \end{bmatrix}, \quad \quad R_\alpha = \begin{bmatrix} 0 & -A_\alpha & \Re(\psi_\alpha) \\ A_\alpha & 0 & \Im(\psi_\alpha) \\ -\Re(\psi_\alpha) & -\Im(\psi_\alpha) & 0 \end{bmatrix} \] Then using \eqref{SM} and the definitions, we find that $\Phi, R$ satisfy the Mayer-Lie system \begin{equation} \label{recover-eq} \partial_\alpha \Phi = \Phi R_\alpha \quad \quad \alpha = 0, 1, 2 \end{equation} That \eqref{recover-eq} satisfies the Frobenius integrability condition may be described most succinctly using the Maurer-Cartan 1-form $\omega = \Phi^{-1} d\Phi$, which satisfies \begin{equation} \label{MC} d \omega + \frac12 [\omega, \omega] = 0 \end{equation} From this perspective $\Phi$ is interpreted as an element of $SO(3)$ and each $R_\alpha$ as an element of the corresponding Lie algebra $so(3)$. To obtain $\Phi$ from $R$, we reverse the argument: if $(\psi, A)$ satisfy \eqref{SM-gauge}, then the integrability condition \eqref{MC} is satisfied with $\omega = R_\alpha dx^\alpha$. If $(\psi, A)$ are rapidly decaying, then we can specify a (uniform) boundary condition for $\Phi$ at spatial infinity and recover $\Phi$ at all points by integrating in from infinity. If we have special structure such as equivariance, then instead we can specify $\Phi$ at a point $x \in \mathbb{R}^2$ and integrate out. Analogous statements hold for $\H^2$ embedded in $\mathbb{R}^3$ endowed with the Minkowski metric. In that setting $\Phi$ is interpreted as an element of the Lorentz group $SO(2, 1)$ and the $R_\alpha$ as elements of the associated Lie algebra $so(2, 1)$. For more details in the $\H^2$ setting and for additional related comments, see \cite[\S 2]{Tao04}. \end{rem} \subsection{Topology} \begin{defin} The \emph{charge} $c_1$ of a vector bundle over $\mathbb{R}^2$ with connection $A$ is the integral \[ c_1 := \frac{1}{2\pi} \int_{\mathbb{R}^2} d\underline{A} = \frac{1}{2\pi} \int_{\mathbb{R}^2} F_{12} dx^1 \wedge dx^2 \] The charge is also known as the first Chern number. \end{defin} We will also use the ``underline'' notation introduced here in the sequel: an underlined form means that we take only the spatial components of that form. For instance, if $A = A_0 dt + A_j dx^j$, then $\underline{A} = A_j dx^j$. \begin{lem} For rapidly decaying solutions of \eqref{HMHF-gauge} or \eqref{SM-gauge}, charge is conserved, i.e., \[ \partial_t \frac{1}{2\pi} \int_{\mathbb{R}^2} F_{12} dx = 0 \] \end{lem} \begin{proof} Because $d^2 A = 0$ for any 1-form $A$, \begin{equation} \label{geocon} \partial_t F_{12} - \partial_1 F_{02} + \partial_2F_{01} = 0 \end{equation} Integrating \eqref{geocon} over $\mathbb{R}^2$, the spatial derivative terms contribute nothing by the decay assumption, and the claim follows. \end{proof} A less obvious fact is that for the system \eqref{HM-gauge}, charge is \emph{quantized}, which is to say that it is integer-valued. At the level of maps, this follows from the Gauss-Bonnet theorem and the fact that $d\underline{A}$ is the pullback by the map of the volume form on the target. 
Charge in fact characterizes the homotopy class. To prove quantization at the gauge field level, one may exhaust $\mathbb{R}^2$ with nested discs, apply Stokes' theorem to the integral of $d\underline{A}$ over each disc, and then control the resulting integrals that arise on the boundary. The field equations are of course essential in establishing quantization. See \cite[Chapter 3]{MaSu04} for further discussion. \section{Lagrangian formulation} \label{Sec:Lagrangian} In this section we show that the system \eqref{SM-gauge} arises as the Euler-Lagrange equations of a suitable gauge-invariant action. The difficulties encountered at the level of the map do not arise here. In view of Remark \ref{Return}, this furnishes a Lagrangian formulation for the Schr\"odinger map system. In carrying out variations we work formally, assuming smoothness of all quantities and assuming that fields and variations are rapidly decaying. \begin{thm} \label{thm:SchLag} The energy-critical gauged Schr\"odinger map system \eqref{SM-gauge} is generated by the action \[ \begin{split} L_{Sch}(\psi, A) :=& \int_{\mathbb{R}^{2+1}} \left[ \Re(\bar{\psi}_2 D_t \psi_1) - \Im(\overline{D_j \psi_2} D_j \psi_1) \right] dx^1 \wedge dx^2 \wedge dt \\ & + \frac12 \int_{\mathbb{R}^{2+1}} (|\psi_1|^2 + |\psi_2|^2) dt \wedge dA + \mu \frac12 \int_{\mathbb{R}^{2+1}} A \wedge dA \end{split} \] provided that the compatibility condition $D_1 \psi_2 = D_2 \psi_1$ holds at the initial time. \end{thm} \begin{proof} We verify the claim by calculating the variation. \noindent \textbf{Variation of $\psi$}. The variations of $\psi_1$ and $\psi_2$ give rise, respectively, to the $D_t \psi_2$ and $D_t \psi_1$ evolutions of \eqref{SM-gauge}. Under the variation $\psi_1 \mapsto \psi_1 + \varepsilon \phi$, the terms linear in $\varepsilon$ from \[ \Re(\bar{\psi}_2 D_t \psi_1), \quad -\Im(\overline{D_j \psi_2} D_j \psi_1), \quad \frac12 F_{12} |\psi_1|^2, \] are, respectively, \[ \Re(\bar{\psi}_2 D_t \phi), \quad - \Im(\overline{D_j \psi_2} D_j \phi), \quad F_{12} \Re(\bar{\psi}_1 \phi) \] Integrating by parts in \[ \int_{\mathbb{R}^{2+1}} \left[ \Re(\bar{\psi}_2 D_t \phi) - \Im(\overline{D_j \psi_2} D_j \phi) + F_{12} \Re(\bar{\psi}_1 \phi) \right] dx dt \] yields \[ \int_{\mathbb{R}^{2+1}} \left[ -\Re(\bar{\phi} D_t \psi_2) - \Im(\bar{\phi} D_j D_j \psi_2) + F_{12} \Re(\bar{\phi} \psi_1) \right] dx dt \] which leads to the evolution equation \[ D_t \psi_2 = i D_j D_j \psi_2 + F_{12} \psi_1 \] Similarly, under the variation $\psi_2 \mapsto \psi_2 + \varepsilon \phi$ we obtain the $\varepsilon$-linear terms \[ \Re(\bar{\phi} D_t \psi_1), \quad -\Im(\overline{D_j \phi} D_j \psi_1), \quad F_{12} \Re(\bar{\phi} \psi_2) \] which lead to the evolution equation \[ D_t \psi_1 = i D_j D_j \psi_1 - F_{12} \psi_2 \] \noindent \textbf{Variation of $A$}. The variation of $A_0$ leads to the $F_{12}$ curvature equation. Varying $A_1$ and $A_2$ yields, respectively, preliminary $F_{02}$ and $F_{01}$ equations. To obtain the compatibility condition $D_1 \psi_2 = D_2 \psi_1$, we enforce it at time zero and then show using Gronwall's inequality that the condition persists. Once we have the compatibility condition, we can substitute it back into the preliminary $F_{0j}$ equations to obtain the equations appearing in \eqref{SM-gauge}. 
Under the variation $A \to A + \varepsilon B$, we get from $\mu \frac12 \int A \wedge dA$ the $\varepsilon$-linear term \[ \mu \int_{\mathbb{R}^{2+1}} B \wedge dA \] which can be verified using Stokes and the fact that for 1-forms $A, B$ we have $d(A \wedge B) = dA \wedge B - A \wedge dB$. Upon expansion, the term appears as \begin{equation} \label{AdA-variation} \mu \int_{\mathbb{R}^{2+1}} \left( B_t F_{12} - B_1 F_{02} + B_2 F_{01} \right) dx dt \end{equation} From $\Re(\bar{\psi}_2 D_t \psi_1)$, the variation of $A$ produces the $\varepsilon$-linear term \begin{equation} \label{At-variation} - B_t \Im(\bar{\psi}_2 \psi_1) \end{equation} As there are no other $A_t$ variation terms, we conclude from \eqref{AdA-variation} and \eqref{At-variation} that \begin{equation} \label{F12} F_{12} = \mu \Im(\bar{\psi}_2 \psi_1) \end{equation} We also have $\varepsilon$-linear terms coming from the variation of the $A_j$. In particular, $-\Im(\overline{D_j \psi_2} D_j \psi_1)$ contributes \begin{equation} \label{var-2} B_j \Re(\bar{\psi}_2 D_j \psi_1) - B_j \Re(\overline{D_j \psi_2} \psi_1) \end{equation} Finally, we have to handle \[ \frac12 \int_{\mathbb{R}^{2+1}} (|\psi_1|^2 + |\psi_2|^2) dt \wedge dA \] To do so we first invoke Stokes to obtain \[ \frac12 \int_{\mathbb{R}^{2 + 1}} d\left[ (|\psi_1|^2 + |\psi_2|^2) dt\right] \wedge A \] and then expand to get \begin{equation} \label{Last-variation} \frac12 \int_{\mathbb{R}^{2 + 1}} \left( A_2\, \partial_1 |\psi_x|^2\, dx^1 \wedge dt \wedge dx^2 + A_1\, \partial_2 |\psi_x|^2\, dx^2 \wedge dt \wedge dx^1 \right), \end{equation} where $|\psi_x|^2 := |\psi_1|^2 + |\psi_2|^2$. Varying \eqref{Last-variation} with respect to $A$ and then expanding yields the $\varepsilon$-linear terms \begin{equation} \label{var-3} \int_{\mathbb{R}^{2 + 1}} \left[ B_1 \Re(\bar{\psi}_1 D_2 \psi_1) + B_1 \Re(\bar{\psi}_2 D_2 \psi_2) - B_2 \Re(\bar{\psi}_1 D_1 \psi_1) - B_2 \Re(\bar{\psi}_2 D_1 \psi_2) \right] dx dt \end{equation} Comparing the $B_1$ terms in \eqref{AdA-variation}, \eqref{var-2}, and \eqref{var-3} leads to \[ \int \left[ \Re(\bar{\psi}_2 D_1 \psi_1) - \Re(\overline{D_1 \psi_2} \psi_1) + \Re(\bar{\psi}_1 D_2 \psi_1) + \Re(\bar{\psi}_2 D_2 \psi_2) - \mu F_{02} \right] = 0 \] This yields \begin{equation} \label{F02-prelim} \mu F_{02} = \Re(\bar{\psi}_2 D_j \psi_j) + \Re(\bar{\psi}_1 (D_2 \psi_1 - D_1 \psi_2)) \end{equation} Similarly, comparing $B_2$ terms leads to \[ \int \left[ \Re(\bar{\psi}_2 D_2 \psi_1) - \Re(\overline{D_2 \psi_2} \psi_1) - \Re(\bar{\psi}_1 D_1 \psi_1) - \Re(\bar{\psi}_2 D_1 \psi_2) + \mu F_{01} \right] = 0 \] and hence \begin{equation} \label{F01-prelim} \mu F_{01} = \Re(\bar{\psi}_1 D_j \psi_j) + \Re(\bar{\psi}_2 (D_1 \psi_2 - D_2 \psi_1)) \end{equation} By direct calculation one may verify that \eqref{geocon} holds with \eqref{F12}, \eqref{F02-prelim}, and \eqref{F01-prelim}. 
\noindent \textbf{The compatibility condition.} Set \[ \Theta := D_1 \psi_2 - D_2 \psi_1 \] Then \begin{equation} \label{DtTheta} D_t \Theta = D_1 D_t \psi_2 - D_2 D_t \psi_1 + iF_{01} \psi_2 - i F_{02} \psi_1 \end{equation} By direct calculation, \[ D_t \psi_1 = i D_1 D_j \psi_j - i D_2 \Theta, \quad \quad D_t \psi_2 = i D_2 D_j \psi_j + i D_1 \Theta \] which, upon substituting into \eqref{DtTheta}, yield \[ \begin{split} D_t \Theta &= i(D_1 D_2 - D_2 D_1) D_j \psi_j + i D_j D_j \Theta + i F_{01} \psi_2 - i F_{02} \psi_1 \\ &= -F_{12} D_j \psi_j + i F_{01} \psi_2 - i F_{02} \psi_1 + i D_j D_j \Theta \end{split} \] Invoking \eqref{F12}, \eqref{F02-prelim}, and \eqref{F01-prelim}, we find \[ -F_{12} D_j \psi_j + i F_{01} \psi_2 - i F_{02} \psi_1 = \mu i \left[ \psi_2 \Re(\bar{\psi}_2 \Theta) + \psi_1 \Re(\bar{\psi}_1 \Theta) \right] \] Therefore \[ D_t \Theta = i D_j D_j \Theta + \mu i \left[ \psi_2 \Re(\bar{\psi}_2 \Theta) + \psi_1 \Re(\bar{\psi}_1 \Theta) \right] \] so that in particular \[ \Re(\bar{\Theta} D_t \Theta) = \partial_j \Re(\bar{\Theta} i D_j \Theta) - \mu \Im(\bar{\Theta} \left[ \psi_2 \Re(\bar{\psi}_2 \Theta) + \psi_1 \Re(\bar{\psi}_1 \Theta) \right]) \] Consequently \[ \partial_t \frac12 \int_{\mathbb{R}^2} |\Theta|^2 dx \leq \sup_{\mathbb{R}^2} \left( |\psi_1|^2 + |\psi_2|^2 \right) \int_{\mathbb{R}^2} |\Theta|^2 dx \] Therefore if $\Theta = 0$ at time zero, then we conclude by Gronwall's inequality that $\Theta$ is zero for all later times for which the solution exists. By time reversibility of the system, this means that the compatibility condition \begin{equation} \label{compatibility} D_1 \psi_2 = D_2 \psi_1 \end{equation} holds for all times on the interval of existence provided that it holds at some point of the interval. Finally, by using the compatibility condition \eqref{compatibility} in \eqref{F01-prelim} and \eqref{F02-prelim}, we recover the $F_{0j}$ equations of \eqref{SM-gauge}. \end{proof} \begin{rem} The initial data of $(\psi, A)$ may be chosen in any way that is consistent with the curvature constraints and compatibility condition. \end{rem} \begin{rem} The time compatibility conditions $D_0 \psi_k = D_k \psi_0$ are not present because we have no need for---and therefore have not introduced---the derivative field $\psi_0$. \end{rem} \begin{rem} A Lagrangian approach to Schr\"odinger maps into $\mathbb{S}^2$ appears in \cite{MaPaSo94}, though the Euler-Lagrange equations derived there do not include the compatibility condition $D_1 \psi_2 = D_2 \psi_1$. Instead, such a constraint must be imposed. One of the key differences between our action and that introduced in \cite{MaPaSo94} is that, instead of using a term quartic in $\psi$, we introduce a term that is quadratic in $\psi$ and linear in $dA$, which has the effect of coupling $\psi$ and $dA$. \end{rem} \section{Conservation laws} \label{Sec:ConservationLaws} Some conservation laws are written at the gauge level in \cite{MaPaSo94} and derived at the level of maps in \cite{GrSt02}. Our approach here is at the gauge level, in the spirit of \cite{BeIoKeTa11, CoCzLe11}. 
We begin by introducing the symmetric pseudo-stress-energy tensor $T_{\alpha \beta}$, defined by \begin{equation} \label{Tensor} \begin{cases} T_{00} &= \frac12 (|\psi_1|^2 + |\psi_2|^2) \\ T_{0j} &= \Im(\bar{\psi}_\ell D_j \psi_\ell) \\ T_{jk} &= 2 \Re(\overline{D_j \psi_\ell} D_k \psi_\ell) - \delta_{jk} \left( \Delta T_{00} + \mu F_{12}^2 \right) \end{cases} \end{equation} \begin{thm} \label{thm:laws} Solutions $(\psi, A)$ of the energy-critical gauged Schr\"odinger map system \eqref{SM-gauge} satisfy the conservation law \begin{equation} \label{law1} \partial_\alpha T_{0 \alpha} = 0 \end{equation} and the balance law \begin{equation} \label{law2} \partial_\alpha T_{j \alpha} = 2 F_{\alpha j} T_{0 \alpha} \end{equation} \end{thm} \begin{proof} First we establish \eqref{law1}. Using the evolution equations in \eqref{SM-gauge}, we have \[ \begin{split} \frac12 \partial_t |\psi_1|^2 = \Re(\bar{\psi}_1 D_t \psi_1) &= \Re(\bar{\psi}_1 i D_j D_j \psi_1) + \Re(\bar{\psi}_1 F_{j1} \psi_j) \\ &= \partial_j \Re(\bar{\psi}_1 i D_j \psi_1) + F_{21} \Re(\bar{\psi}_1 \psi_2) \end{split} \] and \[ \frac12 \partial_t |\psi_2|^2 = \partial_j \Re(\bar{\psi}_2 i D_j \psi_2) + F_{12} \Re(\bar{\psi}_2 \psi_1) \] Consequently, \[ \frac12 \partial_t (|\psi_1|^2 + |\psi_2|^2) = \partial_j \Re(\bar{\psi}_\ell i D_j \psi_\ell) \] Next we show \eqref{law2}, which is more involved. We start by using the evolution and curvature conditions to obtain \begin{align} \partial_t T_{0j} &=\Im(\overline{D_t \psi_\ell} D_j \psi_\ell) + \Im(\bar{\psi}_\ell D_t D_j \psi_\ell) \nonumber \\ &= \Im(\overline{i D_k D_k \psi_\ell} D_j \psi_\ell) + \Im(\overline{F_{k \ell} \psi_k} D_j \psi_\ell) + \Im(\bar{\psi}_\ell D_j D_t \psi_\ell) + \Im(\bar{\psi}_\ell i F_{0j} \psi_\ell) \label{tToj} \end{align} The rightmost term of \eqref{tToj} can be rewritten as \[ \Im(\bar{\psi}_\ell i F_{0j} \psi_\ell) = F_{0j}(|\psi_1|^2 + |\psi_2|^2) = 2 F_{0j} T_{00} \] In view of the evolution equation, curvature conditions, and compatibility condition, the second-to-last term of \eqref{tToj} expands as \begin{align} \Im(\bar{\psi}_\ell D_j D_t \psi_\ell) &= \Im(\bar{\psi}_\ell i D_j D_k D_k \psi_\ell) + \Im(\bar{\psi}_\ell D_j(F_{k \ell} \psi_k)) \nonumber \\ &= \Im(\bar{\psi}_\ell i D_k D_j D_k \psi_\ell) - \Im(\bar{\psi}_\ell F_{jk} D_k \psi_\ell) + \Im(\bar{\psi}_\ell D_j(F_{k \ell} \psi_k)) \nonumber \\ &= \Im(\bar{\psi}_\ell i D_k D_j D_k \psi_\ell) + \Im(\bar{\psi}_\ell D_j(F_{k \ell} \psi_k)) + F_{kj} T_{0k} \nonumber \\ &= \partial_k \Im(\bar{\psi}_\ell i D_j D_k \psi_\ell) - \Im(\overline{D_k \psi_\ell} i D_j D_k \psi_\ell) + \Im(\bar{\psi}_\ell D_j(F_{k \ell} \psi_k)) + F_{kj} T_{0k} \label{take1} \end{align} Appealing only to the curvature conditions and compatibility condition, we rewrite the first term of \eqref{tToj} as \[ \begin{split} \Im(\overline{i D_k D_k \psi_\ell} D_j \psi_\ell) &= \partial_k \Im(\overline{i D_k \psi_\ell} D_j \psi_\ell) - \Im(\overline{i D_k \psi_\ell} D_k D_j \psi_\ell) \\ &= \partial_k \Im(\overline{i D_\ell \psi_k} D_j \psi_\ell) - \Im(\overline{i D_\ell \psi_k} D_k D_j \psi_\ell) \end{split} \] where we then rewrite $-\Im(\overline{i D_\ell \psi_k} D_k D_j \psi_\ell)$ as \[ \begin{split} -\Im(\overline{i D_\ell \psi_k} D_k D_j \psi_\ell) &= - \Im(\overline{i D_\ell \psi_k} D_j D_k \psi_\ell) - \Im(\overline{i D_\ell \psi_k} i F_{kj} \psi_\ell) \\ &= - \Im(\overline{i D_\ell \psi_k} D_j D_k \psi_\ell) - F_{kj} \Im(\overline{D_k \psi_\ell} \psi_\ell) \\ &= - \Im(\overline{i D_\ell \psi_k} D_j D_k \psi_\ell) + F_{kj} T_{0k} \end{split} \] 
so that \begin{equation} \label{take2} \Im(\overline{i D_k D_k \psi_\ell} D_j \psi_\ell) = \partial_k \Im(\overline{i D_\ell \psi_k} D_j \psi_\ell) - \Im(\overline{i D_\ell \psi_k} D_j D_k \psi_\ell) + F_{kj} T_{0k} \end{equation} Taking \eqref{take1} and \eqref{take2} together, we get \[ \begin{split} &\Im(\bar{\psi}_\ell D_j D_t \psi_\ell) + \Im(\overline{i D_k D_k \psi_\ell} D_j \psi_\ell) \\ &\quad = \partial_k \Im(\bar{\psi}_\ell i D_j D_k \psi_\ell) + \partial_k \Im(\overline{i D_\ell \psi_k} D_j \psi_\ell) + \Im(\bar{\psi}_\ell D_j(F_{k \ell} \psi_k)) + 2 F_{kj} T_{0k} \end{split} \] Therefore \begin{equation} \label{progress} \begin{split} \partial_t T_{0j} &= 2 F_{\alpha j} T_{0 \alpha} \\ &\quad + \partial_k \Im(\bar{\psi}_\ell i D_j D_k \psi_\ell) + \partial_k \Im(\overline{i D_\ell \psi_k} D_j \psi_\ell) + \Im(\bar{\psi}_\ell D_j(F_{k \ell} \psi_k)) + \Im(\overline{F_{k \ell} \psi_k} D_j \psi_\ell) \end{split} \end{equation} The last line of \eqref{progress} may be rewritten as \begin{equation} \label{pent} \partial_k \partial_j \Im(\bar{\psi}_\ell i D_k \psi_\ell) - 2 \partial_k \Im(\overline{D_j \psi_\ell} i D_k \psi_\ell) + \partial_j \Im(\bar{\psi}_\ell F_{k \ell} \psi_k) - 2 \Im(\overline{D_j \psi_\ell} F_{k\ell} \psi_k) \end{equation} The first term of \eqref{pent} can be rewritten as $\partial_j \Delta T_{00}$. The third term of \eqref{pent} is $2\mu \partial_j F_{12}^2$, since $\Im(\bar{\psi}_\ell F_{k \ell} \psi_k) = 2 F_{12} \Im(\bar{\psi}_2 \psi_1) = 2\mu F_{12}^2$. For the fourth term, we have \[ - 2 \Im(\overline{D_j \psi_\ell} F_{k\ell} \psi_k) = - 2 F_{12} \mu \partial_j F_{12} = - \mu \partial_j F_{12}^2 \] Therefore we may rewrite \eqref{progress} as follows: \[ \partial_t T_{0j} = - 2 \partial_k \Im(\overline{D_j \psi_\ell} i D_k \psi_\ell) + \partial_j \left( \Delta T_{00} + \mu F_{12}^2 \right) + 2 F_{\alpha j} T_{0 \alpha} \] which, in view of the definition \eqref{Tensor}, is precisely \eqref{law2}. \end{proof} \begin{cor} For rapidly decaying solutions of \eqref{SM-gauge}, the following quantity is conserved: \begin{equation} \label{g-en} \frac12 \int_{\mathbb{R}^2} \left( |\psi_1|^2 + |\psi_2|^2\right) dx \end{equation} \end{cor} Note that \eqref{g-en} is simply \eqref{Lag} written at the level of frames. \begin{lem}[Hamiltonian] Let \begin{equation} \label{Hamiltonian} H_{Sch} := \int_{\mathbb{R}^2} \left( -\Im(\overline{D_j \psi_2} D_j \psi_1) + \frac12 (|\psi_1|^2 + |\psi_2|^2) F_{12} \right) dx^1 \wedge dx^2 \end{equation} Then, for rapidly decaying solutions of the gauged Schr\"odinger map system \eqref{SM-gauge}, it holds that \[ H_{Sch} = \frac12 \int_{\mathbb{R}^2} \left( \partial_1 T_{02} - \partial_2 T_{01} \right) dx^1 \wedge dx^2 = 0 \] \end{lem} \begin{proof} Using the compatibility and curvature conditions, we calculate \begin{equation} \label{string} \begin{split} \Im(\overline{D_1 \psi_2} D_1 \psi_1) &= \Im(\overline{D_2 \psi_1} D_1 \psi_1) \\ &= \partial_1 \Im(\overline{D_2 \psi_1} \psi_1) - \Im(\overline{D_1 D_2 \psi_1} \psi_1) \\ &= \partial_1 \Im(\overline{D_2 \psi_1} \psi_1) - \Im(\overline{D_2 D_1 \psi_1} \psi_1) - \Im(\overline{i F_{12} \psi_1} \psi_1) \end{split} \end{equation} We may expand the right-hand side as \[ \partial_1 \Im(\overline{D_2 \psi_1} \psi_1) - \partial_2 \Im(\overline{D_1 \psi_1} \psi_1) + \Im(\overline{D_1 \psi_1} D_2 \psi_1) + F_{12} |\psi_1|^2 \] which, by virtue of the string of equalities in \eqref{string}, is equal to $\Im(\overline{D_2 \psi_1} D_1 \psi_1)$. 
This implies \[ 2 \Im(\overline{D_2 \psi_1} D_1 \psi_1) = \partial_1 \Im(\overline{D_2 \psi_1} \psi_1) - \partial_2 \Im(\overline{D_1 \psi_1} \psi_1) + F_{12} |\psi_1|^2 \] and hence \[ \Im(\overline{D_1 \psi_2} D_1 \psi_1) = \frac12 \left( \partial_1 \Im(\overline{D_2 \psi_1} \psi_1) - \partial_2 \Im(\overline{D_1 \psi_1} \psi_1) + F_{12} |\psi_1|^2 \right) \] By conjugating and reversing the roles of the indices, we similarly conclude \[ \Im(\overline{D_2 \psi_2} D_2 \psi_1) = \frac12 \left(\partial_2 \Im(\bar{\psi}_2 D_1 \psi_2) - \partial_1 \Im(\bar{\psi}_2 D_2 \psi_2) + F_{12} |\psi_2|^2 \right) \] Therefore \[ -\Im(\overline{D_j \psi_2} D_j \psi_1) + \frac12 (|\psi_1|^2 + |\psi_2|^2) F_{12} = \frac12 \left( \partial_1 \Im(\bar{\psi}_j D_2 \psi_j) - \partial_2 \Im(\bar{\psi}_j D_1 \psi_j) \right) \] \end{proof} Define the virial potential and Morawetz action respectively by \[ V_a(t) = \int_{\mathbb{R}^2} a(x) T_{00} dx, \quad \quad M_a(t) = \int_{\mathbb{R}^2} T_{0 j} \partial_j a \;dx \] The conservation law \eqref{law1} followed by integration by parts implies \[ \partial_t V_a(t) = M_a(t) \] We recover the generalized virial identity of \cite[Lemma 3.1]{CoCzLe11}, adapted to the setting of Schr\"odinger maps. \begin{lem} Let $a:\mathbb{R}^2 \to \mathbb{R}$ and let $(\psi, A)$ be a solution of \eqref{SM-gauge}. Then \[ M_a(T) - M_a(0) = \int_0^T \int_{\mathbb{R}^2} \left[ 2 \Re(\overline{D_j \psi_\ell} D_k \psi_\ell) \partial_j \partial_k a - T_{00} \Delta^2 a - \mu F_{12}^2 \Delta a + 2 F_{\alpha j} T_{0\alpha} \partial_j a \right] dx dt \] \end{lem} \begin{proof} Using the Morawetz action, balance law, and integration by parts, we have \[ \partial_t M_a(t) = \int_{\mathbb{R}^2} \left( T_{jk} \partial_k \partial_j a + 2 F_{\alpha j} T_{0\alpha} \partial_j a \right) dx \] and the claim follows upon integrating in time and moving two derivatives off the $\Delta T_{00}$ term. \end{proof} \begin{cor} If $a$ is convex, then we can further conclude that \[ \int_0^T \int_{\mathbb{R}^2} \left( 2 F_{\alpha j} T_{0 \alpha} \partial_j a - T_{00} \Delta^2 a - \mu F_{12}^2 \Delta a \right) dx dt \lesssim \sup_{[0, T]} |M_a(t)| \] \end{cor} Virial identities are established in the context of equivariant Schr\"odinger maps in \cite{BeIoKeTa11, BeIoKeTa12}. For virial and Morawetz identities in the context of radial Schr\"odinger maps, see \cite{GuKo11}. \section{Comparison with Chern-Simons-Schr\"odinger} \label{Sec:CSS} In two spatial dimensions, the Chern-Simons-Schr\"odinger equation arises from the second quantization of a nonrelativistic anyon system. For background, see \cite{JaTe81, DeJaTe82, Wi90, EzHoIw91, EzHoIw91b, JaPi91b, JaPi91, JaPiWe91, MaPaSo91}. Local wellposedness at high regularity is established in \cite{BeBoSa95} using the Coulomb gauge and at low regularity for small data in \cite{LiSmTa12} using the heat gauge, which, in the setting of Chern-Simons-Schr\"odinger systems, appears to have been first introduced in \cite{De07}. \noindent \textbf{Lagrangian formulation.} The action is \[ L(\phi, A) = \frac12 \int_{\mathbb{R}^{2+1}} \left[ \Im (\bar \phi D_t \phi) + |D_x \phi|^2 - \frac{g}{2} |\phi|^4 \right] dx^1 \wedge dx^2 \wedge dt + \frac12 \int_{\mathbb{R}^{2+1}} A \wedge dA \] with Euler-Lagrange equations \begin{equation} \label{CSS} \begin{cases} D_t \phi &= i D_\ell D_\ell \phi + i g \lvert \phi \rvert^2 \phi \\ F_{01} &= - \Im(\bar{\phi} D_2 \phi) \\ F_{02} &= \Im(\bar{\phi} D_1 \phi) \\ F_{12} &= -\frac{1}{2} \lvert \phi \rvert^2 \end{cases} \end{equation} both of which enjoy the gauge freedom \eqref{gauge-freedom}. 
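As a quick check (ours, following the computation in the proof of Theorem \ref{thm:SchLag}), the $F_{12}$ constraint in \eqref{CSS} can be read off from the $A_0$-variation alone. Writing $\Im(\bar\phi D_t \phi) = \Im(\bar\phi \partial_t \phi) + A_0 |\phi|^2$ and varying $A \mapsto A + \varepsilon B$, the $\varepsilon$-linear $B_0$ terms of the action are \[ \int_{\mathbb{R}^{2+1}} \left( \frac12 B_0 |\phi|^2 + B_0 F_{12} \right) dx\, dt, \] the second coming from $\frac12 \int A \wedge dA$ exactly as in \eqref{AdA-variation} (with $\mu$ replaced by $1$). Since $B_0$ is arbitrary, we conclude $F_{12} = -\frac12 |\phi|^2$, the last equation of \eqref{CSS}.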
It is interesting to note that \eqref{CSS} is Galilean invariant, whereas \eqref{SM-gauge} is not; the obstruction lies with the compatibility condition. In the above $g$ is a coupling constant. The so-called ``critical coupling'' is $g = \frac12$, and this is what we consider below. \noindent \textbf{Conservation laws.} For the Chern-Simons-Schr\"odinger system \eqref{CSS}, we set, following \cite{CoCzLe11}, \[ \begin{cases} T_{00} &= \frac12 |\phi|^2 \\ T_{0j} &= \Im(\bar{\phi} D_j \phi) \\ T_{jk} &= 2 \Re(\overline{D_j \phi} D_k \phi) - \delta_{jk}( T_{00} + \Delta) T_{00} \end{cases} \] Here there is no distinction between the conservation law \eqref{law1} and the curvature relation \eqref{geocon}. Note that for $\phi$ not identically zero we always have \[ \int_{\mathbb{R}^2} d\underline{A} = \int_{\mathbb{R}^2} F_{12} dx = - \int_{\mathbb{R}^2} T_{00} dx < 0 \] The balance law \eqref{law2} is still valid in this context. Its right hand side, however, vanishes thanks to $F_{01} = - T_{02}$, $F_{02} = T_{01}$, and $F_{12} = - T_{00}$, so that \begin{equation} \label{CSS-conserv} \partial_\alpha T_{j \alpha} = 0 \end{equation} The conserved energy for this system is \[ E(\phi) := \frac12 \int_{\mathbb{R}^2} \left( |D_x \phi|^2 - \frac14 |\phi|^4 \right) dx = \frac14 \int_{\mathbb{R}^2} \left( T_{11} + T_{22} \right) dx \] \noindent \textbf{Virial identities.} In spite of \eqref{CSS-conserv}, the focusing nature of the critical coupling adds a term in the generalized virial identity with a sign that is unfavorable for establishing Morawetz estimates. In particular, this term is the $-\delta_{jk} T_{00}^2$ term appearing in the definition of $T_{jk}$. \begin{lem} Let $a: \mathbb{R}^2 \to \mathbb{R}$ and let $(\phi, A)$ be a solution of \eqref{CSS}. Then the Morawetz action $M_a(t)$ satisfies \begin{equation} \label{Mora} M_a(T) - M_a(0) = \int_0^T \int_{\mathbb{R}^2} \left[ 2 \Re(\overline{D_j \phi} D_k \phi) \partial_j \partial_k a - T_{00} \Delta^2 a - T_{00}^2 \Delta a \right] dx dt \end{equation} \end{lem} In the defocusing case, the sign of $T_{00}^2 \Delta a$ in \eqref{Mora} is ``$+$'', providing the desired positivity. \begin{cor} For $a = |x|^2$, it holds that \begin{equation} \label{CSS-virial} \partial_t^2 \int_{\mathbb{R}^2} |x|^2 T_{00} dx = \partial_t M_{\{a = |x|^2\}}(t) = 2 \int_{\mathbb{R}^2} \left( |D_x \phi|^2 - T_{00}^2 \right) dx = 4 E(\phi) \end{equation} \end{cor} Equation \eqref{CSS-virial} was used in \cite{BeBoSa95} to establish the existence of finite-time blow-up solutions by taking data with negative energy or data with positive energy and sufficiently large weighted momentum. We remark that \cite{Hu09} constructs finite-time blow-up solutions that have zero energy. The key tool in the construction is pseudo-conformal invariance. The fact that \eqref{CSS-virial} holds is closely tied with exact conservation laws and pseudo-conformal invariance \cite[\S 2.4]{Tao06}. In the case of Schr\"odinger maps, \eqref{law2} is not an exact conservation law. Moreover, pseudo-conformal invariance fails to hold, the obstruction being the compatibility condition \cite{Hu08}. If the compatibility condition were dropped, then $H_{Sch}$ introduced in \eqref{Hamiltonian} could be made nonzero; for maps with sufficient decay satisfying the compatibility condition, however, it necessarily vanishes. Constructing blow-up solutions for Schr\"odinger maps is therefore more involved \cite{MeRaRo11, MeRaRo11a, Pe12}; see also the complementary stability result \cite{BeTa10}.
\section{Introduction} The classical problem of best analytic approximation in $L^p$ on the unit circle $\T$ reads as follows: given a function $g \in L^p$, find a function $p_g$ in the Hardy space $H^p$ such that $$ \|g - p_g\|_{L^p} = \dist_{L^p} (g, H^p). $$ In 1920, F.\ Riesz proved \cite{MR1555162} that best $H^1$--approximation in $L^1$ of a trigonometric polynomial of degree $n$ is an analytic polynomial of degree at most~$n$. His result was generalized in 1950 by A.\ Macintyre and W.\ Rogosinski \cite{MR0036314}, who treated the problem of best analytic approximation in $L^p$ for rational functions with a finite number of poles in the open unit disk. \begin{Thm}[A.\ Macintyre, W.\ Rogosinski] \label{mrt} Let $1 \le p \le \infty$, and let $g$ be a rational function with $n$ poles $\beta_i$ in $|z|<1$, each counted according to multiplicity. Then best $H^p$--approximation $p_g$ of the function $g$ exists uniquely. Moreover, there exist $n-1$ numbers $\alpha_i$ with $|\alpha_i| \le 1$ such that \begin{equation}\label{e1} g - p_g = const \cdot \prod{}^{'} \frac{z - \alpha_i}{1 - \bar \alpha_i z} \prod_{1}^{n-1} (1 - \bar \alpha_i z)^{2/p} \prod_{1}^{n} \frac{1 - \bar \beta_i z}{z - \beta_i} (1 - \bar \beta_i z)^{-2/p}, \\ \end{equation} where $\prod{}^{'}$ is extended over all, some, or none of the $\alpha_i$ with $|\alpha_i| < 1$. \end{Thm} Among other things, this result shows that best $H^1$--approximation of a rational function is a rational function as well. The same holds for best $H^\infty$--approximation. \smallskip In 1953, W.\ Rogosinski and H.\ Shapiro \cite{MR0059354} presented a uniform approach to the problem of best analytic approximation in $L^p$ based on duality for classes $H^p$. Their paper contains a refined (but still rather complicated) proof of Theorem \ref{mrt}. \smallskip The matrix-valued case of the problem of best analytic approximation has been studied extensively in recent years. In particular, V.\ Peller and V.\ Vasyunin \cite{PV} consider this problem for rational matrix-valued functions, motivated by applications in $H^\infty$ control theory. A survey of results related to best analytic approximation in $L^p$ of matrix-valued functions can be found in L.\ Baratchart, F.\ Nazarov, and V.\ Peller~\cite{BNP}. \smallskip Our aim in this note is to give a short proof of Theorem \ref{mrt} and present its analogue in a more general situation. We will consider the problem of best analytic approximation for functions of the form $h/\theta$, where $h \in H^p$ and $\theta$ is an inner function. If $\theta$ is a finite Blaschke product we are in the setting of Theorem \ref{mrt}. In general, functions of the form $h/\theta$ may have much more complex behaviour near the unit circle than rational functions. To be more specific, we need some definitions. A bounded analytic function $\theta$ in the open unit disk is called inner if $|\theta| = 1$ almost everywhere on the unit circle $\T$ in the sense of angular boundary values. Given an inner function $\theta$, define the coinvariant subspace $\Kthp$ of the Hardy space $H^p$ by the formula $\Kthp = H^p \cap \bar z \theta \ov{H^p}$. Here and in what follows we identify the Hardy space $H^p$ in the open unit disc $\D$ with the corresponding subspace of the space $L^p$ on the unit circle $\T$ via angular boundary values. All the information we need about Hardy spaces is available in Sections II and IV of \cite{MR628971}. Basic theory of coinvariant subspaces $\Kthp$ can be found in \cite{MR827223}, \cite{MR1289670}. 
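Riesz's theorem above is easy to observe numerically. In the following sketch (ours, and purely illustrative: the discretization, the degree cap, and the optimizer are arbitrary choices), we minimize a discretized $\|g - p\|_{L^1}$ for a trigonometric polynomial $g$ of degree $2$ over analytic polynomials $p$ of degree $8$; the computed minimizer has negligible coefficients beyond degree $2$.
\begin{verbatim}
import numpy as np
from scipy.optimize import minimize

# Discretize the circle and minimize ||g - p||_{L^1} over analytic
# polynomials p of degree <= deg_max; the complex coefficients are
# split into real and imaginary parts for the optimizer.
M, deg_max = 512, 8
z = np.exp(2j * np.pi * np.arange(M) / M)

# A degree-2 trigonometric polynomial with a genuine antianalytic part.
g = 3 * z**-2 + 1j * z**-1 + 0.5 + 2 * z + z**2

V = np.stack([z**k for k in range(deg_max + 1)], axis=1)

def l1_norm(c_real):
    c = c_real[:deg_max + 1] + 1j * c_real[deg_max + 1:]
    return np.mean(np.abs(g - V @ c))

res = minimize(l1_norm, np.zeros(2 * (deg_max + 1)), method="Powell")
c = res.x[:deg_max + 1] + 1j * res.x[deg_max + 1:]
print(np.round(np.abs(c), 3))  # entries of index > 2 come out ~ 0
\end{verbatim}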
\medskip Our main result is the following. \begin{Thm}\label{t1} Let $\theta$ be an inner function and let $1 \le p < \infty$. Take a function $g \in \bar \theta H^p$ and denote by $p_g$ its best $H^p$--approximation. The function $g - p_g$ can be uniquely represented in the form $g - p_g = c \cdot \bar \theta I F^{2/p}$, where $c = \dist_{L^p}(g , H^p)$, $F$ is an outer function in $\Kth$ of unit norm, $F(0)>0$, and $I$ is an inner function such that $I F \in \Kth$. \end{Thm} Taking $\theta = z^n$ and $p=1$ in Theorem \ref{t1}, we recover the aforementioned result of F.\ Riesz: trigonometric polynomials are preserved under best analytic approximation in $L^1$. Our paper contains the fourth proof of this fact (see Section \ref{s3}); previous proofs can be found in \cite{MR1555162}, \cite{MR0036314}, \cite{MR0382963}. Similarly, Theorem \ref{mrt} follows from Theorem \ref{t1} by taking the function $\theta$ to be a finite Blaschke product. The choice $\theta=e^{iaz}$ leads to the following fact. \begin{Thm}\label{t2} Let $g$ be a function in $L^1(\R)$ with compactly supported Fourier transform: $\supp \hat g \subset [-a, a]$. Then we have $\supp \hat p_g \subset [0, a]$ for best $H^1$--approximation~$p_g$ of the function $g$. \end{Thm} Proofs of Theorems \ref{mrt}, \ref{t1}, \ref{t2} are given in Sections \ref{s3}, \ref{s2}, \ref{s4}, respectively. In Section \ref{s5} we discuss how the problem of best analytic approximation in $L^p$ for functions from $\bar \theta H^p$ can be reduced to a special problem of interpolation. \section{Proof of Theorem \ref{t1}}\label{s2} We need the following known result from \cite{MR1097956}. \begin{Lem}[K.\ Dyakonov]\label{l1} A nonnegative function $\phi$ can be represented in the form $\phi =|F|^2$ for some outer function $F \in \Kth$ if and only if $\phi \in z \bar \theta H^1$. \end{Lem} The proof is included for completeness. \beginpf Let $\phi$ be a function of the form $\phi =|F|^2$, where $F \in H^2 \cap \bar z \theta \ov{H^2}$. Take a function $G \in H^2$ such that $F = \bar z \theta \bar G$. We have $\phi = z \bar \theta GF \in z \bar \theta H^1$, as required. Conversely, consider a nonnegative function $\phi \in z \bar \theta H^1$. Since $\theta$ is unimodular on the unit circle $\T$, we have $\log\phi \in L^1$. Let $F$ be the outer function in $H^2$ with modulus $\sqrt{\phi}$ on $\T$. We have $\bar z \theta |F|^2 \in H^1$. Hence, $\bar z \theta |F|^2 = IF^2$ for an inner function $I$. Thus, the function $F = \bar z \theta \ov{IF}$ belongs to the subspace $\bar z \theta \ov{H^2}$. It follows that $F \in \Kth$, which completes the proof. \qed \bigskip \noindent {\bf Proof of Theorem \ref{t1}.} Let $g$ be a function in the subspace $\bar \theta H^p$, where $1 \le p <\infty$. Denote by $p'$ the conjugate exponent to $p$. There exist functions $p_g \in H^p$, $h_g \in zH^{p'}$ satisfying \begin{equation}\label{e3} \|g - p_g\|_{L^p} = \dist_{L^p}(g, H^p) = \int_{\T} (g-p_g)h_g\,dm, \quad \|h_g\|_{L^{p'}} = 1, \end{equation} where $m$ denotes the normalized Lebesgue measure on $\T$. This well-known fact was first established in \cite{MR0059354}; a modern proof can be found, e.g., in Section IV of~\cite{MR628971}. Denote by $f$ the function $g - p_g \in \bar \theta H^p$ and set $c = \|f\|_{L^p} = \dist_{L^p} (g, H^p)$. It follows from \eqref{e3} that we have equality in the H\"{o}lder inequality $\|fh_g\|_{L^1} \le \|f\|_{L^p}\|h_g\|_{L^{p'}}$. Therefore, $fh_g = c^{1-p}\cdot|f|^{p}$. The function $fh_g$ belongs to the subspace $z \bar \theta H^1$. 
Hence, the function $|f|^{p}$ belongs to $z \bar \theta H^1$ as well, and we see from Lemma \ref{l1} that $|f|^{p} = c^{p} |F|^2$ for an outer function $F \in \Kth$ of unit norm. We may assume that $F(0) > 0$. The function $\theta f$ lies in $H^p$ and has modulus $c |F|^{2/p}$. It follows that $\theta f = c I F^{2/p}$ for an inner function $I$. Let us prove that $IF \in \Kth$. By the construction, we have \begin{equation}\label{e4} c^p|F|^2 = |f|^p = c^{p-1} f h_g = c^p\cdot \bar \theta I F^{2/p} h_g. \end{equation} Hence, the function $h_g \in z H^{p'}$ has the form $h_g = z J F^{2/{p'}}$, where $J$ is an inner function. From \eqref{e4} we get the formula $z \bar \theta IJ F = \bar F$. This yields the fact that $IF \in \bar z \theta \ov{H^2}$. Thus, the inclusion $IF \in \Kth$ is proved. By the construction, $f = c \cdot \bar \theta I F^{2/p}$. We now prove that the functions $I, F$ in the statement of the theorem are determined uniquely. For $1 \le p < \infty$, best $H^p$--approximation $p_g$ of the function $g$ is unique; see \cite{MR0059354} or Section IV in \cite{MR628971}. Hence, the function $c \cdot I F^{2/p} = \theta (g - p_g)$ is determined uniquely. It remains to use uniqueness in the inner-outer factorization for functions in $H^p$. \qed \medskip \noindent {\bf Remark 2.1.} In the case $p= \infty$, Theorem \ref{t1} holds provided the dual extremal function $h_g \in z H^1$ in formula \eqref{e3} exists. Indeed, under this assumption best $H^\infty$--approximation $p_g$ is unique and we get from \eqref{e3} that $f h_g = c|h_g|$, where $f = g - p_g$ and $c = \|f\|_\infty = \dist_{L^\infty}(g, H^\infty)$. As above, there exists an outer function $F \in \Kth$ such that $|h_g| = |F|^2$. Hence, $f h_g = c|F|^2$ and we have $z \bar \theta IF^2 = c|F|^2$ for some inner function $I$. It follows that $IF \in \Kth$ and $f = c\bar \theta I$, as required. It can be shown that the dual extremal function $h_g$ exists for every continuous function $g$ on the unit circle $\T$, see \cite{MR0049322} or Section IV in \cite{MR628971}. In particular, it exists for every rational function with poles in the open unit disk. This will allow us to prove Theorem~\ref{mrt} in the case $p=\infty$; see the details in the next section. \medskip \noindent {\bf Remark 2.2.} As we have seen in the proof of Theorem \ref{t1}, the dual extremal function $h_g$ to the function $g$ is given by the formula $h_g = z J F^{2/p'}$, where $J$ is the inner function such that $IJF = \bar z \theta \bar F$. It can be shown that every inner function $U$ for which $UF \in \Kth$ is a divisor of the function $IJ$; see Theorem 2 in \cite{MR1097956}. \section{Proof of Theorem \ref{mrt}}\label{s3} Let us first prove the classical result by F.\ Riesz on best analytic approximation in $L^1$ of trigonometric polynomials. By a trigonometric (respectively, analytic) polynomial of degree $n$ we mean a linear combination of harmonics $z^k$, $|k| \le n$ (respectively, $0 \le k \le n$). Every trigonometric polynomial can be regarded as a rational function with multiple pole at the origin. Hence, the result below can be readily obtained from Theorem \ref{mrt}. However, we would like to give a separate proof as an example of using Theorem \ref{t1}. \begin{Prop}\label{p4} Let $g$ be a trigonometric polynomial of degree $n \ge 1$ and let $p_g$ be its best $H^1$--approximation. Then $p_g$ is an analytic polynomial of degree at most~$n$. 
Moreover, the function $g - p_g$ has the form \begin{equation}\label{e10} g - p_g = const \cdot \bar z^n \prod_{1}^{K}(1 - \bar \lambda_k z)(z - \lambda_k)\prod_{1}^{M}(1 - \bar \mu_m z)^2, \end{equation} where $|\lambda_k|< 1$, $|\mu_m| \le 1$, and $K + M \le n-1$. \end{Prop} \beginpf Consider the inner function $\theta_n = z^n$. By the assumption, $g \in \bar \theta_n H^1 \cap \theta_n \ov{H^1}$. The coinvariant subspace $K^{2}_{\theta_n}$ consists of analytic polynomials of degree at most $n-1$. It follows from Theorem \ref{t1} that $g - p_g = \bar z^{n} I F^2$, where $F$ is an analytic polynomial of degree at most $n-1$ and without zeroes in the open unit disk; $I$ is a finite Blaschke product such that $IF$ is an analytic polynomial of degree at most $n-1$. Denote by $\lambda_k$ the zeroes of $I$ and by $1/\bar \mu_m$ those zeroes of $F$ that are not poles of $I$, taking into account multiplicities. It is now evident that the function $g - p_g$ is of the form \eqref{e10}. Since $g$ and the right side in \eqref{e10} are trigonometric polynomials of degree at most $n$, the function $p_g$ is an analytic polynomial of degree at most~$n$. \qed \medskip \noindent {\bf Proof of Theorem \ref{mrt}.} Let $1 \le p \le \infty$, and let $g$ be a rational function with $n$ poles $\beta_i$ in the open unit disk, each counted according to multiplicity. Then $g = h/B$, where $h \in H^p$ and $B$ is the Blaschke product with zeroes $\beta_i$, $$ B = \prod_{i=1}^{n}\frac{z - \beta_i}{1 - \bar\beta_i z}. $$ On the unit circle $\T$ we have $g = \bar B h$. Let $p_g$ denote best $H^p$--approximation of $g$. By Theorem \ref{t1} (see also Remark 2.1 for the case $p=\infty$), the function $g - p_g$ can be uniquely represented in the form $g - p_g = c \bar B I F^{2/p}$, where $F$ is an outer function in $K_B^2$ and $I$ is an inner function such that $IF \in K_{B}^{2}$. It follows from the definition of $K^2_B$ that every function $f \in K_{B}^{2}$ has the form $P_f/Q_B$, where $Q_B=\prod_{i=1}^{n} (1 - \bar \beta_{i} z)$ and $P_f$ is an analytic polynomial of degree at most $n - 1$. Since the function $F$ is outer, the polynomial $P_F$ has no zeroes in the open unit disk. Let us write it in the form $P_F = c_1 \cdot \prod_{1}^{n-1} (1 - \bar \alpha_{i} z)$, where $c_1$ is a constant and $|\alpha_i| \le 1$ (if $\deg P_F < n-1$, we set some of the $\alpha_i$ equal to zero). By the construction, $IF \in K^2_B$. Hence, we have $I = \prod{}^{'} \frac{z - \alpha_i}{1 - \bar \alpha_i z}$, where the product $\prod{}^{'}$ is extended over all, some, or none of the $\alpha_i$ with $|\alpha_i|<1$. This yields formula \eqref{e1}. The theorem is proved. \qed \medskip \noindent {\bf Remark 3.1.} The dual extremal function $h_g$ to the function $g$ has the form \begin{equation}\notag h_g = c_2 \cdot z \prod{}^{''} \frac{z - \alpha_i}{1 - \bar \alpha_i z} \prod_{1}^{n-1} (1 - \bar \alpha_i z)^{2/p'} \prod_{1}^{n} (1 - \bar \beta_i z)^{-2/p'}, \\ \end{equation} where $\prod{}^{''}$ is complementary to $\prod{}^{'}$ with respect to the $\alpha_i$ with $|\alpha_i|< 1$, and $c_2$ is a constant. Indeed, this follows from Remark 2.2. \section{Proof of Theorem \ref{t2}}\label{s4} A bounded analytic function $\theta$ in the upper halfplane $\C_+$ of the complex plane $\C$ is called inner if $|\theta|=1$ almost everywhere on the real line $\R$ in the sense of angular boundary values. Coinvariant subspaces of the Hardy space $\H^p$ in $\C_+$ have the form $\mathcal K^{p}_{\theta} =\H^p\cap \theta \ov{\H^p}$. 
Theorem \ref{t1} holds for functions $g$ in $\bar \theta \H^p$, as can be easily seen from its proof. We will deduce Theorem \ref{t2} from the following more general result. \begin{Prop}\label{p1} Let $\theta$ be an inner function in $\C_+$ and let $g \in \ov{\theta} \H^1 \cap \theta \ov{\H^{1}}$. Then we have $p_g \in \mathcal K^{1}_{\theta}$ for best $\H^1$--approximation $p_g$ of $g$. \end{Prop} \beginpf By Theorem \ref{t1}, we have $g - p_g = \bar \theta I F^{2}$, where $F$, $IF$ are functions in $\mathcal K^{2}_{\theta} $. Hence, the function $g - p_g$ belongs to the subspace $$\bar \theta \cdot (\H^2 \cap \theta \ov{\H^2})\cdot (\H^2 \cap \theta \ov{\H^2}) \subset \bar \theta \cdot (\H^1 \cap \theta^2 \ov{\H^1}) \subset \bar \theta \H^1 \cap \theta \ov{\H^1}.$$ It follows that the function $p_g$ lies in the subspace $\mathcal K^{1}_{\theta} = \H^1 \cap \theta \ov{\H^1}$. \qed \bigskip \noindent {\bf Proof of Theorem \ref{t2}.} Consider the inner function $S^{a}: z \mapsto e^{iaz}$ in the upper halfplane~$\C_+$. A function $f$ in $L^1(\R)$ belongs to the Hardy space $\H^1$ if and only if $\supp \hat f \subset [0, +\infty)$. It follows that every function $g \in L^1(\R)$ with $\supp \hat g \subset [-a, a]$ belongs to the subspace $\ov{S^a} \H^1 \cap S^a \ov{\H^{1}}$. By Proposition~\ref{p1}, we have $p_g \in \H^1 \cap S^a \ov{\H^{1}}$. Hence, $\supp \hat p_g \subset [0, a]$ and the result follows. \qed \section{Interpolation problems related to best analytic approximation}\label{s5} The problem of best $H^p$--approximation for functions in $\bar \theta H^p$ can be rewritten in the following form: given a function $g \in H^p$, find a function $h \in H^p$ such that the norm $\|g - \theta h\|_{L^p}$ is minimal. This is the problem of {\it constrained interpolation in $H^p$} with respect to the inner function $\theta$. An account of results related to constrained interpolation in $H^\infty$ is available in Chapter 3 of \cite{MR1864396}. We will consider the same problem in $H^p$, $1 \le p \le \infty$. Our observations are in line with \cite{MR0036314}, where the problem of best analytic approximation in $L^p$ for rational functions is reduced to a problem of interpolation. For an inner function $\theta$, define the class $E_{\theta,p}$ by the formula \begin{equation}\label{e7} E_{\theta,p} = \{cIF^{2/p}: \; c \in \C, \; I \mbox{ is inner, }\; F \mbox{ is outer,}\; IF \in \Kth\}. \end{equation} If $p$ is finite, there is no need for the constant $c$ in \eqref{e7}. We say that a function $f_2 \in H^p$ interpolates a function $f_1 \in H^p$ with respect to the inner function $\theta$ if $f_1 - f_2 \in \theta H^p$. For example, $f_1$ interpolates $f_2$ with respect to $z^n$ if and only if $f_{1}^{(k)}(0) = f_{2}^{(k)}(0)$ for all integers $0 \le k \le n-1$. Another example: if $\theta$ is a Blaschke product with simple zeroes $\Lambda$, then $f_1$ interpolates $f_2$ with respect to $\theta$ if and only if $f_1(\lambda) = f_2(\lambda)$ for all $\lambda \in \Lambda$. \medskip The main result of this section is the following. \begin{Prop}\label{p2} Let $1 \le p < +\infty$ and let $\theta$ be an inner function. Each function $f_1 \in H^p$ can be interpolated by a unique function $f_2 \in E_{\theta, p}$ with respect to $\theta$. Moreover, we have $\|f_2\|_{L^p} = \dist_{H^p} (f_1, \theta H^p)$. \end{Prop} \beginpf Take a function $f_1 \in H^p$ and set $g = \bar \theta f_1$. Let $p_g$ denote best $H^p$--approximation of $g$. 
By Theorem \ref{t1}, we have $g - p_g = c \cdot \bar \theta IF^{2/p}$, where the function $f_2 = c \cdot IF^{2/p}$ belongs to the class $E_{\theta, p}$. Note that $f_1 - f_2 \in \theta H^p$. Hence, the function $f_2$ interpolates $f_1$ with respect to the inner function $\theta$. By construction, we have $\|f_2\|_{L^p} = \dist_{H^p} (f_1, \theta H^p)$. Let us now prove that the interpolating function $f_2$ is unique. Suppose that there is another function $f_2^* = c^*I^* {F^{*}}^{2/p}$ in $E_{\theta, p}$ that interpolates $f_1$ with respect to~$\theta$. We may assume that the functions $F$, $F^*$ are of unit norm in $\Kth$ and have positive values at the origin. Let also $c >0$, $c^* > 0$. Consider the inner function $J$ such that $IJF = \bar z \theta \bar F$. Since $cIF^{2/p} - c^*I^*{F^{*}}^{2/p}$ lies in $\theta H^p$, we have \begin{equation}\label{e5} c = \int_\T c \bar \theta IF^{2/p} \cdot zJF^{2/p'} \, dm = \int_\T c^* \bar \theta I^*{F^{*}}^{2/p} \cdot zJF^{2/p'}\, dm \le c^*. \end{equation} A symmetric argument shows that $c^* \le c$. Hence, we have equality in \eqref{e5}. It follows that the outer functions $F$, $F^*$ have the same modulus on $\T$. Since $F(0)> 0$ and $F^*(0) > 0$, we have $F=F^*$. Again by equality in \eqref{e5}, the inner functions $I$ and $I^*$ have the same argument on $\T$. Hence, $I=I^*$ and $f_2 = f_{2}^{*}$. \qed \medskip A function $g \in L^p$ is called $H^p$--badly approximable if the zero function is the best analytic approximation of $g$ in $L^p$. Theorem \ref{t1} and Proposition \ref{p2} allow us to describe all $H^p$--badly approximable functions in $\bar \theta H^p$, where $\theta$ is an inner function and $1 \le p < \infty$. \begin{Prop}\label{p3} Let $1 \le p< \infty$. A function $g \in \bar \theta H^p$ is $H^p$--badly approximable if and only if $\theta g \in E_{\theta, p}$. \end{Prop} \beginpf By Theorem \ref{t1}, we have $\theta g \in E_{\theta, p}$ for every $H^p$--badly approximable function $g \in \bar \theta H^p$. Conversely, take a function $g \in \bar \theta E_{\theta, p}$ and consider its best $H^p$--approximation $p_g$. Set $f_1=\theta g$. The function $f_2 = f_1 - \theta p_g$ interpolates $f_1$ with respect to the inner function $\theta$. By Theorem \ref{t1}, we have $f_2 \in E_{\theta, p}$. Hence, the two functions $f_1, f_2 \in E_{\theta, p}$ both interpolate the function $f_1$ with respect to $\theta$. It follows from Proposition \ref{p2} that $f_1 = f_2$ and so $p_g =0$. \qed \medskip We conclude this section with some examples. \medskip \noindent {\bf Example 1.} The class $E_{z^n, 1}$ consists of polynomials of the form \begin{equation}\label{e8} const \cdot \prod_{1}^{K}(1 - \bar \lambda_k z)(z - \lambda_k)\prod_{1}^{M}(1 - \bar \mu_m z)^2, \end{equation} where $|\lambda_k|< 1$, $|\mu_m| \le 1$, and $K + M \le n-1$. To the best of the author's knowledge, the problem of {\it constructive} interpolation by polynomials of the form \eqref{e8} with respect to~$z^n$ remains open. A detailed discussion of this problem can be found in Section 5 of~\cite{MR0036314}. \medskip \noindent {\bf Example 2.} Let us compute the supremum of $|f(0) + f'(0)|$ over all $f \in H^1$ of unit norm. By duality and Proposition \ref{p2}, this problem reduces to interpolation of $1 + z$ with respect to $z^2$ by a polynomial in $E_{z^2, 1}$. It is easy to see that the polynomial $\frac{1}{4}(2+z)^2$ solves this problem.
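Indeed, a direct check confirms both required properties: $$\frac{1}{4}(2+z)^2 = 1 + z + \frac{1}{4}z^2, \qquad \mbox{so that} \qquad \frac{1}{4}(2+z)^2 - (1+z) = \frac{1}{4}z^2 \in z^2 H^1,$$ i.e., the interpolation condition with respect to $z^2$ holds; moreover, $\frac{1}{4}(2+z)^2 = (1+z/2)^2 = (1 - \bar \mu z)^2$ with $\mu = -1/2$, which is of the form \eqref{e8} with $K = 0$ and $M = 1 \le n - 1$.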
Hence, we have $$\sup\left\{|f(0) + f'(0)|: \; f \in H^1, \|f\|_{H^1} = 1\right\} = \frac{1}{4}\int_\T|z+2|^2\, dm = \frac{5}{4}.$$ The general problem of calculating $\sup\left\{|a_0 f(0) + a_1 f'(0)|: \; f \in H^1, \|f\|_{H^1} = 1\right\}$ is solved in Section 5 of~\cite{MR0036314}. \medskip \noindent {\bf Example 3.} The problem of constrained interpolation in $H^\infty$ with respect to the inner function $z^n$ is the classical Schur problem. It can be stated as follows: given $n$ numbers $a_0, \ldots, a_{n-1}$, find a function $f \in H^\infty$ of minimal norm such that $f^{(k)}(0)/k! = a_k$ for all $k$. This problem can be solved constructively, see \cite{Schur17} or Paragraph 3.4.2.(ii).(b) in \cite{MR1864396}. The solution has the form \begin{equation}\label{e9} const \cdot \prod_{k=1}^{K} \frac{z - \lambda_k}{1 - \bar \lambda_k z}, \qquad |\lambda_k| < 1, \quad K \le n-1. \end{equation} Clearly, the class $E_{z^n, \infty}$ consists of functions of the form \eqref{e9}. This agrees well with Proposition \ref{p2} (its proof works in the case $p = \infty$ provided the dual extremal function to the function $\bar \theta f_1$ exists). Similarly, the classical Nevanlinna-Pick problem reduces to interpolation by functions of the form \eqref{e9} with respect to a finite Blaschke product. For more information, see Chapter 3 in \cite{MR1864396}.
\section{\label{sec:intro}Introduction} Physicists are now widely convinced that they have discovered what closely resembles the Higgs boson~\cite{ATLAS,CMS} postulated in the standard electroweak model (SM)~\cite{GG2a,GG2b,GG2c,Higgs-originala,Higgs-originalb, Higgs-originalc,Higgs-originald,Higgs-originale,Higgs-originalf}. Along with widespread exhilaration, such a development raises the question of whether this particle carries some signature of physics beyond the standard model. Many studies in this direction have appeared~\cite{before-discoverya,before-discoveryb, before-discoveryc,before-discoveryd,before-discoverye,before-discoveryf, before-discoveryg,before-discoveryh,before-discoveryi,before-discoveryj, Desai-2011yj,before-discoveryk,before-discoveryl,higgs-phenoa,higgs-phenob, higgs-phenoc,higgs-phenod,higgs-phenoe,higgs-phenof,higgs-phenog,higgs-phenoh, higgs-phenoi,higgs-phenoj,higgs-phenok,higgs-phenol,higgs-phenom,higgs-phenon, higgs-phenoo,higgs-phenop,higgs-phenoq,higgs-phenor,higgs-phenos,higgs-phenot, higgs-phenou,higgs-phenov,higgs-phenow,higgs-phenox,higgs-phenoy,higgs-phenoz, higgs-phenoaa,higgs-phenoab,higgs-phenoac,higgs-phenoad,higgs-phenoae,higgs-phenoaf} in the context of the Large Hadron Collider (LHC), where the data available so far still allow some departure from SM behaviour. Even a finite invisible branching ratio ($BR$) for the Higgs cannot, at the moment, be ruled out~\cite{ATLAS-inv,CMS-inv}. The issue can be probed through careful measurements of the couplings of the Higgs (or Higgs-like scalar) to various pairs of SM particles. Among them, the couplings to pairs of vector bosons ($HVV$) can be measured in a relatively reliable manner. This possibility has also been explored in the context of an $ep$ collider~\cite{Han:2009pe,Biswal:2012mp}. In view of the cumulative demand for a closer probe of the $HVV$ couplings (and of course the couplings to other SM particles), the most desirable endeavour is to build an electron-positron collider, which provides a clean environment for precise measurements of Higgs interaction strengths. The first step is of course to develop a Higgs factory (at $\sqrt{s} \approx$ 250 - 300 GeV). Such a machine will not only produce the Higgs boson copiously, but will also serve as a stepping stone towards an $e^+ e^-$ machine at even higher energies. In this paper, we present some observations regarding the signatures of anomalous $HVV$ couplings, manifest through higher-dimensional operators (HDOs), at a Higgs factory. Other studies performed for an $e^+e^-$ machine can be found in~\cite{Biswal:2005fh}. If the couplings arise through physics at a scale higher than that of electroweak symmetry breaking, then the resulting higher-dimensional effective interactions are expected to be gauge invariant. Such interactions have not only been identified, but constraints on their coefficients have also been obtained from the LHC data~\cite{higgs-phenoab, HD-opsc,HD-opsa,HD-opsb,HD-opsd,HD-ops_eff}. In view of such analyses, the coefficients are often restricted to values for which many cherished kinematic distributions may fail to reveal their footprints. In the current study, we point out some features which influence the detectability (or otherwise) of the higher-dimensional couplings at a Higgs factory. At the same time, we emphasise some possible measurements that can elicit their signatures even for relatively small coefficients of such operators.
We concentrate on two Higgs production channels, namely, $e^+ e^- \longrightarrow ZH$ (the $s$-channel process) and $e^+ e^- \longrightarrow \nu{\bar{\nu}}H$ (the $t$-channel process, which we separate with the help of a simple kinematic cut around the Higgs boson energy). In principle, the HDOs considered in this work can influence the rates in both channels. In contrast, the most obvious kinematic distributions, namely, those based on the angular dependence of matrix elements, drawn with moderate values of their coefficients, do not show a perceptible difference with respect to the SM situation. Keeping this in view, we underscore the following points here: \begin{enumerate} \item The $s$-channel process has substantial rates at $\sqrt{s} \le$ 300 GeV or thereabouts. We show, through an analysis of the production amplitude squared, why one cannot expect significantly different angular distributions in this channel at such energies, if one uses moderate values of the operator coefficients. \item The $t$-channel process can have appreciable production rates at high energies ($\approx$ a TeV), too. Because of the production of two neutrinos in the final state, this process provides limited phase-space for the exploration of the tensor structure of the $HWW$ coupling. Here we attempt to exploit the full kinematics of the Higgs boson by means of a correlated two-dimensional likelihood analysis. \item We show that, given this impediment, it is possible to uncover signatures of the aforementioned BSM operators through measurements of rates at two different energies, which also cancels many systematic uncertainties. In general, the energy dependence of the rates can be sensitive to anomalous couplings. \item The very fact that the additional operators should be electroweak gauge invariant implies not only higher-dimensional $HVV$ interactions ($V = W\,, Z\,, \gamma$) but also anomalous $WWV$ interactions ($V = Z, \gamma$) whose strengths are related to the former. We show that the concomitant variations in Higgs production and W-pair production at Higgs factories may elicit the presence of such BSM interactions. \item We also show that if the centre-of-mass energy (CME) of the colliding particles is $\approx 500$ GeV or more, then even moderate values of the operator coefficients can show some differences in the kinematic distributions. \item Lastly, we perform the analysis in a framework that allows one to retain all the gauge-invariant operators at the same time. \end{enumerate} We summarise the gauge invariant couplings in the next section, and subsequently point out the `phenomenological' anomalous couplings they lead to. In section~\ref{sec:pheno}, we take up the $s$ and $t$-channel Higgs production cross-sections in turn, and explain why one cannot expect too much out of kinematic distributions at Higgs factory energies, so long as the BSM coupling coefficients are subject to constraints imposed by the LHC data. Their detectable signatures through event ratios at two energies, and also via the simultaneous measurement of $W$-pair production, are then pointed out in the same section. A likelihood analysis and some related issues, mostly in terms of the phenomenological forms to which all new couplings reduce, are presented in section~\ref{sec:likelihood}. We summarise our conclusions in section~\ref{sec:disc}.
\section{Effective Lagrangian Formalism} \label{sec:ELF} In this paper, we adopt two types of effective Lagrangian parametrizations which are commonly used in the literature to probe the anomalous $HVV$ (where $V=W,Z,\gamma$) interactions. In one parametrization, we take the most general set of dimension-6 gauge invariant operators which give rise to such anomalous $HVV$ interactions. In the other one, we parametrize the $HVV$ vertices with the most general Lorentz invariant structure. Although this formalism is not the most transparent one from the viewpoint of the gauge structure of the theory, it is rather simple and more experiment-friendly. Both formalisms modify the $HVV$ vertices by introducing non-standard momentum-dependent terms. We assume that the SM is the low-energy effective theory, valid below a cut-off scale $\Lambda$, of a more complete theory. In the present study, we are concerned mainly with the Higgs sector. The first-order corrections to the Higgs sector will come from gauge invariant dimension-6 operators, as there is only one dimension-5 operator, which contributes to the neutrino masses. The relevant additional Lorentz structures in $HVV$ interactions are necessarily of dimensions higher than four. If they arise as a consequence of integrating out physics at a higher scale, all such operators will have to be invariant under $SU(2)_L\times U(1)_Y$. A general classification of such operators is found in the literature~\cite{Buchmueller,min-basis,Hagiwara,Garcia}. The lowest order CP-conserving operators which are relevant for Higgs phenomenology are \begin{itemize} \item The operators containing the Higgs doublet $\Phi$ and its derivatives: \begin{equation} \mathcal{O}_{\Phi,1} = (D_{\mu}\Phi)^{\dagger}\Phi\Phi^{\dagger}(D^{\mu}\Phi);~~~ \mathcal{O}_{\Phi,2} = \frac{1}{2}\partial_{\mu}(\Phi^{\dagger}\Phi)\partial^{\mu}(\Phi^{\dagger}\Phi);~~~ \mathcal{O}_{\Phi,3} = \frac{1}{3}(\Phi^{\dagger}\Phi)^{3} \end{equation} \item The operators containing the Higgs doublet $\Phi$ (or its derivatives) and bosonic field strengths: \begin{equation} \mathcal{O}_{GG} = \Phi^{\dagger}\Phi G_{\mu\nu}^a G^{a\,\mu\nu};~~~ \mathcal{O}_{BW} = \Phi^{\dagger}\hat{B}_{\mu \nu} \hat{W}^{\mu \nu} \Phi;~~~ \mathcal{O}_{WW} = \Phi^{\dagger}\hat{W}_{\mu \nu} \hat{W}^{\mu \nu} \Phi \nonumber \end{equation} \begin{equation} \mathcal{O}_{W} = (D_{\mu}\Phi)^{\dagger} \hat{W}^{\mu \nu} (D_\nu \Phi);~~~ \mathcal{O}_{BB} = \Phi^{\dagger}\hat{B}_{\mu \nu} \hat{B}^{\mu \nu} \Phi;~~~ \mathcal{O}_{B} = (D_{\mu}\Phi)^{\dagger} \hat{B}^{\mu \nu} (D_\nu \Phi), \end{equation} \end{itemize} where $\hat{W}^{\mu \nu}=i\,\frac{g}{2} \sigma_{a}W^{a \; \mu \nu}$ and $\hat{B}^{\mu \nu}=i\,\frac{g'}{2} B^{\mu \nu}$ and $g$, $g'$ are respectively the $SU(2)_L$ and $U(1)_Y$ gauge couplings. $W^a_{\mu \nu} = \partial_{\mu}W^a_{\nu}-\partial_{\nu}W^a_{\mu} - g \epsilon^{abc}W^b_{\mu} W^c_{\nu}$, $B_{\mu \nu} = \partial_{\mu}B_{\nu}-\partial_{\nu}B_{\mu}$ and $G^a_{\mu \nu} = \partial_{\mu}G^a_{\nu}-\partial_{\nu}G^a_{\mu} - g_s f^{abc}G^b_{\mu} G^c_{\nu}$. The Higgs doublet is denoted by $\Phi$ and its covariant derivative is given as $D_{\mu}\Phi=(\partial_{\mu}+\frac{i}{2}g' B_{\mu} + i g \frac{\sigma_a}{2}W^a_{\mu})\Phi$. The properties of the aforementioned HDOs are as follows: \begin{itemize} \item $\mathcal{O}_{\Phi,1}$: Does not preserve custodial symmetry and is therefore severely constrained by the $T$-parameter (or equivalently the $\rho$ parameter).
It modifies the SM $HZZ$ and $HWW$ couplings by unequal multiplicative factors. \item $\mathcal{O}_{\Phi,2}$: Preserves custodial symmetry and modifies the SM $HZZ$ and $HWW$ couplings by multiplicative factors. This operator modifies the Higgs self-interaction as well. \item $\mathcal{O}_{\Phi,3}$: Modifies only the Higgs self-interaction. \item $\mathcal{O}_{GG}$: Introduces an $HGG$ coupling which is the same in structure as the SM effective $HGG$ coupling. Since our discussion is limited to an $e^+e^-$ collider and we do not consider the gluonic decay mode of the Higgs, we will not discuss this operator any further. \item $\mathcal{O}_{BW}$: Drives the tree-level $Z\leftrightarrow\gamma$ mixing and is therefore highly constrained by the electroweak precision test (EWPT) data~\cite{HD-opsc}. \item $\mathcal{O}_{WW}$, $\mathcal{O}_{W}$, $\mathcal{O}_{BB}$, $\mathcal{O}_{B}$: Modify the $HVV$ couplings by introducing new Lorentz structures in the Lagrangian. They are not severely constrained by the EWPT data~\cite{HD-opsa,HD-opsb}. \end{itemize} Hence for the Higgs sector, we will choose our basis as $\mathcal{O}_i \in \{\mathcal{O}_{WW},\mathcal{O}_{W},\mathcal{O}_{BB},\mathcal{O}_B\}$. In the presence of the above operators, the Lagrangian is parametrised as \begin{equation} \mathcal{L} = \kappa\left(\frac{2 m_W^2}{v} H W_{\mu}^+ W^{\mu -}+\frac{ m_Z^2}{v} H Z_{\mu} Z^{\mu } \right) + \sum_{i} \frac{f_{i}}{\Lambda^2}\mathcal{O}_{i} \label{Lag} \end{equation} where $\kappa$ is the scale factor of the SM-like coupling, something which needs to be accounted for when considering BSM physics. $f_i$ is a dimensionless coefficient which denotes the strength of the $i^{th}$ operator and $\Lambda$ is the cut-off scale above which new physics must appear. We keep $\kappa$ the same for the $HWW$ and $HZZ$ couplings so that there is no unacceptable contribution to the $\rho$-parameter. Another operator considered in this work is $\mathcal{O}_{WWW} = Tr[\hat{W}_{\mu \nu} \hat{W}^{\nu \rho} \hat{W}^{\mu}_{\rho}]$. This only affects the triple gauge boson couplings and does not affect the Higgs sector. \\ The effective Lagrangian which affects the Higgs sector is \begin{align} \label{eq:lagHVV} \mathcal{L}_{eff} &= g_{HWW}^{(1)}~(W_{\mu\nu}^{+}W^{-\mu}\partial^{\nu}H + h.c.) + g_{HWW}^{(2)}~HW_{\mu\nu}^{+}W^{-\mu\nu} \nonumber \\ &+ g_{HZZ}^{(1)}~Z_{\mu\nu}Z^{\mu}\partial^{\nu}H + g_{HZZ}^{(2)}~HZ_{\mu\nu}Z^{\mu\nu} \nonumber \\ &+ g_{HZ\gamma}^{(1)}~A_{\mu\nu}Z^{\mu}\partial^{\nu}H + g_{HZ\gamma}^{(2)}~HA_{\mu\nu}Z^{\mu\nu}+g_{H\gamma\gamma}H A_{\mu \nu} A^{\mu \nu}, \end{align} where \begin{align} \label{eq:lagHVVcoeff} g^{(1)}_{HWW}&=\left(\frac{g M_W}{\Lambda^2}\right) \frac{f_W}{2};~~~ g^{(2)}_{HWW}=-\left(\frac{g M_W}{\Lambda^2}\right)f_{WW} \nonumber \\ g^{(1)}_{HZZ}&=\left(\frac{g M_W}{\Lambda^2}\right) \frac{c^2 f_W + s^2 f_B}{2 c^2};~~~g^{(2)}_{HZZ}=-\left(\frac{g M_W}{\Lambda^2}\right) \frac{s^4 f_{BB} + c^4 f_{WW}}{2 c^2} \nonumber \\ g^{(1)}_{HZ\gamma}&=\left(\frac{g M_W}{\Lambda^2}\right)\frac{s(f_W-f_B)}{2 c};~~~g^{(2)}_{HZ\gamma}=\left(\frac{g M_W}{\Lambda^2}\right)\frac{s(s^2 f_{BB}-c^2 f_{WW})}{c} \nonumber \\ g_{H\gamma\gamma}&=-\left(\frac{g M_W}{\Lambda^2}\right)\frac{s^2(f_{BB}+f_{WW})}{2} \end{align} with $s\,(c)$ being the sine (cosine) of the Weinberg angle. The operators $\mathcal{O}_W$, $\mathcal{O}_B$ and $\mathcal{O}_{WWW}$ contribute to the anomalous triple gauge boson interactions.
The interactions can be summarised as \begin{align} \label{eq:lagWWV} \mathcal{L}_{WWV}=-i g_{WWV}\left\{g_1^V\left(W_{\mu\nu}^+W^{-\mu}V^{\nu}-W_{\mu}^+V_{\nu}W^{-\mu \nu}\right)+\kappa_V W_{\mu}^+W_{\nu}^-V^{\mu \nu} + \frac{\lambda_V}{M_W^2}W_{\mu \nu}^+ W^{-\nu \rho} V_{\rho}^{\mu}\right\}, \end{align} where $g_{WW\gamma}=g \, s$, $g_{WWZ} = g \, c$, $\kappa_V=1+\Delta\kappa_V$ and $g_1^Z=1+\Delta g_1^Z$ with \begin{align} \label{eq:lagWWVcoeff} \Delta \kappa_{\gamma}&=\frac{M_W^2}{2 \Lambda^2}\left(f_W+f_B\right);~~~\lambda_{\gamma}=\lambda_Z=\frac{3g^2M_W^2}{2\Lambda^2} f_{WWW} \nonumber \\ \Delta g_1^Z&=\frac{M_W^2}{2 c^2 \Lambda^2} f_W;~~~\Delta \kappa_Z=\frac{M_W^2}{2 c^2 \Lambda^2}\left(c^2 f_W - s^2 f_B\right) \end{align} The limits on these operators have been derived in many references. The most comprehensive of these are listed in references~\cite{higgs-phenoab,HD-opsc,HD-opsa,HD-opsb,HD-opsd}. These operators, even within their current limits, have been shown to modify the efficiencies of the various selection cuts for the relevant final states in the context of the LHC~\cite{HD-ops_eff}. All of the aforementioned HDOs lead essentially to one effective coupling (each for $HWW$ and $HZZ$), when $CP$-violation is neglected. These can be alternatively used in a phenomenological way: for example, the $H(k)W_\mu^+(p)W_\nu^-(q)$ vertex can be parametrised as \cite{probingSpinParity}: \begin{equation} i\Gamma^{\mu\nu}(p,q) \epsilon_\mu(p)\epsilon^*_\nu(q), \end{equation} where deviations from the SM form of $\Gamma^{\mu\nu}_{SM}(p,q)=-gM_Wg^{\mu\nu} $ would indicate the presence of BSM physics. These BSM deviations, including $CP$-violating ones (not considered among the gauge invariant operators), can be specified as \begin{equation} \label{eq:LIP} \Gamma^{BSM}_{\mu\nu}(p,q)=\frac{g}{M_W}[\lambda(p.q g_{\mu\nu} - p_\nu q_\mu) + \lambda^\prime \epsilon_{\mu\nu\rho\sigma}p^\rho q^\sigma], \end{equation} where $\lambda $ and $\lambda^\prime $ are the effective strengths for the anomalous CP-conserving and CP-violating operators respectively. Precise identification of the non-vanishing nature of $\lambda, \lambda^\prime$ is a challenging task. If ever accomplished, it can tell us whether the modifications in the $HVV$ couplings are $CP$-conserving or $CP$-violating in nature and, if both are present, what their relative proportion is. Here we analyse the process $e^+ e^- \to H \nu \bar{\nu}$ and look for any BSM physics involved by means of a likelihood analysis in which the SM hypothesis is tested against BSM hypotheses. A few comments are in order on the two ways of parametrizing the anomalous Higgs couplings. The latter, of course, encapsulates all possible modified Lorentz invariant couplings in the lowest possible order, including both $CP$-conserving and $CP$-violating ones, in the coefficients $\lambda$ and $\lambda^\prime$ respectively. All of the anomalous $HWW$ and $HZZ$ couplings listed in the gauge-invariant formulation reduce basically to one term if one confines oneself to a $CP$-conserving scenario. Thus we can say that the latter parametrization offers a rather `economical' way of relating the anomalous $HVV$ interactions to collider phenomenology. On the other hand, the process of relating the anomalous couplings to specific effective interactions is more transparent from the viewpoint of gauge structures when one uses the gauge invariant HDOs. It paves an easier path towards understanding the ultraviolet completion of the scenario.
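To get a feel for the size of these correlated shifts, consider, purely as an illustration, $f_W = 8$ and $f_B = 3$ (in TeV$^{-2}$ units, the values appearing in the benchmark point BP1 used later) with $\Lambda = 1$ TeV, and take the approximate inputs $M_W \simeq 80.4$ GeV and $s^2 \simeq 0.23$ (assumed here only for this estimate); Eq.~\ref{eq:lagWWVcoeff} then gives \begin{equation} \Delta g_1^Z = \frac{M_W^2}{2 c^2 \Lambda^2}\, f_W \simeq \frac{(0.0804)^2}{2\times 0.77}\times 8 \simeq 0.034, \qquad \Delta \kappa_{\gamma} = \frac{M_W^2}{2 \Lambda^2}\left(f_W+f_B\right) \simeq 0.036, \end{equation} i.e., anomalous triple gauge boson couplings at the few per-cent level accompany anomalous $HVV$ couplings of this strength.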
In addition to this, the formulation in terms of gauge-invariant operators relates the anomalous $HWW$ and $HZZ$ interactions. One finds, in this way, a pattern in the departure of the $ZH$ and $\nu\bar{\nu}H$ final state production rates from the corresponding SM prediction. Finally, some of the gauge-invariant operators lead simultaneously to anomalous triple gauge boson interactions. There is thus an associated variation in the $ZH$, $\nu\bar{\nu}H$ and $W^+W^-$ production rates as well as in the kinematic distributions associated with each final state. Such an association enables one to use various pieces of data to determine each new operator. \section{Phenomenology at an $e^+e^-$ Collider} \label{sec:pheno} In this section, we discuss various important Higgs production mechanisms through $HVV$ vertices at an $e^+e^-$ collider. For the collider phenomenology, we have implemented the Lagrangians of Eqs.~(\ref{eq:lagHVV})~and~(\ref{eq:lagWWV}) in \texttt{FeynRules}~\cite{Alloul:2013bka} to generate \texttt{Universal FeynRules Model (UFO)}~\cite{Degrande:2011ua} files suitable for interfacing with \texttt{MadGraph}~\cite{Alwall:2011uj}. We also use \texttt{FORM}~\cite{Vermaseren:2000nd} to compute many cross-sections analytically. \subsection{Higgs production at an $e^+e^-$ collider} We concentrate on two main Higgs production mechanisms, {\it viz.} $e^+e^-\to ZH$ and $e^+e^- \to \nu\bar{\nu}H$, at an $e^+e^-$ collider with energies ranging from 250 GeV to 500 GeV. The $e^+e^-\to ZH$ channel includes only the $s$-channel process $e^+e^-\to Z^*/\gamma^*\to ZH$ (shown in Fig.~\ref{fig:FD}(a)), whereas $e^+e^- \to \nu \bar{\nu} H$ includes both the $s$-channel process $e^+e^-\to Z^*/\gamma^* \to ZH\to \nu \bar{\nu} H$ and the $t$-channel process $e^+e^-\to \nu\bar{\nu}W^*W^*\to \nu\bar{\nu}H$ (the $WW$ fusion process, shown in Fig.~\ref{fig:FD}(b)). \begin{figure} \centering \subfloat{ \begin{tabular}{ccc} \resizebox{55mm}{!}{\includegraphics{ee2zh.pdf}} &&~~~ \resizebox{55mm}{!}{\includegraphics{ee2vvh.pdf}} \\ \hspace{0mm}(a)&&\hspace{8mm}(b) \end{tabular}} \caption{(a) $s$-channel Feynman diagrams; (b) $t$-channel Feynman diagram.} \label{fig:FD} \end{figure} The $s$ and $t$-channel processes have different kinematics and hence are affected differently by the inclusion of the HDOs. Moreover, the $t$-channel process allows us to explore the tensor structure of the $HWW$ vertex alone, free from any contamination from the $HZZ$ and $HZ\gamma$ vertices. On the other hand, the $s$-channel process is free from any contamination due to the $HWW$ vertex. Hence, the measurement of the $s$-channel contribution will shed light on the tensorial nature of the $HZZ$ and $HZ\gamma$ vertices. We, therefore, analyse the $s$ and $t$-channel processes separately, in order to disentangle the anomalous behaviour of the different $HVV$ vertices. We separate the $s$-channel ($t$-channel) contribution from the $e^+e^-\to \nu\bar{\nu}H$ events by applying a simple kinematic cut on the Higgs energy ($E_H$) as follows: \begin{equation} \label{eq:stcut} E_H\textrm{-cut:}~~ \Big\vert E_H-\frac{s+M_H^2-M_Z^2}{2\sqrt{s}}\Big\vert \leq \Delta~~~ \left(E_H^c\textrm{-cut:}~~ \Big\vert E_H-\frac{s+M_H^2-M_Z^2}{2\sqrt{s}}\Big\vert \geq \Delta\right), \end{equation} where $\sqrt{s}$ is the CME of the two colliding $e^+e^-$ beams and $\Delta$ is the half-width of an energy window around the nominal Higgs energy. Here, the $E_H^c$-cut is complementary to the $E_H$-cut.
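As a simple illustration of Eq.~\ref{eq:stcut} (taking the approximate values $M_H \simeq 125$ GeV and $M_Z \simeq 91.2$ GeV), at $\sqrt{s}=300$ GeV the two-body recoil kinematics of the $s$-channel process fixes the Higgs energy at \begin{equation} E_H \approx \frac{s+M_H^2-M_Z^2}{2\sqrt{s}} = \frac{(300)^2+(125)^2-(91.2)^2}{2\times 300}~\textrm{GeV} \approx 162~\textrm{GeV}, \end{equation} so the $E_H$-cut retains events in a window of half-width $\Delta$ around 162 GeV, while the $E_H^c$-cut retains the rest.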
We use $\Delta=5$ GeV throughout our analysis~\footnote{Typical values of $\Delta$ can be estimated from the energy uncertainties of the $b$-jets coming from the Higgs decay. The jet energy uncertainty $\Delta E_{jet}$ (1$\sigma$) of a jet with energy $E_{jet}$ satisfies $\Delta E_{jet}/E_{jet} \lesssim 0.3/\sqrt{E_{jet}}$ at the ILC~\cite{ILC}. For example, if there are two $b$-jets each with energy 100 GeV, the total uncertainty in their energy measurement is $\sqrt{2\times (0.3\times \sqrt{100})^2}\sim 4$ GeV (added in quadrature).}. We must mention here that for the rest of this paper the $s$-channel process will be studied at the $ZH$ level without any cuts, unless otherwise specified. One can easily get an estimate of the cross-section for any decay mode of the $Z$ by multiplying by the appropriate $BR$. This is because for the $e^+e^-\to l^+l^-H$ channel, a simple invariant mass cut on the two leptons about the $Z$ boson mass will separate the $s$-channel to a very high degree. For $e^+e^-\to \nu \bar{\nu}H$, on the other hand, the cut on $E_H$ separates the $s$ and $t$-channels. The $s$-channel contribution surviving the cut is found to be very close to what one would have found from the rate for $l^+l^-H$, through a scaling of BRs. One is thus confident that the $E_H$-cut is effective in minimising mutual contamination of the $s$ and $t$-channel contributions. It should also be mentioned here that the effects of beam energy spread are not taken into account in Eq.~\ref{eq:stcut}, for simplicity. While we present the basic ideas of distinguishing anomalous interactions of the Higgs, the relevant energy window for precision studies has to factor in the effects of bremsstrahlung as well as beamstrahlung (depending on whether the Higgs factory is a circular or a linear collider). \begin{table}[t] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline $\sqrt{s}$ & Benchmark & $\sigma^{tot}_{\nu\bar{\nu}H}$ & $\sigma^s_{\nu\bar{\nu}H}$ & $\sigma^t_{\nu\bar{\nu}H}$ & $\sigma^{int}_{\nu\bar{\nu}H}$ & $\sigma^{s,ac}_{\nu\bar{\nu}H}$ & $\sigma^{t,ac}_{\nu\bar{\nu}H}$ \\ (GeV) & point & (fb) & (fb) & (fb) & (fb) & (fb) & (fb) \\ \hline 300 & SM & 52.43 & 36.35 & 17.83 & -1.75 & 37.24 & 15.19 \\ & BP1 & 52.11 & 35.29 & 18.83 & -2.01 & 36.76 & 15.35 \\ \hline 500 & SM & 84.80 & 11.64 & 74.07 & -1.11 & 11.93 & 72.83 \\ & BP1 & 87.38 & 7.37 & 81.50 & -1.49 & 7.83 & 79.55 \\ \hline \end{tabular} \caption{\label{tab:sigst} We show the total $\nu\bar{\nu}H$ cross-section ($\sigma_{\nu\bar{\nu}H}^{tot}$), the $s$-channel-only cross-section ($\sigma_{\nu\bar{\nu}H}^{s}$), the $t$-channel-only cross-section ($\sigma_{\nu\bar{\nu}H}^{t}$) and their interference contribution ($\sigma_{\nu\bar{\nu}H}^{int}$) for the SM ($\kappa=1,f_{WW}=0,f_W=0,f_{BB}=0,f_B=0$) and for the HDO benchmark point BP1 ($\kappa=1,f_{WW}=-3,f_W=8,f_{BB}=-4,f_B=3$) at two different CMEs. We also present the $s$-channel ($\sigma^{s,ac}_{\nu\bar{\nu}H}$) and $t$-channel ($\sigma^{t,ac}_{\nu\bar{\nu}H}$) cross-sections separated from the $\nu\bar{\nu}H$ events after applying the cut defined in Eq.~\ref{eq:stcut}. The superscript $ac$ means `after cut'.} \end{table} In Table~\ref{tab:sigst}, we show the effect of the $E_H$-cut on the $\nu\bar{\nu}H$ channel in the SM and in the presence of HDOs for one benchmark point, BP1 ($\kappa=1,f_{WW}=-3,f_W=8,f_{BB}=-4,f_B=3$), which closely mimics the SM cross-section.
The $E_H$-cut keeps almost all the $s$-channel contribution, whereas the $E_H^c$-cut removes a small portion around the nominal $E_H$ from the $t$-channel contribution. Therefore, the $s$-channel cross-sections after this cut increase slightly relative to their values without the cut, owing to this small $t$-channel contamination. Correspondingly, the $t$-channel cross-sections after the cut decrease slightly. We also estimate the interference between the $s$ and $t$-channel diagrams and present the numbers in Table~\ref{tab:sigst}. The interference contribution is expected to be tiny for $\sqrt{s}$ sufficiently far above the $s$-channel threshold energy $(M_H+M_Z) \approx 216$ GeV. We find that the interference contribution is only $\sim 3.3$\% of the total cross-section for $\sqrt{s}=300$ GeV, in the SM. This re-affirms the statement at the end of the previous paragraph. We also note that the inclusion of HDOs with moderate values of coefficients does not affect this contribution much. Hence, by neglecting the interference term, we approximate the total $\nu\bar{\nu}H$ cross-section as \begin{equation} \sigma^{tot}_{\nu \bar{\nu}H}\approx \sigma_{ZH}\times BR_{Z\to \nu \bar{\nu}}+\sigma_{\nu\bar{\nu}H}^t, \end{equation} where $\sigma_{ZH}$ is the $s$-channel cross section and $BR_{Z\to \nu \bar{\nu}}$ is the invisible branching fraction ($\approx 20\%$) of the $Z$-boson. Fig.~\ref{fig:Mvv} shows the invariant mass distribution of the neutrino pair for the process $e^+e^-\to\nu\bar{\nu}H$ at $\sqrt{s}=300$ GeV and for the benchmark point BP1. We show the distribution for the total process (which includes the $s$ and $t$ channels as well as their interference), and also the $s$ and $t$ channels separately. An inset plot shows the contribution of the interference, which is clearly negligible compared to the $s$ and $t$ channel contributions. This behaviour holds generally over the parameter space under consideration. \begin{figure} \includegraphics{ee2vvh_BP1.pdf} \caption{Invariant mass distributions of $\nu\bar{\nu}$ of the process $e^+e^-\to\nu\bar{\nu}H$ at $\sqrt{s}=300$ GeV and for the benchmark point BP1 ($\kappa=1,f_{WW}=-3,f_W=8,f_{BB}=-4,f_B=3$). The red, green, blue histograms are for the total ($s+t+ interference$), $s$ and $t$ channels respectively. The inset (orange) plot shows the interference ($total-s-t$) contribution.} \label{fig:Mvv} \end{figure}
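The entries of Table~\ref{tab:sigst} provide a quick numerical check of this approximation (the numbers below follow directly from the quoted cross-sections): for the SM at $\sqrt{s}=300$ GeV, \begin{equation} \sigma^{int}_{\nu\bar{\nu}H} = \sigma^{tot}_{\nu\bar{\nu}H} - \sigma^{s}_{\nu\bar{\nu}H} - \sigma^{t}_{\nu\bar{\nu}H} = (52.43 - 36.35 - 17.83)~\textrm{fb} = -1.75~\textrm{fb}, \end{equation} i.e., a relative contribution of about $1.75/52.43 \approx 3.3\%$, which may safely be neglected at the level of accuracy aimed at here.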
\subsection{A general expression for the cross-sections} In this analysis, we keep $\kappa$, $f_{WW}/\textrm{TeV}^2$, $f_{W}/\textrm{TeV}^2$, $f_{BB}/\textrm{TeV}^2$ and $f_{B}/\textrm{TeV}^2$ as free parameters. The $HWW$ vertex depends on three parameters ($\kappa$, $f_{WW}$ and $f_{W}$) whereas the $HZZ$ and the $HZ\gamma$ vertices depend on five parameters ($\kappa$, $f_{WW}$, $f_{W}$, $f_{BB}$ and $f_{B}$). The $\kappa$ dependence enters the effective $HZ\gamma$ vertex through the $W$-loop. The amplitude for the process $e^+e^-\to ZH/\nu\bar{\nu}H$ is linear in the $x_i\in\{\kappa,f_{WW},f_{W},f_{BB},f_B\}$ and therefore the cross-section can always be expressed as a bilinear form, $\sigma(S,x_i)=\displaystyle\sum_{i,j=1}^{5}x_i C_{ij}(S) x_j$, where $C_{ij}(S)$ is the $ij^{th}$ element of the coefficient matrix $\mathcal{M}(\sqrt{s})$ at a CME of $\sqrt{s}$. Hence, the cross-section can be written in the following closed form \begin{equation} \sigma(\sqrt{s})=\mathcal{X}\cdot\mathcal{M}(\sqrt{s})\cdot\mathcal{X}^T, \end{equation} where $\mathcal{X}=(\kappa,f_{WW},f_W,f_{BB},f_B)$ is a row vector. The matrices of coefficients for the $e^+e^-\to Z H$ process at $\sqrt{s}=250$ GeV and $300$ GeV are \begin{equation} \label{Ms} \footnotesize \mathcal{M}^{s,ZH}_{250}= \begin{pmatrix} 241.32 & -7.11 & -2.29 & -0.55 & -0.51 \\ -7.11 & 0.35 & 0.13 & -0.02 & -0.05 \\ -2.29 & 0.13 & 0.06 & -0.01 & -0.03 \\ -0.55 & -0.02 & -0.01 & 0.01 & 0.02 \\ -0.51 & -0.05 & -0.03 & 0.02 & 0.04 \end{pmatrix}; \mathcal{M}^{s,ZH}_{300}= \begin{pmatrix} 181.67 & -6.43 & -2.99 & -0.51 & -0.71 \\ -6.43 & 0.46 & 0.18 & -0.03 & -0.08 \\ -2.99 & 0.18 & 0.14 & -0.02 & -0.06 \\ -0.51 & -0.03 & -0.02 & 0.02 & 0.03 \\ -0.71 & -0.08 & -0.06 & 0.03 & 0.08 \end{pmatrix} \end{equation} Similar matrices for the $t$-channel process (after the $E_H^c$-cut) for the channel $e^+e^-\to \nu \bar{\nu}H$ at $\sqrt{s}=250$ GeV and $300$ GeV are \begin{equation} \label{Mt} \footnotesize \mathcal{M}^{t,\nu\bar{\nu}H}_{250}= \begin{pmatrix} 4.63 & 5.2\times 10^{-3} & 0.02 \\ 5.2\times 10^{-3} & 2.9\times 10^{-4} & -1.2 \times 10^{-4} \\ 0.02 & -1.2 \times 10^{-4} & 1.6 \times 10^{-4} \end{pmatrix}; \mathcal{M}^{t,\nu\bar{\nu}H}_{300}= \begin{pmatrix} 15.36 & 0.04 & 0.07 \\ 0.04 & 1.2\times10^{-3} & -7.7\times10^{-4} \\ 0.07 & -7.7\times10^{-4} & 4.6\times10^{-4} \end{pmatrix} \end{equation} We must mention here that the matrices in Eq.~\ref{Mt} are $3\times 3$, compared to the $5\times 5$ matrices in Eq.~\ref{Ms}, because the $t$-channel only involves the $HWW$ vertex, which is not affected by the operators $\mathcal{O}_{BB}$ and $\mathcal{O}_B$ (Eqs.~\ref{eq:lagHVV},~\ref{eq:lagHVVcoeff}). We also observe that in Eq.~\ref{Ms}, the coefficients related to either $f_{BB}$ or $f_B$ are much smaller than those involving the other three parameters, {\it viz.} $\kappa$, $f_{WW}$ and $f_W$. Also, from Eq.~\ref{Mt} we see that, barring the (1,1) entry, all the other coefficients are small, implying that the HDOs have small but non-negligible effects on the $t$-channel cross-sections at Higgs factory energies.
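As an illustration of how Eq.~\ref{Ms} is used (all numbers below follow from the quoted matrix entries), the SM point $\mathcal{X}=(1,0,0,0,0)$ simply returns $\sigma^{s}_{ZH}(300)=C_{11}=181.67$ fb, while for $\kappa=1$ and $f_W=5$ (the benchmark point BP2 introduced below) one finds \begin{equation} \sigma^{s}_{ZH}(300) = C_{11} + 2 f_W C_{13} + f_W^2 C_{33} = \left[181.67 + 10\times(-2.99) + 25\times 0.14\right]~\textrm{fb} \approx 155.3~\textrm{fb}, \end{equation} a suppression of about $15\%$, driven mainly by the interference term $2 f_W C_{13}$.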
\begin{figure} \includegraphics{ee2vvh_eH.pdf} \caption{Normalised distributions of the Higgs energy ($E_H$) for the $s$-channel (red : $\sqrt{s}=500$ GeV and blue : $\sqrt{s}=1$ TeV) and $t$-channel (green : $\sqrt{s}=500$ GeV and magenta : $\sqrt{s}=1$ TeV) for the benchmark point BP1.} \label{fig:EH} \end{figure} The relatively weak dependence of the $t$-channel cross-section on the anomalous operators, compared to the $s$-channel, can also be understood from Fig.~\ref{fig:EH}. The plots reveal that, for the former process (essentially a vector boson fusion channel), the Higgs emerges with much smaller energy. The higher-dimensional couplings, on the other hand, contain derivatives which translate into a direct dependence on the energy of the Higgs, thus putting the $t$-channel process at a relative disadvantage. The Higgs energy distribution shows a longer tail for higher centre-of-mass energies, thus offering a partial recompense to the $t$-channel process for an energy as high as a TeV. In this study we also consider the process $e^+e^-\to W^+ W^-$, which involves the triple-gauge boson vertices $WW\gamma$ and $WWZ$. These are concomitantly affected by the operators $\mathcal{O}_W$ and $\mathcal{O}_B$. Besides, as mentioned in section~\ref{sec:ELF}, such vertices are also affected by the operator $\mathcal{O}_{WWW}$, which does not affect the Higgs sector. In the basis of $x_{i}^{WW}\in\{1,f_W,f_B,f_{WWW}\}$, the coefficient matrix at $\sqrt{s}=300$ GeV is given by \begin{equation} \label{MWW} \footnotesize \mathcal{M}^{WW}_{300}= \begin{pmatrix} 13.48 & 1.10\times 10^{-2} & 5.65\times 10^{-3} & 4.24\times 10^{-3} \\ 1.10\times 10^{-2} & 4.98\times 10^{-4} & 5.27\times 10^{-5} & 2.02\times 10^{-4} \\ 5.65\times 10^{-3} & 5.27\times 10^{-5} & 1.17\times 10^{-4} & 1.96\times 10^{-5} \\ 4.24\times 10^{-3} & 2.02\times 10^{-4} & 1.96\times 10^{-5} & 8.18\times 10^{-4} \end{pmatrix}. \end{equation} As we can see above, all the $C_{ij}$'s are very small when compared to $C_{11}$, which gives the SM cross-section. We will discuss this channel in more detail later in this paper. \subsection{Energy dependence of $s$ and $t$-channel cross-sections} It is well-known that in the SM, the $s$-channel cross-section falls with the CME as $1/S$, while the $t$-channel cross-section rises as $\ln S$~\cite{Altarelli}. However, for sets of parameter values different from the SM ones, the behaviour of the $s$-channel curve can be completely different from its SM counterpart. The $t$-channel cross-section, however, is not affected as significantly by the introduction of HDOs, as discussed in detail in the previous sub-section. We show the variation of the $s$ and $t$-channel processes for $\sqrt{s}$ ranging from $250$ GeV to $900$ GeV. In contrast to the SM expectation of an $s$-channel cross-section falling with energy, the introduction of HDOs in no way ensures such behaviour, as can be seen in Fig.~\ref{stenergy}(a) for two benchmark points (BP2 ($x_i\in \{1,0,5,0,0\}$) and BP3 ($x_i\in \{1,0,-5,0,0\}$)) alongside the SM. These two benchmark points have been chosen because the cross-sections are quite sensitive to $f_W$ and both points are allowed by the EWPT constraints. On the whole it is clear from the diagrams that the ratio of the $s$ and $t$-channel cross-sections in some channel at a particular energy can be an important probe of the nature of new Higgs couplings.\footnote{The visible rise with $\sqrt{s}$ (in Fig.~\ref{stenergy}(a) for the benchmark points BP2 and BP3) does not threaten unitarity, since the additional degrees of freedom responsible for the effective operators take care of it when $\sqrt{s}$ approaches $\Lambda$. The rise is not noticeable if one has the operators $\mathcal{O}_{WW}/\mathcal{O}_{BB}$ instead of $\mathcal{O}_W/\mathcal{O}_B$. The different momentum dependence in the former case tames the rise with $\sqrt{s}$, as can be verified from the corresponding Feynman rules in~\cite{Garcia}.} \begin{figure}[!h] \centering \subfloat{ \begin{tabular}{ccc} \resizebox{70mm}{!}{\includegraphics{ee2zhsigsvsrts.pdf}} && \resizebox{70mm}{!}{\includegraphics{ee2vvhsigtvsrts.pdf}} \\ \hspace{8mm}(a)&&\hspace{10mm}(b) \end{tabular}} \caption{(a) : $\sigma^s$ (in fb) for the channel $e^+e^-\to Z H$ and (b) : $\sigma^{t,ac}$ (in fb) for the channel $e^+e^-\to \nu \bar{\nu} H$ as functions of the CME, $\sqrt{s}$. The cross-sections have been computed for three benchmark points, viz. SM ($x_i\in \{1,0,0,0,0\}$), BP2 ($x_i\in \{1,0,5,0,0\}$) and BP3 ($x_i\in \{1,0,-5,0,0\}$).
The superscript $ac$ denotes the after-cut scenario.} \label{stenergy} \end{figure} \subsection{More information from the total rates} The total rates and their ratios at different CMEs can be important probes of the tensor structure of the $HVV$ couplings. We show how the total rates for the $s$ and $t$-channel processes are affected by the introduction of the effective operators (Eqs.~\ref{Ms} and~\ref{Mt}). We must make a statement about the values of the coefficients $f_i/\Lambda^2$ ($i$ being the index of the operator under consideration) chosen in the rest of the paper. In most cases, $f_i/\Lambda^2$ is allowed to vary in the range $[-20,20]$ TeV$^{-2}$. Now, a reasonable criterion for the validity of the effective field theory~\cite{effval} is $f_i x(g) E^2/\Lambda^2 < 1$, where the $x(g)$ are the $SU(2)_L/U(1)_Y$ factors for the operators under study and $E$ is the scale of the process. For the production case, it is the centre-of-mass energy of the $e^+e^-$ colliding beams, which is $250-300$ GeV, while for decays, it is the mass of the Higgs boson. For the production case, we perform a rough calculation taking $g \approx 0.65$, $g'\approx 0.74$ and the cut-off scale $\Lambda=1$ TeV. Hence, for the operator $\mathcal{O}_W$, $f_W x(g) E^2/\Lambda^2 \approx f_W \frac{0.65}{2} 300^2/1000^2 \approx 0.029 f_W$, which can take $f_W$ to values $\simeq 34$. Similarly, for $\mathcal{O}_B$, the reach will be around $f_B \simeq 30$. For $\mathcal{O}_{WW}$, we have two factors of $g$ and two factors of $\frac{1}{2}$, which can take $f_{WW}$ to an even larger value. Thus the values chosen in our scan approximately conform to the requirement of a valid effective theory. \subsubsection{One parameter at a time} \label{sec:1param} In Figs.~\ref{fig:1d} and~\ref{fig:1dpt8}, we show the variations of the $e^+e^-\to Z H$ and $e^+ e^- \to \nu \bar{\nu} H$ ($t$-channel) cross-sections as functions of a single parameter, keeping all other parameters fixed at their SM values. We show that even for small values of the operator coefficients, the cross-sections can vary significantly from the SM expectations. We also show that the ratios of the cross sections at two different energies can vary non-trivially with these parameters. If there is no new tensor structure in the $HVV$ couplings, the ratio plots will be flat horizontal curves. Any departure from flatness of such curves will therefore signal new tensor structure in the $HVV$ vertices. The main sources of departure are the interference terms between the SM and HDO contributions. Such terms, occurring in both the numerator and the denominator of the ratio, carry the dependence on $f$ as well as $\sqrt{s}$. \begin{figure}[!h] \centering \subfloat{ \begin{tabular}{ccc} \resizebox{70mm}{!}{\includegraphics{ee2zh300.pdf}} && \resizebox{70mm}{!}{\includegraphics{1d_ee2vvh_t.pdf}} \\ \hspace{8mm}(a)&&\hspace{20mm}(b) \\ \resizebox{70mm}{!}{\includegraphics{ee2zh300by250.pdf}} && \resizebox{70mm}{!}{\includegraphics{ee2vvht300by250.pdf}} \\ \hspace{8mm}(c)&&\hspace{20mm}(d) \end{tabular}} \caption{Variations of (a) $\sigma^s_{ZH}(300)$ (fb) and (c) $\sigma^s_{ZH}(300)/\sigma^s_{ZH}(250)$ for $e^+e^-\to Z H$ and of (b) $\sigma^{t,ac}_{\nu\bar{\nu}H}(300)$ (fb) and (d) $\sigma^{t,ac}_{\nu\bar{\nu}H}(300)/\sigma^{t,ac}_{\nu\bar{\nu}H}(250)$ for $e^+e^-\to \nu \bar{\nu} H$ with $f_{WW}$, $f_W$, $f_{BB}$, $f_B$. $\kappa=1$ for all the cases. The superscript $ac$ denotes the cut in Eq.~\ref{eq:stcut}.
The numbers in the brackets are the CMEs.} \label{fig:1d} \end{figure} \begin{figure}[!h] \centering \subfloat{ \begin{tabular}{ccc} \resizebox{70mm}{!}{\includegraphics{ee2zh300pt8.pdf}} && \resizebox{70mm}{!}{\includegraphics{1d_ee2vvh_tpt8.pdf}} \\ \hspace{8mm}(a)&&\hspace{20mm}(b) \\ \resizebox{70mm}{!}{\includegraphics{ee2zh300by250pt8.pdf}} && \resizebox{70mm}{!}{\includegraphics{ee2vvht300by250pt8.pdf}} \\ \hspace{8mm}(c)&&\hspace{20mm}(d) \end{tabular}} \caption{Variations of (a) $\sigma^s_{ZH}(300)$ (fb) and (c) $\sigma^s_{ZH}(300)/\sigma^s_{ZH}(250)$ for $e^+e^-\to Z H$ and of (b) $\sigma^{t,ac}_{\nu\bar{\nu}H}(300)$ (fb) and (d) $\sigma^{t,ac}_{\nu\bar{\nu}H}(300)/\sigma^{t,ac}_{\nu\bar{\nu}H}(250)$ for $e^+e^-\to \nu \bar{\nu} H$ with $f_{WW}$, $f_W$, $f_{BB}$, $f_B$. $\kappa=0.8$ for all the cases. The superscript $ac$ denotes the cut in Eq.~\ref{eq:stcut}. The numbers in the brackets are the CMEs.} \label{fig:1dpt8} \end{figure} We also remind the reader that the use of gauge invariant higher-dimensional operators implies a correlated modification in the triple gauge boson couplings (Eqs.~\ref{eq:lagWWV},~\ref{eq:lagWWVcoeff}). $f_W$ and $f_B$ are thus responsible for altering the rates of $e^+ e^- \rightarrow W^+ W^-$ concomitantly with those for Higgs boson production. Such a concomitance, if verified in an $e^+ e^-$ collision experiment, should point rather unmistakably at one or the other of the gauge invariant operators mentioned here. We show the modified rates of the $WW$ final state in Fig.~\ref{fig:ee2ww}, where we also show the effects of the operator driven by $f_{WWW}$ (which does not affect the Higgs couplings). It should however be mentioned that the actual presence of anomalous couplings in $e^+e^-\to W^+W^-$ is best reflected in a detailed study of various kinematic regions~\cite{Hagiwara:1986vm}. Such a study, however, is not the subject of the present paper. \begin{figure}[!h] \centering \subfloat{ \begin{tabular}{ccc} \resizebox{70mm}{!}{\includegraphics{ee2ww300.pdf}} && \resizebox{70mm}{!}{\includegraphics{ee2ww300by250.pdf}} \\ \hspace{8mm}(a)&&\hspace{20mm}(b) \\ \end{tabular}} \caption{(a) Cross section ($\sigma$ (in pb)) for the process $e^{+}e^{-}\to W^{+} W^{-}$ at $\sqrt{s} = 300$ GeV and (b) ratio of cross sections ($\sigma_{300}/\sigma_{250}$) for the same process as functions of the $f$'s.} \label{fig:ee2ww} \end{figure} The main conclusions emerging from Figs.~\ref{fig:1d},~\ref{fig:1dpt8} and~\ref{fig:ee2ww} are as follows: \begin{itemize} \item In Figs.~\ref{fig:1d}(a) and~\ref{fig:1dpt8}(a), for the process $e^+ e^- \rightarrow Z H$, we find that the operator $\mathcal{O}_{WW}$ changes the cross section from its SM expectation by $\sim 30\%$ even in the range $-5 < f_{WW} < 5$. The major contribution to the modification of the cross section comes from the operators $\mathcal{O}_{WW}$ and $\mathcal{O}_{W}$; $\mathcal{O}_{B}$ and $\mathcal{O}_{BB}$ have smaller effects. \item In Figs.~\ref{fig:1d}(b) and~\ref{fig:1dpt8}(b), for the cut-applied $t$-channel contribution in the process $e^+ e^- \to \nu \bar{\nu} H$, the operator $\mathcal{O}_W$ affects the cross-section the most. The effect of $\mathcal{O}_{WW}$ is comparatively less pronounced. $\mathcal{O}_{BB}$ and $\mathcal{O}_B$ do not change this cross-section, as the $HWW$ vertex is unaffected by these operators.
Most importantly, it should be noted that the effect of these operators on the $t$-channel process is much less pronounced than on its $s$-channel counterpart (Eqs.~\ref{Ms},~\ref{Mt}). \item In Figs.~\ref{fig:1d}(c) and~\ref{fig:1dpt8}(c), the ratio of the cross sections for the $e^+ e^- \rightarrow Z H$ channel at $\sqrt{s}=300$ GeV and $\sqrt{s}=250$ GeV departs visibly from the flat SM expectation. In the range $-20 < f_i < 20$ for the four operators discussed above, the ratio changes by $\sim 33 \%$ for $\mathcal{O}_W$. The effect of $\mathcal{O}_{WW}$ is less than this. The change in the ratio is the least for $\mathcal{O}_{BB}$. \item In Figs.~\ref{fig:1d}(d) and~\ref{fig:1dpt8}(d), the ratio of cross-sections for the cut-applied $t$-channel process varies in the range $\sim[3.1,3.5]$ for $-20 < f_i < 20$. \item We see in Fig.~\ref{fig:ee2ww} that the cross-sections do not vary significantly with the operator coefficients. This is because the $e^+e^-\to W^+ W^-$ channel has a strong $\nu_e$-mediated $t$-channel contribution which does not involve the triple-gauge boson vertex and which interferes significantly with the $s$-channel. In order to bring out the features of the triple gauge boson vertices, we need to devise some strategy to tame the $t$-channel contribution, such as using right-polarised electrons if one uses a linear collider. \end{itemize} \subsubsection{Two parameters at the same time} In Figs.~\ref{fig:2ds} and~\ref{fig:2dt}, we show some fixed cross-section contours in the planes of two parameters varied at a time; all the parameters apart from the ones shown on the axes are kept fixed. In each of these figures, we have marked in brown the regions where the cross-section lies within $\pm 10\%$ of $\sigma(SM)$. Hence, we see that in each of these plots some regions, even with large values of the parameters, can closely mimic the SM cross-section. The above statement about the ranges of the coefficients of the HDOs will be somewhat modified if we consider the Higgs decays as well, since the branching ratios then also depend on the effects of the HDOs. Even for fermionic decays of the Higgs, which are independent of the operators under study, the $BR$s will depend non-trivially on the operator coefficients through the total decay width. But, we must mention here that unless we go to very high values of the operator coefficients, the total decay width remains close to the SM expectation, and hence fermionic decay channels would show features similar to these plots. Of course, when we study the effects of all the operators in our chosen basis, taking into account every possible decay mode of the Higgs, the higher-dimensional operators will come into play at the $HVV$ decay vertices as well. Hence, we will get modified bounds on the operator coefficients from a similar approach. We should mention that these operators are also constrained by the electroweak precision observables, {\it viz.} the $S$, $T$ and $U$ parameters. An important observation carried forward from Fig.~\ref{fig:1d}(a) is that the $HZZ$ and $H\gamma Z$ vertices are much less affected by the operators $\mathcal{O}_{BB}$ and $\mathcal{O}_B$. This fact is corroborated by Fig.~\ref{fig:2ds}(e). The above-mentioned pair of operators thus allows a wide region of parameter space with cross-sections within $10\%$ of the SM value.
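This insensitivity can also be read off directly from the coefficient matrix in Eq.~\ref{Ms}: taking, for instance, $\kappa=1$ and $f_{BB}=f_B=10$ (all other coefficients vanishing), the quoted entries give \begin{equation} \sigma^{s}_{ZH}(300) = \left[181.67 + 2(10)(-0.51) + 2(10)(-0.71) + (10)^2(0.02) + 2(10)^2(0.03) + (10)^2(0.08)\right]~\textrm{fb} \approx 173.3~\textrm{fb}, \end{equation} i.e., less than $5\%$ below the SM value, even for rather large $f_{BB}$ and $f_B$.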
Some salient features of Figs.~\ref{fig:2ds} and~\ref{fig:2dt} are: \begin{itemize} \item Fig.~\ref{fig:2ds} shows the variation of the total rate for the channel $e^+e^-\to Z H$ as a function of two parameters taken together. All the other parameters are fixed for these plots. In Figs.~\ref{fig:2ds}(a)-(d), the cross-section varies significantly from the SM value over the allowed ranges of the parameters. However, Fig.~\ref{fig:2ds}(e) shows a large region of the parameter space to have cross-sections similar to the SM (within $10\%$). \item Fig.~\ref{fig:2dt} shows the variation of the cross-sections for the $t$-channel process in $e^+e^-\to \nu \bar{\nu} H$ as functions of two parameters varied at the same time. Figs.~\ref{fig:2dt}(c) and~\ref{fig:2dt}(d) show a substantial region of parameter space agreeing with the SM cross-section. \end{itemize} \begin{figure}[H] \centering \subfloat{ \begin{tabular}{ccc} \resizebox{65mm}{!}{\includegraphics{ee2zhkfww300.pdf}} && \resizebox{65mm}{!}{\includegraphics{ee2zhkfw300.pdf}} \\ \hspace{8mm}(a)&&\hspace{15mm}(b) \\ \resizebox{65mm}{!}{\includegraphics{ee2zhfwwfw300.pdf}} && \resizebox{65mm}{!}{\includegraphics{ee2zhfwwfw300pt8.pdf}} \\ \hspace{8mm}(c)&&\hspace{15mm}(d) \\ \resizebox{65mm}{!}{\includegraphics{ee2zhfbbfb300.pdf}} \\ \hspace{8mm}(e) \end{tabular}} \caption{Variations of $\sigma_s^{300}$ for $e^+e^-\to Z H$ with (a) $\kappa$ and $f_{WW}$, (b) $\kappa$ and $f_W$, (c) $f_{WW}$ and $f_W$ for $\kappa=1$, (d) $f_{WW}$ and $f_W$ for $\kappa=0.8$ and (e) $f_{BB}$ and $f_B$ for $\kappa=1$. For each case all the other $f$'s are set to zero. Brown patches signify cross-sections within $\pm 10$\% of the SM expectation.} \label{fig:2ds} \end{figure} \begin{figure}[H] \centering \subfloat{ \begin{tabular}{ccc} \resizebox{65mm}{!}{\includegraphics{2d_ee2vvh_t_k_fww.pdf}} && \resizebox{65mm}{!}{\includegraphics{2d_ee2vvh_t_k_fw.pdf}} \\ \hspace{8mm}(a)&&\hspace{20mm}(b) \\ \resizebox{65mm}{!}{\includegraphics{2d_ee2vvh_t_fww_fw.pdf}} && \resizebox{65mm}{!}{\includegraphics{2d_ee2vvh_t_fww_fwpt9.pdf}} \\ \hspace{8mm}(c)&&\hspace{20mm}(d) \end{tabular}} \caption{Variations of $\sigma_t^{300,ac}$ for the $t$-channel process in $e^+e^-\to \nu\bar{\nu} H$ with (a) $\kappa$ and $f_{WW}$, (b) $\kappa$ and $f_W$, (c) $f_{WW}$ and $f_W$ for $\kappa=1$, (d) $f_{WW}$ and $f_W$ for $\kappa=0.9$. For each case all the other $f$'s are set to zero. Brown patches signify cross-sections within $\pm 10$\% of the SM expectation.} \label{fig:2dt} \end{figure} \subsubsection{All parameters at the same time} The most general case is to vary all the parameters simultaneously to obtain the most realistic parameter space. Here, we demonstrate this scenario for the cut-applied $t$-channel cross section in the $e^+e^-\to \nu \bar{\nu} H$ channel. In Figs.~\ref{fig:allpar}(a), (b) and (c) we present three slices of the 3-dimensional hyper-surface. For each of these plots, there is a third parameter which has been varied. We see that a very large parameter space is allowed which can mimic the SM cross section within $10\%$ of its value. Of course these plots are for illustrative purposes only. In Fig.~\ref{fig:allpar}(d), we show one such slice of the five-dimensional hyper-surface in the space of ($\kappa$, $f_{WW}$, $f_W$, $f_{BB}$ and $f_B$) for the $s$-channel process.
\begin{figure}[H] \centering \subfloat{ \begin{tabular}{ccc} \resizebox{65mm}{!}{\includegraphics{tchannel_k_fww.pdf}} && \resizebox{65mm}{!}{\includegraphics{tchannel_k_fw.pdf}} \\ \hspace{5mm}(a)&&\hspace{15mm}(b) \\ \resizebox{65mm}{!}{\includegraphics{tchannel_fww_fw.pdf}} && \resizebox{65mm}{!}{\includegraphics{schannel_fww_fw.pdf}}\\ \hspace{5mm}(c)&&\hspace{15mm}(d) \end{tabular}} \caption{Allowed parameter space for $\sigma^{t,ac}_{\nu\bar{\nu}H}$ within $10\%$ of its SM value: (a) $f_{WW}$ vs $\kappa$ ($f_W$ varied), (b) $f_W$ vs $\kappa$ ($f_{WW}$ varied), (c) $f_W$ vs $f_{WW}$ ($\kappa$ varied); and for $\sigma^{s}_{Z H}$ within $10\%$ of its SM value: (d) $f_W$ vs $f_{WW}$ ($\kappa$, $f_{BB}$ and $f_B$ varied). $\sqrt{s}=300$ GeV.} \label{fig:allpar} \end{figure} \underline{\textbf{Discussion on EWPT constraints}}: All the benchmark points chosen throughout this paper are consistent with all constraints available to date \cite{HD-opsc,HD-opsa}. However, if one looks at the contour plots in Figs.~\ref{fig:2ds},~\ref{fig:2dt} and~\ref{fig:allpar}, there may exist certain points which are disfavoured by the precision constraints. \subsection{The effects on kinematic distributions} \begin{figure}[H] \centering \subfloat{ \begin{tabular}{ccc} \resizebox{70mm}{!}{\includegraphics{dsdcts.pdf}} && \resizebox{70mm}{!}{\includegraphics{dsdcts500.pdf}} \\ \hspace{2mm}(a)&&\hspace{8mm}(b) \\ \resizebox{70mm}{!}{\includegraphics{dsdctt300.pdf}} && \resizebox{70mm}{!}{\includegraphics{dsdctt500.pdf}}\\ \hspace{2mm}(c)&&\hspace{10mm}(d)\\ \resizebox{70mm}{!}{\includegraphics{dsdpT-vvh-300.pdf}} && \resizebox{70mm}{!}{\includegraphics{dsdy-vvh-300.pdf}}\\ \hspace{2mm}(e)&&\hspace{10mm}(f) \end{tabular}} \caption{Normalised kinematic distributions $(1/\sigma^s)d\sigma^s/d\cos\theta$ for the channel $e^+e^-\to ZH$ for (a) $\sqrt{s}=300$ GeV and (b) $\sqrt{s}=500$ GeV. Normalised kinematic distributions $(1/\sigma^t)d\sigma^t/d\cos\theta$ for the $t$-channel process in $e^+e^-\to \nu \bar{\nu} H$ for (c) $\sqrt{s}=300$ GeV and (d) $\sqrt{s}=500$ GeV. Distributions of (e) $(1/\sigma^t)d\sigma^t/d p_{T,H}$ and (f) $(1/\sigma^t)d\sigma^t/d y_H$ for the $t$-channel process in $e^+e^-\to \nu \bar{\nu}H$ at $\sqrt{s}=300$ GeV. Benchmark points, {\it viz.} SM ($x_i\in \{1,0,0,0,0\}$), BP1 ($x_i\in \{1,-3,8,-4,3\}$), BP2 ($x_i\in \{1,0,5,0,0\}$) and BP3 ($x_i\in \{1,0,-5,0,0\}$).} \label{fig:KD} \end{figure} The presence of anomalous $HVV$ vertices can in principle also affect the shapes of various kinematic distributions. In Figs.~\ref{fig:KD}(a)~and~\ref{fig:KD}(b) [Figs.~\ref{fig:KD}(c) and (d)], we show the normalised angular (angle of the Higgs with the $z$-axis) distributions for the $s$-channel ($t$-channel) processes at $\sqrt{s}=300$ GeV and 500 GeV respectively. We find that the angular dependence for the $s$-channel is quite sensitive to the HDOs in some regions of the parameter space allowed by the EWPT constraints and the LHC data. We also find that the $\cos\theta$ dependence can be completely reversed as we increase the CME. This can be seen in Figs.~\ref{fig:KD}(a)~and~\ref{fig:KD}(b), if we compare the curves for BP1. In contrast, the $t$-channel is not significantly affected by the inclusion of HDOs.
The angular dependence of the differential cross-sections can be expressed as \begin{equation} \frac{d\sigma(\sqrt{s},x_i)}{d\cos\theta} = a(\sqrt{s},x_i) + b(\sqrt{s},x_i)\cos^2\theta. \end{equation} It is found that, of the two coefficients $a$ and $b$ above, $a$ is affected more by the anomalous couplings than $b$, unless $\sqrt{s}$ is 500 GeV or well above. As a result, angular distributions are insensitive to the new interactions at the proposed energy scale of a Higgs factory. In Figs.~\ref{fig:KD}(e)~and~\ref{fig:KD}(f), we show the normalised $d\sigma/d p_{T,H}$ and $d\sigma/d y_H$ distributions respectively for the $t$-channel, where $p_{T,H}$ is the transverse momentum of the Higgs and $y_H$ is its rapidity. We want to emphasise that it is very difficult to see any significant differences in the various kinematic distributions over most of the parameter space allowed by the LHC and EWPT constraints when experiments are performed at smaller CMEs. In both the channels, we do not consider the final decay products of the Higgs. If we consider the Higgs boson decaying to fermionic final states, then the HDOs under consideration will not affect these decay vertices and the above normalised distributions will remain intact. However, if we consider the bosonic decay modes of the Higgs, then the HDOs will affect these distributions non-trivially. We end this subsection with the following admission. Various kinematical distributions are canonically emphasized as the best places to find the signature of non-standard Lorentz structures in interaction terms. While this expectation is not completely belied in the present case either, we note that the anomalous couplings are reflected in distributions {\em at relatively high CMEs.} The reason behind this has already been explained above. While this prospect is encouraging, electron-positron colliders, especially those designed as Higgs factories, are likely to start operating at energies as low as $250-300$ GeV. Our observation is that the imprint of anomalous couplings can be found even at such low energies at the level of total rates and their ratios. A detailed study involving all possible decay products and their various correlations can in principle go further in revealing traces of anomalous couplings. We will take up such a study in a subsequent work. \subsection{Discussion on relevant backgrounds} We wish to see the effects of anomalous $HVV$ couplings on the Higgs production alone. Therefore, we do not look at bosonic decay modes of the Higgs, and limit our discussion to those signal processes where $H$ decays to a $b\bar{b}$ pair, its dominant decay mode. For the $e^+e^-\to ZH$ process, the $Z$ can either decay visibly to $b\bar{b}$, $jj$, $\ell^+\ell^-$ (here $j=g,u,d,c,s$ and $\ell=e,\mu$) modes or invisibly to a $\nu\bar{\nu}$ pair. So the dominant backgrounds relevant for these final states are the non-Higgs $e^+e^-\to b\bar{b}b\bar{b},b\bar{b}jj,b\bar{b}\ell^+\ell^-,b\bar{b}+\cancel{E}$ processes. The non-Higgs $e^+e^-\to b\bar{b}+\cancel{E}$ process can also act as the dominant background for the $e^+e^-\to \nu\bar{\nu}H$ channel. We select events after the following kinematic cuts: Trigger cuts: $p_T(b,j) > 20$ GeV, $p_T(\ell) > 10$ GeV, $|y(b,j)|< 5.0$, $|y(\ell)|< 2.5$, $\Delta R(bb,bj,jj,b\ell,j\ell) > 0.4$, $\Delta R(\ell\ell) > 0.2$.
Finally we estimate two of the aforementioned backgrounds by applying the cuts below: \begin{itemize} \item \underline{Non-Higgs $e^+e^-\to bb\ell\ell$} We demand the two $b$'s to fall within the Higgs-mass window and the two $\ell$'s to fall within the $Z$-mass window as follows: \begin{equation} |M(bb) - M_h| < 10~\textrm{GeV} ~~\textrm{AND}~~ |M(\ell\ell) - M_Z| < 10~\textrm{GeV} \end{equation} Finally the total background cross-section for the $bb\ell\ell$ final state is defined as $\mathcal{B}_{bb\ell\ell} = \eta_b^2~\sigma_{bb\ell\ell}$, where $\eta_b$ is the $b$-tagging efficiency, which we take as $0.6$ for our analysis. The signal is also scaled by the same factor, $\eta_b^2$. \item \underline{Non-Higgs $e^+e^-\to bb+\cancel{E}$} We demand the two $b$'s to fall within the Higgs-mass window, $|M(bb) - M_h| < 10$ GeV. Here the background is $\mathcal{B}_{bb+\cancel{E}} = \eta_b^2~\sigma_{bb+\cancel{E}}$. The signal\footnote{The channel $e^+e^-\to H + \cancel{E} \to b\bar{b} + \cancel{E}$ also includes diagrams involving the triple-gauge boson vertices. These effects are almost nullified when the selection cuts for this channel are employed.} has also been scaled by the $b$-tagging efficiency. \end{itemize} \begin{table}[H] \centering \begin{tabular}{|c|c|c|c|c|c|c|c|} \hline Final states & $\sqrt{s}$ & $\sigma_{SM,tc}^{sig}$ & $\sigma_{SM,ac}^{sig}$ & $\sigma_{BP1,tc}^{sig}$ & $\sigma_{BP1,ac}^{sig}$ & $\sigma_{tc}^{bkg}$ & $\sigma_{ac}^{bkg}$ \\ & (GeV) & (fb) & (fb) & (fb) & (fb) & (fb) & (fb) \\ \hline $b\bar{b}l^+l^-$ & 250 & 2.68 & 2.46 & 2.76 & 2.52 & 10.33 & 0.09 \\ & 300 & 2.33 & 1.91 & 2.31 & 1.83 & 9.17 & 0.07 \\ \hline $b\bar{b}+\cancel{E}$ & 250 & 12.25 & 10.31 & 12.36 & 10.53 & 20.53 & 0.33 \\ & 300 & 13.67 & 9.79 & 13.26 & 9.62 & 18.00 & 0.29 \\ \hline \end{tabular} \caption{\label{tab:bkg} We show the signal and backgrounds for two different final states, {\textit{viz.}} $b\bar{b}l^+l^-$ and $b\bar{b}+\cancel{E}$. $\sigma_{tc}$'s are the cross-sections after the basic trigger cuts mentioned above and $\sigma_{ac}$'s are the cross-sections after the channel-specific cuts. The analysis has been done for the SM and the benchmark point BP1 ($x_i\in \{1,-3,8,-4,3\}$).} \end{table} \begin{figure} \centering \subfloat{ \begin{tabular}{ccc} \resizebox{70mm}{!}{\includegraphics{sig_ee2bbll.pdf}} && \resizebox{70mm}{!}{\includegraphics{sig_ee2bbvv.pdf}} \\ \hspace{0mm}(a)&&\hspace{4mm}(b) \end{tabular}} \caption{Significance ($\mathcal{S}/\sqrt{\mathcal{B}}$) as functions of $f_i/\Lambda^2$ for $\kappa=1$ at $\sqrt{s} = 300$ GeV for (a) $e^+e^-\to bb\ell\ell$ and (b) $e^+e^-\to bb+\cancel{E}$.} \label{fig:signi} \end{figure} Alongside the question of how distinctly the anomalous couplings manifest themselves, it is of interest to estimate the reach of a Higgs factory, i.e. down to what strength the anomalous couplings can be detected. This information can be found in Fig.~\ref{fig:signi}. There we have plotted the quantities $\mathcal{S} = |\sigma_{BSM}^H - \sigma_{SM}^H|$ and $\mathcal{B} = \sigma_{SM}^H + \sigma_{SM}^{NH}$ for computing the significance. Here, $H$ ($NH$) signifies sub-processes which involve (do not involve) the Higgs. In Table~\ref{tab:bkg}, we show the cross-sections for both the signal and background scenarios. For the signal we have considered two benchmark points, \textit{viz.} SM and BP1 ($x_i\in \{1,-3,8,-4,3\}$).
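For concreteness, the selection and counting described above can be put into a few lines of code. The following Python sketch is purely illustrative: the mass values, event-level inputs and toy cross-sections are hypothetical stand-ins, while the $\pm 10$ GeV windows, the $b$-tagging efficiency $\eta_b=0.6$ and the definitions of $\mathcal{S}$ and $\mathcal{B}$ follow the text.
\begin{verbatim}
import numpy as np

ETA_B = 0.6            # b-tagging efficiency quoted in the text
MH, MZ = 125.0, 91.2   # illustrative mass values in GeV

def pass_bbll(m_bb, m_ll, width=10.0):
    # Invariant-mass window cuts for the bb l+l- final state.
    return (np.abs(m_bb - MH) < width) & (np.abs(m_ll - MZ) < width)

def visible(sigma):
    # Scale a cross-section by the double b-tag efficiency.
    return ETA_B**2 * sigma

def significance(sig_bsm_h, sig_sm_h, sig_sm_nh, lumi):
    # S/sqrt(B), with S = |sigma_BSM^H - sigma_SM^H| and
    # B = sigma_SM^H + sigma_SM^NH, converted to event counts.
    s = abs(sig_bsm_h - sig_sm_h) * lumi
    b = (sig_sm_h + sig_sm_nh) * lumi
    return s / np.sqrt(b)

# Toy usage (cross-sections in fb, luminosity in fb^-1):
print(pass_bbll(np.array([120.0, 140.0]), np.array([90.0, 80.0])))
print(visible(10.33))
print(significance(2.0, 2.5, 0.1, lumi=500.0))
\end{verbatim}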
We show the cross-sections first after applying just the trigger cuts (designated with the subscript $tc$) and then after applying the channel-specific selection cuts on top of the basic trigger cuts (designated with the subscript $ac$). All the numbers have been multiplied by $\eta_b^2$. We see that the effects of the invariant-mass selection cuts on the signal cross-sections are negligible, whereas they reduce the backgrounds almost completely. The study performed here is at parton level. Shower, hadronization and detector effects are expected to have an impact on the effective cross-sections reported in Table~\ref{tab:bkg}. That said, these effects are not expected to change the conclusions of the paper. \section{\label{sec:likelihood}Likelihood Analysis for $t$-channel} The kinematics of the final state associated with the $s$-channel production has been studied extensively in the past. As pointed out in section~\ref{sec:intro}, the $t$-channel production offers limited kinematic information because the momenta of the outgoing neutrinos cannot be disentangled experimentally. This leaves the Higgs boson kinematics as the only handle to explore the nature of the $HWW$ coupling. Studies are documented in the literature with the use of the Higgs boson momentum as a means to gain sensitivity. Here we attempt to fully exploit the kinematics of the Higgs boson by means of a correlated two-dimensional likelihood analysis. The primary intent of this section is to shed light on the relative improvement of this two-dimensional approach, rather than determining absolute sensitivity to the size of anomalous couplings. The latter requires a detailed study that carefully incorporates experimental effects. This is beyond the scope of this paper. We use a test-statistic (TS) to distinguish the BSM hypothesis from its SM counterpart, defining the logarithm of a profile likelihood ratio ($q_{ij}=\ln\lambda_{ij}$) for two different hypotheses $i$ and $j$ as \begin{equation} q_{ij}=\ln\lambda_{ij}=\ln\frac{L(P_i|D_i)}{L(P_j|D_i)}, \end{equation} where $\lambda_{ij}$ is the ratio of two likelihood functions $L(P_i|D_i)$ and $L(P_j|D_i)$ describing two different hypotheses \footnote{Alternatively, its reciprocal is also sometimes used, depending on the analysis required. It should be noted here that both likelihoods are constructed using the same $D_i$, but different $P_i$s.}, $D_i$ is the data set used and $P_{i,j}$ are the probability density functions. Due to the discrete nature of the probabilities in this analysis, the likelihood functions are defined as products of binned Poisson probabilities over all channels and bins \cite{ATLAS}. From the TS, a $p$-value can be calculated to quantify the extent to which a hypothesis can be rejected. In general, a $p$-value is the area under a portion of the normalised TS distribution, and it translates directly into the confidence level (CL) at which a hypothesis can be rejected. In Monte Carlo (MC) studies, these TSs emerge as binned distributions built by running pseudo-experiments, each of which returns a value of the TS based on a randomly generated set of pseudo-data. The number of pseudo-data points generated is fixed by the cross-section of the process being studied. The TSs concerned in this analysis are always produced in pairs, in order to discriminate between the SM and BSM hypotheses.
This pair of TSs is represented as \begin{equation} \label{eqn:ts1} q_U=\ln\frac{L(P_{SM}|D_{SM})}{L(P_{BSM}|D_{SM})}~~~~\textrm{and}~~~~ q_L=\ln\frac{L(P_{SM}|D_{BSM})}{L(P_{BSM}|D_{BSM})}. \end{equation} The $q_U$ TS tends to have a more positive value due to its ordering, and we refer to it as the \textit{upper} TS for our purposes, while we refer to $q_L$ as the \textit{lower} TS. A hypothesis can be rejected by calculating the associated $p$-value as follows \begin{equation} \label{eqn:ts} p=\int_{m_{q_U}}^{\infty}q_{L}(q) dq, \end{equation} where $m_{q_U}$ is the median of the upper TS, $q_{U}$. The confidence with which a hypothesis can be rejected can alternatively be quantified by knowing the \textit{significance} of the separation between the two TSs. The median significance, $Z_{med}$, is defined as the number of standard deviations between the median of $q_L$ and the left edge of the $p$-value area, that is, the median of $q_U$. \begin{figure}[H] \centering \subfloat{ \begin{tabular}{ccc} \resizebox{70mm}{!}{\includegraphics{mom.pdf}} && \resizebox{70mm}{!}{\includegraphics{theta.pdf}} \\ \hspace{8mm}(a)&&\hspace{10mm}(b) \end{tabular}} \caption{Normalised kinematic distributions of (a) Higgs momentum, $p_H$ and (b) the angle of the Higgs with the beam-axis, $\theta_H$ for different benchmark points for the $t$-channel process at $\sqrt{s}=250$ GeV.} \label{img:t} \end{figure} \begin{figure}[H] \centering \subfloat{ \begin{tabular}{ccc} \resizebox{70mm}{!}{\includegraphics{2DSM.pdf}} && \resizebox{70mm}{!}{\includegraphics{2DBSM.pdf}} \\ \hspace{8mm}(a)&&\hspace{10mm}(b) \end{tabular}} \caption{Two dimensional histograms showing the correlation of the $t$-channel Higgs momentum, $p_H$ and the angle of the Higgs with the beam-axis, $\theta_H$ at $\sqrt{s}=250$ GeV. The $z$-axis is an indication of the frequency of events, in arbitrary units. The effect of the correlation can be seen by noting how the BSM parameter $\lambda$ affects the distribution.} \label{img:t-2D} \end{figure} As stated above, we focus on the $t$-channel process (in $e^+e^-\to \nu\bar{\nu} H$), which has not been studied as extensively as the $s$-channel. The $s$-channel ($t$-channel) contributions can be separated out from the $\nu\bar{\nu}H$ events by applying the $E_H$-cut ($E_H^c$-cut) in Eq.~\ref{eq:stcut}. For this purpose, we work with the phenomenological parametrization of the anomalous $HWW$ interaction characterised by $\lambda$ and $\lambda'$, as defined in Eq.~\ref{eq:LIP}. In our analysis, the vertices for the Lagrangians in the SM and in the BSM scenario with spin-0 bosons are calculated in {\scshape{FeynRules}}~\cite{Alloul:2013bka} and passed to the event-generator {\scshape{MadGraph}}~\cite{Alwall:2011uj}, which is used for the generation of the matrix elements for Higgs production in the $t$- and $s$-channels. MC samples are produced at parton level. Effects related to detector resolution are taken into account when defining requirements to suppress the contamination from the $s$-channel process (see Eq.~\ref{eq:stcut}). We set the stage for the likelihood analysis by showing some plots for distributions in terms of $\lambda$ and $\lambda'$. In Figs.~\ref{img:t}(a) and (b), we show the $p_H$ (Higgs momentum) and $\theta_H$ (the angle of the Higgs with the beam-axis) distributions respectively for the $t$-channel at $\sqrt{s} = 250$ GeV. Significant deviations from the SM are clearly visible.
This is in contrast to what was shown for the gauge-invariant formulation (in Fig.~\ref{fig:KD}), because there we stick to moderate values of the parameter coefficients, whereas here, for example, $\{\lambda= 1,\lambda'=0\} \Rightarrow x_i \approx \{1,77,0,0,0\}$. In Figs.~\ref{img:t-2D}(a) and (b), two dimensional histograms in the $p_H$-$\theta_H$ plane are shown for the SM and a BSM (SM with $\lambda=1$, $\lambda'=0$) benchmark point respectively at $\sqrt{s}=250$ GeV. \begin{figure}[H] \centering \subfloat{ \begin{tabular}{ccc} \resizebox{70mm}{!}{\includegraphics{l1.pdf}} && \resizebox{70mm}{!}{\includegraphics{l-1.pdf}} \\ \hspace{0mm}(a)&&\hspace{4mm}(b) \\ \resizebox{70mm}{!}{\includegraphics{lp1.pdf}} && \resizebox{70mm}{!}{\includegraphics{lp-1.pdf}}\\ \hspace{0mm}(c)&&\hspace{4mm}(d) \end{tabular}} \caption{Median significance values for likelihood analyses done with both one dimensional and two dimensional distributions. (a) SM with $\lambda=1$, (b) SM with $\lambda=-1$, (c) SM with $\lambda'=1$ and (d) SM with $\lambda'=-1$. Results are obtained with 1\,fb$^{-1}$ of integrated luminosity.} \label{img:likelihoods} \end{figure} A likelihood analysis for each BSM hypothesis is performed for integrated luminosities of 1 fb$^{-1}$, 5 fb$^{-1}$ and 10 fb$^{-1}$. The number of pseudo-data points in each analysis is determined from the SM cross section. The values of $Z_{med}$ for the 1 fb$^{-1}$ case are plotted as functions of the CME for each hypothesis as shown in Fig.~\ref{img:likelihoods}. These plots show the power of using two dimensional distributions in likelihood analysis. The likelihood analysis is performed using a total number of 100,000 pseudo-experiments for each TS. The two dimensional distributions, examples of which are shown in Fig.~\ref{img:t-2D}, are also included in the likelihood analysis to demonstrate the effect of the correlation between the two variables, $p_H$ and $\theta_H$. Fig.~\ref{img:likelihoods} displays the significance for one-dimensional analyses using the Higgs boson momentum and the polar angle separately. Results are shown for illustration purposes for 1\,fb$^{-1}$ of integrated luminosity. Conclusions drawn here are found not to depend on the integrated luminosity in the range studied. The corresponding results for the combined 2D likelihood are shown. The upper two plots correspond to admixtures with the CP-even term. The sensitivity of the polar angle is significantly less than that of the Higgs boson momentum. The lower plots display the corresponding results for admixtures with the CP-odd term. In this case the sensitivity of the polar angle is similar to that of the momentum. As a result, the improvement from the 2D analysis is significant, to the extent that the sensitivity can be enhanced by about a factor of two. The sensitivity of the angular variable grows with the CME. The results provide a good motivation for the role of an electron-positron collider in understanding the nature of the $HVV$ couplings. The plots in Fig.~\ref{img:likelihoods} show the utility of two dimensional distributions in discerning the rejection of hypotheses. That is, using the same accrued data from two separate one dimensional distributions, one can enhance the confidence in rejecting hypotheses. The correlation of the two dimensional distributions thus carries vital information about the dynamics of the processes which are studied in $e^+e^-$ collisions.
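Schematically, the pseudo-experiment machinery used in this section can be summarised in code. The Python sketch below is a minimal toy, assuming simple binned templates: the arrays \texttt{mu\_sm} and \texttt{mu\_bsm} stand in for the (flattened, possibly two-dimensional $p_H$--$\theta_H$) expected histograms, the number of pseudo-experiments is reduced from the 100,000 used in the analysis, and normalising $Z_{med}$ by the width of $q_L$ is one common convention rather than a prescription from the text.
\begin{verbatim}
import numpy as np
from scipy.stats import poisson

rng = np.random.default_rng(1)

def log_l(mu, data):
    # ln L as a product of binned Poisson probabilities -> sum of logs.
    return poisson.logpmf(data, mu).sum()

def q(mu_sm, mu_bsm, data):
    # q = ln[ L(P_SM|D) / L(P_BSM|D) ]
    return log_l(mu_sm, data) - log_l(mu_bsm, data)

def ts_dist(mu_true, mu_sm, mu_bsm, n_pseudo=10000):
    # Pseudo-experiments: Poisson-fluctuate the template mu_true.
    data = rng.poisson(mu_true, size=(n_pseudo, len(mu_true)))
    return np.array([q(mu_sm, mu_bsm, d) for d in data])

# Toy binned templates; a 2D (p_H, theta_H) histogram would simply
# be flattened into one long array of bins.
mu_sm  = np.array([40.0, 30.0, 20.0, 10.0])
mu_bsm = np.array([35.0, 32.0, 24.0,  9.0])

q_u = ts_dist(mu_sm,  mu_sm, mu_bsm)   # data drawn from the SM
q_l = ts_dist(mu_bsm, mu_sm, mu_bsm)   # data drawn from the BSM

# Median significance: separation of the two TS distributions,
# here measured in units of the spread of q_L (an assumed convention).
z_med = (np.median(q_u) - np.median(q_l)) / q_l.std()
print(z_med)
\end{verbatim}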
\section{\label{sec:disc}Summary and Conclusions} We have attempted to demonstrate the efficacy as well as limitations of an $e^+ e^-$ Higgs factory operating at $250 - 300$ GeV in probing anomalous, higher-dimensional couplings of a Higgs to $W$- and $Z$-pairs, suppressed by a scale $\mathcal{O}$(TeV). For this purpose, we have mostly adhered to the set of gauge-invariant operators that can lead to such interactions, since it is such terms that are expected to emerge on integrating out physics above the electroweak symmetry breaking scale. We have utilised the consequent correlation of the anomalous $HWW$, $HZZ$ and $HZ\gamma$ couplings, and also the concomitant effect on $ZWW/\gamma WW$ interactions, as reflected in gauge boson pair-production rates. The general conclusion reached by this study is that the total rates can be quite useful as probes of higher-dimensional operators. Based on this, we have performed a detailed analysis of the cross-sections for $s$- and $t$-channel Higgs production, specifying event selection criteria for minimising their mutual contamination. A general scheme of computing the rates with more than one gauge-invariant operator has been outlined. Based on such an analysis, we conclude that, even with the additional operators well within the existing experimental bounds (including those from the LHC), a number of observations can probe them at a Higgs factory. These include not only the individual total cross-sections but also their ratios at different values of $\sqrt{s}$, and also the ratio of the $s$- and the $t$-channel Higgs production rates at fixed energies. We also indicate the correlated variation of $W$-pair production rates. The Higgs production rate contours with more than one type of anomalous gauge-invariant operator are also presented. Finally, using some illustrative values of anomalous $HWW$ couplings in a more phenomenological parametrization, we indicate the viability of a correlated two-dimensional likelihood analysis to fully exploit the kinematics of the Higgs boson. The latter is particularly relevant to disentangle the SM from CP-violating admixtures. On the whole, we thus conclude that a Higgs factory can considerably improve our understanding of whether the recently discovered scalar is the SM Higgs or not, as evinced by its interactions with a pair of weak gauge bosons. \section*{Acknowledgements} We thank Taushif Ahmed, Satyanarayan Mukhopadhyay and Narayan Rana for helpful discussions. The work done by SvB was supported by The Claude Leon Foundation. The work of S.B., T.M. and B. Mukhopadhyaya was partially supported by funding available from the Department of Atomic Energy, Government of India for the Regional Centre for Accelerator-based Particle Physics (RECAPP), Harish-Chandra Research Institute. B. Mellado acknowledges the hospitality of RECAPP, Harish-Chandra Research Institute, during the collaboration.
\section{Introduction} The Crab Nebula is a product of the supernova explosion observed by Chinese and Arab astronomers in 1054. It belongs to the class of Pulsar Wind Nebulae (PWN), bubbles of non-thermal plasma inflated inside expanding supernova ejecta by magnetized neutron stars produced during core-collapse stellar explosions. The spectacular network of line-emitting filaments of the Crab Nebula is one of its most remarkable features. In projection, these filaments appear to occupy more or less the same space as the more amorphous non-thermal optical and radio emission. Most individual filaments are small-scale structures but some are much longer and appear to cross almost the entire nebula. Images obtained at different epochs reveal radial expansion of the filamentary network \citep{trimble-68}. The speed of the proper motion of the filaments increases towards the outer edge. The line-of-sight speeds, obtained via spectroscopic observations, show the opposite trend \citep{lawr-95}, thus confirming the overall radial expansion of the network. Moreover, the narrow-band images of the Crab Nebula show that the filaments with low line-of-sight speed avoid the central part of the nebula image \citep{lawr-95}. This indicates that the Crab filaments do not penetrate the whole volume of the nebula, as otherwise low line-of-sight-speed emission would be seen there. Instead, the filaments reside near its outer edge, where they occupy a shell of thickness 0.3--0.7~pc, which is about one third of the nebula radius \citep{clark-83,lawr-95}. Initially, it was thought that the filaments could be the debris of the stellar envelope produced during the supernova explosion. However, this interpretation is in conflict with the low total mass of the filaments, $2-5 M_{\sun}$, and their low speed, $\le 1500$~km/s, which imply a total explosion energy well below that typical of core-collapse supernovae \citep{hester-08}. In order to bring these values up towards the expectations, one has to assume that most of the supernova ejecta is not visible yet. Most probably, the ejecta size significantly exceeds that of the Crab Nebula and remains invisible due to the low-density ISM and hence a weak forward shock, whereas the observed thermal emission of the nebula comes as a result of the interaction between the inner part of the ejecta and the relativistic wind of the Crab pulsar \citep{rees-gunn-74,kc84a}. The high pressure of the hot bubble inflated by the pulsar wind inside of the ejecta drives a shock wave into the cold ejecta, heating its plasma and making it visible. Indeed, deep images of the Crab Nebula in high-ionization lines reveal a sharp outer edge, which can be identified with this shock \citep{GF-82,SH-97}. The non-thermal emission is generally confined within this edge, in agreement with this interpretation, although the edge is not seen in the north-west part of the nebula, where the radio emission seems to extend beyond the thermal one \citep[e.g.][]{V-84}. However, the cooling time of the post-shock gas and the brightness of its emission are strong functions of the ejecta density, and the observations may simply indicate lower ejecta density in the NW-direction \citep{SH-97}.
The ejecta is much denser than the PWN bubble; provided the shock, and hence the contact discontinuity separating the shocked ejecta from the bubble, expands with increasing speed, this configuration is similar to one where a heavy fluid is placed on top of a light one in a gravitational field. The latter is known to be Rayleigh-Taylor (RT) unstable. During the non-linear phase of this instability, the heavy fluid forms fingers which stream downwards and the light fluid forms bubbles rising between the fingers. \citet{CG-75} proposed that this is the origin of the thermal filaments of the Crab Nebula. The possibility of acceleration is strongly supported by both the observations and the theoretical models of PWN. Indeed, the estimates of the nebula age based on its observed size and expansion speed are significantly shorter than those based on the time of the supernova explosion, implying accelerated expansion \citep[e.g.][]{trimble-68,bietenh-91a}. Strengthening this conclusion, the self-similar model of PWN inflated inside the ejecta with density $\rho\propto r^{-\alpha}$ by a pulsar wind of constant power yields the shock speed \begin{align} v_{sh}\propto t^{1/(5-\alpha)} \label{eq:v_sh} \end{align} \citep{CF-92}. The RT instability has been the subject of many theoretical studies. The original problem, involving ideal incompressible semi-infinite fluids in slab geometry, has been extended to study the role of other factors, such as viscosity, surface tension, different geometry, magnetic field etc. For the original problem, the linear theory of the RT instability gives the growth rate \begin{align} \omega^2 = A g k \,, \label{eq:omega} \end{align} where $g$ is the gravitational acceleration and $k$ is the wavenumber of the perturbation and where we introduce the Atwood number $A=(\rho_2-\rho_1)/(\rho_2+\rho_1)$, where $\rho_1$ and $\rho_2$ are the mass densities of light and heavy fluids respectively. Thus, smaller scale structures grow faster. The transition to the non-linear regime occurs when the amplitude of the interface distortion becomes comparable to the wavelength. At the onset of this phase, the light fluid forms bubbles/columns of diameter $\sim\lambda$ which steadily rise with the speed \begin{align} v_b\simeq 0.5\sqrt{g\lambda} \,, \label{eq:v_b} \end{align} whereas the heavy fluid forms thin fingers approaching the state of free fall \citep[e.g.][]{DS-50,F-54,youngs-84,K-91,rama-12}. Thus at this stage, bubbles of larger scales grow faster and eventually dominate the smaller ones. This has been observed both in the laboratory and in simulations \citep[e.g.][]{youngs-84,jun-95,SG-07}. Interestingly, the initially dominating small scale perturbations appear to be washed out completely when much larger scales begin to dominate. Even if the initial spectrum of linear perturbations has a high wavelength cutoff, structures on the length scale exceeding it may appear via a kind of inverse cascade process, where smaller bubbles merge and create larger ones \citep{sharp-84,youngs-84}. The dynamics of bubbles and fingers is influenced by the secondary Kelvin-Helmholtz instability, which facilitates the transition to turbulence and mixing between the fluids. This could be the reason for the observed disappearance of smaller scales. The geometry of PWN is very different from that of the original RT problem.
For example, the finite extension of PWN puts a natural upper limit on the length scale of RT-perturbations which may develop, and the shell of heavy fluid is not thick compared to the observed size of RT fingers and bubbles. Moreover, the shell is bounded by a shock wave and the whole configuration is expanding, including the perturbations. \citet{vishn-83} studied the linear stability of thin shells formed behind spherical shocks in the interstellar medium. He found that for the ratio of specific heats $\gamma<1.3$, the shell (and the shock) may experience unstable oscillations (overstability) for wavelengths below the shock radius. In geometric terms, the shell becomes rippled, extending further out in some places and lagging behind in others. For accelerated expansion, the RT instability is apparently recovered in the limit of a planar shock. \citet{CF-92} applied the thin shell approach of \citet{vishn-83} to PWN. In particular, they found that, in agreement with the earlier finding by \citet{vishn-83}, in the spherical geometry the law of perturbation growth changes from exponential to a power law and that only spherical harmonics of the degree $l\ge 5$ actually grow in amplitude. \citet{jun-98} investigated the role of these factors in the non-linear regime via axisymmetric numerical non-relativistic hydrodynamic (HD) simulations. He considered the case of uniform supernova ejecta and isotropic pulsar wind of constant luminosity. In accordance with the expectations based on the linear thin-shell theory, the results show rippling of the forward shock as well as the development of RT fingers. They also show the gradual replacement of small-scale structures with larger ones, both in terms of linear and angular scales, similar to that seen in the earlier numerical studies of RT instability (see their Figure 6)\footnote{ \citet{jun-98} gives no information on the type and spectrum of initial perturbations.}. By the time of 4000~yr, the dominant angular scale of the RT bubbles is about $\theta\sim \pi/20$ and the RT fingers have approximately the same linear size as the ripples. The RT fingers are remarkably thin and coherent and reminiscent of at least some of the Crab filaments. However, at the current age of the Crab Nebula, $\sim 1000\,$yr, the thickness of the mixing layer occupied by the fingers is much smaller, only approximately 1/15 of the PWN radius. This is about five times below the observed thickness of the Crab's filamentary shell. A similar discrepancy is found with respect to the scale of the shock ripples\footnote{Although \citet{jun-98} does not attempt to compare results with the Crab Nebula because of the idealized nature of the simulations, the parameters of the setup are actually based on Crab data.}. The PWN plasma is magnetized, and this motivates an investigation of the role of the magnetic field in the RT-instability. Since the magnetic field of the supernova ejecta is expected to be much weaker, the only relevant case is where the magnetic field is present only in the light fluid and hence runs parallel to the interface. Introduction of such a field into the original RT problem leads to the growth rate \begin{align} \omega^2 = A g k -\frac{(\mathbf{B\cdot k})^2/2\pi}{\rho_2+\rho_1} \, , \label{eq:omega_m} \end{align} where $\rho_1$ and $\rho_2$ are the mass densities of light and heavy fluids respectively and $\mathbf{k}$ is the wave-vector of the perturbation \citep{chandra-stability}. For the modes normal to the magnetic field, the growth rate of the non-magnetic case is recovered.
For modes parallel to the field the magnetic tension suppresses the perturbations with wavelengths below the critical one, \begin{align} \lambda_c = \frac{B^2}{g(\rho_2-\rho_1)} \, \label{eq:lambda_c} \end{align} and the wavelength of the fastest growing modes exceeds $\lambda_c$ by a factor of two. 2D and 3D computer simulations confirm these conclusions of the linear theory \citep{jun-95,SG-07}. They also demonstrated that even a magnetic field which is relatively weak compared to the critical one may have a significant effect on the non-linear evolution of RT fingers via inhibiting the development of the secondary KH instability, thus leading to longer fingers. \citet{hester-96} applied the theory of the magnetic RT instability to the Crab Nebula. Their key assumption was that the smallest structures of the Crab's filamentary network reminiscent of the RT bubbles and fingers had the wavelength $2\lambda_c$, in the limit $\rho_2\gg\rho_1$. Using the observational estimates of density they found that ``the ends meet'' when the magnetic field strength is near the equipartition value based on the non-thermal emission. Such a strong magnetic field is indeed expected near the interface in the 1D model of PWN by \citet{kc84a}. However, there are several reasons to doubt this analysis. First, the multi-dimensional relativistic MHD simulations of PWN of recent years have demonstrated that many results of the 1D model on the structure and dynamics are incorrect. Secondly, the magnetic field does not suppress modes normal to the magnetic field. Finally, the gradual progression to larger scales in the non-linear phase, as described above, seems to make the task of identifying structures corresponding to the fastest growing linear modes virtually impossible. In the context of PWN the interface acceleration is not an arbitrary parameter, but relates dynamically to the PWN pressure and the ejecta density. \cite{bucc-04} utilized the self-similar model of PWN evolution by \citet{CF-92} to derive the critical angular scale of the magnetic RT instability. In the case of constant wind power, they obtained \begin{align} \theta_c/\pi = 8 \frac{P_m}{P_{tot}} \Delta f(\alpha)\, , \label{eq:theta_c} \end{align} where $P_m$ and $P_{tot}$ are the magnetic and total pressure of the PWN near the interface, $\Delta$ is the thickness of the shocked ejecta, $\alpha$ is the index of the ejecta density distribution $\rho\propto r^{-\alpha}$, and $ f(\alpha) = 1+ (3-\alpha)/(6-\alpha)$. For uniform ejecta ($\alpha=0$) in the adiabatic case, one finds $\Delta \simeq 0.02$ \citep{jun-98}, and hence $ \theta_c/\pi \simeq 0.25 ({P_m}/P_{tot})\, . $ One can see that for a magnetic field of equipartition strength, the critical scale is getting close to $\pi$, implying full suppression of the RT instability along the magnetic field. To test this result, \cite{bucc-04} carried out 2D relativistic MHD simulations intended to study the dynamics in the equatorial plane of PWN\footnote{ Like in the model of \citet{kc84a}, the symmetry condition prohibits motion in the polar direction.}. They considered equatorial sections of angular size up to $\pi/6$ and employed periodic boundary conditions in the azimuthal direction. The 1D model of \citet{kc84a}, with its purely azimuthal magnetic field, was used to set up the initial solution and the boundary conditions in the radial direction.
The results of these simulations generally agreed with Eq.~(\ref{eq:theta_c}), demonstrating suppression of the RT instability in models where the magnetic field builds up to the equipartition value near the interface with the shocked ejecta. This conclusion is in conflict with the analysis of \citet{hester-96}, who identify $\lambda_c$ with structures as small as $1\farcs5$ in the sky, which corresponds to $\theta_c\sim \pi/300$, and yet deduce a magnetic field of equipartition strength. The discovery of the highly non-spherical ``jet-torus'' feature in the inner part of the Crab Nebula \citep{weiss-00}, and subsequent theoretical and computational attempts to understand the origin of this feature, have led to a dramatic revision of the Kennel-Coroniti model \citep[e.g.][]{bogovalov-khan-02b,lyub-02,ssk-lyub-03, ssk-lyub-04,delzanna-04,bogovalov-05,camus-09,porth-13,porth-14}. The KC-model describes the flow inside PWN as a laminar radial expansion whose speed gradually decreases from its highest value just downstream of the pulsar wind termination shock to its lowest value at the interface with the supernova ejecta. This deceleration is accompanied by a gradual amplification of the purely azimuthal magnetic field from its lowest value at the termination shock to its highest value at the midpoint, where the magnetic pressure is approximately equal to that of the particles. In reality, both the termination shock and the flow downstream of this shock are highly non-spherical with strong shears. The termination shock is highly unsteady and the motion inside the nebula is highly turbulent. The magnetic field of PWN is strongest not near its edge but at its center. While the KC-model requires the pulsar wind to be particle-dominated, which is in conflict with the theory of pulsar winds, our recent results show that the complex 3D dynamics of PWN allows the wind to be Poynting-dominated \citep{porth-13,porth-14}. These global simulations have reproduced many of the observed features of the inner Crab Nebula, which was their main objective. However, they failed to capture the development of thermal filaments. Only during our latest study \citep[see the arXiv preprint of][]{porth-14} did we notice what looked like ``embryos'' of RT fingers. In order to check this, we continued our reference 2D simulations all the way up to the current age of the Crab Nebula, by which time these embryos had turned into fully developed structures. Unfortunately, we could not (yet) do the same in our 3D simulations due to their prohibitively high cost. In the present paper, we describe in detail this part of our study together with the additional simulations carried out mainly to investigate the role of numerical resolution. \section{Simulations overview} \label{sec:simulations} The numerical method, the use of the adaptive grid, as well as the initial and boundary conditions of the numerical models presented here are exactly the same as described in \citet{porth-14}. For this reason, we describe here only a few key features and refer interested readers to that paper for details. The simulations have been carried out with the adaptive grid code MPI-AMRVAC \citep{amrvac}, using the module for integrating the equations of ideal special relativistic MHD. The scheme is third order accurate on smooth solutions. Outside of the termination shock region, we use an HLLC solver \citep{Honkkila:2007:HSI:1232960.1233238}, which significantly reduces numerical diffusion compared to the normal HLL solver.
We employ cylindrical coordinates and use cells of equal size along the $z$ and $r$ coordinates. The base level of AMR includes $64\times32$ cells. Three to six more levels are used to resolve the PWN, depending on the model, and even more levels are introduced to fully resolve the pulsar wind and its termination shock. When we study the influence of numerical resolution on the development of the RT instability, we only change the number of allowed grid levels in the PWN zone, while keeping the same number of levels in the pulsar wind zone. Thus in the model with the lowest resolution (model A0, see Table~\ref{tab:simulations}), the relevant effective grid size is $512\times256$ cells, whereas in the highest resolution model (A3) it is $4096\times2048$ cells. When the solution is scaled to the size of the Crab Nebula, the cell size equals $\Delta x=3.9\times 10^{16} \rm cm$ in the model A0 and $\Delta x=4.9\times 10^{15} \rm cm$ in the model A3. Initially, the computational domain is split into two zones separated by a spherical boundary of radius $r_i=10^{18}$cm. The outer zone contains the radially expanding cold supernova ejecta, described as a radial flow with constant mass density $\rho=\rho_\ind{e}$ and the Hubble velocity profile $ v=v_\ind{i} (r/r_\ind{i})$. This is suitable for young PWN like the Crab Nebula. The values of $\rho_\ind{e}$ and $v_\ind{i}$ are determined by the condition that the total mass and kinetic energy of the ejecta within $r_\ind{i}<r<5r_\ind{i}$ are $3M_{\sun}$ and $10^{51}$erg respectively. The inner zone is filled with the unshocked pulsar wind. To monitor the mass-fractions of PWN versus SNR material, we solve an additional conservation law \begin{align} \frac{\partial}{\partial t} (\Gamma\rho\tau) + \nabla_i(\Gamma \rho\tau v^i) = 0 \end{align} injecting $\tau=1$ with the PWN material and setting $\tau=0$ elsewhere. Hence we have $\rho_{\rm PW}=\tau \rho$ for the (leptonic) material injected with the PWN and $\rho_{\rm SNR}=\rho(1-\tau)$ for material originating in the SNR. \begin{table} \begin{center} \caption{Simulation parameters. ID: the model name; $\sigma_0$: the magnetization of the pulsar wind; Grid size: the effective grid size in the nebula; $\Delta x$: the cell size in the nebula in units of $10^{16}\rm\,cm$.} \begin{tabular}{@{}llll} ID & $\sigma_{0}$ & Grid size & $\Delta x$ \\ \hline \hline A0 & 0.01 & $512\times256$ & 3.90 \\ A1 & 0.01 & $1024\times512$ & 1.95 \\ A2 & 0.01 & $2048\times1024$ & 0.98\\ A3 & 0.01 & $4096\times2048$ & 0.49\\ B1 & 1.00 & $1024\times512$ & 1.95 \\ \end{tabular} \end{center} \label{tab:simulations} \end{table} The angular distribution of the wind power is based on the monopole model of \citet{michel-73}, where it varies with the polar angle as $\propto\sin^2\theta$. Following \citet{bogovalov-99}, we use the split-monopole approximation to introduce the striped-wind zone corresponding to the oblique dipole with the magnetic inclination angle $\alpha$. In all models we set $\alpha=45^{\circ}$, the value preferred in the model of the Crab pulsar gamma-ray emission by \citet{harding-08}. Most models in this study have the rather low wind magnetization parameter $\sigma_0=0.01$. This is because 2D models with significantly higher magnetization develop artificially strong polar outflows. However, as we demonstrate here by including model B1 with $\sigma_0=1$, this does not make a noticeable impact on the development of the RT instability away from the poles.
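To illustrate the tracer treatment introduced above, the following sketch advances the conserved tracer density $D_\tau=\Gamma\rho\tau$ through a single finite-volume step in one dimension. It is a schematic stand-in, not the MPI-AMRVAC implementation: first-order upwind fluxes, cell-centred velocities and periodic boundaries are simplifying assumptions.
\begin{verbatim}
import numpy as np

def advect_tracer(d_tau, v, dx, dt):
    # One first-order upwind step for the conserved tracer density
    # d_tau = Gamma*rho*tau obeying d(d_tau)/dt + d(d_tau*v)/dx = 0.
    # Schematic: 1D, cell-centred velocities, periodic via np.roll.
    flux = np.where(v > 0.0, d_tau * v, np.roll(d_tau, -1) * v)
    return d_tau - dt / dx * (flux - np.roll(flux, 1))

# tau = 1 is injected with the pulsar-wind material and tau = 0 in the
# ejecta; afterwards rho_PW = tau*rho and rho_SNR = (1 - tau)*rho.
\end{verbatim}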
The wind Lorentz factor is set to $\Gamma=10$ and the total wind power to the current spin-down power of the Crab pulsar, $L=5\times10^{38} \rm erg\,s^{-1}$ \cite[e.g.][and references therein]{hester-08}. The spin-down time of the Crab pulsar is about 700~yr \citep{LPS-93}, which is below the age of the Crab Nebula, $\sim 960\,$yr, and future attempts at more accurate modelling of the nebula should take this into account. \begin{figure*} \begin{center} \includegraphics[width=60mm]{f1a.pdf} \includegraphics[width=60mm]{f1b.pdf} \includegraphics[width=47mm]{f1c.pdf} \caption{Rayleigh-Taylor ``fingers'' in 2D simulations. The left and middle panels show $\log_{10}\rho$ at $t\simeq 1060$ years in simulation runs B1 (left) and A1 (middle). Note that in the low magnetization A1 run ($\sigma_0=0.01$) the jet is not able to penetrate far into the SNR. The right-hand panel illustrates the way of determining the nebula radius $r_{\rm n}$. It shows the mask used in order to separate PWN from the supernova shell. The nebula radius is then obtained from the enclosed volume as $r_{\rm n}=\left(3V/4\pi\right)^{1/3}$. The conspicuous ``fingers'' form via the Rayleigh-Taylor (RT) instability of the contact discontinuity between the PWN and the supernova shell. The typical ``mushroom'' morphology of the RT-fingers, characteristic of the non-linear stages of this instability, is not seen here, most likely because of the interaction with the turbulent flow of PWN. } \label{fig:A1vsB1} \end{center} \end{figure*} \section{Results}\label{sec:results} One of the important results of \citet{porth-14} is that the global dynamics of axisymmetric 2D models and that of 3D models with strong magnetization of the pulsar wind differ dramatically. The 2D models develop strong axial compression and produce powerful polar outflows. This results in a highly elongated shape of the nebula, in sharp contrast with the observations of the Crab Nebula, which is only moderately elongated. In contrast, the total pressure distribution of 3D models is almost uniform and their shape remains approximately spherical. For 2D models to remain approximately spherical, the magnetization of the pulsar wind should be low. From the perspective of studying the RT-instability in 2D, this seemingly forces a choice between two ``evils'' -- either to focus on the high-sigma models with their unrealistic overall geometry or on the low-sigma models with potentially weaker magnetic field in the nebula. Given the results of previous studies, the magnetic field strength can be important for the development of the RT-instability \citep{jun-95,SG-07,bucc-04}. To clarify this issue we run two models, A1 and B1, which differ only by the wind magnetization (both these models were studied in \citet{porth-14}). It turns out that the RT-instability yields very similar filamentary structure in these two cases everywhere apart from the polar zones, as one can see in figure~\ref{fig:A1vsB1}, which illustrates the solutions at the time $t\simeq1060\,$yr\footnote{The nebula age is given by the simulation time $t$ plus the initial time $t_0=r_{\rm i}/v_{\rm i}\simeq210~$years, assuming initial expansion with constant $v_{\rm i}$.}. There are two main reasons behind this similarity of the A1 and B1 models. First, in axial symmetry, the azimuthal magnetic field has no effect on the growth of RT perturbations as there is no mode-induced field line bending since $\mathbf{B}\cdot\mathbf{k}=0$.
Second, the expansion rate of the nebula in the equatorial direction has not been altered dramatically in the high-sigma B1 model compared to the A1 one. Indeed, the equatorial radii are more or less the same in both models. Given this result, we decided to focus on the low-sigma model in the rest of our study. Figure \ref{fig:A1vsB1} shows a number of anticipated features. Similar to what was found in \cite{jun-98}, we see that 1) the initially spherical shock front is now heavily perturbed and bulges out between the RT fingers; 2) some of the filaments become detached from the shell; 3) the filaments do not exhibit the ``mushroom caps'' characteristic of the single-mode simulations \citep{jun-95}. However, in contrast to \cite{jun-98}, we do not see significant density enhancements at the heads of the filaments. Moreover, the filaments extend much further into the nebula in our simulations, up to a distance of $1/4\,r_\ind{n}$, which is much closer to the value of $1/3\,r_\ind{n}$ deduced for the Crab Nebula. Visually, the scale of the shock ripples is also not that far away from the observed one. Although these results looked very encouraging, it was not clear what exactly set the scale found in the simulations. In contrast to the previous studies, we did not impose any perturbations of the shock front at the beginning of the runs. Instead, the RT mechanism amplified perturbations which had been imparted on the shock by the unsteady flow inside the PWN bubble. Visual inspection of Figure~\ref{fig:A1vsB1} hints that the scale of the dominant RT-modes could be related to the size of the termination shock, which sets the scale of large-scale eddies emitted by the shock into the PWN. On the other hand, numerical viscosity could also set the scale. As in any numerical study, it is imperative to check the resolution dependence of our results. Increased resolution leads to a reduction of the numerical viscosity, which in turn can influence the instability growth. The numerical viscosity of our third order reconstruction scheme is expected to scale linearly with resolution, a behavior established for example in high order WENO-type schemes \citep{ZSSZ-03}. The scaling of the viscous growth rate in the Rayleigh-Taylor problem is well known \citep[e.g.][and references therein]{K-91} and leads to \begin{align} k_{m}\propto \nu^{-2/3} \end{align} for the wave-number of the fastest growing mode. In terms of the wavelength and cell size this reads $\lambda_{\rm m}\propto \Delta x^{2/3}$. It is much more difficult to predict the outcome in the nonlinear regime because of the earlier saturation of small wavelength modes and the possible inverse cascade. In order to study the role of resolution in the nonlinear regime, we run three more models, A0, A2 and A3, which differ from the A1 model only by the numerical resolution inside the PWN bubble (see table~\ref{tab:simulations}). Figures \ref{fig:resolution} and \ref{fig:zoom-rho} show the density distribution found in these models. One can see that while the size of the termination shock in all of them is more or less the same, with increasing resolution the power of RT features is progressively shifted towards smaller scales: the forward shock becomes more rounded and the RT-fingers become more numerous and smaller in scale.
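A quick numerical consequence of this scaling, under the stated assumption $\nu\propto\Delta x$: each doubling of the resolution should shrink the fastest-growing wavelength by a factor $2^{2/3}\approx 1.59$. The back-of-the-envelope sketch below tabulates the expected $\lambda_{\rm m}$ ratios for the cell sizes of models A0--A3 from Table~\ref{tab:simulations}; it is a linear-theory expectation, not a measurement.
\begin{verbatim}
# Expected fastest-growing RT wavelength, assuming nu ~ dx and
# lambda_m ~ nu^(2/3) ~ dx^(2/3); cell sizes from Table 1 (1e16 cm).
dx = {"A0": 3.90, "A1": 1.95, "A2": 0.98, "A3": 0.49}
for run, d in dx.items():
    print(run, round((d / dx["A3"]) ** (2.0 / 3.0), 2))
# Each factor-of-2 step in dx changes lambda_m by 2**(2/3) ~ 1.59.
\end{verbatim}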
\begin{figure*} \begin{center} \includegraphics[width=65mm]{f2a.pdf} \includegraphics[width=65mm]{f2b.pdf} \includegraphics[width=65mm]{f2c.pdf} \includegraphics[width=65mm]{f2d.pdf} \includegraphics[width=50mm]{f2e.pdf} \caption{Logarithmic densities showing the entire nebula and filaments at $t\simeq1060\,\rm years$ with increasing resolution. The models shown here are A0 (top left), A1 (top right), A2 (bottom left) and A3 (bottom right). Each successive model has twice the resolution of the previous one. } \label{fig:resolution} \end{center} \end{figure*} In order to quantify the dominant scales, we analyze the surface mass density distribution defined via the integral \begin{align} \Sigma(\theta) = \int_{0.6 r_{\rm n}}^{1.2 r_{\rm n}} \rho(r,\theta) r^2 dr \, . \end{align} Then we subtract the mean value, $\Delta\Sigma(\theta)=\Sigma(\theta)-\bar{\Sigma}$, and use the Fourier decomposition to obtain the power spectrum $P(\Delta\Sigma)(m)$ of the residual fluctuations.\footnote{Note that the integrand is also shown in figure \ref{fig:time_evol}.} The results are shown in figure~\ref{fig:spectra} together with the low-pass-filtered data. They confirm our naked-eye observation of the power transfer to smaller scale features with increasing resolution. In addition, one can see that in all models the spectrum peaks around $m = 10-20$. A secondary peak seems to appear at $m\sim 50$ in the model A2 and move to $m\sim 40$ in the model A3. \begin{figure} \begin{center} \includegraphics[width=65mm]{f3a.pdf} \includegraphics[width=65mm]{f3b.pdf} \includegraphics[width=65mm]{f3c.pdf} \includegraphics[width=65mm]{f3d.pdf} \includegraphics[width=63mm]{f3e.pdf} \caption{Zoomed-in views of logarithmic densities showing the southern shell and filaments with increasing resolution to illustrate the ``filament trees''. Simulations A0, A1, A2 and A3 (top to bottom) are shown at $t\simeq 1060~\rm years$.} \label{fig:zoom-rho} \end{center} \end{figure} The growing power of small scales with numerical resolution can be interpreted as a result of weaker damping of small scale RT perturbations by numerical viscosity. On the other hand, visual inspection of the plots in figure~\ref{fig:resolution} shows that at higher resolution the size of eddies reaching the RT interface is reduced as well, via the development of the turbulent cascade. This could be an additional factor in favor of small scale RT modes, as the initial perturbations imparted on the RT shell at large scales become weaker, and so require more time to reach the non-linear regime. Moreover, smaller scale eddies are also less powerful, and smaller scale RT-fingers can survive interactions with them. \begin{figure*} \begin{center} \includegraphics[width=70mm]{f4a.pdf} \includegraphics[width=70mm]{f4b.pdf} \includegraphics[width=70mm]{f4c.pdf} \includegraphics[width=70mm]{f4d.pdf} \caption{Angular spectra of the surface-density with increasing resolution. The data for models A0, A1, A2 and A3 (left to right, top to bottom) are shown for $t\simeq 1060~\rm years$. } \label{fig:spectra} \end{center} \end{figure*} Figure~\ref{fig:time_evol} illustrates the time evolution of the RT mixing layer in the highest resolution model A3. In order to interpret the data correctly, one has to recall that due to the fixed linear resolution, the angular resolution increases in time following the increase of the linear size of the nebula. This complicates the interpretation.
The time $t=100$ plot shows relatively small scale perturbations in the thin dense layer of shocked ejecta (at $r/r_n\sim 1$) reaching the saturation regime. This plot also shows much longer and less dense structures curling around the PWN eddies in the region $0.6\!<\!r/r_n\!<\!1$. These features are likely to be the result of entrainment of the shell matter by the fast flow inside the PWN bubble, and not RT fingers. Such features have been observed in earlier low-resolution 2D simulations, e.g. the very long ``fingers'' associated with the backflow of polar jets (see figure~6 in \citet{ssk-lyub-04}). At $t=300$, the RT-fingers proper are becoming more prominent. They are much longer and occupy the region $0.8\!<r\!/r_n\!<\!1$. The angular scale of the shock ripples is also noticeably larger. \begin{figure*} \begin{center} \includegraphics[width=0.9\textwidth]{f5.pdf} \caption{Temporal evolution of the filaments for simulation A3 in the self-similar frame. We show the quantity $\log_{10}r^2\rho$ for the simulation times $t\in\{100,300,500,800\}$. The non-linear evolution starting at $t\approx100$ is governed by the merging of established filaments to form larger scales as well as fragmentation of bubbles and filaments, constantly re-filling fast-growing small scale structure. The thickness of the mixing layer saturates at $\sim 20\%$ of the nebula radius.} \label{fig:time_evol} \end{center} \end{figure*} At $t=500$ and 800, one can see the fragmentation of large scale shock ripples and large filaments, facilitated by the higher effective Reynolds number of the expanding system. The increase in Reynolds number can also be seen in the progressively smaller eddies in the PWN proper. Fragmentation and the inverse cascade of the non-linear RTI compete over the dominant scale of filaments, and it is not obvious which process has the upper hand at any given time. This is visualised in figure~\ref{fig:maxamp_vhr}, showing the time-evolution of the scale containing the most mass. \FloatBarrier \begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{f6.pdf} \caption{Mode number of the most massive scale as a function of simulation time in run A3. } \label{fig:maxamp_vhr} \end{center} \end{figure} Initially, the dominant mode number corresponds to small scales, owing to their faster linear growth. The inverse cascade is obtained in the non-linear phase, where the larger scales overtake the saturated small scales. However, this trend is reversed at times $t\simeq200$ and $t\simeq400$, where we observe a sudden increase in the dominating mode number, owing to the creation of new small scale features. While the structure of the filaments in the $\theta$-direction thus shows some resolution-dependence, the resulting transport of SNR material into the PWN is largely unaffected by resolution effects. Figure \ref{fig:dmdr} shows the radial distribution of SNR material defined via \begin{align} \langle dM/dr\rangle_\theta = \frac{1}{\pi} \int_0^\pi 2\pi r^2 \rho_{\rm SNR} \sin\theta d\theta \,. \end{align} At $t\simeq 950\,\rm years$, the mixing region ranges from $6\times10^{18} \rm cm$ to $\sim7.5\times10^{18}\rm cm$ and the radial distribution agrees particularly well in the inner part. Further outside, the distribution becomes increasingly peaked for higher resolution, in agreement with the increasingly circular appearance of the nebula. In front of the PWN-shock, the SNR evolves according to the self-similar expansion law $\rho_{\rm SNR}\propto t^{-3}$, in good agreement with the simulations.
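The angular diagnostics used above can be sketched compactly. Assuming a density array \texttt{rho} on a uniform $(\theta,r)$ grid (the array layout and names are ours, not those of the production code), the snippet below computes $\Sigma(\theta)$, the power spectrum $P(\Delta\Sigma)(m)$ of its fluctuations and, optionally, the dominant mode number, in the spirit of figures~\ref{fig:spectra} and~\ref{fig:maxamp_vhr}.
\begin{verbatim}
import numpy as np

def angular_spectrum(rho, r, dr, r_n):
    # Sigma(theta) = int_{0.6 r_n}^{1.2 r_n} rho r^2 dr on a uniform
    # (theta, r) grid, followed by the power spectrum of the
    # mean-subtracted fluctuations, P(Delta Sigma)(m).
    mask = (r >= 0.6 * r_n) & (r <= 1.2 * r_n)
    sigma = ((rho * r**2)[:, mask]).sum(axis=1) * dr   # rho[theta, r]
    dsig = sigma - sigma.mean()
    power = np.abs(np.fft.rfft(dsig))**2
    return power   # index m is the angular mode number

# Dominant ("most massive") scale, cf. the mode-number evolution plot:
# m_dom = np.argmax(power[1:]) + 1   # skip the m = 0 mean
\end{verbatim}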
\begin{figure} \begin{center} \includegraphics[width=0.45\textwidth]{f7.pdf} \caption{Radial distribution of SNR material at $t\simeq950\ \rm years$ for simulation runs \{A1,A2,A3\}. } \label{fig:dmdr} \end{center} \end{figure} Visual inspection of the animated data (see the online material) suggests that the inverse cascade is also present in the simulations -- occasionally smaller scale ripples merge and create a larger one. When this occurs, one can see several RT fingers emerging from the same base. The plot for the A3 model in figure~\ref{fig:zoom-rho} shows an example of such a structure (its base coordinates are $r=3.4$, $z=-8.0$). Figure~\ref{fig:zooms} provides more information on the structure of simpler configurations, where only one or two fingers are found at the junction of two shock ripples. Interestingly, the total pressure measured at the base of some fingers is significantly higher than in the surrounding plasma, with a sharp rise characteristic of a shock wave. In order to check the shock interpretation, we studied the velocity field in the frame moving with the velocity measured at the base of the left finger, indicated in the figure by a cross. This velocity is subtracted from the velocity field measured in the original lab-frame and the result is presented in figure~\ref{fig:zooms}.\footnote{Since the velocities at the PWN boundary are $\sim2000\rm km\, s^{-1}\ll c$, this Galilean transformation is sufficient.} This allows us to see clearly the flow converging towards the finger base. The Mach number is above unity upstream of the base and drops below unity downstream, inside of the high pressure region. Thus, the results are consistent with the shocked ejecta plasma sliding with slightly supersonic speed along the ripples towards their junction point, where it passes through two stationary shocks before entering the finger. Vortical motion in the space between the neighbouring fingers may also contribute to the finger overpressure. The plasma-beta at the filament base varies greatly (lower right panel), with minimal values larger than $\simeq10$ and maximal values -- in regions where mixing is advanced -- ranging up to $10^4$. In simulation B1 with $100\times$ stronger wind magnetisation, with the exception of the spurious jet region where $0.1<\beta<1$, we still observe similar values of the interface plasma beta, due to annihilation of flux loops with opposite polarity in the nebula. Thus we do not expect strong suppression of field-aligned modes, as the critical angular scale becomes $\theta_c/\pi\lesssim 1/40$ (see Eq.~\ref{eq:theta_c}) in the bulk of the nebula. \begin{figure*} \begin{center} \includegraphics[height=70mm]{f8a.pdf} \includegraphics[height=70mm]{f8b.pdf} \includegraphics[height=70mm]{f8c.pdf} \includegraphics[height=70mm]{f8d.pdf} \caption{Zoom into the southern filaments of run A1 ($t\simeq1060\rm~years$). The top-left panel shows the logarithmic rest-frame density with flow field vectors and the top-right panel the total pressure. The sonic Mach number, with a white contour at $M^+=1$, is shown in the lower left panel, and the ratio of gas and magnetic pressures, $\beta=P_g/P_m$, is given in the lower right panel. Note that velocity vectors and Mach number $M^+$ are shown relative to the rest-frame of the pivot-point indicated by grey ``+''.
} \label{fig:zooms} \end{center} \end{figure*} \section{Discussion} \label{sec:discussion} The convergence study of the RT-instability described in the previous section indicates that with higher resolution, the filamentary structure produced in the simulations becomes less similar to the one observed in the Crab Nebula, with the deficit of large-scale structure being the most pronounced discrepancy. In this section, we discuss the possible explanations of this result and speculate on the origin of the Crab's large-scale filaments. As we have already commented on in the Introduction, the radial expansion of the RT interface in our problem introduces a number of interesting modifications to the classical results obtained in plane geometry. Here we derive them by adjusting, in a rather crude but very simple way, the non-relativistic results obtained in planar symmetry. More accurate analysis of the linear theory can be found elsewhere \citep{BT-53,GL-86,CF-92,goedbloed2010}. We start with the non-magnetic case and discuss the potential role of the magnetic field later. We denote as $\theta_m=\lambda/r_n$ the angular scale of RT perturbations. Here $r_n\propto t^\delta$, where $\delta=6/5$, is the mean radius of the nebula in the phase of self-similar expansion. As the nebula expands radially, $\lambda$ increases proportionally to $r_n$, but $\theta_m$ remains constant. This linear stretching of the perturbations is most important. First, the linear amplitude of perturbations, $h$, is no longer the most suitable parameter to describe their strength and should be replaced with $x=h/\lambda$. Assuming that $h$ grows at the same rate as in the plane case with $A=1$, and using $g=\ddot{r}_n=\delta(\delta-1)r_n/t^2$ and $k=2\pi/(\theta_m r_n)$, we find \begin{align} \dot{x}=x\left[ \frac{\dot{h}}{h} - \frac{\dot{r}_n}{r_n}\right] = \kappa \frac{x}{t}\, , \label{eq:x} \end{align} where $$ \kappa=\omega t - \delta = \fracp{2\pi\delta(\delta-1)}{\theta_m}^{1/2}-\delta \,. $$ This shows that the amplitude growth is a power law, $x\propto t^\kappa$, and suggests the critical angular scale $$ \theta_s = \frac{2\pi(\delta-1)}{\delta} $$ above which the perturbations do not grow. For $\delta=6/5$, corresponding to a uniform ejecta and constant wind power, $\theta_s=\pi/3$. More accurate analysis of RTI in spherical geometry yields the growth rate \begin{align} \omega^2 = \frac{lg}{r_n} \, , \label{eq:omega_sph} \end{align} where $l$ is the degree of the associated Legendre polynomial $P^m_l(\cos\theta)$ \citep{BT-53,GL-86}. In the limit of high $l$ we should recover the plane geometry, which leads to the identification of the wavenumber $k$ with $l$ via $k=l/r_n$. Using this we find the critical degree $l_s=6$, which corresponds to $\theta_s=\pi/3$. This is very close to the critical degree $l_s=5$ found in the thin shell approximation \citep{CF-92}, showing that the damping of the RT instability at large scales is a robust result. Based on this we tentatively conclude that there is an upper limit of $\simeq 50^\circ$ for the angular size of perturbations above which they do not grow. The solution to Eq.~(\ref{eq:x}), $ x(t)=x_0 (t/t_0)^\kappa $, implies that a perturbation of any amplitude imposed at the time $t_0$ will be able to reach the non-linear regime provided $t_0$ is sufficiently small. For the Crab Nebula this means that we should be dealing with the fully nonlinear regime on all scales. However, in our simulations, where $t/t_0$ is not particularly high, some large-scale perturbations may still be growing in the linear regime.
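The growth exponent $\kappa(\theta_m)$ and the marginal scales are straightforward to evaluate. The sketch below does so for $\delta=6/5$ and also solves for the scale whose amplitude grows by a factor of $e$ over a given expansion factor $t/t_0$; the value $t/t_0=5$ used there is an illustrative assumption, not a number fixed by the text.
\begin{verbatim}
import numpy as np

DELTA = 6.0 / 5.0   # uniform ejecta, constant wind power

def kappa(theta_m, delta=DELTA):
    # Growth exponent in x ~ t**kappa for angular scale theta_m.
    return np.sqrt(2.0 * np.pi * delta * (delta - 1.0) / theta_m) - delta

print(kappa(np.pi / 10.0))       # sample evaluation

# Critical scale above which perturbations do not grow:
theta_s = 2.0 * np.pi * (DELTA - 1.0) / DELTA
print(theta_s / np.pi)           # -> 1/3, i.e. theta_s = pi/3

# Largest scale whose amplitude grows by a factor e over an expansion
# factor t/t0; t/t0 = 5 is purely illustrative.
k_needed = 1.0 / np.log(5.0)
theta_e = 2.0 * np.pi * DELTA * (DELTA - 1.0) / (k_needed + DELTA)**2
print(theta_e / np.pi)
\end{verbatim}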
According to this solution, for example, the largest angular scale to grow by a factor of $e$ during our simulation time is $\pi/4.4$. Moreover, one may reasonably expect their final amplitude to depend on the strength of the large-scale motion inside the PWN bubble, as it is responsible for the initial amplitude of these perturbations. The reduction of the final amplitude with resolution, observed in our simulations, may well reflect the parallel weakening of this large-scale motion. Since in the real Crab Nebula the perturbations on all linearly unstable scales are expected to have reached the nonlinear regime, this regime is much more relevant for interpreting the observations. To this end, consider the growth of RT bubbles in the non-linear regime. Substituting $g=\ddot{r}_n=\delta(\delta-1)(r_n/t^2)$ into equation~(\ref{eq:v_b}) and integrating, we find that the bubble height is \begin{align} \frac{h}{r_n} \simeq \frac{1}{2}\fracp{\theta_m (\delta-1)}{\delta}^{1/2} \, . \label{eq:h_b} \end{align} Thus, the relative height of bubbles, $h/r_n$, ``freezes out'' in the non-linear phase. This conclusion fits nicely with the picture of self-similar expansion. The critical scale $\theta_s$ of the linear regime sets the upper limit on $h/r_n$. For $\delta=6/5$ this limit is $(h/r_n)_{\rm max}\simeq 0.2$, which is about the size of the largest bubbles of the Crab Nebula ``skin'' \citep{hester-08}. The fact that in our simulations we do not observe high-amplitude bubbles on these scales may indicate that they have not yet had enough time to reach the non-linear regime. The inverse cascade may contribute to the production of large-scale bubbles, but it is unlikely to overturn the freezing-out effect. Since the finite thickness of the RT unstable layer and the forward shock do not feature in the analysis leading to Eq.~(\ref{eq:h_b}), this result should not yet be considered an accurate prediction. However, it gives us a basis to speculate about the origin of the largest ``filaments'' seen in the Crab Nebula. These features can be as long as the nebula radius and they do not appear to be streaming radially towards its center (see figure 1 in \citet{hester-08} as well as figure 2 in \citet{clark-83}), as one would expect for RT fingers. Instead, they seem to outline a network of very large cells filled with synchrotron-emitting plasma. We propose that these cells are actually the largest RT ripples (bubbles) on the surface of the Crab Nebula and that these filaments designate ``valleys'' where the ripples come into contact with each other. The plasma of shocked ejecta may slide along the surface of the ripples into these valleys, in very much the same fashion as we have discussed in connection with figure~\ref{fig:zooms}. These filaments may form a base from which proper RT fingers stream radially towards the center of the nebula. This is indeed what is seen in the Crab Nebula, most clearly in its NE section, where a number of smaller scale filaments seem to originate from a large one at an angle of almost 90 degrees. Remarkably, the observed cell size is in very good agreement with the largest angular scale of shock ripples, $\theta_s\simeq \pi/3$, which can be amplified by the RT instability. These large-scale ripples are not seeded internally by the interaction with the large-scale motion inside the PWN bubble, the only source of perturbations in our simulations.
Instead, they may originate from inhomogeneities in the supernova ejecta itself, which we did not incorporate in our models. Given the violent nature of supernova explosions, it seems only natural to expect strong large-scale fluctuations in the ejecta \citep[e.g.][]{CO-13}. Moreover, \citet{FMS-92} argued that the conspicuous ``bays'' in the nonthermal optical emission of the Crab Nebula could be indications of a presupernova disk-like ejection. The interaction of supernova ejecta with such a disk is believed to be behind the emergence of bright rings around SN 1987A \citep{larsson-11}. As we have noted in the Introduction, the magnetic field may have a strong impact on the development of the RT instability. This may seem particularly significant as pulsar winds inject highly magnetized plasma into the PWN bubble. Even for weakly magnetised winds, the magnetic effects would be important provided the PWN were organised in accordance with the Kennel-Coroniti model. In this model, the initially weak magnetic field is amplified towards equipartition between the magnetic and thermal energies near the contact discontinuity with the shocked supernova ejecta. This is exactly the condition for inhibiting the RTI derived in \citet{bucc-04} for modes aligned with the magnetic field. In contrast, in our simulations the magnetic field is always normal to the wave vector of any type of perturbation due to their symmetry, which nullifies the magnetic effect; this can be considered the main limitation of our study. However, this limitation is probably not as important as it seems. Strong magnetic dissipation seen in our simulations, particularly in the 3D models \citep{porth-13,porth-14}, keeps the magnetic field well below equipartition near the interface even for high-sigma pulsar winds. Moreover, in 3D the magnetic field is not that effective in inhibiting the RTI, as matter can slide in between the magnetic field lines without bending them downwards \citep{SG-07}. The combination of these factors makes us conclude that the impact of the magnetic field on the development of the RTI in the Crab Nebula is likely to be rather minimal, which is consistent with the observations. Apart from the high magnetization employed in some runs, the setup of our simulations is very similar to that of previous axisymmetric simulations of PWN, e.g. by \cite{ssk-lyub-03,ssk-lyub-04, delzanna-04,bogovalov-05,camus-09}. However, none of those captured the development of the RT instability. We believe that this is due to the insufficient resolution of previous studies at the interface between the PWN and the supernova shell. This is not surprising, as these studies were mainly concerned with the inner regions around the termination shock and used spherical coordinates, which fit that purpose nicely; as a result, their spatial resolution decreases quickly with distance from the origin. In contrast, in the cylindrical coordinates employed here, we obtain uniform resolution throughout the PWN ($\Delta r=\Delta z =1.95\times 10^{16}\, \rm cm$). Moreover, we utilize third-order spatial reconstruction and Runge-Kutta time-stepping, giving overall higher accuracy compared to the previous studies. \section{Conclusions} \label{sec:conclusions} Our high resolution axisymmetric simulations of PWN reveal intricate structures of filaments growing via the Rayleigh-Taylor instability of the contact discontinuity between the PWN and the SNR.
Given the high rate of magnetic dissipation observed in recent 3D simulations of PWN, the magnetic tension is likely to play only a minor role in the RTI, so that our axisymmetric simulations remain applicable to reality. In application to the Crab Nebula, we have simulated the last 800 years of its evolution and find the longest fingers to reach a length of $\sim1/4$ of the nebula radius. The inverse cascade observed in the planar RTI is complemented by constant replenishment of small-scale structure due to fragmentation of old filaments and formation of new fast-growing small-scale perturbations. The latter is particularly pronounced at the large ``bubbles'' of the nebula which, as they expand along with the nebula, provide favourable conditions for the growth of fresh small-scale RTI. The most massive filaments in our simulations reach a scale of $m\simeq15$ (corresponding to 15 large fingers over the semi-circle), independent of the numerical resolution. Our simulations cannot yet reproduce the largest scales observed in the Crab Nebula, and we propose that these must be seeded by inhomogeneities in the SNR, as would result from an anisotropic supernova explosion. In the future, we plan to investigate the influence of magnetic tension on the filamentary network in local 3D simulations of PWN with realistic values of the magnetisation. \section{Acknowledgments} SSK and OP are supported by STFC under the standard grant ST/I001816/1. SSK has been partially supported by NASA grant NNX13AC59G. SSK acknowledges support by the Russian Ministry of Education and Research under the state contract 14.B37.21.0915 for Federal Target-Oriented Program. RK acknowledges FWO-Vlaanderen, grant G.0238.12, and BOF F+ financing related to EC FP7/2007-2013 grant agreement SWIFF (no.263340) and the Interuniversity Attraction Poles Programme initiated by the Belgian Space Science Policy Office (IAP P7/08 CHARM). The simulations were carried out on the Arc-1 cluster of the University of Leeds. \bibliographystyle{mn2e}
\section{Introduction} A social network is a structure that captures information about the connections among a group of people. Social Network Analysis (SNA) \cite{Adamic04} identifies the characteristic actors and structures that describe patterns in such connections. As an implementation of SNA, online social services like \textit{LinkedIn.com} have become popular in recent years. In previous studies, SNA models represent the connections of certain groups as directed graphs or weighted graphs. For example, Wang and his coworkers develop a probabilistic factor graph model \cite{Wang10} to analyze bibliographic networks in academia. Data mining models have been applied in many SNA studies that characterize documents such as emails and academic publications. Blei et al. present the Latent Dirichlet Allocation (LDA) model \cite{Blei03}, which places a Dirichlet prior on the topic mixture and fits the model with an Expectation Maximization (EM) algorithm; the LDA model infers latent topics from a collection of documents and generates representative words under each topic. Later, Rosen-Zvi and her group extend the LDA model to the Author-Topic (AT) model \cite{Zvi04}, which determines topics from both content and author distributions. Our project extracts topic words from emails and models the preferences of senders for a typical website, using Gibbs sampling instead of the EM algorithm as the training strategy. We create an email account in Columbus, Ohio and use this account to register at \textit{Obama.com}. Emails sent by \textit{Obama.com} are received through \textit{Datagreening.com} \cite{Steward12}, a carbon-free email server developed by our research group. The downloaded emails are then processed into input data for the LDA model in an appropriate format, the model maps the data to latent topics, and the topic word lists are generated. Finally, by analyzing the similarity between the target word lists and the generated topic word lists, we establish which information \textit{Obama.com} is likely to publicize. Python packages are used for word processing tasks such as tokenizing, stemming and stop-word removal. The word count program runs on two nodes using the parallel computing tool BashReduce \cite{Zawodny09}, achieving an almost 30\% speedup. The TF-IDF model \cite{Liu04} is selected as a baseline for generating topic word lists. The model applied in our project achieves a precision rate 53.96\% higher than the TF-IDF model when the size of the topic list is defined properly. The rest of this paper is organized as follows: In Section 2, we introduce the generative LDA model and the design details of applying this model in our project. Section 3 describes parameter estimation using Gibbs sampling \cite{Gilks96}. Section 4 describes the \textit{Datagreening.com} server, the BashReduce tool and the Python packages used in the project. Experimental results are shown and evaluated in Section 5. Conclusions and future work are discussed in Section 6. \section{Generative LDA Model} In this section, we first briefly introduce the unigram model, a model that generates each document from a single topic. Then we discuss the LDA model applied in our project. For simplicity, we treat a document as a collection of discrete words.
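As a toy illustration of this representation (the vocabulary and document below are hypothetical and only serve to fix notation), a document can be stored as the sequence of its word indices:
\begin{verbatim}
# Toy bag-of-words representation (hypothetical data for illustration).
vocabulary = ["obama", "ohio", "health", "campaign", "voter"]
word_index = {w: i for i, w in enumerate(vocabulary)}

# A "document" is simply the sequence of indices of its words.
doc_words = ["obama", "health", "health", "voter"]
d = [word_index[w] for w in doc_words]
print(d)   # [0, 2, 2, 4]
\end{verbatim}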
\subsection{Unigram Model} The unigram model, a basic probabilistic model, treats words as isolated elements in each document \cite{Nigam00}. Assume that we have a corpus $W$ of $m$ documents, and that a document $d$ is a vector of $n$ words, where each word is selected from a vocabulary of $|V|$ words. We define document $d=\vec{w}=(w_{1},w_{2},\ldots,w_{n})$, and corpus $W=(\vec{w_{1}},\vec{w_{2}},\ldots,\vec{w_{m}})$. In the unigram model, the probability of generating document $d$ is: \begin{equation} p(\vec{w})=p(w_{1},w_{2},\ldots,w_{n})=p(w_{1})p(w_{2})\cdots p(w_{n})\label{eq:Eq.(1)} \end{equation} Considering that the documents are interchangeable with each other, the probability of generating corpus $W$ is: \begin{equation} p(\vec{W})=p(\vec{w_{1}})p(\vec{w_{2}})\cdots p(\vec{w_{m}})\label{eq:Eq.(2)} \end{equation} Suppose the total word count of this corpus is $N$, and the count of word $i$ in the corpus is $n_{i}$. Then, written in full, $\vec{n}=(n_{1},n_{2},\ldots,n_{V})$ follows a multinomial distribution over the $V$-word vocabulary: \begin{equation} p(\vec{n})=Mult(\vec{n}|\vec{p},N)=\left(\begin{array}{cc} N\\ \vec{n} \end{array}\right)\prod_{k=1}^{V}p_{k}^{n_{k}}\label{eq:Eq.(3)} \end{equation} where $\vec{p}=(p_{1},p_{2},\ldots,p_{V})$ is the vector of word probabilities over the vocabulary, and the probability of the corpus is $p(\vec{W})=p(\vec{w_{1}})p(\vec{w_{2}})\cdots p(\vec{w_{m}})=\prod_{k=1}^{V}p_{k}^{n_{k}}$. \subsection{LDA Model} The LDA model is a hierarchical Bayesian topic model \cite{Fei05} that generates topic words for each document with efficiently reduced complexity. Moreover, the LDA model captures the possibility that one document may contain multiple topics, while the unigram model only considers the single-topic case. Instead of computing the document probability as a plain product of word probabilities as in the unigram model, the LDA model maps a document of $N$ words $d=\vec{w}=(w_{1},w_{2},\ldots,w_{n})$ to $|T|$ latent topics. As the hierarchical model in Figure 1 shows, the document-word distribution is decomposed into a document-topic distribution followed by a topic-word distribution. Therefore, the general expression for the word probability in a topic model is: \begin{equation} p(w|d)=\sum_{j=1}^{T}p(w|z_{j})\cdot p(z_{j}|d),\label{eq:Eg.(4)} \end{equation} where $z_{j}$ is the $j$th topic, sampled from a multinomial distribution whose prior is a Dirichlet distribution. \begin{figure}[h] \centering \includegraphics[scale=0.5]{fig1} \caption{Graphic representation of hierarchy topic model. Each document is mapped to a mixture of $|T|$ topics from document-topic distribution, and then each one of $n$ words is generated under its latent topic by sampling from the topic-word distribution.} \label{fig:one} \end{figure} In our project, a document of $n$ words $d=\vec{w}=(w_{1},w_{2},\ldots,w_{n})$ is generated by the following process. Suppose that there are $|T|$ latent topics; then the probability of the $i$th word $w_{i}$ in the given document can be represented by the following mixture: \begin{equation} p(w_{i})=\sum_{j=1}^{T}p(w_{i}|z_{i}=j)\cdot p(z_{i}=j),\label{eq:Eq.(5)} \end{equation} where $z_{i}$ is the topic to which the $i$th word $w_{i}$ is assigned, $p(w_{i}|z_{i}=j)$ represents the probability of word $w_{i}$ under the $j$th topic, and $p(z_{i}=j)$ gives the topic mixture proportion for the currently sampled document. Assume that the corpus is a collection of $|D|$ documents and the vocabulary of this corpus has $|V|$ unique words.
Each document $d$ of $|N_{d}|$ words is generated according to $|T|$ topics. Let $\phi_{w}^{(z=j)}$ denote $p(w_{i}|z_{i}=j)$, representing that word $w_{i}$ is sampled from the multinomial distribution of the $j$th topic $z_{j}$. And let $\psi_{z=j}^{(d)}$ denote $p(z_{i}=j|d)$, which is a multinomial distribution over the $|T|$ topics for document $d$. Therefore, the probability of word $w$ in document $d$ is: \begin{equation} p(w|d)=\sum_{j=1}^{T}\phi_{w}^{(z=j)}\cdot\psi_{z=j}^{(d)}\label{eq:Eq.(6)} \end{equation} In the LDA model, $\psi^{(d)}$, sampled from $Dirichlet(\alpha)$, is the prior distribution of the multinomial distribution $\psi_{z=j}^{(d)}$ \cite{Evans11}, and $\phi^{(z)}$, sampled from a symmetric $Dirichlet(\chi)$, is the prior distribution of the multinomial distribution $\phi_{w}^{(z=j)}$. The multinomial distributions $\phi_{w}^{(z=j)}$ and $\psi_{z=j}^{(d)}$ in the LDA model are then parameterized as follows: \begin{equation} w_{i}|z_{i},\phi^{(z_{i})} \sim Mult(\phi^{(z_{i})}),\quad\phi^{(z_{i})} \sim Dirichlet(\chi)\label{eq:Eq.(7)} \end{equation} \begin{equation} z_{i}|\psi^{(d_{i})} \sim Mult(\psi^{(d_{i})}),\quad\psi^{(d_{i})} \sim Dirichlet(\alpha)\label{eq:Eq.(8)} \end{equation} In Eq.(\ref{eq:Eq.(7)}), $\chi$ is a $|T|\times|V|$ matrix giving the initial values of the word probabilities for the $|T|$ topics. In Eq.(\ref{eq:Eq.(8)}), $\alpha=<\alpha_{1},\alpha_{2},\ldots,\alpha_{T}>$ gives the initial values of the topic probabilities. $\chi$ and $\alpha$ are the parameters of the prior distribution of each multinomial distribution. We assume both prior distributions to be symmetric Dirichlet distributions. Therefore, $\chi$ is initialized to the same value at the beginning of sampling every document; likewise, $\alpha$ is initialized to the same value at the beginning of sampling every document. Figure 2 shows the Bayesian network \cite{Carey03} of the LDA model. The plates represent repeated sampling. In the left part, the inner plate represents repeatedly generating each topic and each word under its topic in a document $d$; the outer plate represents repeatedly sampling the topic proportions for each of the $|D|$ documents in the corpus. The right plate repeatedly samples the $|T|$ parameters for $Mult(\phi^{(z_{i})})$. \begin{figure}[h] \centering \includegraphics[scale=0.5]{fig2} \caption{Bayesian network of LDA model.} \label{fig:two} \end{figure} \section{Gibbs Sampling} To estimate the parameters of the LDA model, previous work uses the Expectation Maximization (EM) algorithm as the inference strategy \cite{McCallum99}. In our project, Gibbs sampling is the choice. Considering the posterior distribution $p(w|z)$, Gibbs sampling is a simple strategy to estimate $\phi$ and $\psi$. As a simple case of the Markov chain Monte Carlo (MCMC) algorithm, Gibbs sampling aims at constructing a Markov chain converging to the target distribution on $z$, and selecting samples approximating the inferred distribution. The sampling method begins with initializing the value of the vector $z$. Then it repeatedly samples $z_{i}$ from the conditional probability $p(z_{i}=j|z_{-i},w_{i})$ and transfers to the next state of the Markov chain by updating the probability function using the newly sampled $z_{i}$.
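This loop can be sketched in Python as follows; the sketch uses hypothetical toy data rather than our email corpus, and its conditional distribution implements Eq.~(\ref{eq:Eq.(9)}) given next:
\begin{verbatim}
import numpy as np

# Minimal collapsed Gibbs sampler for LDA (a sketch, not our full code).
rng = np.random.default_rng(0)
docs = [[0, 2, 2, 4], [1, 3, 3, 0]]   # documents as word-index lists
T, V = 2, 5                           # number of topics, vocabulary size
chi, alpha = 0.01, 50.0 / T           # symmetric Dirichlet parameters

n_wt = np.zeros((V, T))               # word-topic counts
n_dt = np.zeros((len(docs), T))       # document-topic counts
n_t = np.zeros(T)                     # total words per topic
z = [[rng.integers(T) for _ in d] for d in docs]   # step 1: random init
for d, doc in enumerate(docs):        # accumulate the initial counts
    for i, v in enumerate(doc):
        j = z[d][i]
        n_wt[v, j] += 1; n_dt[d, j] += 1; n_t[j] += 1

for sweep in range(500):              # steps 2-3: b = 500 burn-in sweeps
    for d, doc in enumerate(docs):
        for i, v in enumerate(doc):
            j = z[d][i]               # remove the current assignment
            n_wt[v, j] -= 1; n_dt[d, j] -= 1; n_t[j] -= 1
            # unnormalized conditional p(z_i = j | z_-i, v_i), Eq. (9)
            p = ((n_wt[v] + chi) / (n_t + V * chi)
                 * (n_dt[d] + alpha) / (len(doc) - 1 + T * alpha))
            j = rng.choice(T, p=p / p.sum())
            z[d][i] = j               # record the new assignment
            n_wt[v, j] += 1; n_dt[d, j] += 1; n_t[j] += 1
\end{verbatim}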
In our project, the probability function of Gibbs sampling is: \begin{equation} P(z_{i}=j|z_{-i},v_{i})=\frac{\frac{n_{-i,j}^{(v_{i})}+\chi}{n_{-i,j}^{(.)}+V\chi}\cdot\frac{n_{-i,j}^{(d_{i})}+\alpha}{n_{-i,.}^{(d_{i})}+T\alpha}}{\sum_{j=1}^{T}\frac{n_{-i,j}^{(v_{i})}+\chi}{n_{-i,j}^{(.)}+V\chi}\cdot\frac{n_{-i,j}^{(d_{i})}+\alpha}{n_{-i,.}^{(d_{i})}+T\alpha}},\label{eq:Eq.(9)} \end{equation} where $z_{i}=j$ stands for the assignment of word $v_{i}$, the $i$th word in a document, to topic $j$, and $z_{-i}$ represents all $z_{k}\,(k\neq i)$ assignments. $n_{-i,j}^{(v_{i})}$ is the number of times $v_{i}$ is assigned to topic $j$; $n_{-i,j}^{(.)}$ is the number of words in the vocabulary assigned to topic $j$; $n_{-i,j}^{(d_{i})}$ is the number of words in document $d_{i}$ assigned to topic $j$; none of these counts include the current assignment $z_{i}=j$. The detailed sampling process is as follows: \begin{enumerate} \item For $i=1$ to $N$, where $N$ is the number of words in the current document, iteratively initialize $z_{i}$ as a random integer between $1$ and $T$. This $N$-sized vector $z$ is the initial state of the Markov chain. \item For $i=1$ to $N$, transfer to the next state of the Markov chain by iteratively assigning word $v_{i}$ to its topic using Eq.(\ref{eq:Eq.(9)}). \item Run step 2 for $b$ iterations until the chain reaches its convergent state. For $i=1$ to $N$, the current value of $z_{i}$ is selected as a sample. The value of $b$ is called the burn-in period of Gibbs sampling. \end{enumerate} Eq.(\ref{eq:Eq.(9)}) is the function calculating the posterior distribution over topic assignments in each document. Therefore, we can derive the conditional probability functions estimating $\phi$ and $\psi$ for every unique word $w$ in a document by removing the word tag $i$ in Eq.(\ref{eq:Eq.(9)}): \begin{equation} \tilde{\phi}_{w}^{(z=j)}=\frac{n_{j}^{(v)}+\chi}{n_{j}^{(.)}+V\chi},\quad\tilde{\psi}_{z=j}^{(d)}=\frac{n_{j}^{(d)}+\alpha}{n_{.}^{(d)}+T\alpha},\label{eq:Eq.(10)} \end{equation} where $n_{j}^{(v)}$ is the number of times $v$ is assigned to topic $j$; $n_{j}^{(.)}$ is the number of words in the vocabulary assigned to topic $j$; $n_{j}^{(d)}$ is the number of words in document $d$ assigned to topic $j$; $n_{.}^{(d)}$ is the number of all words in document $d$ assigned to their topics. \section{Data Preprocessing} In this section, we discuss the data preprocessing operations for the LDA model. The \textit{Datagreening.com} server, the Python packages and the BashReduce tool used in our project are introduced in the following. \subsection{Datagreening.com} The input data analyzed by the LDA model are emails sent to our locally registered account at \textit{Obama.com}. In our project, these emails are received through \textit{Datagreening.com}, an email server developed by our research group. This server provides email service powered by clean energy and, in the meantime, collects carbon-footprint research data. By capturing the energy cost in the datacenters of popular email providers, this green server also supports further research on the performance of cloud computing. \subsection{Python packages} The downloaded emails are processed into documents consisting of unique terms using the Python NumPy package \cite{NumPy13} and the Natural Language Toolkit (NLTK) package \cite{NLTK13}. NumPy is the fundamental package for scientific computing with Python. We install the NumPy package to provide the proper environment for the NLTK packages and to supply sorting functions for our Python code.
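For instance, a typical use of NumPy in our preprocessing is sorting terms by frequency; a minimal sketch with hypothetical counts:
\begin{verbatim}
import numpy as np

# Sort terms by descending frequency (hypothetical toy counts).
terms = np.array(["obama", "ohio", "health"])
counts = np.array([12, 7, 30])
order = np.argsort(counts)[::-1]
for t, c in zip(terms[order], counts[order]):
    print(t, c)   # health 30, obama 12, ohio 7
\end{verbatim}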
NLTK is a Python platform for text processing. Some of the NLTK packages are installed for tokenizing raw content, stemming words and removing stop words in our project. The following is the list of packages we installed: \begin{enumerate} \item banktree\_tag package; \item stop\_word package; \item wn package. \end{enumerate} \subsection{BashReduce tool} We format the unique terms learned by the above Python program as: \[ N~word\_1:count\_1,word\_2:count\_2,...,word\_n:count\_n \] where \textit{N} is the number of unique terms in the current document and \textit{word\_i} is an integer that indexes the term in the vocabulary. The BashReduce tool is used here to calculate the word counts. BashReduce is a parallel computing tool applying the MapReduce \cite{Dean08} model to the bash environment. Following the BashReduce operation instructions, we begin by specifying the host list to be \textit{bash.xi.0} and \textit{bash.xi.1} using the BashReduce option \textit{'br -h'}. Thus, we have two instances for calculating the word counts in parallel. Then we write the Python programs \textit{map.py} and \textit{reduce.py}. Program \textit{map.py} maps each word in the word set to a (word, 1) pair, while \textit{reduce.py} receives such pairs and sums identical words to generate the (word, count) results. We use the BashReduce options \textit{'-m'} and \textit{'-r'} to run \textit{map.py} and \textit{reduce.py} respectively. Note that both the input and output paths for \textit{map.py} and \textit{reduce.py} need to be defined by the user. Running on two instances with BashReduce, the word count program finishes in 3'41'', ignoring the network latency. Compared with the 5'18'' needed when calculating locally, this is an almost 30\% speedup. \section{Experimental Result} In our experiments, we use emails sent to our locally registered account by \textit{Obama.com}. There are 58 emails in total in this account, with a vocabulary size of $|V|=1118$ words. To use Gibbs sampling more efficiently, we first fix the value of the burn-in period $b$ using the data described in Section 4. Then we show the experimental results of the top 15 words generated under 5 of the latent topics. Finally, we define a precision rate to evaluate the predictive power. \begin{figure}[h] \centering \includegraphics[scale=0.7]{plot_iter_lnp} \caption{Convergence of Gibbs sampling: $\ln P(w|z)$ versus iteration number for three distinct initializations.} \label{fig:three} \end{figure} \subsection{Burn-in period choice} We set the number of topics to $|T|=300$ and use 3 distinct values for initialization. The converged sample statistic $\ln P(w|z)$ is then obtained by choosing an appropriate iteration number. Figure 3 shows the convergence process of Gibbs sampling. Starting from 3 distinct values, the sample results tend to gather and reach a constant value, independent of all three initial values, after 500 iterations. \subsection{Experimental Result} In our experiment, the parameters $\chi$ and $\alpha$ are assigned the values 0.01 and $50/T$ respectively. Latent topic assignments are selected after 500 iterations of Gibbs sampling, and words are generated under 300 topics. Table 1 gives an example of the top 15 words generated under 5 of these topics.
\begin{table}[h] \tbl{Example of top 15 words generated under 5 topics.\label{tab:one}}{% \begin{tabular}{|c|c|c|c|c|} \hline Topic1 & Topic2 & Topic3 & Topic4 & Topic5\\\hline action & winner & ohio & display & president\\\hline organize & enter & party & color & obama\\\hline email & guest & help & none & contact\\\hline make & action& need & medium & supporter \\\hline take & organize & governor & screen & committee \\\hline get & nbsp & state & input & email \\\hline ofa & washington & authorize & block & democrat \\\hline people & receive & friend & auto & let \\\hline health & contribution & make & http & washington \\\hline fight & entry & kasich & label & work \\\hline send & email & campaign & leave & know \\\hline care & state & pay & nbsp & candidate \\\hline box & ticket & republican & see & support \\\hline friend & prize & voter & table & country \\\hline address & resident& work & Arial & stay \\\hline \end{tabular}} \begin{tabnote} \tabnoteentry{$^a$}{The words in each topic list help reveal what the corresponding topic is. According to the table, an email from \textit{Obama.com} likely covers health care (Topic1), contribution events (Topic2), local news (Topic3) and president information (Topic5), while Topic4 mostly collects HTML formatting terms.} \end{tabnote} \end{table} \subsection{Result Analysis} To compare the LDA model applied in our project with other models, we intuitively define a correct word list of size $|C|=15$, where each word in the list works as an identifier representing the information of this corpus. \textit{correct list = \{obama, ohio, health, washington, governor, campaign, republican, president, party, supporter, state, committee, democrat, voter, work\}}\\ The target word lists are then defined as subsets of the correct list. \subsubsection{Evaluation measure} We use precision as a measure to evaluate the experimental results. Precision is defined as follows: \begin{equation} precision=\frac{n_{correct}}{n_{total}}\label{eq:Eq.(11)} \end{equation} \begin{figure}[htbp] \centering \includegraphics[scale=0.7]{plot_precision_2} \caption{Precision rate of the LDA model and the TF-IDF model when $|TW|=5$. The precision rate of both models grows as $|TG|$ increases, since a larger $|TG|$ may lead to more matches. Owing to the limited $|TW|$, the precision rate of the LDA model falls below that of the TF-IDF model when $|TG|>8$.} \label{fig:four} \end{figure} \begin{figure}[htbp] \centering \includegraphics[scale=0.7]{plot_precision_1} \caption{Precision rate of the LDA model and the TF-IDF model when $|TW|=10$. The precision rate of both models grows as $|TG|$ increases. The higher precision rate of the LDA model shows that its predictive power is stronger than that of the TF-IDF model when $|TW|$ is large enough.} \label{fig:five} \end{figure} \begin{figure}[htbp] \centering \includegraphics[scale=0.7]{plot_precision_3} \caption{Precision rate of the LDA model and the TF-IDF model when $|TW|=15$. The precision rate of both models grows as $|TG|$ increases. When $|TW|=15$, the precision rate of the LDA model is almost 53.96\% higher than that of the TF-IDF model.} \label{fig:six} \end{figure} For each document with $|T|$ topic word lists, we compare the top $k$ words of each topic word list with the words in the current target word list. If any match is captured, we mark this document as 'correct'. $n_{correct}$ in Eq.(\ref{eq:Eq.(11)}) represents the number of 'correct' documents in the corpus, and $n_{total}$ stands for the total number of documents in the corpus.
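A minimal sketch of this matching procedure (with hypothetical word lists for illustration):
\begin{verbatim}
# Precision computation as described above (hypothetical toy data).
target = {"obama", "ohio", "health"}        # target word list
docs_topic_words = [                        # top words per topic, per document
    [["action", "email", "health"], ["winner", "enter", "guest"]],
    [["display", "color", "none"], ["medium", "screen", "input"]],
]
k = 3
n_correct = sum(
    any(set(words[:k]) & target for words in doc)
    for doc in docs_topic_words
)
precision = n_correct / len(docs_topic_words)
print(precision)   # 0.5 for this toy data
\end{verbatim}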
\subsubsection{Comparison model} We choose the TF-IDF model as a comparison. TF-IDF, short for the Term Frequency-Inverse Document Frequency model, is a numerical statistic representing the relation between a key word and a document, and is one of the major weighting schemes in text mining. It is often used for text classification by ranking a document's relevance given a user query. Assume that $D$ is the collection of documents in this corpus and that $tf(t,d)$ is the raw frequency of a term $t$ in a document $d$; then the expression of this model is: \begin{equation} tfidf(t,d,D)=tf(t,d)\times idf(t,D)\label{eq:Eq.(12)} \end{equation} where $idf(t,D)$ is defined as: \begin{equation} idf(t,D)=\log\frac{|D|}{|\{d\in D:t\in d\}|}\label{eq:Eq.(13)} \end{equation} Therefore, a high TF-IDF weight is reached by a high term frequency together with a low document frequency of the term in the whole collection of documents; the weights hence tend to filter out common terms and select rarer terms as the words that distinguish documents. \subsubsection{Predictive power} Let $|TG|$ denote the size of the target word list, and $|TW|$ the size of the topic word list. We evaluate the predictive power by comparing the precision rates of the LDA model and the TF-IDF model in cases with different $|TW|$. Figures 4, 5 and 6 show the comparison of the LDA model and the TF-IDF model, where the \textit{x}-coordinate represents different values of $|TG|$ and the \textit{y}-coordinate represents the precision rate. \section{Conclusion} In this project, we apply the LDA model to analyze and visualize topic words of emails sent from \textit{Obama.com} to accounts registered in Columbus, Ohio. We use Gibbs sampling to estimate the topic and word distributions. The carbon-free server \textit{Datagreening.com}, the parallel computing tool BashReduce and Python packages are used in data preprocessing. The experimental results show that the MapReduce method in our project efficiently reduces the time cost of the word count program, and that the predictive power of the LDA model applied in our project is clearly stronger than that of the TF-IDF model. Every new document added to this corpus leads to another full sampling process, greatly increasing the time cost. Considering this drawback, possible future work is to sample only the newly added words instead of resampling a newly added document in full. \bibliographystyle{ACM-Reference-Format-Journals}
\section{Proof of Theorem \ref{the:outage-general}} \label{app:outage-general} In order to prove Theorem \ref{the:outage-general}, we adapt some useful results from \cite{Andrews2011Tractable}. Conditioning on the nearest base station being at a distance $r$ from the typical user, the outage probability can be written as: \begin{multline} p_{\text{out}}(\lambda,T,\alpha,S,L,\gamma) = \mathbb{E}_r\Big[ 1 - \mathbb{P}[\mathrm{ln}(1 + \mathrm{SINR}) > T, f_o \in \Delta_{b_o} \mid r] \Big]. \notag \end{multline} Since expectation is a linear operator and these two events are independent, the above expression can be decomposed as: \begin{multline} \label{pro:outage-general:expanded} p_{\text{out}}(\lambda,T,\alpha,S,L,\gamma) = 1 - \underbrace{\mathbb{E}_r\Big[ \mathbb{P}\left[\mathrm{ln}(1 + \mathrm{SINR}) > T \mid r \right] \Big]}_\text{$(i)$} \underbrace{\mathbb{E}_r\Big[ \mathbb{P}\left[f_o \in \Delta_{b_o} \mid r\right] \Big]}_\text{$(ii)$}. \end{multline} Proceeding term by term, we first write $(i)$ as: \begin{flalign} & \mathbb{E}_r\left[\mathbb{P}\left[\mathrm{ln}(1 + \mathrm{SINR}) > T \mid r \right]\right] \notag & \end{flalign} \vspace{-1.1cm} \begin{align} &= \int_{r>0}{ \mathbb{P}\left[\mathrm{ln}(1 + \mathrm{SINR}) > T \mid r \right] f_r(r)\mathrm{d}r } \label{pro:outage-general:firstterm-start} \\ &\stackrel{(a)}{=} \int_{r>0}{ \mathbb{P}\left[\mathrm{ln}(1 + \mathrm{SINR}) > T \mid r \right] e^{-\pi\lambda r^2}2\pi\lambda r\mathrm{d}r } \notag \\ &\stackrel{(b)}{=} \int_{r>0}{ \mathbb{P}\left[\frac{hr^{-\alpha}}{\sigma^2+I_r} > e^T-1 \mid r \right] e^{-\pi\lambda r^2}2\pi\lambda r\mathrm{d}r } \notag \\ &\stackrel{(c)}{=} \int_{r>0}{ \mathbb{P}\left[h > r^{\alpha}(e^T-1)(\sigma^2 + I_r) \mid r \right] e^{-\pi\lambda r^2}2\pi\lambda r \mathrm{d}r }, \label{pro:outage-general:firstterm} \end{align} where $f_r(r) = e^{-\pi\lambda r^2}2\pi\lambda r$ is the \ac{PDF} of $r$ for the \ac{PPP} \cite{Andrews2011Tractable}; hence $(a)$ follows from its substitution. The expression in $(b)$ is obtained by plugging in the \ac{SINR} formula and moving it to the left-hand side of the inequality, and $(c)$ results from algebraic manipulations that isolate the fading variable $h$. Conditioning on $I_r$ and using the fact that $h\sim \mathrm{Exponential(\mu)}$, the probability of the random variable $h$ exceeding $r^{\alpha}(e^T-1)(\sigma^2 + I_r)$ can be written as: \begin{flalign} & \mathbb{P}\left[h > r^{\alpha}(e^T-1)(\sigma^2 + I_r) \mid r \right] \notag & \end{flalign} \vspace{-1.1cm} \begin{align} &= \mathbb{E}_{I_r}\left[ \mathbb{P}\left[h > r^{\alpha}(e^T-1)(\sigma^2 + I_r) \mid r, I_r \right] \right] \notag \\ &= \mathbb{E}_{I_r}\left[ \mathrm{exp}\left(-\mu r^{\alpha}(e^T-1)(\sigma^2 + I_r)\right) \mid r \right] \notag \\ &= e^{-\mu r^{\alpha}(e^T-1)\sigma^2} \mathcal{L}_{I_r}\left(\mu r^{\alpha}(e^T-1) \right), \label{pro:outage-general:h} \end{align} where $\mathcal{L}_{I_r}(s)$ is the Laplace transform of the random variable $I_r$ evaluated at $s$, conditioned on the distance of the nearest base station from the origin. Substituting (\ref{pro:outage-general:h}) into (\ref{pro:outage-general:firstterm}) yields the following: \begin{multline} \label{pro:outage-general:firstterm2} \mathbb{E}_r\left[\mathbb{P}\left[\mathrm{ln}(1 + \mathrm{SINR}) > T \mid r \right]\right] = \\ \int_{r>0}{ e^{-\mu r^{\alpha}(e^T-1)\sigma^2} \mathcal{L}_{I_r}\left(\mu r^{\alpha}(e^T-1) \right) e^{-\pi\lambda r^2}2\pi\lambda r \mathrm{d}r }.
\end{multline} Defining $g_i$ as random variables with an arbitrary but identical distribution for all $i$, and $R_i$ as the distance from the $i$-th base station to the tagged receiver, the Laplace transform is written as: \begin{align} \mathcal{L}_{I_r}(s) &= \mathbb{E}_{I_r}\left[e^{-sI_r}\right] = \mathbb{E}_{\Phi,\{g_i\}} \left[ \mathrm{exp}\left( -s\sum_{i \in \Phi \backslash\{b_o\}}{g_iR^{-\alpha}_i} \right) \right] \notag \\ &= \mathbb{E}_{\Phi,\{g_i\}} \left[ \prod_{i \in \Phi \backslash\{b_o\}}{ \mathrm{exp}\left(-sg_iR^{-\alpha}_i\right)} \right] \notag \\ &\stackrel{(a)}{=} \mathbb{E}_{\Phi} \left[ \prod_{i \in \Phi \backslash\{b_o\}}{ \mathbb{E}_{\{g_i\}}\left[\mathrm{exp}\left(-sg_iR^{-\alpha}_i\right)\right]} \right] \notag \\ &\stackrel{(b)}{=} \mathbb{E}_{\Phi} \left[ \prod_{i \in \Phi \backslash\{b_o\}}{ \mathbb{E}_g\left[\mathrm{exp}\left(-sgR^{-\alpha}_i\right)\right]} \right] \notag \\ &= \mathrm{exp}\left( -2\pi\lambda \int^{\infty}_{r}{\left(1 - \mathbb{E}_{g} \left[ \mathrm{exp}\left(-sgv^{-\alpha}\right) \right] \right)v\mathrm{d}v} \right), \notag \end{align} where $(a)$ comes from the independence of $g_i$ from the point process $\Phi$, and $(b)$ follows from the i.i.d. assumption on the $g_i$. The last step comes from the \ac{PGFL} of the \ac{PPP}, which states that for some function $f(x)$, $\mathbb{E}\left[\prod_{x \in \Phi}{f(x)}\right]=\mathrm{exp}\left(-\lambda\int_{\mathbb{R}^2}{(1 - f(x))\mathrm{d}x} \right)$. Since the nearest interfering base station is at least at a distance $r$, the integration limits run from $r$ to infinity. Denoting by $f(g)$ the \ac{PDF} of $g$, plugging in $s = \mu r^{\alpha}(e^T-1)$ and switching the integration order yields \begin{multline} \mathcal{L}_{I_r}\left(\mu r^{\alpha}(e^T-1) \right) = \\ \mathrm{exp}\left( -2\pi\lambda \int^{\infty}_{0}{ \left( \int^{\infty}_{r}{\left(1 - e^{-\mu r^{\alpha}(e^T - 1)v^{-\alpha}g} \right)v\mathrm{d}v} \right) f(g) \mathrm{d}g } \right). \nonumber \end{multline} By the change of variables $v^{-\alpha} \rightarrow y$, the Laplace transform can be rewritten as: \begin{multline} \label{pro:outage-general:laplace} \small \mathcal{L}_{I_r}\left(\mu r^{\alpha}(e^T-1) \right) = \\ \mathrm{exp}\Big( \lambda\pi r^2 - \frac{2\pi \lambda \left(\mu (e^T - 1)\right)^{\frac{2}{\alpha}}r^2}{\alpha} \times \\ \int^{\infty}_{0}{ g^{\frac{2}{\alpha}} \left[ \Gamma\left(-\frac{2}{\alpha},\mu\left(e^T - 1\right)g\right) - \Gamma\left(-\frac{2}{\alpha}\right) \right] f(g)\mathrm{d}g } \Big). \end{multline} Plugging (\ref{pro:outage-general:laplace}) into (\ref{pro:outage-general:firstterm2}), using the substitution $r^2 \rightarrow v$ and after some algebraic manipulations, the expression becomes \begin{multline} \label{pro:outage-general:i} \mathbb{E}_r\left[\mathbb{P}\left[\mathrm{ln}(1 + \mathrm{SINR}) > T \mid r \right]\right] = \pi\lambda \int^{\infty}_{0}{ e^{-\pi\lambda v\beta(T,\alpha) - \mu(e^T - 1)\sigma^2v^{\alpha/2}}\mathrm{d}v }, \end{multline} where $\beta(T,\alpha)$ is given as \begin{multline} \beta(T,\alpha) = \frac{2\left(\mu(e^T - 1)\right)}{\alpha} \mathbb{E}_g\left[ g^{\frac{2}{\alpha}} \left( \Gamma\left(-\frac{2}{\alpha},\mu\left(e^T - 1\right)g\right) - \Gamma\left(-\frac{2}{\alpha}\right) \right) \right]. \notag \end{multline} So far, we have obtained $(i)$ of (\ref{pro:outage-general:expanded}).
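The \ac{PGFL} step above can also be cross-checked numerically; the following is a minimal Monte Carlo sketch, assuming for concreteness $g \sim \mathrm{Exponential}(\mu)$ (so that $\mathbb{E}_g[e^{-sgv^{-\alpha}}] = \mu/(\mu + sv^{-\alpha})$) and hypothetical parameter values:
\begin{verbatim}
import numpy as np

# Monte Carlo check of the PGFL-based Laplace transform of I_r.
rng = np.random.default_rng(1)
lam, mu, alpha, r, s = 0.1, 1.0, 4.0, 1.0, 0.5
R_max, trials = 50.0, 20000          # truncation radius, MC trials

acc = 0.0
for _ in range(trials):
    n = rng.poisson(lam * np.pi * (R_max**2 - r**2))  # PPP in annulus
    R = np.sqrt(rng.uniform(r**2, R_max**2, size=n))  # radii, pdf ~ v
    g = rng.exponential(1.0 / mu, size=n)             # fading marks
    acc += np.exp(-s * np.sum(g * R**(-alpha)))
mc = acc / trials

# Analytical value from the PGFL with E_g[exp(-s g v^-alpha)].
v = np.linspace(r, R_max, 200000)
integrand = (1.0 - mu / (mu + s * v**(-alpha))) * v
analytic = np.exp(-2.0 * np.pi * lam * np.trapz(integrand, v))
print(mc, analytic)   # the two values should agree closely
\end{verbatim}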
The term $(ii)$ is straightforward to derive. In the system model, since every small base station caches the same popular files and all have the same storage size, the cache hit probability is independent of the distance $r$. This yields: \begin{align} \label{pro:outage-general:ii} \mathbb{E}_r\left[\mathbb{P}\left[f_o \in \Delta_{b_o} \mid r\right]\right] &= \int^{S/L}_{0}{ f_{\mathrm{pop}}(f,\gamma)\mathrm{d}f }. \end{align} Plugging both (\ref{pro:outage-general:i}) and (\ref{pro:outage-general:ii}) into (\ref{pro:outage-general:expanded}) and rearranging the terms, we conclude the proof. \hfill$\blacksquare$ \section{Proof of Theorem \ref{the:delivery-general}} \label{app:delivery-general} The average achievable delivery rate is ${\bar \tau} = \mathbb{E}\left[ \tau \right]$, where the average is taken over the \ac{PPP} and the fading distribution. It can be shown that \begin{align} {\bar \tau} &= \mathbb{E}\left[ \tau \right] \notag \\ &\stackrel{(a)}{=} \mathbb{E} \Big[ {\mathbb{P}\left[\mathrm{ln}(1 + \mathrm{SINR}) > T\right]} \Big( T\mathbb{P}\left[f_o \in \Delta_{b_o}\right] + C\left(\lambda \right)\mathbb{P}\left[f_o \not\in \Delta_{b_o}\right] \Big) \Big] \notag \end{align} \vspace{-1.0cm} \begin{multline} \hspace{0.20cm} \stackrel{(b)}{=} \mathbb{E}\left[ \underbrace{\mathbb{P}\left[\mathrm{ln}(1 + \mathrm{SINR}) > T \mid r \right]}_\text{$\tau_1$} \right] \times \\ \left( \mathbb{E}\left[ \underbrace{T\mathbb{P}\left[f_o \in \Delta_{b_o} \mid r \right]}_\text{$\tau_2$} \right] + \mathbb{E}\left[ \underbrace{C\left(\lambda \right)\mathbb{P}\left[f_o \not\in \Delta_{b_o} \mid r \right]}_\text{$\tau_3$} \right] \right) \notag \end{multline} \vspace{-1.0cm} \begin{flalign} \hspace{1.10cm} &= \mathbb{E}\left[\tau_1\right] \left( \mathbb{E}\left[\tau_2\right] + \mathbb{E}\left[\tau_3\right] \right), \label{pro:delivery-general:first} & \end{flalign} where $(a)$ is obtained by plugging in the delivery rate as defined in (\ref{eq:deliveryrate}), and $(b)$ follows from the independence of the events and the linearity of the expectation operator. The derivation of $\mathbb{E}[\tau_1]$ can be obtained from the proof of Theorem \ref{the:outage-general}, by following the steps from (\ref{pro:outage-general:firstterm-start}) to (\ref{pro:outage-general:i}). On the other hand, since the cache hit probability is independent of $r$, $\mathbb{E}_r[\tau_2]$ can be expressed as \begin{align} \mathbb{E}_r[\tau_2] &= T \int^{S/L}_{0}{ f_{\mathrm{pop}}(f,\gamma)\mathrm{d}f }. \notag \end{align} Using similar arguments, $\mathbb{E}_r[\tau_3]$ is written as: \begin{align} \mathbb{E}_r[\tau_3] &= C(\lambda)\left( 1 - \int^{S/L}_{0}{ f_{\mathrm{pop}}(f,\gamma)\mathrm{d}f } \right). \notag \end{align} Substituting these expressions into (\ref{pro:delivery-general:first}) concludes the proof.\hfill$\blacksquare$ \section{Proof of Proposition \ref{the:outage-special}} \label{app:outage-special} Since Proposition \ref{the:outage-special} is a special case of Theorem \ref{the:outage-general}, we follow similar steps. We first rewrite (\ref{pro:outage-general:expanded}) as: \begin{multline} \label{pro:outage-special:expanded} p_{\text{out}}(\lambda,T,\alpha,S,L,\gamma) = 1 - \underbrace{\mathbb{E}_r\Big[ \mathbb{P}\left[\mathrm{ln}(1 + \mathrm{SINR}) > T \mid r \right] \Big]}_\text{$(i)$} \underbrace{\mathbb{E}_r\Big[ \mathbb{P}\left[f_o \in \Delta_{b_o} \mid r\right] \Big]}_\text{$(ii)$}.
\end{multline} To evaluate $(i)$, we follow the proof of Theorem \ref{the:outage-general} from (\ref{pro:outage-general:firstterm-start}) to (\ref{pro:outage-general:firstterm2}). The Laplace transform is then written as \begin{align} \mathcal{L}_{I_r}(s) &= \mathbb{E}_{\Phi} \left[ \prod_{i \in \Phi \backslash\{b_o\}}{ \mathbb{E}_{g}\left[\mathrm{exp}\left(-sgR^{-\alpha}_i\right)\right]} \right] \notag \\ &\stackrel{(a)}{=} \mathbb{E}_{\Phi} \left[ \prod_{i \in \Phi \backslash\{b_o\}}{\frac{\mu}{\mu + sR_i^{-\alpha}}} \right] \notag \\ &= \mathrm{exp}\left( -2\pi\lambda \int^{\infty}_{r}{\left(1 - \frac{\mu}{\mu + sv^{-\alpha}} \right)v\mathrm{d}v} \right), \label{pro:outage-special:laplace0} \end{align} where $(a)$ comes from the additional assumption that $g \sim \mathrm{Exponential}(\mu)$. Then, plugging in $s = \mu r^{\alpha}\left( e^T - 1 \right)$ yields: \begin{align} \mathcal{L}_{I_r}\left(\mu r^{\alpha}\left( e^T - 1 \right)\right) &= \mathrm{exp}\left( -2\pi\lambda \int^{\infty}_{r}{ \frac{e^T - 1}{e^T - 1 + (\frac{v}{r})^{\alpha}} v\mathrm{d}v } \right). \notag \end{align} Using the change of variables $u = \left( \frac{v}{r(e^T-1)^{1/\alpha}}\right)^2$ results in \begin{align} \label{pro:outage-special:laplace} \mathcal{L}_{I_r}\left(\mu r^{\alpha}\left( e^T - 1 \right)\right) &= \mathrm{exp}\left( -\pi r^2\lambda\rho(T,\alpha) \right), \end{align} where \begin{align} \rho(T,\alpha) &= (e^T - 1)^{2/\alpha} \int^{\infty}_{(e^T - 1)^{-2/\alpha}}{ \frac{1}{1 + u^{\alpha/2}} \mathrm{d}u }. \notag \end{align} Substituting (\ref{pro:outage-special:laplace}) into (\ref{pro:outage-general:firstterm2}) with $r^2 \rightarrow v$ gives \begin{align} \label{pro:outage-special:laplaceFinal} \pi\lambda\int_{0}^{\infty}{e^{-\pi \lambda v(1 + \rho(T,\alpha)) - \mu(e^T-1)\sigma^2v^{\alpha/2}}\mathrm{d}v}. \end{align} Since $\alpha = 4$ in our special case, (\ref{pro:outage-special:laplaceFinal}) simplifies to \begin{align} \label{pro:outage-special:alpha4} \pi\lambda\int_{0}^{\infty}{e^{-\pi \lambda v(1 + \rho(T,4)) - \mu(e^T-1)\sigma^2v^{2}}\mathrm{d}v}, \end{align} where \begin{align} \rho(T,4) &= (e^T - 1)^{1/2} \int^{\infty}_{(e^T - 1)^{-1/2}}{ \frac{1}{1 + u^{2}} \mathrm{d}u } \notag \\ &= (e^T - 1)^{1/2}\left(\frac{\pi}{2} - \mathrm{arctan}\left( (e^T-1)^{-1/2} \right) \right) \notag \\ &= \sqrt{e^T - 1}\left(\frac{\pi}{2} - \mathrm{arctan}\left(\frac{1}{\sqrt{e^T-1}}\right) \right). \notag \end{align} From this point, (\ref{pro:outage-special:alpha4}) can be further simplified, since it has a form similar to: \begin{align} \int_{0}^{\infty}{e^{-ax}e^{-bx^2}\mathrm{d}x} &= \sqrt{\frac{\pi}{b}} \mathrm{exp}\left( \frac{a^2}{4b} \right) Q\left( \frac{a}{\sqrt{2b} }\right), \notag \end{align} where $Q\left(x\right) = \frac{1}{\sqrt{2\pi}}\int_{x}^{\infty}{e^{-y^2/2}\mathrm{d}y}$ is the standard Gaussian tail probability. Setting $a = \pi\lambda(1 + \rho(T,4))$ and $b = \mu(e^T - 1)\sigma^2 = (e^T - 1)/\mathrm{SNR}$ gives \begin{align} \label{pro:outage-special:i} \frac{\pi^{\frac{3}{2}}\lambda}{\sqrt{\frac{e^T-1}{\mathrm{SNR}}}} \mathrm{exp} \left( \frac{\left(\lambda\pi(1 + \rho(T,4))\right)^2}{4(e^T-1)/\mathrm{SNR}} \right) Q \left( \frac{\lambda\pi(1 + \rho(T,4))}{\sqrt{2(e^T-1)/\mathrm{SNR}}} \right). \end{align} This is the final expression for $(i)$ of (\ref{pro:outage-special:expanded}).
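As a numerical sanity check of this closed form, the following minimal sketch (with hypothetical values for $\lambda$, $T$ and $\mathrm{SNR}$) evaluates $(i)$ via the Gaussian tail probability and cross-checks $\rho(T,4)$ against its integral definition:
\begin{verbatim}
import numpy as np
from scipy.stats import norm
from scipy.integrate import quad

lam, T, SNR = 0.25, 0.5, 10.0        # hypothetical parameter values

x = np.exp(T) - 1.0
rho = np.sqrt(x) * (np.pi / 2.0 - np.arctan(1.0 / np.sqrt(x)))

# Cross-check rho(T,4) against its integral definition.
rho_int, _ = quad(lambda u: 1.0 / (1.0 + u**2), x**(-0.5), np.inf)
rho_int *= np.sqrt(x)
assert abs(rho - rho_int) < 1e-9

a = np.pi * lam * (1.0 + rho)
b = x / SNR
Q = norm.sf                          # Gaussian tail probability Q(x)
term_i = (np.pi**1.5 * lam / np.sqrt(b)
          * np.exp(a**2 / (4.0 * b)) * Q(a / np.sqrt(2.0 * b)))
print(term_i)                        # probability of the SINR event
\end{verbatim}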
The term $(ii)$ of (\ref{pro:outage-special:expanded}) can be obtained by using arguments similar to those given for (\ref{pro:outage-general:ii}) in the proof of Theorem \ref{the:outage-general}, namely that the cache hit probability is independent of the distance $r$. Thus: \begin{align} \label{pro:outage-special:ii} \mathbb{E}_r\left[\mathbb{P}\left[f_o \in \Delta_{b_o} \mid r\right]\right] &= \int^{S/L}_{0}{ f_{\mathrm{pop}}\left(f,\gamma\right) \mathrm{d}f } \notag \\ &\stackrel{(a)}{=} \int^{1 + S/L}_{1}{ \left(\gamma - 1\right)f^{-\gamma} \mathrm{d}f } \notag \\ &= 1 - \left(\frac{L}{L+S}\right)^{\gamma - 1}, \end{align} where $(a)$ follows from plugging in the definition of $f_{\mathrm{pop}}(f,\gamma)$ given in Assumption \ref{ass:special} and changing the integration limits accordingly. The last line is the result of the integral. Therefore, we conclude the proof by plugging (\ref{pro:outage-special:i}) and (\ref{pro:outage-special:ii}) into (\ref{pro:outage-special:expanded}). \hfill$\blacksquare$ \section{Proof of Proposition \ref{the:delivery-special}} \label{app:delivery-special} The proposition is a special case of Theorem \ref{the:delivery-general}, thus we follow similar steps. We start by rewriting (\ref{pro:delivery-general:first}) as: \begin{multline} {\bar \tau} = \mathbb{E}\left[ \underbrace{\mathbb{P}\left[\mathrm{ln}(1 + \mathrm{SINR}) > T \mid r \right]}_\text{$\tau_1$} \right] \times \\ \left( \mathbb{E}\left[ \underbrace{T\mathbb{P}\left[f_o \in \Delta_{b_o} \mid r \right]}_\text{$\tau_2$} \right] + \mathbb{E}\left[ \underbrace{C\left(\lambda \right)\mathbb{P}\left[f_o \not\in \Delta_{b_o} \mid r \right]}_\text{$\tau_3$} \right] \right) \notag \end{multline} \begin{flalign} & \hspace{0.46cm} = \mathbb{E}\left[\tau_1\right] \left( \mathbb{E}\left[\tau_2\right] + \mathbb{E}\left[\tau_3\right] \right). \label{pro:delivery-special:expanded} & \end{flalign} In this expression, the term $\mathbb{E}\left[\tau_1\right]$ can be obtained from the proof of Proposition \ref{the:outage-special}. More precisely, observe that $\mathbb{E}\left[\tau_1\right]$ is identical to $(i)$ of (\ref{pro:outage-special:expanded}). Thus, following the steps from (\ref{pro:outage-special:laplace0}) to (\ref{pro:outage-special:i}), we obtain \begin{align} \mathbb{E}\left[\tau_1\right] &= \mathbb{E}\Big[ \mathbb{P}\left[\mathrm{ln}(1 + \mathrm{SINR}) > T \mid r \right] \Big] \notag \\ &= \frac{\pi^{\frac{3}{2}}\lambda}{\sqrt{\frac{e^T-1}{\mathrm{SNR}}}} \mathrm{exp} \left( \frac{\left(\lambda\pi(1 + \rho(T,4))\right)^2}{4(e^T-1)/\mathrm{SNR}} \right) Q \left( \frac{\lambda\pi(1 + \rho(T,4))}{\sqrt{2(e^T-1)/\mathrm{SNR}}} \right). \label{pro:delivery-special:first} \end{align} On the other hand, $\mathbb{E}\left[\tau_2\right]$ can be obtained by taking $T$ out of the expectation and plugging (\ref{pro:outage-special:ii}) into the formula, i.e. \begin{align} \mathbb{E}\left[\tau_2\right] &= \mathbb{E}\left[ T\mathbb{P}\left[f_o \in \Delta_{b_o} \mid r \right] \right] \notag \\ &= T\left(1 - \left(\frac{L}{L+S}\right)^{\gamma - 1}\right).
\label{pro:delivery-special:second} \end{align} Finally, $\mathbb{E}\left[\tau_3\right]$ is easy to derive as \begin{align} \mathbb{E}\left[\tau_3\right] &= \mathbb{E}\left[ C\left(\lambda \right)\mathbb{P}\left[f_o \not\in \Delta_{b_o} \mid r \right] \right] \notag \\ &= C\left(\lambda \right)\left(\frac{L}{L+S}\right)^{\gamma - 1} \notag \\ &= \left(\frac{C_1}{\lambda} + C_2\right) \left(\frac{L}{L+S}\right)^{\gamma - 1}, \label{pro:delivery-special:third} \end{align} where the definition of $C(\lambda)$ follows from Assumption \ref{ass:special}. Substituting (\ref{pro:delivery-special:first}), (\ref{pro:delivery-special:second}) and (\ref{pro:delivery-special:third}) into (\ref{pro:delivery-special:expanded}) concludes the proof. \hfill$\blacksquare$ \section{Introduction} Increasing traffic demand from mobile users, driven by rich media applications, video streaming and social networks \cite{Cisco2014}, is pushing mobile operators to evolve their cellular networks continuously (see \ac{LTE} \cite{3GPPRelease13}). \Glspl{SCN} \cite{Hoydis2011Green, Quek2013Small} and their integration with WiFi \cite{Mehdi2013When}, \glspl{HetNet} \cite{Andrews2013Seven}, together with many other ideas from both industry and academia, have now started to be deployed and integrated into current cellular networks. In Europe, projects such as NewCom\# \cite{NewcomSharp} in the 7th Framework Program of the European Commission are focusing on the design of next generation cellular networks, and a new framework, called Horizon 2020 \cite{Horizon2020}, is being launched to support these efforts. At the same time, content providers are moving their users' content to intermediate nodes in the network, i.e., caching it, which yields lower access delays. \Glspl{CDN} such as Akamai \cite{Nygren2010Akamai} serve that purpose. In this context, \glspl{ICN} are emerging \cite{Ahlgren2012Survey}. Mixing these infrastructural concepts with cellular networks is also of interest \cite{Spagna2013TelcoCDN}\cite{Wanf2014CacheInTheAir}. Predicting users' behavior and proactively caching their content at the edge of the network, namely in base stations and user terminals, has also been shown to yield further gains in terms of backhaul savings and user satisfaction \cite{Bastug2014LivingOnTheEdge}. Even though the idea of caching in mobile cellular networks is somewhat recent, the origin of caching dates back to the 60s, when caching mechanisms were proposed to boost the performance of operating systems \cite{Belady1966Study}. Additionally, in past decades, many web caching schemes such as \cite{Borst2010Distributed} have appeared to sustain the data flow of the Internet. In the context of mobile cellular networks, there have been recent attempts at designing intelligent caching schemes that take into account the wireless environment of mobile cellular networks. Due to the notorious intractability of the problem, these proposals are mainly based on approximate or heuristic solutions \cite{Bastug2013Proactive}\cite{Poularakis2014Multicast}\cite{Blasco2014LearningBased}. Besides these solutions, novel formulations and system models have been proposed to assess the performance of caching. For instance, an information-theoretic formulation of the caching problem is studied in \cite{Maddah2013Fundamental}.
The expected cost of uncoded and coded data allocation strategies is given in \cite{Altman2013Coding}, where stochastically distributed cache-enabled nodes in a given area are assumed and the cost is defined as a function of distance. A game-theoretic formulation of the caching problem as a many-to-many game is studied in \cite{Hamidouche2014Many}, taking into account data dissemination in social networks. The performance of caching in wireless \ac{D2D} networks is studied in \cite{Ji2014GridD2D} in a scenario where nodes are placed on a grid and cache the content randomly. An alternative \ac{D2D} caching scenario with randomly located nodes is given in \cite{Altieri2014StoGeoD2D}, where relevant tradeoff curves are derived. The contribution of this work is to formulate the caching problem in a scenario where stochastically distributed \glspl{SBS} are equipped with storage units but have limited backhaul capacity. In particular, we build on a tractable system model and define its performance metrics (outage probability and average delivery rate) as functions of the \ac{SINR}, the number of \glspl{SBS}, the target file bitrate, the storage size, the file length and the file popularity distribution. By coupling the caching problem with the \ac{PHY} in this way and relying on recent results from \cite{Andrews2011Tractable}, we show that a given outage probability can be achieved either by 1) increasing the number of \glspl{SBS} while the total storage size budget is fixed, or 2) increasing the total storage size while the number of \glspl{SBS} is fixed. To the best of our knowledge, our work differs from the aforementioned works in that it studies deployment aspects of cache-enabled \glspl{SBS}. A similar line of work in terms of analysis with stochastic geometry tools can be found in \cite{Altieri2014StoGeoD2D, Altman2013Coding}. However, the system model and performance metrics are different from those studied here.\footnote{Additionally, the related work \cite{Blaszczyszyn2014Geographic} was made public after the submission of this work.} The rest of this paper is structured as follows. We describe our system model in Section \ref{sec:systemmodel}. The performance metrics and main results are given in Section \ref{sec:permain}. In the same section, much simpler expressions are obtained by making specific assumptions on the system model. We validate these results via numerical simulations in Section \ref{sec:validation} and discuss the impact of the parameters on the performance metrics. A tradeoff between the number of deployed \glspl{SBS} and the total storage size is then given in Section \ref{sec:davidvsgoliath}. Finally, our conclusions and future perspectives are given in Section \ref{sec:conclusions}.\footnote{Compared to \cite{Bastug2014StoGeo}, this work contains a more comprehensive mathematical treatment, proofs and the trade-off analysis conducted in Section \ref{sec:davidvsgoliath}.} \section{System Model} \label{sec:systemmodel} The cellular network under consideration consists of \glspl{SBS} whose locations are modeled according to a \ac{PPP} $\Phi$ with density $\lambda$. The broadband connection to these \glspl{SBS} is provided by a \ac{CS} via wired backhaul links. We assume that the total broadband capacity is finite and fixed, thus the backhaul link capacity of each \ac{SBS} is a decreasing function of $\lambda$. In practice, this means that deploying more \glspl{SBS} in a given area amounts to sharing the total broadband capacity among more backhaul links.
We will define this function more precisely in the next sections. We suppose that every \ac{SBS} has a storage unit with capacity $S$ nats (1 bit = $\text{ln}(2) = 0.693$ nats), in which it caches the users' most popular files from a given catalog. Each file in the catalog has a length of $L$ nats and a bitrate requirement of $T$ nats/sec/Hz. We note that the assumption of equal file lengths is made for ease of analysis; alternatively, the files in the catalog can be divided into chunks of the same length. The file popularity distribution of this catalog is a right-continuous and monotonically decreasing \ac{PDF}, denoted as $f_{\mathrm{pop}}(f,\gamma)$. The parameter $f$ here corresponds to a point in the support of the distribution (i.e., a file index) and $\gamma$ is the shape parameter of the distribution. We assume that this distribution is identical among all users. Every user equipped with a mobile user terminal is associated with the nearest \ac{SBS}, so that its location falls within a cell of a Poisson-Voronoi tessellation of the plane. In this model, we only consider the downlink transmission; the overhead due to the users' file requests via the uplink is neglected. In the downlink transmission, a tagged \ac{SBS} transmits with the constant transmit power $1/\mu$ Watts, and the standard unbounded power-law pathloss propagation model with exponent $\alpha > 2$ is used for the environment. The link between the tagged \ac{SBS} and the tagged user experiences Rayleigh fading. Hence, the received power at the tagged user, located $r$ meters away from its tagged \ac{SBS}, is given by $hr^{-\alpha}$, where the random variable $h$ follows an exponential distribution with mean $1/\mu$, represented as $h \sim \mathrm{Exponential}(\mu)$. Once users are associated with their closest \glspl{SBS}, we assume that they request files (or chunks) randomly according to the file popularity distribution $f_{\mathrm{pop}}(f,\gamma)$. When the requests reach the \glspl{SBS} via the uplink, the users are served immediately, either by fetching the file from the Internet via the backhaul or by serving it from the local cache, depending on its availability therein. If a requested file is available in the local cache of the \ac{SBS}, a \emph{cache hit} event occurs; otherwise, a \emph{cache miss} event is said to occur. A sketch of the network model described so far is given in Figure \ref{fig:systemmodel}. \begin{figure}[!ht] \centering \includegraphics[width=0.96\textwidth]{scenario.pdf} \caption{An illustration of the considered network model. The top right side of the figure shows a snapshot of the \ac{PPP} per unit area where the \glspl{SBS} are randomly located. A closer look at the communication structure of a cache-enabled \ac{SBS} is shown in the main figure.} \label{fig:systemmodel} \end{figure} In general, the performance of our system depends on several factors. To meet the \ac{QoE} requirements, the downlink rate provided to the requesting user has to be equal to or higher than the file bitrate $T$, so that the user does not observe any interruption during its experience. Even when this requirement is met in the downlink, another bottleneck can be the backhaul rate in the case of cache misses. In the following, we define our performance metrics, which take the aforementioned situations into account, and then present our main results in the same section.
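To make the setup concrete, the following minimal Monte Carlo sketch simulates one snapshot of this model; all parameter values are hypothetical, and a shifted power-law density is used only as an illustrative stand-in for $f_{\mathrm{pop}}(f,\gamma)$:
\begin{verbatim}
import numpy as np

# One snapshot of the network model (hypothetical parameter values).
rng = np.random.default_rng(2)
lam, alpha, mu = 0.05, 4.0, 1.0    # SBS density, pathloss, fading rate
S, L, gamma = 2.0, 1.0, 1.5        # storage, file length, popularity shape
noise, side = 1e-4, 100.0          # noise power, square side (user at centre)

# Drop SBSs as a PPP and associate the user with the nearest one.
n = rng.poisson(lam * side**2)
xy = rng.uniform(-side / 2, side / 2, size=(n, 2))
d = np.hypot(xy[:, 0], xy[:, 1])
i0 = np.argmin(d)

# SINR at the typical user, with Rayleigh fading on every link.
h = rng.exponential(1.0 / mu, size=n)
rx = h * d**(-alpha)
sinr = rx[i0] / (noise + rx.sum() - rx[i0])

# Cache hit: request drawn from the illustrative popularity density
# f_pop(f) = (gamma - 1) * f^(-gamma) on [1, inf); hit if f <= 1 + S/L.
f = (1.0 - rng.uniform()) ** (-1.0 / (gamma - 1.0))   # inverse-CDF sample
hit = f <= 1.0 + S / L
print(np.log1p(sinr), hit)         # downlink rate (nats/sec/Hz), cache hit
\end{verbatim}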
\section{Performance Metrics and Main Results} \label{sec:permain} The performance metrics of interest in our system model are the \emph{outage probability} and the \emph{average delivery rate}. We start by defining these metrics for the downlink. From now on, without loss of generality, we refer to user $o$ as the \emph{typical} user, located at the origin of the plane. We know that the downlink rate depends on the \ac{SINR}. The \ac{SINR} of user $o$, located at a random distance $r$ from its \ac{SBS} $b_o$, is given by: \begin{eqnarray} \label{eq:SINR} \textrm{SINR} &\triangleq \frac{hr^{-\alpha}}{\sigma^2 + I_r}, \end{eqnarray} where \begin{eqnarray} \label{eq:I_r} I_r &\triangleq \sum_{i \in \Phi / b_o}{g_i{R^{-\alpha}_i}}, \end{eqnarray} is the total interference experienced from all other \glspl{SBS} except the serving \ac{SBS} $b_o$. Define the \emph{success probability} as the joint probability that the downlink rate exceeds the file bitrate $T$ and that the requested file is in the local cache. Then, the outage probability is the complement of the success probability: \begin{align} p_{\text{out}}(\lambda,T,\alpha,S, L, \gamma) &\triangleq 1 - \underbrace{\mathbb{P}\Big[\mathrm{ln}(1 + \mathrm{SINR}) > T, f_o \in \Delta_{b_o}\Big]}_\text{success probability}, \end{align} where $f_o$ is the file requested by the typical user, and $\Delta_{b_o}$ is the local cache of the serving \ac{SBS} $b_o$. Indeed, such a definition of the outage probability comes from a simple observation. Ideally, if a requested file is in the cache of the serving \ac{SBS} (thus the limited backhaul is not used) and if the downlink rate is higher than the file bitrate $T$ (thus the user does not observe any interruption during the playback of the file), we expect the outage probability to be close to zero. Given this explanation and the assumptions made in the previous section, we state the following theorem for the outage probability. \begin{theorem}[Outage probability] \label{the:outage-general} The outage probability of the typical user served by its tagged base station can be expressed as: \begin{multline} p_{\mathrm{out}}(\lambda,T,\alpha,S, L, \gamma) = 1 - \pi\lambda \int^{\infty}_{0} \int^{S/L}_{0} \\ e^{-\pi\lambda v\beta(T,\alpha) - \mu(e^T - 1)\sigma^2v^{\alpha/2}} f_{\mathrm{pop}}(f,\gamma) \,\mathrm{d}f \,\mathrm{d}v, \end{multline} where $\beta(T,\alpha)$ is given by: \begin{multline} \beta(T,\alpha) = \frac{2\left(\mu(e^T - 1)\right)^{\frac{2}{\alpha}}}{\alpha} \times \\ \mathbb{E}_g\left[ g^{\frac{2}{\alpha}} \left( \Gamma\left(-\frac{2}{\alpha},\mu\left(e^T - 1\right)g\right) - \Gamma\left(-\frac{2}{\alpha}\right) \right) \right], \end{multline} where $\Gamma(a,x) = \int^{\infty}_{x}{t^{a-1}e^{-t}\mathrm{d}t}$ is the upper incomplete Gamma function and $\Gamma(x) = \int^{\infty}_{0}{t^{x-1}e^{-t}\mathrm{d}t}$ is the Gamma function. \end{theorem} \begin{proof} The proof is provided in Appendix \ref{app:outage-general}.
\end{proof} Yet another useful metric in our system model is the delivery rate, which we define as follows: \begin{align} \label{eq:deliveryrate} \tau &\triangleq \begin{cases} T , & \text{if } \mathrm{ln}(1 + \mathrm{SINR}) > T \mathrm{\;and\;} f_o \in \Delta_{b_o}, \\ C(\lambda) , & \text{if } \mathrm{ln}(1 + \mathrm{SINR}) > T \mathrm{\;and\;} f_o \not\in \Delta_{b_o}, \\ 0, & \text{otherwise}, \end{cases} \hspace{1.4cm} \text{nats/sec/Hz} \end{align} where $C(\lambda)$ is the backhaul capacity provided to the \ac{SBS} for a single frequency in the downlink.\footnote{Without loss of generality, more realistic values of the delivery rate can be obtained by making a proper \ac{SINR} gap approximation and considering the total wireless bandwidth instead of $1$ Hz.} The definition above can be explained as follows. If the downlink rate is higher than the threshold $T$ (namely the bitrate of the requested file) and the requested file is available in the local cache, the rate $T$ is dedicated to the user by the tagged \ac{SBS}, which in turn is sufficient for the \ac{QoE}. On the other hand, if the downlink rate is higher than $T$ but the requested file does not exist in the local cache of the tagged \ac{SBS}, the delivery rate is limited by the backhaul link capacity $C(\lambda)$, for which we assume that $C(\lambda) < T$. Given this definition of the delivery rate, we state the following theorem. \begin{theorem}[Average delivery rate] \label{the:delivery-general} The average delivery rate of the typical user served by its tagged base station can be expressed as: \begin{multline} {\bar \tau}(\lambda,T,\alpha,S, L, \gamma) = \pi\lambda \int^{\infty}_{0}{ e^{-\pi\lambda v\beta(T,\alpha) - \mu(e^T - 1)\sigma^2v^{\alpha/2}}\mathrm{d}v } \times \\ \left( C(\lambda) + (T - C(\lambda)) \int^{S/L}_{0}{ f_{\mathrm{pop}}(f,\gamma)\mathrm{d}f } \right), \end{multline} where $\beta(T,\alpha)$ has the same definition as in Theorem \ref{the:outage-general}. \end{theorem} \begin{proof} The proof is deferred to Appendix \ref{app:delivery-general}. \end{proof} The results provided above are general. The exact values of the outage probability and average delivery rate can be obtained by specifying the distribution of the interference, the backhaul link capacity $C(\lambda)$ and the file popularity distribution $f_{\mathrm{pop}}(f,\gamma)$. If this treatment does not yield closed-form expressions, numerical integration can be used as a last resort for evaluating the functions. In the next section, as an example, we derive special cases of these results under some specific assumptions, which in turn yield much simpler expressions. \subsection{Special Cases} \label{sec:specialcases} \begin{assumption} \label{ass:special} The following assumptions are made for the system model: \begin{enumerate} \item The noise power $\sigma^2$ is greater than $0$, and the pathloss exponent $\alpha$ is $4$. \item Interfering signals experience Rayleigh fading, so that $g_i \sim \mathrm{Exponential}(\mu)$. \item The capacity of the backhaul links is given by: \begin{equation} C\left(\lambda\right) \triangleq \frac{C_1}{\lambda} + C_2, \end{equation} where $C_1 > 0$ and $C_2 \geq 0$ are some arbitrary coefficients such that $C(\lambda) < T$ holds.
\item The file popularity distribution of users is characterized by a power law \cite{Newman2005Power}: \begin{align} f_{\mathrm{pop}}\left(f,\gamma\right) &\triangleq \begin{cases} \left(\gamma - 1\right)f^{-\gamma}, & f \geq 1, \\ 0, & f < 1, \end{cases} \end{align} where $\gamma > 1$ is the shape parameter of the distribution. \end{enumerate} \end{assumption} The assumption $C(\lambda) < T$ comes from the observation that high-speed fiber-optic backhaul links might be very costly in densely deployed \ac{SBS} scenarios. Therefore, we assume that $C(\lambda)$ is lower than the file bitrate. On the other hand, we characterize the file popularity distribution with a power law. Indeed, this comes from the observation that many real-world phenomena can be characterized by power laws (e.g., the distribution of files in web proxies or the distribution of word counts in natural languages) \cite{Newman2005Power}. According to our system model and the specific assumptions made in Assumption \ref{ass:special}, we state the following results. \begin{proposition}[Outage probability] \label{the:outage-special} The outage probability of the typical user served by its tagged base station can be expressed as: \begin{multline} p_{\mathrm{out}}(\lambda,T,4,S, L, \gamma) = 1 - \frac{\pi^{\frac{3}{2}}\lambda}{\sqrt{\frac{e^T-1}{\mathrm{SNR}}}} \mathrm{exp} \left( \frac{\left(\lambda\pi(1 + \rho(T,4))\right)^2}{4(e^T-1)/\mathrm{SNR}} \right) \times \\ Q \left( \frac{\lambda\pi(1 + \rho(T,4))}{\sqrt{2(e^T-1)/\mathrm{SNR}}} \right) \left(1 - \left(\frac{L}{L+S}\right)^{\gamma - 1}\right), \end{multline} where $\rho(T,4) = \sqrt{e^T - 1}\left(\frac{\pi}{2} - \mathrm{arctan}\left(\frac{1}{\sqrt{e^T-1}}\right) \right)$ and the standard Gaussian tail probability is given as $Q\left(x\right) = \frac{1}{\sqrt{2\pi}}\int_{x}^{\infty}{e^{-y^2/2}\mathrm{d}y}$. \end{proposition} \begin{proof} The proof is given in Appendix \ref{app:outage-special}. \end{proof} \begin{proposition}[Average delivery rate] \label{the:delivery-special} The average delivery rate of the typical user served by its tagged base station can be expressed as: \begin{multline} {\bar \tau}(\lambda,T,4,S, L, \gamma) = \frac{\pi^{\frac{3}{2}}\lambda}{\sqrt{\frac{e^T-1}{\mathrm{SNR}}}} \mathrm{exp} \left( \frac{\left(\lambda\pi(1 + \rho(T,4))\right)^2}{4(e^T-1)/\mathrm{SNR}} \right) \times \\ Q \left( \frac{\lambda\pi(1 + \rho(T,4))}{\sqrt{2(e^T-1)/\mathrm{SNR}}} \right) \left(T + \left(\frac{C_1}{\lambda} + C_2 - T\right)\left(\frac{L}{L+S}\right)^{\gamma - 1}\right), \end{multline} where $\rho(T,4)$ and $Q\left(x\right)$ have the same definitions as in Proposition \ref{the:outage-special}. \end{proposition} \begin{proof} The proof is given in Appendix \ref{app:delivery-special}. \end{proof} The expressions obtained for the special cases are cumbersome but fairly easy to compute and do not require any integration. Note that the $Q\left(x\right)$ function appearing in these expressions is well known and can be computed using lookup tables or standard numerical packages. \section{Validation of the Proposed Model} \label{sec:validation} So far we have provided the results for the outage probability and average delivery rate. In this section, we validate these results via Monte Carlo simulations. The numerical results shown here are obtained by averaging over $1000$ realizations. In each realization, the \glspl{SBS} are distributed according to a \ac{PPP}.
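A minimal sketch of one such realization is given below. All parameter values are hypothetical, $\alpha=4$ and unit bandwidth are assumed, and the cache is taken to hold the most popular files up to index $1+S/L$, which is consistent with the hit probability $1 - \left(L/(L+S)\right)^{\gamma-1}$ appearing in Propositions \ref{the:outage-special} and \ref{the:delivery-special}; the finite window truncates far interferers, which is negligible for $\alpha=4$.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(1)
lam, R, alpha, mu = 0.5, 20.0, 4.0, 1.0      # density, window, pathloss, fading
sigma2, T, S, L, gam = 0.01, 0.5, 2.0, 1.0, 2.0

def one_realization():
    n = max(rng.poisson(lam * np.pi * R**2), 2)
    r = np.sort(R * np.sqrt(rng.uniform(size=n)))    # SBS distances to the user
    h = rng.exponential(1.0 / mu)                    # signal fading power
    g = rng.exponential(1.0 / mu, size=n - 1)        # interferers' fading powers
    sinr = h * r[0]**(-alpha) / (sigma2 + np.sum(g * r[1:]**(-alpha)))
    f = (1.0 - rng.uniform()) ** (-1.0 / (gam - 1.0))  # power-law file index
    hit = f <= 1.0 + S / L                           # most popular files cached
    return (np.log1p(sinr) > T) and hit              # success indicator

print(1.0 - np.mean([one_realization() for _ in range(10000)]))  # outage estimate
\end{verbatim}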
The file requests, signal and interfering powers of the typical user are drawn randomly according to the corresponding probability distributions. The outage probability and average delivery rate are then calculated from the \ac{SINR} and cache hit statistics. The simulation curves closely match the theoretical ones; the slight residual mismatch is due to the fact that a finer discretization of the continuous variables was avoided to keep simulation times affordable. As alluded to previously, the target file bitrate as well as the average delivery rate are in units of nats/sec/Hz, whereas the storage size and file lengths are in units of nats. \subsection{Impact of storage size} The storage size of the \glspl{SBS} is one critical parameter in our system model. The effect of the storage size on the outage probability and the average delivery rate is plotted in Figures \ref{fig:plots-storage-outage} and \ref{fig:plots-storage-delivery}, respectively. Each curve represents a different value of the target file bitrate. We observe that the outage probability decreases, whereas the average delivery rate increases, as we increase the storage size. Such behavior, observed in both the theoretical and simulation curves, confirms our initial intuition. \input{plots-storage-outage} \input{plots-storage-delivery} \subsection{Impact of the number of base stations} The evolution of the outage probability with respect to the number of base stations is depicted in Figure \ref{fig:plots-base}. As the base station density increases, the outage probability decreases. This decrease in the outage probability can be strengthened further by increasing the storage size of the \glspl{SBS}. \input{plots-base} \subsection{Impact of target file bitrate} Yet another important parameter in our setup is the target file bitrate $T$. Figure \ref{fig:plots-target} shows its impact on the outage probability for different values of the storage size. Clearly, increasing the target file bitrate results in a higher outage probability. However, this performance reduction can be compensated by increasing the storage size of the \glspl{SBS}. The impact of the storage size diminishes as $T$ increases. \input{plots-target} \subsection{Impact of file popularity shape} Another crucial parameter in our setup is the shape of the file popularity distribution, parameterized by $\gamma$. The impact of the parameter $\gamma$ on the outage probability, for different storage sizes, is given in Figure \ref{fig:plots-popularity}. Generally, a higher value of $\gamma$ means that only a small portion of the files is highly popular compared to the rest. On the contrary, lower values of $\gamma$ correspond to a more uniform popularity distribution. Therefore, as $\gamma$ increases, the outage probability decreases due to the reduced storage requirement. However, at very low and very high values of $\gamma$, the impact on the outage probability is smaller than at intermediate values. \input{plots-popularity} \section{David vs. Goliath: More \glspl{SBS} with less storage or fewer \glspl{SBS} with more storage?} \label{sec:davidvsgoliath} In the previous section, we validated our results via numerical simulations and discussed the impact of several parameters on the outage probability and average delivery rate. Building on these results, we are interested in finding a tradeoff between the \ac{SBS} density and the total storage size for a fixed set of parameters.
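For such an exploration, the closed-form expressions of Propositions \ref{the:outage-special} and \ref{the:delivery-special} are the main computational kernel. A minimal sketch of their evaluation is given below; the parameter values are hypothetical, and the numerically stable identity $e^{x^2}Q(\sqrt{2}\,x)=\tfrac{1}{2}\,\mathrm{erfcx}(x)$ is used to avoid overflow for large $\lambda$.
\begin{verbatim}
import numpy as np
from scipy.special import erfcx     # erfcx(x) = exp(x^2) * erfc(x)

def coverage(lam, T, snr):
    # P[ln(1 + SINR) > T] for alpha = 4 (SINR factor in Proposition 1)
    a = (np.exp(T) - 1.0) / snr
    rho = np.sqrt(np.exp(T) - 1.0) * (np.pi / 2.0
          - np.arctan(1.0 / np.sqrt(np.exp(T) - 1.0)))
    b = lam * np.pi * (1.0 + rho)
    return (np.pi**1.5 * lam / np.sqrt(a)) * 0.5 * erfcx(b / (2.0 * np.sqrt(a)))

def p_out(lam, T, S, L, gam, snr):
    hit = 1.0 - (L / (L + S)) ** (gam - 1.0)     # cache-hit probability
    return 1.0 - coverage(lam, T, snr) * hit

def avg_delivery_rate(lam, T, S, L, gam, snr, C1, C2):
    miss = (L / (L + S)) ** (gam - 1.0)          # C1/lam + C2 < T assumed
    return coverage(lam, T, snr) * (T + (C1 / lam + C2 - T) * miss)

print(p_out(0.2, 0.1, 2.0, 1.0, 2.0, 10.0))      # hypothetical values
\end{verbatim}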
We start by making an analogy with the well-known David and Goliath story to examine the tradeoff between the \ac{SBS} density and the total storage size.\footnote{David vs. Goliath refers to the underlying resource sharing problem which arises in a variety of scenarios, including massive MIMO vs. Small Cells \cite{Hoydis2011David}.} More precisely, we aim to answer the following question: Should we increase the storage size of the current \glspl{SBS} ({\bf David}) or deploy more \glspl{SBS} with less storage ({\bf Goliath}) in order to achieve a certain success probability? The answer is indeed useful for the realization of such a scenario. Putting more \glspl{SBS} in a given area may not be desirable due to increased deployment and operation costs ({\bf Evil}). Therefore, increasing the storage size of already deployed \glspl{SBS} may incur less cost ({\bf Good}). To characterize this tradeoff, we first define the optimal region as follows: \begin{definition}[Optimal region] \label{def:optimalregion} An outage probability $p^{\dagger}$ is said to be achievable if there exist some parameters $\lambda, T, \alpha, S, L, \gamma$ satisfying the following condition: \begin{align} p_{\mathrm{out}}(\lambda,T,\alpha,S, L, \gamma) \leq p^{\dagger}.\notag \end{align} The set of all achievable $p^{\dagger}$ forms the optimal region. \end{definition} The optimal region can be tightened by restricting the parameters $\lambda, T, \alpha, S, L, \gamma$ to some intervals. A detailed analysis of this is left for future work. Hereafter, we restrict ourselves to finding the optimal \ac{SBS} density for a fixed set of parameters. In such a case, the optimal \ac{SBS} density can be readily obtained by plugging these fixed parameters into $p_{\mathrm{out}}$ and solving the resulting equation either analytically or numerically (e.g., via the bisection method \cite{Press2007Numerical}). In the following, we obtain a tradeoff curve between the \ac{SBS} density and the total storage size by solving these equations systematically in the form of an optimization problem. \begin{definition}[\ac{SBS} density vs. total storage size tradeoff] Define the average total storage as $S_{\mathrm{total}} = {\lambda}S$, and fix $T$, $\alpha$, $L$ and $\gamma$ to some values in the optimal region given in Definition \ref{def:optimalregion}. Let $\lambda^{\star}$ denote the optimal \ac{SBS} density for a given $S_{\mathrm{total}}$. Then, $\lambda^{\star}$ is obtained by solving the following optimization problem: \begin{align} & \underset{\lambda}{\mathrm{minimize}} & & \lambda & \label{eq:tradeObjective} \\ & \mathrm{subject}\text{ }\mathrm{to} & & p_{\mathrm{out}}(\lambda,T,\alpha,S_{\mathrm{total}}/\lambda,L,\gamma) \leq p^{\dagger}. & \subeqn \label{eq:tradeC1Out} \end{align} The set of all achievable pairs $(\lambda^{\star}, S_{\mathrm{total}})$ characterizes a tradeoff between the \ac{SBS} density and the total storage size. \end{definition} Figures \ref{fig:optimalDensity1} and \ref{fig:optimalDensity2} show two different configurations of the tradeoff. In these plots, to achieve a certain outage probability (e.g., $p^{\dagger} = 0.3$), we see that it is sufficient to decrease the number of \glspl{SBS} by increasing the total storage size. Alternatively, the total storage size can be decreased by increasing the number of \glspl{SBS}. Moreover, for different values of the parameters of interest (e.g., $T \in \{0.1, 0.2\}$ or $L \in \{1, 2\}$), the tradeoff curve is scaled and shifted accordingly; a numerical sketch of the computation of $\lambda^{\star}$ is given below.
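Reusing the \texttt{p\_out} helper from the sketch above, problem (\ref{eq:tradeObjective}) can be solved by bisection, as suggested. The sketch below assumes that $p_{\mathrm{out}}(\lambda,T,\alpha,S_{\mathrm{total}}/\lambda,L,\gamma) - p^{\dagger}$ changes sign exactly once on the search bracket; since increasing $\lambda$ improves coverage but reduces the per-\ac{SBS} storage $S_{\mathrm{total}}/\lambda$, this should be checked, and a simple grid search over $\lambda$ is a safe fallback.
\begin{verbatim}
def lambda_star(S_total, T, L, gam, snr, p_dagger, lo=1e-3, hi=1.0):
    # Smallest density with p_out(lam, S_total/lam) <= p_dagger.
    f = lambda lam: p_out(lam, T, S_total / lam, L, gam, snr) - p_dagger
    if f(hi) > 0.0:
        return None                      # target not achievable on [lo, hi]
    for _ in range(60):                  # bisection on the sign change
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(mid) <= 0.0 else (mid, hi)
    return hi

print(lambda_star(S_total=10.0, T=0.1, L=1.0, gam=2.0, snr=10.0, p_dagger=0.3))
\end{verbatim}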
Regardless of this scaling and shifting, we see that David prevails against Goliath. \input{optimalDensity1} \input{optimalDensity2} \section{Conclusions} \label{sec:conclusions} We have studied the caching problem in a scenario where \glspl{SBS} are stochastically distributed and have finite-rate backhaul links. We derived expressions for the outage probability and average delivery rate, and validated these results via numerical simulations. The results showed that significant gains in terms of outage probability and average delivery rate are possible by having cache-enabled \glspl{SBS}. We showed that telecom operators can either deploy more base stations or increase the storage size of the existing deployment in order to achieve a certain \ac{QoE} level. \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:1} The equation of state (EOS) of hot and dense matter is an essential ingredient in understanding many astrophysical phenomena, e.g., supernova explosions and neutron star formation~\citep{burr06,jank07,sumi05,sumi09,shen11}. The EOS for core-collapse supernova simulations must cover wide ranges of temperature, proton fraction, and baryon density (see Table 1 of~\citet{shen11}). It is therefore very difficult to build a complete EOS covering this wide range of thermodynamic conditions. Many efforts have been made to investigate the EOS of nuclear matter for use in supernova simulations and neutron star calculations~\citep{latt91,latt07,shen98a,shen98b,shen11,scha96,webe05}. There are two commonly used EOSs in supernova simulations, namely the Lattimer--Swesty EOS~\citep{latt91}, which employed a compressible liquid-drop model with a Skyrme force, and the Shen EOS~\citep{shen98b,shen11}, which used a relativistic mean-field (RMF) model and the Thomas--Fermi approximation with a parameterized nucleon distribution. Recently, \citet{shen10} constructed an EOS based on a relativistic Hartree calculation for the Wigner--Seitz cell which includes nuclear shell effects. These EOSs employ the so-called single nucleus approximation (SNA), in which only a single representative nucleus is included instead of a distribution of different nuclei. It would be desirable to consider a mixture of nuclei based on nuclear statistical equilibrium~\citep{hemp10,blin11,furu11,furu13}; the mixture of nuclei is important for electron captures on nuclei inside the supernova core. However, it has been demonstrated that the SNA is a reasonable approximation for thermodynamical quantities~\citep{burr84}. In the SNA, the thermodynamically favored nucleus is described by a compressible liquid-drop model in the Lattimer--Swesty EOS, or by a Thomas--Fermi approximation with a parameterized nucleon distribution in the Shen EOS. In this paper, we study matter at subnuclear densities, in which the heavy nucleus is described by a self-consistent Thomas--Fermi approximation. The self-consistent Thomas--Fermi approximation has been widely used in atomic and nuclear physics, and many properties of nuclei can be described by it in good agreement with experimental data~\citep{TF07}. Recently, the self-consistent Thomas--Fermi approximation has been used to study nuclear pasta phases at subnuclear densities at zero temperature~\citep{Avancini09} and finite temperature~\citep{Avancini10}, where the pasta phases include droplets (bubbles), rods (tubes), and slabs for three, two, and one dimensions, respectively. In our previous work~\citep{shen98a,shen98b,shen11}, a parameterized nucleon distribution was assumed in the Thomas--Fermi approximation and only the droplet phase was taken into account. It is, however, not clear how accurate the assumed nucleon distribution functions in the Shen EOS are, nor whether other pasta phases, such as the bubble phase, can make a meaningful difference in the transition to uniform nuclear matter. The main purpose of the present work is to study the non-uniform matter at subnuclear densities using the self-consistent Thomas--Fermi approximation. By comparing the nucleon distributions and thermodynamic quantities, we can examine the differences between the self-consistent Thomas--Fermi (STF) approximation and the parameterized Thomas--Fermi (PTF) approximation.
In the present work, we consider both droplet and bubble configurations in order to investigate the effect of including the bubble phase, while other pasta phases are neglected for simplicity. For the effective nuclear interaction, we use the relativistic mean-field (RMF) theory, in which nucleons interact via the exchange of isoscalar scalar and vector mesons ($\sigma$ and $\omega$) and an isovector vector meson ($\rho$). In this work, we employ the RMF theory including nonlinear $\sigma$ and $\omega$ terms with the parameter set TM1~\citep{suga94}. It is known that the RMF theory with the parameter set TM1 reproduces well the ground-state properties of finite nuclei, including unstable ones~\citep{suga94}, and predicts a maximum neutron-star mass of $2.18\ M_\odot$~\citep{shen11}. In the Shen EOS, the RMF results of TM1 were taken as input for the PTF calculation. Therefore, a detailed comparison can be made between the STF and PTF approximations based on the same RMF theory. This paper is organized as follows. In Section~\ref{sec:2}, we briefly explain the RMF theory and the STF approximation for non-uniform matter at subnuclear densities. In Section~\ref{sec:3}, we discuss the calculated results of the STF approximation in comparison with those obtained in the PTF approximation. Section~\ref{sec:4} is devoted to the conclusions. \section{Formalism} \label{sec:2} We first give a brief description of the RMF theory~\citep{sero86,suga94}. We employ the RMF theory to calculate the properties of uniform matter. For non-uniform matter, where nuclei exist to decrease the free energy, we use the STF approximation, in which the RMF Lagrangian is used to derive the equations of motion for the fields~\citep{Avancini09}. In the RMF theory, nucleons interact via the exchange of mesons. The exchanged mesons are the isoscalar scalar and vector mesons ($\sigma$ and $\omega$) and the isovector vector meson ($\rho$). We adopt the RMF theory with nonlinear $\sigma$ and $\omega$ terms~\citep{suga94}. For a system consisting of protons, neutrons, and electrons, the Lagrangian density reads, \begin{eqnarray} \label{eq:LRMF} {\cal L}_{\rm{RMF}} & = & \sum_{i=p,n}\bar{\psi}_i\left[i\gamma_{\mu}\partial^{\mu} -M -g_{\sigma}\sigma-g_{\omega}\gamma_{\mu}\omega^{\mu} -g_{\rho}\gamma_{\mu}\tau_a\rho^{a\mu} -e \gamma_{\mu}\frac{1+\tau_3}{2} A^{\mu} \right]\psi_i \nonumber\\ & & +\bar{\psi}_{e}\left[i\gamma_{\mu}\partial^{\mu} -m_{e} +e \gamma_{\mu} A^{\mu} \right]\psi_{e} \nonumber\\ && +\frac{1}{2}\partial_{\mu}\sigma\partial^{\mu}\sigma -\frac{1}{2}m^2_{\sigma}\sigma^2-\frac{1}{3}g_{2}\sigma^{3} -\frac{1}{4}g_{3}\sigma^{4} \nonumber\\ && -\frac{1}{4}W_{\mu\nu}W^{\mu\nu} +\frac{1}{2}m^2_{\omega}\omega_{\mu}\omega^{\mu} +\frac{1}{4}c_{3}\left(\omega_{\mu}\omega^{\mu}\right)^2 \nonumber\\ && -\frac{1}{4}R^a_{\mu\nu}R^{a\mu\nu} +\frac{1}{2}m^2_{\rho}\rho^a_{\mu}\rho^{a\mu} -\frac{1}{4}F_{\mu\nu}F^{\mu\nu}, \end{eqnarray} where $W^{\mu\nu}$, $R^{a\mu\nu}$, and $F^{\mu\nu}$ are the antisymmetric field tensors for $\omega^{\mu}$, $\rho^{a\mu}$, and $A^{\mu}$, respectively. We use the parameter set TM1~\citep{suga94} as given in Table~\ref{tab:1}. It is known that the RMF theory with the parameter set TM1 reproduces good saturation properties of nuclear matter and provides a satisfactory description of finite nuclei~\citep{suga94,hira96}. Starting with the Lagrangian~(\ref{eq:LRMF}), we derive a set of Euler--Lagrange equations.
In the RMF approximation, the meson fields are treated as classical fields and are replaced by their expectation values. For a static system, the non-vanishing expectation values are $\sigma =\left\langle \sigma \right\rangle$, $\omega =\left\langle \omega^{0}\right\rangle$, $\rho =\left\langle \rho^{30} \right\rangle$, and $ A =\left\langle A^{0}\right\rangle$. The equations of motion for these mean fields have the following form: \begin{eqnarray} \label{eq:ms0} & &-\nabla^2\sigma + m_\sigma^2\sigma +g_{2}\sigma^{2}+g_{3}\sigma^{3} = -g_{\sigma} n_s, \\ \label{eq:mw0} & & -\nabla^2\omega + m_\omega^2\omega +c_3\omega^3 = g_{\omega} n_v, \\ \label{eq:mr0} & & -\nabla^2\rho + m_\rho^2\rho = g_{\rho} n_3, \\ \label{eq:A0} & & -\nabla^2 A = e n_c, \end{eqnarray} where $n_s$, $n_v$, $n_3$, and $n_c$ are the scalar, vector, third component of the isovector, and charge densities, respectively. The stationary Dirac equation for nucleons is given by \begin{eqnarray} \label{eq:driacn} & & \left(-\mathbf{\alpha}\cdot \mathbf{\nabla} + \beta M^{*} +g_{\omega}\omega +g_{\rho}\tau_3\rho +e \frac{1+\tau_3}{2} A \right)\psi^{i}= \varepsilon^{i} \psi^{i} , \end{eqnarray} where $M^{*}=M+g_{\sigma}\sigma$ is the effective nucleon mass, $i$ denotes the index of the eigenstates, and $\varepsilon^{i}$ is the single-particle energy. For non-uniform matter at subnuclear densities, heavy nuclei exist in order to decrease the free energy. We assume that each spherical nucleus is located in the center of a charge-neutral cell consisting of a vapor of nucleons and electrons. In the present study, we focus on estimating the difference between the STF and PTF approximations, so we ignore the contribution of alpha-particles for simplicity. The alpha-particle fraction has been shown in Figure 6 of~\citet{shen11}, and moreover the contributions from alpha-particles and other light nuclei have been extensively discussed in~\citet{sumi08} and~\citet{hemp10}. We assume that the nuclei are arranged in a body-centered-cubic (BCC) lattice to minimize the Coulomb lattice energy~\citep{oyam93}. The Wigner--Seitz cell is introduced to simplify the energy of a unit cell; it is a sphere with the same volume as the unit cell in the BCC lattice. The lattice constant $a$ and the radius of the Wigner--Seitz cell $R_C$ are related to the cell volume by $ V_{\rm{cell}}=a^3=4 \pi R_C^3 / 3=N_B / n_B $, where $N_B$ and $n_{B}$ are the baryon number per cell and the average baryon number density, respectively. In the STF approximation, the nucleon distribution function at position $r$ inside the Wigner--Seitz cell is obtained by \begin{equation} \label{eq:nirmf} n_{i}(r)=\frac{1}{\pi^2} \int_0^{\infty} dk\,k^2\,\left[f_{i}^{k}(r)-f_{\bar{i}}^{k}(r)\right], \end{equation} where $f_{i}^{k}$ and $f_{\bar{i}}^{k}$ ($i=p$, $n$) are the occupation probabilities of the particle and antiparticle states with momentum $k$. At zero temperature, $f_{i}^{k}=1$ below the Fermi surface and $f_{i}^{k}=0$ above it. At finite temperature, the occupation probability is given by the Fermi--Dirac distribution, \begin{eqnarray} \label{eq:fp} f_{i}^{k}=\frac{1}{1+\exp\left[\left(\sqrt{k^2+{M^{*}}^2}-\nu_{i}\right) /T\right]}, \\ \label{eq:fa} f_{\bar{i}}^{k}=\frac{1}{1+\exp\left[\left(\sqrt{k^2+{M^{*}}^2} +\nu_{i}\right)/T\right]}.
\end{eqnarray} The chemical potential $\mu_i$ is related to the effective chemical potential $\nu_i$ as \begin{eqnarray} \label{eq:mup} \mu_{p} &=& \nu_p +g_{\omega}\omega +g_{\rho}\rho + e A, \\ \label{eq:mun} \mu_{n} &=& \nu_n +g_{\omega}\omega -g_{\rho}\rho. \end{eqnarray} We note that the chemical potential is spatially constant throughout the Wigner--Seitz cell, while other quantities, such as the occupation probabilities and mean-field values, depend on the position $r$. As for the electrons, we disregard the electron screening effect caused by the non-uniform charged-particle distributions and assume that the electron density is uniform. It was found in~\citet{Maru05} that the electron screening effect is very small at subnuclear densities. For given average baryon density $n_B$ and proton fraction $Y_p$, the electrons do not play any role in the free energy minimization; therefore, we ignore the electron contribution, as done in~\citet{shen11}. The free energy per cell contributed by baryons is given by \begin{equation} \label{eq:Fcell} F_{\rm{cell}}=E_{\rm{cell}}- T S_{\rm{cell}}, \end{equation} where $E_{\rm{cell}}$ and $S_{\rm{cell}}$ denote the energy and entropy per cell, respectively. The energy per cell can be written as \begin{eqnarray} E_{\rm{cell}} &=&\int_{\rm{cell}} \epsilon (r) d^3r + \Delta E_C, \label{eq:Ecell} \end{eqnarray} with $\Delta E_C$ being the correction term for the BCC lattice~\citep{oyam93,shen11}. This correction is negligible when the nuclear size is much smaller than the cell size. The entropy per cell is given by \begin{eqnarray} S_{\rm{cell}} &=&\int_{\rm{cell}} s (r) d^3r. \label{eq:Scell} \end{eqnarray} Here $\epsilon (r)$ and $s(r)$ are the local energy density and entropy density at radius $r$, which can be calculated using the RMF theory and the STF approximation. The energy density in the STF approximation is given by \begin{eqnarray} \label{eq:ETF} \epsilon &=& \displaystyle{\sum_{i=p,n} \frac{1}{\pi^2} \int_0^{\infty} dk\,k^2\, \sqrt{k^2+{M^*}^2} \left(f_{i}^{k}+f_{\bar{i}}^{k}\right) } \nonumber\\ & & +\frac{1}{2}(\nabla \sigma )^{2} +\frac{1}{2}m_{\sigma}^2\sigma^2+\frac{1}{3}g_{2}\sigma^{3}+\frac{1}{4}g_{3}\sigma^{4} \nonumber\\ & & -\frac{1}{2}(\nabla \omega )^{2}-\frac{1}{2}m_{\omega}^2\omega^2-\frac{1}{4}c_{3}\omega^{4} +g_{\omega}\omega \left(n_p+n_n\right) \nonumber\\ & & -\frac{1}{2}(\nabla \rho )^{2}-\frac{1}{2}m_{\rho}^2\rho^2 +g_{\rho}\rho \left(n_p-n_n\right) \nonumber\\ & & -\frac{1}{2}(\nabla A)^{2}+e A \left(n_p-n_e\right) , \end{eqnarray} and the entropy density is given by \begin{eqnarray} s & = \displaystyle{\sum_{i=p,n} \frac{1}{\pi^2} \int_0^{\infty} dk\,k^2 } & \left[ -f_{i}^{k}\ln f_{i}^{k} -\left(1-f_{i}^{k}\right)\ln \left(1-f_{i}^{k}\right) \right. \nonumber\\ & & \left. -f_{\bar{i}}^{k}\ln f_{\bar{i}}^{k} -\left(1-f_{\bar{i}}^{k}\right)\ln \left(1-f_{\bar{i}}^{k}\right) \right] . \end{eqnarray} Here, we have omitted the electron kinetic energy and electron entropy, which do not play any role in the free energy minimization. It is known that nuclear pasta phases could be present at subnuclear densities before the transition to uniform matter. In this study, we consider both droplet and bubble phases. The free energy for the bubble phase can also be calculated from Equation~(\ref{eq:Fcell}), but with a different correction term $\Delta E_C$~\citep{oyam93}.
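In practice, momentum integrals such as Equation~(\ref{eq:nirmf}) reduce to one-dimensional quadratures at each radial grid point. A minimal sketch for the local net density of one species is given below; we work in units with $\hbar=c=1$ (momenta and masses in $\rm{fm^{-1}}$), and the values of $\nu_i$, $M^{*}$, and $T$ are purely illustrative.
\begin{verbatim}
import numpy as np
from scipy.special import expit   # expit(x) = 1/(1 + exp(-x)), overflow-safe

def local_density(nu, m_eff, temp, kmax=20.0, nk=4000):
    # n_i = (1/pi^2) * integral of k^2 [f_i(k) - fbar_i(k)] dk
    k = np.linspace(0.0, kmax, nk)
    e = np.sqrt(k**2 + m_eff**2)
    f, fbar = expit(-(e - nu) / temp), expit(-(e + nu) / temp)
    return np.trapz(k**2 * (f - fbar), k) / np.pi**2

# Illustrative call with nu, M*, T in fm^-1 (1 MeV ~ 1/197.33 fm^-1);
# the result, about 0.07 fm^-3, is close to nuclear saturation density.
print(local_density(nu=4.2, m_eff=4.0, temp=1.0 / 197.33))
\end{verbatim}
In the full calculation, this quadrature is evaluated at every grid point, with the mean fields entering through the local values of $M^{*}(r)$ and $\nu_i(r)$.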
It is interesting and important to compare the STF approximation used in this study with the PTF approximation adopted in~\citet{shen11}. In the PTF approximation, the energy per cell is given in the form \begin{eqnarray} E^{\rm{PTF}}_{\rm{cell}} &=& E_{\rm{cell}}^b+E_{\rm{cell}}^g+E_{\rm{cell}}^C, \label{eq:EPTF} \end{eqnarray} where $E_{\rm{cell}}^b$ is the bulk energy per cell, given by Equation (20) of~\citet{shen11}. The surface (gradient) energy term $E_{\rm{cell}}^g$, due to the inhomogeneity of the nucleon distribution, is assumed to have the form \begin{equation} E_{\rm{cell}}^g=\int_{\rm{cell}} F_0 \left| \nabla \left[ \, n_n\left(r\right)+ n_p\left(r\right) \, \right] \right|^2 d^3r, \label{eq:ES} \end{equation} with the parameter $F_0=70 \, \rm{MeV\,fm^5}$ determined from the gross properties of nuclear masses and charge radii~\citep{oyam93,shen11}. The Coulomb energy term $E_{\rm{cell}}^C$ can be calculated as \begin{equation} \label{eq:EC} E_{\rm{cell}}^C=\frac{1}{2}\int_{\rm{cell}} e A\left(r\right) \left[n_p\left(r\right)-n_e\right] d^3r + \Delta E_C, \end{equation} where $A\left(r\right)$ is the electrostatic potential and $\Delta E_C$ is the same as in Equation~(\ref{eq:Ecell}). By comparing Equation~(\ref{eq:EPTF}) with Equation~(\ref{eq:Ecell}), we recognize that the Coulomb energy in STF and PTF can be calculated in the same way, as given by Equation~(\ref{eq:EC}), but the surface (gradient) energy is treated differently. In the STF approximation the gradient energy is included in Equation~(\ref{eq:ETF}) self-consistently, while in the PTF approximation it is calculated by Equation~(\ref{eq:ES}) with an additional parameter $F_0$. This may cause some differences in energy between the two methods. Another difference between STF and PTF is that the nucleon distribution function $n_i(r)$ ($i=p$ or $n$) is determined self-consistently in the STF approximation by solving Equations~(\ref{eq:ms0})-(\ref{eq:A0}) inside the Wigner--Seitz cell. In the PTF method, the nucleon distribution function is assumed to have the form~\citep{oyam93,shen11} \begin{equation} \label{eq:nitf} n_i\left(r\right)=\left\{ \begin{array}{ll} \left(n_i^{\rm{in}}-n_i^{\rm{out}}\right) \left[1-\left(\frac{r}{R_i}\right)^{t_i} \right]^3 +n_i^{\rm{out}}, & 0 \leq r \leq R_i, \\ n_i^{\rm{out}}, & R_i \leq r \leq R_C. \\ \end{array} \right. \end{equation} It is important to examine the effect of these differences on the thermodynamic quantities at subnuclear densities, so that we can estimate how accurate the PTF approximation of~\citet{shen11} is. We minimize the free energy per baryon, $F=F_{\rm{cell}}/N_B$, at given temperature $T$, proton fraction $Y_p$, and baryon mass density $\rho_B$. Note that the baryon mass density is defined as $\rho_B=m_{u} n_B$ with $m_{u}$ being the atomic mass unit~\citep{shen11}. The thermodynamically favored state is the one with the lowest $F$ among all configurations considered. In the PTF approximation, the minimization procedure was realized with respect to several independent parameters, as described in~\citet{shen11}. In the STF approximation, we minimize $F$ with respect to the Wigner--Seitz cell radius, $R_C$, and finally determine the most stable configuration among the droplet, bubble, and homogeneous phases by comparing their free energies. To compute $F_{\rm{cell}}$ at a fixed $R_C$, we numerically solve the coupled Equations~(\ref{eq:ms0})-(\ref{eq:A0}) together with the nucleon distribution and occupation probabilities given by Equations~(\ref{eq:nirmf})-(\ref{eq:fa}) in coordinate space; the fixed-point structure of this procedure is illustrated below in a much-reduced setting.
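The essential self-consistency loop can be illustrated in uniform symmetric matter at zero temperature, keeping only the linear $\sigma$ term and using illustrative coupling values rather than the actual TM1 parameters; the full cell calculation iterates the same fixed-point structure over radial field profiles.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad

hbc = 197.327                              # MeV fm
M, m_sig, g_sig = 939.0, 510.0, 10.0       # illustrative, not the TM1 set

def scalar_density(kF, m_eff):             # n_s at T = 0 in fm^-3, both species
    integrand = lambda k: k**2 * m_eff / np.sqrt(k**2 + m_eff**2)
    return 2.0 * quad(integrand, 0.0, kF)[0] / np.pi**2

def effective_mass(n_B):                   # symmetric matter, n_B in fm^-3
    kF = (1.5 * np.pi**2 * n_B) ** (1.0 / 3.0)
    m_eff = M / hbc                        # start from the free mass (fm^-1)
    for _ in range(200):
        sigma = -g_sig * scalar_density(kF, m_eff) / (m_sig / hbc) ** 2
        m_new = M / hbc + g_sig * sigma    # M* = M + g_sigma * sigma
        if abs(m_new - m_eff) < 1e-10:
            break
        m_eff = 0.5 * (m_eff + m_new)      # damped fixed-point update
    return m_eff * hbc                     # back to MeV

print(effective_mass(0.145))   # M*/M ~ 0.6 here; not a TM1 prediction
\end{verbatim}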
In the full cell problem, starting from an initial guess for the mean-field values $\sigma (r)$, $\omega (r)$, $\rho (r)$, and $A(r)$, we determine the chemical potential $\mu_i$ ($i=p$ or $n$) by the condition $\int_{0}^{R_C} n_i (r) 4\pi r^2 dr=N_i$, where the proton and neutron numbers per cell are respectively given by $N_p=Y_p N_B$ and $N_n=(1-Y_p) N_B$. Once the chemical potentials are known, the occupation probabilities and density distributions can be obtained. Then, using these densities, we solve Equations~(\ref{eq:ms0})-(\ref{eq:A0}) to obtain new mean-field values. This procedure is iterated until self-consistency is achieved. For the numerical integrations in coordinate space, we use a composite Simpson's rule with 1201 grid points. In general, this number of grid points is sufficient to achieve good convergence both in solving the coupled equations and in performing the numerical integrations. \section{Results and discussion} \label{sec:3} In this section, we show and discuss the results for the non-uniform matter at subnuclear densities obtained using the STF approximation. In comparison with the PTF method used in~\citet{shen11}, the nucleon distribution and the surface effect are calculated self-consistently within the STF approximation. In addition, we take into account the bubble phase that may be present before the transition to uniform matter. At given temperature $T$, proton fraction $Y_p$, and baryon mass density $\rho_B$, we minimize the free energy per baryon, $F=F_{\rm{cell}}/N_B$, with respect to the independent variables of the model. In the STF approximation, there is only one independent variable, namely the cell radius $R_C$. However, there are about seven independent variables ($n_n^{\rm{in}}$, $R_n$, $t_n$, $n_p^{\rm{in}}$, $R_p$, $t_p$, and $R_C$) in the PTF approximation~\citep{shen11}. It is therefore interesting and important to make a detailed comparison between STF and PTF. In Figure~\ref{fig:1F}, we show the resulting free energy per baryon $F$ versus the baryon mass density $\rho_B$ for $Y_p=0.3$ and $0.5$ at $T=1$ MeV and $10$ MeV. Note that we focus here on the non-uniform matter phase, in which heavy nuclei are formed in the medium-density and low-temperature region; the behavior of $F$ and other thermodynamic quantities over the wide range of the EOS has been discussed in our earlier work~\citep{shen98b,shen11}. We present in Figure~\ref{fig:1F} the results of STF (PTF) with a droplet configuration by black solid (blue dashed) lines. The bubble phase is also taken into account in the STF calculation, as shown by the red dash-dotted lines. It is shown that the onset density of the bubble phase is above $10^{13.9}\,\rm{g\,cm^{-3}}$. The inclusion of the bubble phase causes a visible decrease in the free energy at $\rho_B > 10^{13.9}\,\rm{g\,cm^{-3}}$. On the other hand, the appearance of the bubble phase can delay the transition to uniform matter, as indicated by the vertical dashed lines. We find that there is a small difference in $F$ between STF and PTF, especially in the case of $T=1$ MeV (top panel). The free energy per baryon $F$ obtained in PTF is systematically lower than that of STF for the same droplet configuration. This may be due to the different treatment of the surface effect and nucleon distribution in the two methods. In order to estimate how much difference can be caused by the different treatment of the surface effect, we should compare the corresponding terms in Equation~(\ref{eq:EPTF}).
However, it is difficult in the STF approximation to separate the gradient energy from the bulk energy, because both of them are contained in the first term of Equation~(\ref{eq:Ecell}). On the other hand, the Coulomb energy can easily be separated from Equation~(\ref{eq:Ecell}) as defined by Equation~(\ref{eq:EC}), so that it is possible to compare the difference in the Coulomb energy between STF and PTF. In~\citet{oyam93} and~\citet{oyam03}, the authors pointed out that the gradient energy in equilibrium should generally be as large as the Coulomb energy, which means that $E_{\rm{cell}}^g \simeq E_{\rm{cell}}^C$ could hold in both STF and PTF. This relation corresponds to the well-known equilibrium condition in the liquid-drop model that the surface energy is twice the Coulomb energy. In the results of PTF, we do obtain $E_{\rm{cell}}^g = E_{\rm{cell}}^C$ (see Table~\ref{tab:2}). Therefore, we can use the relation $E_{\rm{cell}}^g = E_{\rm{cell}}^C$ to estimate the gradient energy in the STF approximation, and define the bulk energy as $E_{\rm{cell}}^b=E_{\rm{cell}}-E_{\rm{cell}}^g-E_{\rm{cell}}^C$. In Table~\ref{tab:2}, we compare various quantities between STF and PTF for the case of $Y_p=0.3$ and $T=1$ MeV. The definitions of these quantities are as follows: $F=F_{\rm{cell}}/N_B$ is the free energy per baryon, $E=E_{\rm{cell}}/N_B$ is the energy per baryon, $S=S_{\rm{cell}}/N_B$ is the entropy per baryon, $E_b=E_{\rm{cell}}^b/N_B$ is the bulk energy per baryon, $E_g=E_{\rm{cell}}^g/N_B$ is the gradient energy per baryon, $E_C=E_{\rm{cell}}^C/N_B$ is the Coulomb energy per baryon, and $R_C$ is the radius of the Wigner--Seitz cell. Note that $F_0=70 \, \rm{MeV\,fm^5}$ has been used in the PTF method~\citep{shen11}, so we first compare the results of STF with those of PTF ($F_0=70$). It is shown that there is not much difference in $S$ and $E_b$, but $F$, $E$, $E_g$, and $E_C$ of PTF ($F_0=70$) are all slightly lower than those of STF. Furthermore, the difference in $F$ (which is $0.217$ MeV at $\rho_B = 10^{13.0}\,\rm{g\,cm^{-3}}$) is about twice that in $E_C$ ($\sim 0.1$ MeV). This implies that the difference in $F$ is mostly caused by the sum $E_g + E_C = 2 E_g $, namely the surface effect. It seems that $E_g$ with $F_0=70 \, \rm{MeV\,fm^5}$ in the PTF approximation is relatively small compared to the self-consistent calculation of STF. To analyze the influence of the parameter $F_0$, we recalculated the results of PTF with $F_0=90 \, \rm{MeV\,fm^5}$, which are also listed in Table~\ref{tab:2}. By comparing the results of PTF ($F_0=70$) and PTF ($F_0=90$), we find that $E_g$ and $E_C$ of PTF ($F_0=90$) are significantly enhanced and closer to the values of STF. As a result, the differences in $F$ between STF and PTF ($F_0=90$) are much smaller than those between STF and PTF ($F_0=70$). Since $E_g$ and $E_C$ of PTF ($F_0=90$) are very close to the values of STF, the small remaining differences in $F$ between STF and PTF ($F_0=90$) should be caused by the different treatment of the nucleon distributions in the two methods. In the last column of Table~\ref{tab:2}, we compare the cell radius $R_C$ obtained by the different methods. It is shown that $R_C$ of PTF ($F_0=70$) is clearly smaller than that of STF and of PTF ($F_0=90$). This is because a smaller surface energy favors a smaller nuclear size and cell radius according to the liquid-drop model~\citep{Maru05}.
Therefore, the increase of the surface energy in PTF ($F_0=90$) leads to a larger $R_C$ compared to that in PTF ($F_0=70$). In the bottom panel of Figure~\ref{fig:1F}, we show the results for the case of $T=10$ MeV. The density range of the non-uniform matter phase at high temperature becomes very narrow, as shown in Figure 2 of~\citet{shen11}, so we only compare the results in this density range. It is seen that the differences between STF and PTF at $T=10$ MeV (bottom panel) are generally smaller than those at $T=1$ MeV (top panel). This is because at higher temperature the entropy becomes more dominant, and the treatment of the surface effect plays a less important role in determining the free energy. We plot in Figure~\ref{fig:2S} the entropy per baryon $S$ versus $\rho_B$ for $Y_p=0.3$ and $0.5$ at $T=1$ MeV and $10$ MeV. In the case of $T=1$ MeV (top panel), there is almost no difference between STF and PTF for $Y_p=0.3$ (see also Table~\ref{tab:2}), while there is a small difference in the case of $Y_p=0.5$. At $T=10$ MeV (bottom panel), the results of STF and PTF are almost identical for both $Y_p=0.3$ and $Y_p=0.5$. We note that the behavior of the entropy has a strong $Y_p$ dependence, which is due to the formation of heavy nuclei, as discussed in~\citet{shen98b,shen11}. In Figures~\ref{fig:3DT1} and~\ref{fig:4DT10}, we show the density distributions of protons and neutrons inside the Wigner--Seitz cell for the cases of $Y_p=0.3$ at $T=1$ MeV and $T=10$ MeV, respectively. The horizontal axis denotes the radial distance from the center of the cell, while the cell radius is indicated by the hatching. The results obtained in STF (black solid lines) are compared with those of PTF (blue dashed lines). At lower densities, there is no obvious difference in the density profiles between STF and PTF. However, as the density increases, the difference becomes noticeable, as shown in the top panels. It is seen that, in the STF approximation, the densities at the center of the cell are significantly lower than those in the surface region. This is because the Coulomb interaction is explicitly included in the equation of motion for the protons, and as a result, more protons are pushed toward the surface. The same behavior has been observed in~\citet{Maru05}, where the authors compare results obtained with different treatments of the Coulomb interaction. In the STF approximation, the nucleon distributions are obtained self-consistently with the cell radius $R_C$ determined by the free energy minimization. However, the nucleon distributions in the PTF approximation are forced to have the form of Equation~(\ref{eq:nitf}), with all parameters including $R_C$ determined in the minimization procedure. Comparing the results of $T=10$ MeV (Figure~\ref{fig:4DT10}) with those of $T=1$ MeV (Figure~\ref{fig:3DT1}), the differences between STF and PTF are very similar. It is shown that more free nucleons exist outside the nuclei in the case of $T=10$ MeV. This is because at higher temperature the entropy becomes more dominant, and as a result, the free energy can be decreased by more nucleons dripping out of the nuclei. In both Figures~\ref{fig:3DT1} and~\ref{fig:4DT10}, the cell radius $R_C$ obtained in STF is clearly larger than that of PTF. This is related to the treatment of the surface effect discussed above. It is known from the liquid-drop model that a smaller surface energy favors a smaller nuclear size and cell radius~\citep{Maru05}.
In the PTF method, $F_0=70 \, \rm{MeV\,fm^5}$ has been used in the calculation of the surface (gradient) energy, which appears to be too small in comparison with the results of STF (see Table~\ref{tab:2}). Therefore, the smaller surface energy of PTF leads to a smaller $R_C$ compared to that of STF. We consider both droplet and bubble phases in this study. It is found that the bubble can have a lower free energy than the droplet near the transition density to uniform matter. In Figure~\ref{fig:5D}, we show the density distributions of protons and neutrons obtained with droplet and bubble configurations using the STF approximation for the case of $Y_p=0.3$ and $\rho_B = 10^{14.0}\,\rm{g\,cm^{-3}}$ at $T=1$ MeV (top panel) and $T=10$ MeV (bottom panel). We minimize the free energy per baryon with respect to the cell radius for both droplet and bubble configurations, and thereby determine the most stable droplet and bubble. By comparing their free energies, we determine the most favorable configuration among the droplet, bubble, and homogeneous phases. The onset of the bubble phase can be seen in Figure~\ref{fig:1F}. Generally, the bubble has the lowest free energy at $\rho_B \sim 10^{14}\,\rm{g\,cm^{-3}}$ ($n_B \sim 0.06\,\rm{fm^{-3}}$). We present in Table~\ref{tab:3} the resulting properties of the stable droplet and bubble in the STF approximation, while the results of PTF and those of uniform matter are also listed for comparison. It is shown that the difference in $F$ between the droplet and bubble phases is less than $1\%$, but there are significant differences in $\mu_p$ and $\mu_n$. On the other hand, the inclusion of the bubble phase can increase the transition density to uniform matter. For instance, at $\rho_B = 10^{14.2}\,\rm{g\,cm^{-3}}$ with $Y_p=0.3$ and $T=1$ MeV, the bubble phase has the lowest free energy among the droplet, bubble, and homogeneous phases, whereas the homogeneous phase would be favored if the bubble configuration were not taken into account. We examine the droplet properties in non-uniform matter and investigate their density dependence. In Figure~\ref{fig:6AZ}, we show the nuclear mass number $A_d$ and charge number $Z_d$ inside the droplet as a function of $\rho_B$ for the case of $Y_p=0.3$ at $T=1$ MeV (top panel) and $T=10$ MeV (bottom panel). Note that these quantities are different from those shown in Figure 5 of~\citet{shen11}. Here the background nucleon gas is subtracted in order to isolate the nucleus from the surrounding gas, namely $A_d=N_B-V_{\rm{cell}} n_B(R_C)$ and $Z_d=N_p-V_{\rm{cell}} n_p(R_C)$. This subtraction procedure has been widely used in Thomas--Fermi calculations~\citep{De01,Gril12}. For comparison, we calculate $A_d$ and $Z_d$ using the PTF approximation and show them with blue dashed lines. It is seen that $A_d$ and $Z_d$ increase rapidly with increasing density. At the same $\rho_B$ and $Y_p$, the values of $A_d$ and $Z_d$ at $T=10$ MeV are significantly smaller than those at $T=1$ MeV, because more nucleons drip out of the nuclei at higher temperature. It is found that there is a small difference between STF and PTF for both $T=1$ MeV and $T=10$ MeV; this should be related to the difference in the nucleon distributions shown in Figures~\ref{fig:3DT1} and~\ref{fig:4DT10}. Generally, the droplet properties obtained in STF are very similar to those of PTF. In Figure~\ref{fig:7Yi}, we show the fractions of nuclei ($X_A$), neutron gas ($X_n$), and proton gas ($X_p$) as a function of $\rho_B$ for the same case as Figure~\ref{fig:6AZ}.
These fractions are defined by $X_A=A_d / N_B$, $X_n=V_{\rm{cell}} n_n(R_C)/N_B$, and $X_p=V_{\rm{cell}} n_p(R_C)/N_B$. In the case of $T=1$ MeV and $Y_p=0.3$ (top panel), there is almost no proton gas ($X_p \simeq 0$), while the neutron gas fraction $X_n$ is very small and decreases with increasing density. This implies that nucleons inside the droplet are dominant at low temperature. For the case of $T=10$ MeV and $Y_p=0.3$ (bottom panel), more nucleons drip out of the nuclei, as shown in Figure~\ref{fig:4DT10}; as a result, $X_n$ is of the same order as $X_A$, while $X_p$ is about one order of magnitude lower than $X_n$. Comparing the results of STF and PTF, it is hard to see any significant difference at $T=10$ MeV (bottom panel), while there is a small difference in $X_n$ at $T=1$ MeV (top panel). In Figures~\ref{fig:8Mup} and~\ref{fig:9Mun}, we show the chemical potentials of protons and neutrons, $\mu_p$ and $\mu_n$, as a function of $\rho_B$ with $Y_p=0.3$ and $0.5$ at $T=1$ MeV and $10$ MeV. The results of PTF are taken from EOS2 of~\citet{shen11}, which were calculated through the thermodynamic relations given in Equations~(A16) and (A17) of~\citet{shen11}. In the STF approximation, the chemical potentials given in Equations~(\ref{eq:mup}) and~(\ref{eq:mun}) are obtained self-consistently, as described in Section~\ref{sec:2}; they are spatially constant throughout the Wigner--Seitz cell. It is shown that the appearance of the bubble phase at $\rho_B > 10^{13.9}\,\rm{g\,cm^{-3}}$ causes sudden jumps in $\mu_p$ and $\mu_n$ within the STF approximation. This is mainly because the Coulomb potential in the bubble is very different from that in the droplet. As for the comparison between STF and PTF, there are visible differences in $\mu_p$, as shown in Figure~\ref{fig:8Mup}, while the neutron chemical potentials are almost identical between STF and PTF for the same droplet configuration, as shown in Figure~\ref{fig:9Mun}. The difference in $\mu_p$ may be related to the difference in the Coulomb interaction between STF and PTF. As discussed above, the Coulomb and surface energies in PTF with $F_0=70 \, \rm{MeV\,fm^5}$ are relatively small compared to those of STF, which means that the Coulomb potential in PTF should be smaller than that in STF. According to Equation~(\ref{eq:mup}), a larger Coulomb potential corresponds to a higher $\mu_p$; therefore, we obtain a higher $\mu_p$ in STF due to its larger Coulomb potential. On the other hand, $\mu_n$ is not directly related to the Coulomb potential, so the difference in $\mu_n$ between STF and PTF is very small, as shown in Figure~\ref{fig:9Mun}. \section{Conclusion} \label{sec:4} In this paper, we have studied the non-uniform matter at subnuclear densities using the STF approximation. For the effective nuclear interaction, we have adopted the RMF theory including nonlinear $\sigma$ and $\omega$ terms with the parameter set TM1, which reproduces good saturation properties of nuclear matter and provides a satisfactory description of finite nuclei. We have made a detailed comparison between the STF approximation used in this study and the PTF approximation adopted in~\citet{shen11}. In addition, we have included the bubble phase that may be present before the transition to uniform matter. It has been found that the inclusion of the bubble phase can significantly affect the chemical potentials of protons and neutrons, while its effects on the free energy and entropy are relatively small.
Furthermore, the appearance of the bubble phase can delay the transition to uniform matter. We have examined the differences between STF and PTF. In the STF method, the nucleon distribution and the surface effect are treated self-consistently. We have minimized the free energy with respect to the cell radius at given temperature $T$, proton fraction $Y_p$, and baryon mass density $\rho_B$. The thermodynamically favored state is the one with the lowest free energy among all configurations considered. The results obtained in the STF approximation have been compared with those of PTF. It has been found that there is no obvious difference in the nucleon distributions at lower densities, while the difference becomes noticeable near the transition density to uniform matter. For thermodynamic quantities, such as the free energy and entropy per baryon, the results of both methods generally agree well with each other. However, there are some small differences between STF and PTF which need to be analyzed. The free energy per baryon obtained in PTF is slightly lower than that of STF for the same droplet configuration. This is mainly caused by the inconsistent treatment of the surface effect in PTF, namely that the surface and Coulomb energies obtained with the parameter $F_0=70 \, \rm{MeV\,fm^5}$ are relatively small compared to those obtained self-consistently in STF. In addition, the smaller surface energy in PTF leads to a smaller cell radius in comparison to that of STF. On the other hand, the proton chemical potential obtained in STF is slightly higher than that of PTF, which is also related to the difference in the Coulomb and surface energies between STF and PTF. Therefore, we can draw the conclusion that most of the differences between STF and PTF are due to the different treatment of the surface effect, namely that the parameter $F_0$ used in PTF is not large enough in comparison with the results obtained in the STF approximation. Considering the wide range of thermodynamic conditions in the whole EOS~\citep{shen11}, the differences between STF and PTF are thought to be negligible and cannot affect the general behavior of the EOS. Therefore, we conclude that the PTF approximation is a reasonable description of non-uniform matter and can produce an EOS very similar to that obtained in the STF approximation, which is self-consistent in its treatment of the surface effect and nucleon distribution. \acknowledgments This research is supported in part by the National Natural Science Foundation of China (Grants No. 11075082 and No. 11375089).
\section{Introduction} Shape and topology optimization in fluid mechanics is an important mathematical field that has attracted more and more attention in recent years. One reason for this is certainly the wide range of application fields, spanning from the optimization of transport vehicles such as airplanes and cars, over biomechanical and industrial production processes, to the optimization of musical instruments. Due to the complexity of the emerging problems, these questions have to be treated carefully with regard to modelling, simulation and interpretation of the results. Most approaches towards shape optimization, in particular in the field of shape optimization in fluid mechanics, deal mainly with numerical methods, or concentrate on combining reliable CFD methods with shape optimization strategies such as shape sensitivity analysis. However, it is well known that well-posedness of problems in optimal shape design is a difficult matter for which only a few analytical results are available so far, see for instance \cite{ulbrichulbrich,BrandenburgLindemannUlbrichUlbrich_advancedNumMeth_DesignNSflow, kawohl2000optimal, pironneau,Schmidt_shape_derivative_NavierStokes, sverak}. In particular, classical formulations of shape optimization problems in general lack the existence of a minimizer and hence the correct mathematical description has to be reconsidered. Among the first approaches towards well-posed formulations in this field we mention in particular the work \cite{borrvall}, where a porous medium approach is introduced in order to obtain a well-posed problem, at least for the special case of minimizing the total potential power in a Stokes flow. As discussed in \cite{evgrafov,evgrafov2}, it is not to be expected that this formulation can be extended without further ado to the stationary Navier--Stokes equations or to the use of different objective functionals. In this work we propose a well-posed formulation for shape optimization in fluids, which will turn out to even allow for topological changes. To this end, we combine the porous medium approach of \cite{borrvall} and a phase field approach including a regularization by the Ginzburg--Landau energy. This results in a diffuse interface problem, which can be shown to approximate a sharp interface problem for shape optimization in fluids that is penalized by a perimeter term. Perimeter penalization in shape optimization problems was introduced by \cite{ambrosioButtazzo} and has since then been applied to many problems in shape optimization, see for instance \cite{bourdin_chambolle}. Phase field approximations for the perimeter penalized problems have also been discussed in this field, and we refer here for instance to \cite{relatingphasefield, bourdin_chambolle, burger}. But to the best of our knowledge, neither a perimeter penalization nor a phase field approach has been applied to a fluid dynamical setting before. Here we use the stationary incompressible Navier--Stokes equations as a fluid model, but we briefly describe how the Stokes equations could also be used. The resulting diffuse interface problem is shown to admit a minimizer, in contrast to most formulations in shape optimization. The resulting formulation turns out to be an optimal control problem with control in the coefficients, and hence one can derive optimality conditions in the form of a variational inequality. Thus, we can formulate a gradient flow for the corresponding reduced objective functional and arrive at a Cahn--Hilliard type system.
Similar to \cite{HintermuellerHinzeTber__AFEM_for_CH}, we use a Moreau--Yosida relaxation in order to handle the pointwise constraints on the design variable. We formulate the finite element discretization of the resulting problem using a splitting approach for the Cahn--Hilliard equation. The Navier--Stokes system is discretized with Taylor--Hood elements, and both variables in the Cahn--Hilliard equation are discretized with continuous, piecewise linear elements. In addition, we introduce an adaptive concept using residual based error estimates and a D\"orfler marking strategy, see also \cite{HintermuellerHinzeKahle_AFEM_for_CHNS, HintermuellerHinzeTber__AFEM_for_CH}. The proposed approach is validated by means of several numerical examples. The first one shows in particular that even topological changes are possible during the optimization process. The second example is the classical problem of optimizing the shape of a ball in an outer flow. We obtain results comparable to those in the literature and discuss them for different Reynolds numbers and penalization parameters. For this example, comparison values for further investigations are provided. As a third example and outlook, we briefly discuss the optimal embouchure of a bassoon, which was already examined by an engineering group at the Technical University of Dresden, see \cite{grundmann}. In addition, we investigate the behaviour of the different model parameters and their influence on the solutions obtained in the above-mentioned numerical examples.

\section{Shape and topology optimization for Navier--Stokes flow}
\label{sec:Analysis}
We study the optimization of some objective functional depending on the shape, geometry and topology of a region which is filled with an incompressible Navier--Stokes fluid. We use a holdall container $\Omega\subset\mathbb{R}^d$ which is fixed throughout this work and fulfills
\begin{list}{\theAssCount}{\usecounter{AssCount}}
\item\label{a:Omega} $\Omega\subseteq\mathbb{R}^d$, $d\in\{2,3\}$, is a bounded Lipschitz domain with outer unit normal $\b n$ such that $\mathbb{R}^d\setminus\overline\Omega$ is connected.
\setcounter{AssListCount}{\value{AssCount}}
\end{list}
Requiring the complement of $\overline\Omega$ to be connected simplifies certain aspects in the analysis of the Navier--Stokes system but could also be dropped, cf. \cite[Remark 2.7]{hecht}. As we do not want to prescribe the topology or geometric properties of the optimal fluid region in advance, we state the optimization problem in the general framework of Caccioppoli sets. Thus, a set is admissible if it is a measurable subset of $\Omega$ with finite perimeter. Additionally, we impose a volume constraint by introducing a constant $\beta\in\left(-1,1\right)$ and optimize over the sets with volume equal to $0.5(\beta+1)\left|\Omega\right|$. Since an optimization problem in this setting in general lacks existence of minimizers, see for instance \cite{haberjog}, we moreover introduce a perimeter regularization. To this end, the perimeter term, multiplied by a weighting parameter $\gamma>0$ and a constant $c_0=\frac\pi2$ that arises for technical reasons, is added to the objective functional that we want to minimize.
The latter is given by $\int_\Omega f\left(x,\b u,\mathrm{D}\b u\right)\,\mathrm dx$, where $\b u\in\b U:=\{\b u\in\b H^1(\Omega)\mid\,\mathrm{div}\,\b u=0,\b u|_{\partial\Omega}=\b g\}$ denotes the velocity of the fluid, and we assume
\begin{list}{\theAssCount}{\usecounter{AssCount}}\setcounter{AssCount}{\value{AssListCount}}
\item\label{a:ObjectiveFctl} the functional $f:\Omega\times\mathbb{R}^d\times\mathbb{R}^{d\times d}\to\mathbb{R}$ is such that
\begin{align*}
&F:\b H^1(\Omega)\to\mathbb{R},\\
&F\left(\b u\right):=\int_\Omega f\left(x,\b u(x),\mathrm{D}\b u(x)\right)\,\mathrm dx
\end{align*}
is continuous, weakly lower semicontinuous and radially unbounded in $\b U$, which means
\begin{align}\label{a:FctlRadiallyUnbounded}
\lim_{k\to\infty}\left\|\b u_k\right\|_{\b H^1(\Omega)}=+\infty\implies \lim_{k\to\infty}F\left(\b u_k\right)=+\infty
\end{align}
for any sequence $\left(\b u_k\right)_{k\in\mathbb{N}}\subseteq\b U$. Additionally, $F|_{\b U}$ has to be bounded from below.
\setcounter{AssListCount}{\value{AssCount}}
\end{list}
Here and in the following we use the function space
$$\b V:=\left\{\b v\in\b H^1_0(\Omega)\mid\,\mathrm{div}\,\b v=0\right\}.$$
Additionally, we denote for some $\varphi\in BV\left(\Omega,\left\{\pm1\right\}\right)$ the set $E^\varphi:=\{\varphi\equiv 1\}$ and introduce
\begin{align*}
\b U^\varphi:=\left\{\b u\in\b U\mid\b u=\b 0 \text{ a.e. in }\Omega\setminus E^\varphi\right\},
\quad
\b V^\varphi:=\left\{\b v\in\b V\mid\b v=\b 0 \text{ a.e. in }\Omega\setminus E^\varphi\right\},
\end{align*}
where we remark that $\mathbb{R}^d$-valued functions and function spaces of vector-valued functions are denoted by boldface letters.
\begin{remark}
For the continuity of $F:\b H^1(\Omega)\to\mathbb{R}$, required in Assumption~\ref{a:ObjectiveFctl}, it is sufficient that $f:\Omega\times\mathbb{R}^d\times\mathbb{R}^{d\times d}\to\mathbb{R}$ is a Carath\'{e}odory function, i.e. measurable in $x$ and continuous in $(\b v,\b A)$, which for a.e. $x\in\Omega$ fulfills a growth condition of the form
\begin{align*}
\left|f\left(x,\b v,\b A\right)\right|\leq a(x)+b_1(x)|\b v|^p+b_2(x)|\b A|^2, \quad\forall \b v\in\mathbb{R}^d, \b A\in\mathbb{R}^{d\times d}
\end{align*}
for some $a\in L^1(\Omega)$, $b_1,b_2\in L^\infty(\Omega)$, and some $p\in[2,\infty)$ for $d=2$ and $p\in[2,\nicefrac{2d}{d-2}]$ for $d=3$.
\end{remark}
\bigskip
For the fluid mechanics, we use Dirichlet boundary conditions on $\partial\Omega$, thus there may be some inflow or outflow, and we additionally allow external body forces on the whole domain $\Omega$.
\begin{list}{\theAssCount}{\usecounter{AssCount}}\setcounter{AssCount}{\value{AssListCount}}
\item\label{a:Forces} Here, $\b f\in\b L^2(\Omega)$ is the applied body force and $\b g\in\b H^{\frac12}\left(\partial\Omega\right)$ is some given boundary function such that $\int_{\partial\Omega}\b g\cdot\b n\,\mathrm ds=0$,
\setcounter{AssListCount}{\value{AssCount}}
\end{list}
which are assumed to be given and fixed throughout this paper. A typical objective functional used in this context is the total potential power, which is given by
\begin{align}\label{e:TotalPotPower}
f\left(x,\b u,\mathrm{D}\b u\right):=\frac\mu2\left|\mathrm{D}\b u\right|^2-\b f(x)\cdot\b u.
\end{align}
In particular, we remark that this functional fulfills Assumption \ref{a:ObjectiveFctl}.
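To indicate why \eqref{e:TotalPotPower} is radially unbounded on $\b U$, one may argue formally, using only H\"older's and Poincar\'e's inequalities:
\begin{align*}
F(\b u)=\frac\mu2\left\|\mathrm{D}\b u\right\|_{\b L^2(\Omega)}^2-\int_\Omega\b f\cdot\b u\,\mathrm dx
\geq\frac\mu2\left\|\mathrm{D}\b u\right\|_{\b L^2(\Omega)}^2-\left\|\b f\right\|_{\b L^2(\Omega)}\left\|\b u\right\|_{\b L^2(\Omega)}.
\end{align*}
Writing $\b u=\hat{\b u}+\b v$ with some fixed $\hat{\b u}\in\b U$ and $\b v\in\b V$, Poincar\'e's inequality yields $\left\|\b u\right\|_{\b H^1(\Omega)}\leq C\left(1+\left\|\mathrm{D}\b u\right\|_{\b L^2(\Omega)}\right)$, so that $\left\|\b u_k\right\|_{\b H^1(\Omega)}\to\infty$ forces $\left\|\mathrm{D}\b u_k\right\|_{\b L^2(\Omega)}\to\infty$ and hence $F\left(\b u_k\right)\to+\infty$; the same estimate shows that $F|_{\b U}$ is bounded from below.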
To formulate the problem, we introduce a one-to-one correspondence between Caccioppoli sets and functions of finite perimeter by identifying $E\subset\Omega$ with $\varphi:=2\chi_E-1\in BV\left(\Omega,\left\{\pm1\right\}\right)$ and notice that for any $\varphi\in BV\left(\Omega,\left\{\pm1\right\}\right)$ the set $E^\varphi:=\left\{\varphi=1\right\}$ is the corresponding Caccioppoli set describing the fluid region. We write $P_\Omega(E)$ for the perimeter of $E\subseteq\Omega$ in $\Omega$. For a more detailed introduction to the theory of Caccioppoli sets and functions of bounded variation we refer for instance to \cite{evans_gariepy,giusti}. Altogether we arrive at the following optimization problem:
\begin{align}\label{e:ObjFctlSharp}
\min_{\left(\varphi,\b u\right)} J_0\left(\varphi,\b u\right) :=\int_\Omega f\left(x,\b u,\mathrm{D}\b u\right)\,\mathrm dx+\gamma c_0P_\Omega\left(E^\varphi\right)
\end{align}
subject to
\begin{align*}
\varphi\in\Phi_{ad}^0 :=\left\{\varphi\in BV\left(\Omega,\left\{\pm1\right\}\right)\mid \int_\Omega\varphi\,\mathrm dx=\beta\left|\Omega\right|, \b U^\varphi\neq\emptyset\right\}
\end{align*}
and
\begin{subequations}
\label{e:StatNSSharpStrong}
\begin{align}
-\mu\Delta\b u+\left(\b u\cdot\nabla\right)\b u+\nabla p&=\b f &&\text{in }E^\varphi,\label{e:StatNSSharpStrong1}\\
-\,\mathrm{div}\,\b u&=0&&\text{in }\Omega,\\
\b u&=\b0&&\text{in }\Omega\setminus E^\varphi,\\
\b u&=\b g&&\text{on }\partial\Omega.
\end{align}
\end{subequations}
We point out that the velocity of the fluid is defined not only on the fluid region $E^\varphi$, for some $\varphi\in\Phi_{ad}^0$, but on the whole of $\Omega$: in $E^\varphi$ it is determined by the stationary Navier--Stokes equations, and on the remainder we set it equal to zero. For an arbitrary function $\varphi\in BV(\Omega,\left\{\pm1\right\})$, the condition $\b u=\b 0$ a.e. in $\Omega\setminus E^\varphi$ and the non-homogeneous boundary data $\b u=\b g$ on $\partial\Omega$ may therefore be inconsistent. To exclude this case we impose the condition $\b U^\varphi\neq\emptyset$ on the admissible design functions in $\Phi_{ad}^0$. The state constraints \eqref{e:StatNSSharpStrong} have to be fulfilled in the following weak sense: find $\b u\in\b U^\varphi$ such that
\begin{align*}
\int_\Omega\mu\nabla\b u\cdot\nabla\b v+\left(\b u\cdot\nabla\right)\b u\cdot\b v\,\mathrm dx=\int_\Omega\b f\cdot\b v\,\mathrm dx\quad\forall\b v\in\b V^\varphi.
\end{align*}
Even though this shape and topology optimization problem admits a very large class of possible solutions, both analysis and numerics prefer a setting with more regularity. One common approach towards a more tractable problem formulation is a phase field formulation.
It is a well-known fact, see for instance \cite{modica}, that a multiple of the perimeter functional is the $L^1(\Omega)$-$\Gamma$-limit for $\epsilon\searrow0$ of the Ginzburg--Landau energy, which is defined by
\begin{align*}
\mathcal E_\epsilon\left(\varphi\right):=
\begin{cases}
\int_\Omega\frac{\epsilon}{2}\left|\nabla\varphi\right|^2+\frac1\epsilon\psi\left(\varphi\right)\,\mathrm dx, & \text{if }\varphi\in H^1(\Omega),\\
+\infty, & \text{otherwise.}
\end{cases}
\end{align*}
Here $\psi:\mathbb{R}\to\overline{\mathbb{R}}$ is a potential with two global minima, and in this work we focus on the double obstacle potential given by
\begin{align*}
\psi(\varphi):=
\begin{cases}
\psi_0\left(\varphi\right), & \text{if }\left|\varphi\right|\leq1,\\
+\infty,&\text{otherwise,}
\end{cases}\quad
\psi_0\left(\varphi\right):=\frac12\left(1-\varphi^2\right).
\end{align*}
Replacing the perimeter functional in the objective functional by the Ginzburg--Landau energy, we thus arrive at a so-called diffuse interface approximation, where the hypersurface between the fluid and the non-fluid region is replaced by an interfacial layer whose thickness is proportional to a small parameter $\epsilon>0$. The design variable $\varphi$ is then allowed to take values in $\left[-1,1\right]$ instead of only $\pm1$. To make sense of the state equations in this setting, we introduce an interpolation function $\alpha_\epsilon:\left[-1,1\right]\to \left[0,\overline\alpha_\epsilon\right]$ fulfilling the following assumptions:
\begin{list}{\theAssCount}{\usecounter{AssCount}}\setcounter{AssCount}{\value{AssListCount}}
\item \label{a:Alpha} Let $\alpha_\epsilon:\left[-1,1\right]\to\left[0,\overline\alpha_\epsilon\right]$ be a decreasing, surjective and twice continuously differentiable function for $\epsilon>0$. It is required that $\overline\alpha_\epsilon>0$ is chosen such that $\lim_{\epsilon\searrow0}\overline\alpha_\epsilon=+\infty$ and that $\alpha_\epsilon$ converges pointwise to some function $\alpha_0:[-1,1]\to[0,+\infty]$. Additionally, we impose $\alpha_\delta(x)\geq\alpha_\epsilon(x)$ if $\delta\leq\epsilon$ for all $x\in\left[-1,1\right]$, $\lim_{\epsilon\searrow0}\alpha_\epsilon(0)<\infty$ and a growth condition of the form $\overline\alpha_\epsilon=\hbox{o}\left(\epsilon^{-\frac23}\right)$.
\setcounter{AssListCount}{\value{AssCount}}
\end{list}
\begin{remark}\label{r:ConvergenceRateTwoDim}
We remark that for space dimension $d=2$ we can even choose $\overline\alpha_\epsilon=\hbox{o}\left(\epsilon^{-\kappa}\right)$ for any $\kappa\in(0,1)$.
\end{remark}
By adding the term $\alpha_\epsilon(\varphi)\b u$ to \eqref{e:StatNSSharpStrong1} we find that the state equations \eqref{e:StatNSSharpStrong} then ``interpolate'' between the steady-state Navier--Stokes equations in $\left\{\varphi=1\right\}$ and a Darcy flow through a porous medium with permeability $\overline\alpha_\epsilon^{-1}$ in $\left\{\varphi=-1\right\}$. Thus, simultaneously with introducing the diffuse interface approximation, we weaken the condition of non-permeability of the non-fluid region. This porous medium approach has been introduced for topology optimization in fluid flow by \cite{borrvall}.
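For instance, the scaling $\overline\alpha_\epsilon=\overline\alpha\,\epsilon^{-\frac12}$ with a fixed $\overline\alpha>0$, which we employ in the numerical examples of Section~\ref{ssec:num:alphaepsilon}, is admissible, since
\begin{align*}
\overline\alpha_\epsilon\,\epsilon^{\frac23}=\overline\alpha\,\epsilon^{\frac16}\xrightarrow{\epsilon\searrow0}0,
\quad\text{i.e.}\quad \overline\alpha_\epsilon=\hbox{o}\left(\epsilon^{-\frac23}\right),
\end{align*}
and for $d=2$ it also meets the weaker requirement of Remark~\ref{r:ConvergenceRateTwoDim} with any $\kappa\in\left(\frac12,1\right)$.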
To ensure that the velocity vanishes outside the fluid region in the limit $\epsilon\searrow0$, we moreover add a penalization term to the objective functional and finally arrive at the following phase field formulation of the problem:
\begin{equation}\label{e:ObjFctl}
\begin{split}
\min_{\left(\varphi,\b u\right)}J_\epsilon\left(\varphi,\b u\right)&:= \int_\Omega\frac12\alpha_\epsilon\left(\varphi\right)\left|\b u\right|^2\,\mathrm dx +\int_\Omega f\left(x,\b u,\mathrm{D}\b u\right)\,\mathrm dx\\
&\quad+\frac{\gamma\epsilon}{2}\int_\Omega\left|\nabla\varphi\right|^2\,\mathrm dx +\frac\gamma\epsilon\int_\Omega\psi\left(\varphi\right)\,\mathrm dx
\end{split}
\end{equation}
subject to
\begin{align}
\varphi\in\Phi_{ad} :=\left\{ \varphi\in H^1(\Omega) \mid\left|\varphi\right|\leq1\text{ a.e. in }\Omega, \int_\Omega\varphi\,\mathrm dx=\beta\left|\Omega\right| \right\}, \label{eq:ObjtFctl_PhiConstraint}
\end{align}
and
\begin{subequations}\label{e:StatNSStrong}
\begin{align}
\alpha_\epsilon(\varphi)\b u-\mu\Delta\b u+\left(\b u\cdot\nabla\right)\b u+\nabla p&=\b f &&\text{in }\Omega,\\
-\,\mathrm{div}\,\b u&=0&&\text{in }\Omega,\\
\b u&=\b g&&\text{on }\partial\Omega.
\end{align}
\end{subequations}
Considering the state equations \eqref{e:StatNSStrong}, we find the following solvability result:
\begin{lemma}\label{l:StateEquationsWellDefined}
For every $\varphi\in L^1(\Omega)$ with $\left|\varphi\right|\leq1$ a.e. in $\Omega$ there exists some $\b u\in\b U$ such that \eqref{e:StatNSStrong} is fulfilled in the following sense:
\begin{align}\label{e:StatNSWeak}
\int_\Omega\alpha_\epsilon\left(\varphi\right)\b u\cdot\b v +\mu\nabla\b u\cdot\nabla\b v +\left(\b u\cdot\nabla\right)\b u\cdot\b v\,\mathrm dx =\int_\Omega\b f\cdot\b v\,\mathrm dx\quad\forall\b v\in\b V.
\end{align}
Besides, if there exists a solution $\b u\in\b U$ of \eqref{e:StatNSWeak} such that
\begin{align}\label{e:SmallnessForUnique}
\left\|\nabla\b u\right\|_{\b L^2(\Omega)}<\frac{\mu}{K_\Omega}, \quad K_\Omega:=
\begin{cases}
\nicefrac23\sqrt2|\Omega|^{\frac23}, & \text{if }d=3,\\
0.5\sqrt{\left|\Omega\right|}, & \text{if }d=2,
\end{cases}
\end{align}
then this is the only solution of \eqref{e:StatNSWeak}.
\end{lemma}
\begin{proof}
The existence proof is based on the theory of pseudo-monotone operators, and the uniqueness statement follows similarly to classical results on the stationary Navier--Stokes equations, see for instance \cite{galdi,hecht}.
\end{proof}
\begin{remark}\label{r:PressureStateEquations}
Standard results yield that for a solution $\b u\in\b U$ of \eqref{e:StatNSWeak} there exists an associated pressure $p\in L^2(\Omega)$ such that \eqref{e:StatNSStrong} is fulfilled in a weak sense, see \cite{galdi}. But as we are not considering the pressure in the optimization problem, we drop those considerations in the following. For details on how to include the pressure in the objective functional in this setting we refer to \cite{hecht}.
\end{remark}
Using this result, one can show well-posedness of the optimal control problem in the phase field formulation stated above by the direct method in the calculus of variations.
\begin{theorem}
There exists at least one minimizer $(\varphi_\epsilon,\b u_\epsilon)$ of \eqref{e:ObjFctl}--\eqref{e:StatNSStrong}.
\end{theorem}
The proof is given in \cite{hecht}.
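As a concrete illustration of the smallness condition \eqref{e:SmallnessForUnique}, consider the unit square $\Omega=(0,1)^2$: there $K_\Omega=0.5\sqrt{\left|\Omega\right|}=0.5$, so the weak solution is unique whenever
\begin{align*}
\left\|\nabla\b u\right\|_{\b L^2(\Omega)}<\frac{\mu}{K_\Omega}=2\mu,
\end{align*}
which is guaranteed for small data $\b f$, $\b g$ or large viscosity $\mu$.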
\bigskip
To derive first order necessary optimality conditions for a solution $(\varphi_\epsilon,\b u_\epsilon)$ of \eqref{e:ObjFctl}--\eqref{e:StatNSStrong} we introduce the Lagrangian $\mathcal L_\epsilon:\Phi_{ad}\times\b U\times\b V\to\mathbb{R}$ by
\begin{align*}
\mathcal L_\epsilon\left(\varphi,\b u,\b q\right):= J_\epsilon(\varphi,\b u) -\int_\Omega\alpha_\epsilon\left(\varphi\right)\b u\cdot\b q+\mu\nabla\b u\cdot\nabla\b q+\left(\b u\cdot\nabla\right)\b u\cdot\b q-\b f\cdot\b q\,\mathrm dx.
\end{align*}
The variational inequality is formally derived from
\begin{equation}\label{e:LagrangianVariationalInequality}
\mathrm D_\varphi\mathcal L_\epsilon\left(\varphi_\epsilon,\b u_\epsilon,\b q_\epsilon\right)\left(\varphi-\varphi_\epsilon\right)\geq0\quad\forall\varphi\in\Phi_{ad},
\end{equation}
and the adjoint equation can be deduced from
\begin{align*}
\mathrm D_{\b u}\mathcal L_\epsilon\left(\varphi_\epsilon,\b u_\epsilon,\b q_\epsilon\right)\left(\b v\right)=0\quad\forall\b v\in\b V.
\end{align*}
Even though these calculations are only formal, we obtain from them a first order optimality system which can be proved to be fulfilled for a minimizer of the optimal control problem stated above, see \cite{hecht}:
\begin{theorem}\label{t:OptimalitySystem}
Assume $\left(\varphi_\epsilon,\b u_\epsilon\right)\in\Phi_{ad}\times\b U$ is a minimizer of \eqref{e:ObjFctl}--\eqref{e:StatNSStrong} such that $\left\|\nabla\b u_\epsilon\right\|_{\b L^2(\Omega)}<\nicefrac{\mu}{K_\Omega}$. Then the following variational inequality is fulfilled:
\begin{equation}\label{e:VariationalInequality}
\begin{split}
\left(\frac12\alpha'_\epsilon\left(\varphi_\epsilon\right)\left|\b u_\epsilon\right|^2 +\frac\gamma\epsilon\psi'_0\left(\varphi_\epsilon\right) -\alpha'_\epsilon\left(\varphi_\epsilon\right)\b u_\epsilon\cdot\b q_\epsilon +\lambda_\epsilon, \varphi-\varphi_\epsilon\right)_{L^2(\Omega)}\\
+\left(\gamma\epsilon\nabla\varphi_\epsilon,\nabla\left(\varphi-\varphi_\epsilon\right)\right)_{\b L^2(\Omega)}\geq0\quad\forall\varphi\in\overline{\Phi}_{ad},
\end{split}
\end{equation}
with
\begin{align*}
\overline{\Phi}_{ad} :=\left\{\varphi\in H^1(\Omega)\mid\left|\varphi\right|\leq1\,\text{ a.e. in }\Omega\right\},
\end{align*}
where $\b q_\epsilon\in\b V$ is the unique weak solution to the following adjoint system:
\begin{subequations}\label{e:AdjointSystem}
\begin{align}
\alpha_\epsilon\left(\varphi_\epsilon\right)\b q_\epsilon -\mu\Delta\b q_\epsilon +\left(\nabla\b u_\epsilon\right)^T\b q_\epsilon -\left(\b u_\epsilon\cdot\nabla\right)\b q_\epsilon +\nabla\pi_\epsilon&=\alpha_\epsilon\left(\varphi_\epsilon\right)\b u_\epsilon\nonumber\\
&\hspace{-4cm}+\mathrm{D}_2f\left(\cdot,\b u_\epsilon,\mathrm{D}\b u_\epsilon\right)-\,\mathrm{div}\, \mathrm{D}_3f\left(\cdot,\b u_\epsilon,\mathrm{D}\b u_\epsilon\right)&&\text{in }\Omega,\\
-\,\mathrm{div}\,\b q_\epsilon&=0&&\text{in }\Omega,\\
\b q_\epsilon&=\b0&&\text{on }\partial\Omega.
\end{align}
\end{subequations}
Here we denote by $\mathrm{D}_if\left(\cdot,\b u_\epsilon,\mathrm{D}\b u_\epsilon\right)$ with $i=2$ and $i=3$ the differential of $f:\Omega\times\mathbb{R}^d\times\mathbb{R}^{d\times d}\to\mathbb{R}$ with respect to the second and third component, respectively. Besides, $\b u_\epsilon$ solves the state equations \eqref{e:StatNSStrong} corresponding to $\varphi_\epsilon$ in the weak sense, and $\lambda_\epsilon\in\mathbb{R}$ is a Lagrange multiplier for the integral constraint.
Additionally, $\pi_\epsilon\in L^2(\Omega)$ can, as in Remark~\ref{r:PressureStateEquations}, be obtained as the pressure associated with the adjoint system.
\end{theorem}
Under certain assumptions on the objective functional it can be verified that a minimizer $(\varphi_\epsilon,\b u_\epsilon)$ of \eqref{e:ObjFctl}--\eqref{e:StatNSStrong} fulfills $\left\|\nabla\b u_\epsilon\right\|_{\b L^2(\Omega)}<\nicefrac{\mu}{K_\Omega}$. By Lemma~\ref{l:StateEquationsWellDefined} this implies that $\b u_\epsilon$ is the only solution of \eqref{e:StatNSStrong} corresponding to $\varphi_\epsilon$, see \cite{hecht}. In particular, for minimizing the total potential power, see \eqref{e:TotalPotPower}, this condition is equivalent to the classical assumption of ``smallness of data or high viscosity'' found in the literature. For details and the proof of Theorem \ref{t:OptimalitySystem} we refer the reader to \cite{hecht}. Hence it is not too restrictive to assume from now on that in a neighborhood of the minimizer $\varphi_\epsilon$ the state equations \eqref{e:StatNSStrong} are uniquely solvable, so that we can introduce the reduced cost functional $j_\epsilon(\varphi):=J_\epsilon(\varphi,\b u)$, where $\b u$ is the solution to \eqref{e:StatNSStrong} corresponding to $\varphi$. The optimization problem \eqref{e:ObjFctl}--\eqref{e:StatNSStrong} is then equivalent to
\begin{align}\tag{$\hat P_\infty$}
\min_{\varphi\in \Phi_{ad}} j_\epsilon(\varphi).
\end{align}
Following \cite{HintermuellerHinzeTber__AFEM_for_CH}, we consider a Moreau--Yosida relaxation of ($\hat P_\infty$), in which the pointwise constraints $|\varphi| \leq 1$ a.e. in $\Omega$ are replaced (relaxed) by an additional quadratic penalization term in the cost functional. The relaxed optimization problem then reads
\begin{align}\tag{$\hat P_s$}
\min_{\varphi\in H^1(\Omega), \int_\Omega\varphi\,\mathrm dx=\beta|\Omega|} j_\epsilon^s(\varphi),
\end{align}
where
\begin{align}\label{e:ObjFctlRelaxed}
j_\epsilon^s(\varphi) :=j_\epsilon(\varphi)+\frac s2\int_\Omega\left|\max\left(0,\varphi-1\right)\right|^2\,\mathrm dx +\frac s2\int_\Omega\left|\min\left(0,\varphi+1\right)\right|^2\,\mathrm dx.
\end{align}
Here, $s \gg 1$ plays the role of the penalization parameter. The associated Lagrangian $\mathcal L_\epsilon^s$ then reads correspondingly
\begin{equation}
\begin{split}
\mathcal L_\epsilon^s\left(\varphi,\b u,\b q\right)&:= J_\epsilon(\varphi,\b u)+\frac s2\int_\Omega\left|\max\left(0,\varphi-1\right)\right|^2\,\mathrm dx+\frac s2\int_\Omega\left|\min\left(0,\varphi+1\right)\right|^2\,\mathrm dx\\
&\quad-\int_\Omega\alpha_\epsilon\left(\varphi\right)\b u\cdot\b q+\mu\nabla\b u\cdot\nabla\b q+\left(\b u\cdot\nabla\right)\b u\cdot\b q-\b f\cdot\b q\,\mathrm dx.
\end{split}
\end{equation}
An analysis similar to the one above yields the gradient equation
\begin{equation}\label{e:VariationalEqualityS}
\begin{split}
&\mathrm D_\varphi\mathcal L_\epsilon^s\left(\varphi_\epsilon,\b u_\epsilon,\b q_\epsilon\right)\varphi =\left(\frac12\alpha'_\epsilon\left(\varphi_\epsilon\right)\left|\b u_\epsilon\right|^2+\frac\gamma\epsilon\psi'_0\left(\varphi_\epsilon\right)-\alpha'_\epsilon\left(\varphi_\epsilon\right)\b u_\epsilon\cdot\b q_\epsilon+\lambda_s(\varphi_\epsilon),\varphi\right)_{L^2(\Omega)}\\
&\quad+\left(\gamma\epsilon\nabla\varphi_\epsilon,\nabla\varphi\right)_{\b L^2(\Omega)}=0,
\end{split}
\end{equation}
which has to hold for all $\varphi\in H^1(\Omega)$ with $\int_\Omega\varphi\,\mathrm dx=0$.
Here we use $\lambda_s(\varphi_\epsilon)=\lambda_s^+(\varphi_\epsilon)+\lambda_s^-(\varphi_\epsilon)$ with $\lambda_s^+(\varphi_\epsilon):=s\max\left(0,\varphi_\epsilon-1\right)$ and $\lambda_s^-(\varphi_\epsilon):=s\min\left(0,\varphi_\epsilon+1\right)$, and $\b q_\epsilon\in\b V$ is the adjoint state given as the weak solution of \eqref{e:AdjointSystem}. The functions $\lambda_s^+(\varphi_\epsilon)$ and $\lambda_s^-(\varphi_\epsilon)$ can also be interpreted as approximations of the Lagrange multipliers for the pointwise constraints $\varphi\leq1$ a.e. in $\Omega$ and $\varphi\geq-1$ a.e. in $\Omega$, respectively. It can be shown that the sequence of minimizers $\left(\varphi_\epsilon,\b u_\epsilon\right)_{\epsilon>0}$ of \eqref{e:ObjFctl}--\eqref{e:StatNSStrong} has a subsequence that converges in $L^1(\Omega)\times\b H^1(\Omega)$ as $\epsilon\searrow 0$. If the sequence $\left(\varphi_\epsilon\right)_{\epsilon>0}$ converges with order $\mathcal O\left(\epsilon\right)$, one obtains that the limit element actually is a minimizer of \eqref{e:ObjFctlSharp}--\eqref{e:StatNSSharpStrong}. In these particular cases, one can additionally prove that the first order optimality conditions given by Theorem \ref{t:OptimalitySystem} are an approximation of the classical shape derivatives for the shape optimization problem \eqref{e:ObjFctlSharp}--\eqref{e:StatNSSharpStrong}. For details we refer the reader to \cite{hecht}.
\begin{remark}
The same analysis and considerations can be carried out for a Stokes flow. For the typical example of minimizing the total potential power \eqref{e:TotalPotPower} it can then even be shown that the reduced objective functional corresponding to the phase field formulation $\Gamma$-converges in $L^1(\Omega)$ to the reduced objective functional of the sharp interface formulation. Moreover, the first order optimality conditions are much simpler, since no adjoint system is needed any more. For details we refer to \cite{hecht}.
\end{remark}
\section{Numerical solution techniques}
To solve the phase field problem \eqref{e:ObjFctl}--\eqref{e:StatNSStrong} numerically, we use a steepest descent approach. For this purpose, we assume as above that in a neighborhood of the minimizer $\varphi_\epsilon$ the state equations \eqref{e:StatNSStrong} are uniquely solvable, and hence the reduced cost functional $j_\epsilon(\varphi):=J_\epsilon(\varphi,\b u)$, with $\b u$ the solution to \eqref{e:StatNSStrong} corresponding to $\varphi$, is well-defined. In addition, we introduce an artificial time variable $t$. Our aim consists in finding a stationary point in $\Phi_{ad}$ of the following gradient flow:
\begin{align}\label{e:Gradientflow}
\left\langle\partial_t\varphi,\zeta\right\rangle =-\left\langle\mathrm{grad}\, j_\epsilon^s(\varphi),\zeta\right\rangle =-\mathrm Dj_\epsilon^s(\varphi)(\zeta)\quad\forall\zeta\in H^1(\Omega), \int_\Omega\zeta\,\mathrm dx=0,
\end{align}
with some inner product $\left\langle\cdot,\cdot\right\rangle$, where $j_\epsilon^s$ is the Moreau--Yosida relaxed cost functional defined in \eqref{e:ObjFctlRelaxed}. This flow decreases the cost functional $j_\epsilon^s$ in time. A stationary point $\varphi_\epsilon\in\Phi_{ad}$ of this flow fulfills the necessary optimality condition \eqref{e:VariationalEqualityS}. Obviously, the resulting equation depends on the choice of the inner product.
Here, we choose the $H^{-1}$-inner product defined by
\begin{align*}
\left(v_1,v_2\right)_{H^{-1}(\Omega)}:= \int_\Omega\nabla\left(-\Delta\right)^{-1}v_1\cdot\nabla\left(-\Delta\right)^{-1}v_2\,\mathrm dx,
\end{align*}
where $y=(-\Delta)^{-1}v$ for $v\in \left(H^1(\Omega)\right)^\star$ with $\langle v,1\rangle=0$ denotes the weak solution of $-\Delta y=v$ in $\Omega$, $\partial_\nu y=0$ on $\partial\Omega$. The gradient flow \eqref{e:Gradientflow} with this particular choice $\left\langle\cdot,\cdot\right\rangle=\left(\cdot,\cdot\right)_{H^{-1}(\Omega)}$ reads as follows:
\begin{align*}
\partial_t\varphi&=\Delta w&&\quad\text{in }\Omega,\\
\left(-w,\xi\right)_{L^2(\Omega)}&=-\mathrm Dj_\epsilon^s(\varphi)(\xi) &&\quad\forall\xi\in H^1(\Omega), \int_\Omega\xi\,\mathrm dx=0,
\end{align*}
together with homogeneous Neumann boundary conditions on $\partial\Omega$ for $\varphi$ and $w$. The resulting problem can be considered as a generalised Cahn--Hilliard system. It follows from direct calculations that this flow preserves the mass, i.e. $\int_\Omega\varphi(t,x)\,\mathrm dx=\int_\Omega\varphi(0,x)\,\mathrm dx$ for all $t$. In particular, no Lagrange multiplier for the integral constraint is needed any more. After fixing some initial condition $\varphi_0\in H^1(\Omega)$ such that $\left|\varphi_0\right|\leq1$ a.e. and $\int_\Omega\varphi_0\,\mathrm dx=\beta\left|\Omega\right|$, and some final time $T>0$, this results in the following problem:\\
\noindent\parbox{13cm}{
\noindent\underline{\textit{Cahn--Hilliard System:}}
\medskip
\noindent Find sufficiently regular $\left(\varphi,w,\b u\right)$ such that
\begin{subequations}\label{eq:MY:CahnHilliard}
\begin{align}
\partial_t\varphi&=\Delta w&&\text{in }\Omega\times(0,T),\label{eq:MY:CahnHilliard:first}\\
-\gamma\epsilon\Delta\varphi +\lambda_s(\varphi) + \frac\gamma\epsilon\psi'_0(\varphi) + \alpha'_\epsilon(\varphi) \left( \frac12 \left|\b u\right|^2-\b u\cdot\b q \right) &=w &&\text{in }\Omega\times(0,T),\label{eq:MY:CahnHilliard:second} \\
\varphi(0)&=\varphi_0&&\text{in }\Omega,\\
\partial_\nu\varphi=0,\, \partial_\nu w&=0 &&\text{on }\partial\Omega\times\left(0,T\right),
\end{align}
\end{subequations}
where $\b u(t)$ fulfills the state equations \eqref{e:StatNSStrong} corresponding to $\varphi(t)$, and $\b q(t)$ is the adjoint variable defined by \eqref{e:AdjointSystem}.}
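In each step of the discretized flow, the state equation, the adjoint equation and one Cahn--Hilliard step are solved sequentially, as detailed in the following subsection. The overall loop might be sketched in Python as follows, where \texttt{solve\_state}, \texttt{solve\_adjoint}, \texttt{cahn\_hilliard\_step} and \texttt{grad\_w\_norm} are placeholders for the discrete solvers and norms introduced below, not a concrete implementation.
\begin{verbatim}
import numpy as np

def gradient_flow(phi0, solve_state, solve_adjoint, cahn_hilliard_step,
                  grad_w_norm, tol_abs=1e-6, tol_rel=1e-12, max_steps=10000):
    """Sequential gradient-flow loop for the phase field problem (sketch)."""
    phi = phi0
    w_ref = None
    for k in range(max_steps):
        u = solve_state(phi)         # Navier-Stokes with alpha_eps(phi)*u term
        q = solve_adjoint(phi, u)    # linearized (Oseen-type) adjoint system
        phi, w = cahn_hilliard_step(phi, u, q)
        if w_ref is None:
            w_ref = np.linalg.norm(w)  # reference value ||w^0|| from first step
        if grad_w_norm(w) <= tol_abs + tol_rel * w_ref:
            break                      # stationary point of the gradient flow
    return phi
\end{verbatim}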
\medskip
\subsection{Numerical implementation}
For a numerical realization of the gradient flow method for finding (locally) optimal topologies we discretize the systems \eqref{e:StatNSStrong}, \eqref{e:AdjointSystem} and \eqref{eq:MY:CahnHilliard} in time and space. For this let $0 = t_0< t_1 <\ldots<t_k<t_{k+1}<\ldots$ denote a time grid with step sizes $\tau_k = t_{k}-t_{k-1}$. For ease of presentation we use a fixed step size and thus set $\tau_k \equiv \tau$, but we note that in our numerical implementation $\tau$ is adapted to the gradient flow in direction $\nabla w$, see Section \ref{ssec:num:TimeStepLength}. Next, a discretization in space using the finite element method is performed. For this let $\mathcal{T}^{k}$ denote a conforming triangulation of $\Omega$ with closed simplices $T\subset \overline \Omega$. For simplicity we assume that $\overline \Omega$ is exactly represented by $\mathcal{T}^{k}$, i.e. $\overline \Omega = \bigcup_{T\in \mathcal{T}^k}T$. We denote the set of faces of $\mathcal{T}^{k}$ by $\mathcal{E}^{k}$ and the set of nodes by $\mathcal{N}^{k}$. For each simplex $T\in \mathcal{T}^k$ we denote its diameter by $h_T$, and for each face $E\in\mathcal{E}^k$ its diameter by $h_E$. We introduce the finite element spaces
\begin{align*}
\mathcal{V}^1(\mathcal{T}^{k}) &= \{v\in C(\overline\Omega)\,|\,v|_T \in P_1(T), \,\forall T\in \mathcal{T}^{k}\},\\
\b{\mathcal{V}}^2_{\b g_h}(\mathcal{T}^{k}) &= \{v\in C(\overline\Omega)^d\,|\,v|_T \in P_2(T)^d, \,\forall T\in \mathcal{T}^{k},\, v|_{\partial\Omega} = \b g_h\},
\end{align*}
where $P_k(T)$ denotes the set of all polynomials up to degree $k$ on the simplex $T$. The boundary data $\b v|_{\partial\Omega} = \b g$ is incorporated through a suitable approximation $\b g_h$ of $\b g$ on the finite element mesh. Now at time instance $t_k$ we denote by $\b u_h \in \b{\mathcal{V}}^2_{\b g_h}(\mathcal{T}^{k+1})$ the fully discrete variant of $\b u$ and by $\b q_h\in\b{\mathcal{V}}^2_0(\mathcal{T}^{k+1})$ that of $\b q$. Accordingly, we proceed with the discrete variants $\varphi_h, w_h,p_h,\pi_h \in \mathcal{V}^1(\mathcal{T}^{k})$ of $\varphi,w,p$, and $\pi$, where $\int_\Omega p_h\,\mathrm dx = \int_\Omega \pi_h\,\mathrm dx = 0$ is required. Let $\b q^k$ and $\varphi^k$ denote the adjoint velocity and the phase field variable from the time step $t_k$, respectively. At time instance $t_{k+1}$ we consider
\begin{subequations}\label{eq:FE:NavierStokes}
\begin{align}
\alpha_\epsilon(\varphi^k)\b u_h - \mu \Delta \b u_h + \left(\b u_h \cdot \nabla\right) \b u_h + \nabla p_h &= \b f,\label{eq:FE:NavierStokes1}\\
\,\mathrm{div}\, \b u_h &= 0,\label{eq:FE:NavierStokes2}
\end{align}
\end{subequations}
\begin{subequations}\label{eq:FE:Adjoint}
\begin{align}
\alpha_\epsilon(\varphi^k)\b q_h- \mu \Delta \b q_h - \left(\b u_h \cdot \nabla\right) \b q_h + \nabla \pi_h &= \alpha_\epsilon(\varphi^k)\b u_h + \mathrm{D}_2f(\cdot,\b u_h,\mathrm{D}\b u_h)\nonumber\\
&\quad-\,\mathrm{div}\,\mathrm{D}_3f\left(\cdot,\b u_h,\mathrm{D}\b u_h\right) - \left(\nabla \b u_h\right)^T\b q^k,\\
\,\mathrm{div}\, \b q_h &= 0,
\end{align}
\end{subequations}
\begin{subequations}\label{eq:FE:CahnHilliard}
\begin{align}
\tau^{-1}(\varphi_h -\varphi^k) - \Delta w_h &= 0,\label{eq:FE:CahnHilliard:first}\\
-\gamma\epsilon \Delta \varphi_h + \lambda_s(\varphi_h) + \frac{\gamma}{\epsilon}\psi_0'(\varphi^k) +\alpha_\epsilon'(\varphi_h)\left(\frac{1}{2}|\b u_h|^2 - \b u_h\cdot \b q_h\right) &= w_h,\label{eq:FE:CahnHilliard:second}
\end{align}
\end{subequations}
as discrete counterparts to \eqref{e:StatNSStrong}, \eqref{e:AdjointSystem} and \eqref{eq:MY:CahnHilliard}, respectively. The weak form of \eqref{eq:FE:CahnHilliard}, using $\psi_0'(\varphi^k) = -\varphi^k$, reads
\begin{subequations}\label{eq:FE:CahnHilliard_weak}
\begin{align}
F^1((\varphi_h,w_h),v) &=\tau^{-1}(\varphi_h -\varphi^k,v)_{L^2(\Omega)} + \left(\nabla w_h,\nabla v\right)_{\b L^2(\Omega)} =0, &\hspace{-1cm} \forall v\in \mathcal{V}^1(\mathcal{T}^{k+1}), \label{eq:FE:CahnHilliard_weak:first}\\
F^2((\varphi_h,w_h),v) &= \gamma\epsilon (\nabla \varphi_h,\nabla v)_{\b L^2(\Omega)} + (\lambda_s(\varphi_h),v)_{L^2(\Omega)} - \frac{\gamma}{\epsilon}(\varphi^k,v)_{L^2(\Omega)}\nonumber\\
&\hspace{-2cm}+\left(\alpha_\epsilon'(\varphi_h)\left(\frac{1}{2}|\b u_h|^2 - \b u_h\cdot \b q_h\right),v\right)_{L^2(\Omega)} -(w_h,v)_{L^2(\Omega)} = 0, & \hspace{-1cm}\forall v \in \mathcal{V}^1(\mathcal{T}^{k+1}). \label{eq:FE:CahnHilliard_weak:second}
\end{align}
\end{subequations}
The time discretization is chosen to obtain a sequential coupling of the three equations of interest. Namely, to obtain the phase field at time instance $t_{k+1}$ we first solve \eqref{eq:FE:NavierStokes} for $\b u_h$ using the phase field $\varphi^k$ from the previous time step. With $\b u_h$ and $\varphi^k$ at hand we then solve \eqref{eq:FE:Adjoint} to obtain the adjoint velocity $\b q_h$, which together with $\b u_h$ is used to obtain the new phase field $\varphi^{k+1}$ from \eqref{eq:FE:CahnHilliard}.
\begin{remark}
It follows from the structure of \eqref{eq:FE:NavierStokes}--\eqref{eq:FE:CahnHilliard} that $\varphi_h$ and $\b u_h,\b q_h$ could be discretized on different spatial grids. In the numerical part we use, for simplicity, one grid for all variables involved.
\end{remark}
To justify the discretization \eqref{eq:FE:NavierStokes}--\eqref{eq:FE:CahnHilliard} we state the following assumptions.
\begin{list}{\theAssCount}{\usecounter{AssCount}}\setcounter{AssCount}{\value{AssListCount}}
\item \label{a:alphaBounded} The interpolation function $\alpha_\epsilon:[-1,1] \to [0,\overline {\alpha_\epsilon}]$ is extended to $\tilde\alpha_\epsilon:\mathbb{R}\to \mathbb{R}$ fulfilling Assumption \ref{a:Alpha}, so that there exists $0\leq\delta<\infty$ such that $\tilde\alpha_\epsilon(\varphi)\geq -\delta$ for all $\varphi\in\mathbb{R}$, with $\delta$ sufficiently small. For convenience we do not distinguish $\alpha_\epsilon$ and $\tilde\alpha_\epsilon$ in the following.
\setcounter{AssListCount}{\value{AssCount}}
\end{list}
\begin{list}{\theAssCount}{\usecounter{AssCount}}\setcounter{AssCount}{\value{AssListCount}}
\item \label{a:uumuq} For given $\varphi^k\in\mathcal{V}^1(\mathcal{T}^k)$ let $\b u_h$ denote the solution to \eqref{eq:FE:NavierStokes} and $\b q_h$ the corresponding solution to \eqref{eq:FE:Adjoint}. Then there holds
\begin{align*}
\frac12|\b u_h|^2 - \b u_h\cdot \b q_h \geq 0.
\end{align*}
\setcounter{AssListCount}{\value{AssCount}}
\end{list}
\begin{list}{\theAssCount}{\usecounter{AssCount}}\setcounter{AssCount}{\value{AssListCount}}
\item \label{a:alphaConvex} In addition to Assumption \ref{a:Alpha}, we assume that $\alpha_\epsilon$ is convex.
\setcounter{AssListCount}{\value{AssCount}}
\end{list}
\begin{remark}
Assumption \ref{a:alphaBounded} is required to ensure existence of unique solutions to \eqref{eq:FE:NavierStokes} and \eqref{eq:FE:Adjoint} if $\delta$ is sufficiently small. Assumption \ref{a:uumuq} is fulfilled in our numerics for small Reynolds numbers but cannot be justified analytically. This assumption might be dropped if $\alpha_\epsilon'$ is discretized explicitly in time in \eqref{eq:FE:CahnHilliard:second}. However, due to the large values that $\alpha_\epsilon'$ takes, we expect a less robust behaviour of the numerical solution process if we discretize $\alpha_\epsilon'$ explicitly in time. Using Assumption \ref{a:alphaBounded} and Assumption \ref{a:uumuq}, the existence of a unique solution to \eqref{eq:FE:CahnHilliard} follows from \cite{HintermuellerHinzeTber__AFEM_for_CH}. For a general $\alpha_\epsilon$ one can use a splitting $\alpha_\epsilon = \alpha_\epsilon^+ + \alpha_\epsilon^-$, where $\alpha_\epsilon^+$ denotes the convex part of $\alpha_\epsilon$ and $\alpha_\epsilon^-$ denotes the concave part.
Then $\alpha_\epsilon^+$ is discretized implicitly in time as in \eqref{eq:FE:CahnHilliard:second}, and $\alpha_\epsilon^-$ is discretized explicitly in time to obtain a stable discretization, see e.g. \cite{eyre_CH_semi_implicite,GarckeHinzeKahle_CHNS_AGG_linearStableTimeDisc}.
\end{remark}
The system \eqref{eq:FE:NavierStokes} is solved by an Oseen iteration, where at step $j+1$ the transport $\b u_h^j$ in the nonlinear term $(\b u_h^j \cdot \nabla)\b u_h^{j+1}$ is kept fixed and the resulting linear Oseen equation is solved for $(\b u_h^{j+1},p_h^{j+1})$. The existence of solutions to the Oseen equations for solving \eqref{eq:FE:NavierStokes} and to the Oseen equation \eqref{eq:FE:Adjoint} is obtained from \cite[Th. II 1.1]{GiraultRaviart_FEM_for_NavierStokes} using Assumption \ref{a:alphaBounded}. In \eqref{eq:FE:Adjoint} we use the adjoint variable from the old time instance for discretizing $\left(\nabla \b u\right)^T \b q$ in time. In this way \eqref{eq:FE:Adjoint} yields a discretized Oseen equation for which efficient preconditioning techniques are available. As mentioned above, the nonlinearity in system \eqref{eq:FE:NavierStokes} is resolved by an Oseen fixed-point iteration. The resulting linear systems are solved by a preconditioned GMRES iteration, see \cite{Saad_gmres}, where the restart parameter is chosen depending on $\mu$ and lies between 10 and 40 iterations. The employed preconditioner is of upper triangular type, see e.g. \cite{Benzi_numericalSaddlePoint}, including the $F_p$ preconditioner from \cite{kayLoghinWelford_FpPreconditioner}. The block arising from the momentum equation \eqref{eq:FE:NavierStokes1} is inverted using UMFPACK \cite{UMFPACK}. Since \eqref{eq:FE:Adjoint} is an Oseen equation, the same procedure is used for solving for $\b q_h$. The gradient equation \eqref{eq:FE:CahnHilliard} is solved by Newton's method, see \cite{HintermuellerHinzeTber__AFEM_for_CH} for details in the case of the pure Cahn--Hilliard equation. For applying Newton's method to \eqref{eq:FE:CahnHilliard}, Assumption \ref{a:uumuq} turns out to be numerically essential. The linear systems appearing in Newton's method are solved directly using UMFPACK \cite{UMFPACK}. Here we also refer to \cite{BoschStollBenner_fastSolutionCH} concerning iterative solvers and preconditioners for the solution of the Cahn--Hilliard equation with Moreau--Yosida relaxation. The simulation of the gradient flow is stopped as soon as $\|\nabla w_h\|_{L^2(\Omega)}\leq tol_{abs} + tol_{rel}\|w^0\|_{L^2(\Omega)}$ holds. Typically we use $tol_{abs} = 10^{-6}$ and $tol_{rel}=10^{-12}$.
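A generic sketch of the Oseen fixed-point iteration, with \texttt{assemble\_oseen} and \texttt{solve\_linear} as placeholders for the finite element assembly and for the preconditioned GMRES solver described above, might read:
\begin{verbatim}
import numpy as np

def oseen_iteration(u0, assemble_oseen, solve_linear,
                    tol=1e-10, max_iter=50):
    """Oseen fixed-point iteration for the discrete Navier-Stokes
    system (sketch): at step j+1 the transport u^j in the term
    (u^j . grad) u^{j+1} is frozen and the resulting linear Oseen
    system is solved for (u^{j+1}, p^{j+1})."""
    u, p = u0, None
    for _ in range(max_iter):
        A, b = assemble_oseen(u)       # freeze transport at current iterate
        u_new, p = solve_linear(A, b)  # e.g. preconditioned GMRES
        if np.linalg.norm(u_new - u) <= tol * max(1.0, np.linalg.norm(u)):
            u = u_new
            break
        u = u_new
    return u, p
\end{verbatim}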
\subsubsection{The adaptive concept}\label{ssec:adaptConcept}
For resolving the interface which separates the fluid from the porous material, we adapt the adaptive concept provided in \cite{HintermuellerHinzeKahle_AFEM_for_CHNS,HintermuellerHinzeTber__AFEM_for_CH} to the present situation. We base the concept only upon the gradient flow structure, i.e. the Cahn--Hilliard equation, and derive a posteriori error estimates up to higher order terms for the approximation of $\nabla\varphi$ and $\nabla w$.
\medskip
\noindent We define the following errors and residuals:
\begin{align*}
e_\varphi &= \varphi_h-\varphi, & e_w &= w_h-w,\\
r_h^{(1)} &= \varphi_h-\varphi^k, & r_h^{(2)} &= \alpha_\epsilon^\prime(\varphi_h) \left(\frac{1}{2}|\b u_h|^2-\b u_h\cdot\b q_h\right) +\lambda_s(\varphi_h) - \frac{\gamma}{\epsilon}\varphi^k-w_h,\\
\eta_{T_E}^{(1)} &= \sum_{E\subset T}h_E^{1/2}\|\left[\nabla w_h\right]_E\|_{\b L^2(E)}, & \eta_{T_E}^{(2)} &= \sum_{E\subset T}h_E^{1/2}\|\left[\nabla \varphi_h\right]_E\|_{\b L^2(E)},\\
\eta_N^{(1)} &= h_N^2\|r_h^{(1)}-R_N^{(1)}\|_{L^2(\omega_N)}^2, & \eta_N^{(2)} &= h_N^2\|r_h^{(2)}-R_N^{(2)}\|_{L^2(\omega_N)}^2.
\end{align*}
The values $\eta_N^{(i)},\, i=1,2$, are node-wise error contributions, while $\eta_{T_E}^{(i)},\, i=1,2$, are edge-wise error contributions, where for each triangle $T$ the contributions over all edges of $T$ are summed up. For a node $N \in \mathcal{N}^{k+1}$ we denote by $\omega_N$ the support of the piecewise linear basis function located at $N$ and set $h_N := \mbox{diam}(\omega_N)$. The values $R_N^{(i)} \in \mathbb{R},\, i=1,2$, can be chosen arbitrarily; later they represent appropriate mean values. By $[\cdot]_E$ we denote the jump across the face $E$ in the normal direction $\nu_E$, pointing from the simplex with smaller global number to the simplex with larger global number; for $E\subset \partial \Omega$, $\nu_E$ denotes the outer normal of $\Omega$. To obtain a residual based error estimator we follow the construction in \cite[Sec. 7.1]{HintermuellerHinzeTber__AFEM_for_CH}. We further use \cite[Cor. 3.1]{Carstensen_QuasiInterpolation} to obtain lower bounds for the terms $\eta_N^{(1)}$ and $\eta_N^{(2)}$. For the convenience of the reader we state \cite[Cor. 3.1]{Carstensen_QuasiInterpolation} here.
\begin{theorem}[{\cite[Cor. 3.1]{Carstensen_QuasiInterpolation}}] \label{cor:CarstensenInterpolation}
There exists a constant $C>0$ depending on the domain $\Omega$ and on the regularity of the triangulation $\mathcal T$ such that
\begin{align*}
\int_\Omega R(u-\mathcal I u)\,\mathrm dx + &\int_\mathcal{E}J(u-\mathcal I u)\,\mathrm ds\\
&\leq C\|\nabla u\|_{\b L^p(\Omega)} \left( \sum_{N\in \mathcal N}h_N^q\|R-R_N\|_{L^p(\omega_N)}^q + \sum_{T\in \mathcal T}h_T\|J\|_{L^q(\mathcal{E}\cap \partial T)}^q \right)^{1/q}
\end{align*}
holds for all $J\in L^q(\mathcal E)$, $R\in L^q(\Omega)$, $u\in W^{1,p}(\Omega)$, and arbitrary $R_N\in \mathbb R$ for $N\in \mathcal N$, where $1<p,q<\infty$ satisfy $\frac1p+\frac1q=1$.
\end{theorem}
Here $\mathcal I:L^1(\Omega) \to \mathcal{V}^{\mathcal T}$ denotes a modification of the Cl\'ement interpolation operator proposed in \cite{Carstensen_QuasiInterpolation,CarstensenVerfuerth_EdgeResidualDominate}. In \cite{CarstensenVerfuerth_EdgeResidualDominate} it is shown that in general the error contributions arising from the jumps of the gradients of the discrete objects dominate the error contributions arising from the triangle-wise residuals. In our situation it is therefore sufficient to use the error indicators $\eta_{T_E}^{(i)}$, $i=1,2$, in an adaptation scheme to obtain well resolved meshes. Let us assume that $R\in H^1(\Omega)$ in Corollary \ref{cor:CarstensenInterpolation}. Then with the mean value $R_N = \frac{1}{|\omega_N|}\int_{\omega_N} R\,\mathrm dx$ we obtain $\|R-R_N\|_{L^2(\omega_N)}\leq C(\omega_N)\|\nabla R\|_{\b L^2(\omega_N)}$ with $C(\omega_N)\leq \mbox{diam}(\omega_N)\pi^{-1}$, cf. \cite{Payne_PoincareConstant}. Since the construction of the estimator is standard, we only briefly describe the procedure. We use the errors $e_w$ and $e_\varphi$ as test functions in \eqref{eq:FE:CahnHilliard_weak:first} and \eqref{eq:FE:CahnHilliard_weak:second}, respectively. Since $e_w,e_\varphi\in H^1(\Omega)$ they are valid test functions in \eqref{eq:MY:CahnHilliard}.
Subtracting \eqref{eq:FE:CahnHilliard_weak:first} from the weak form of \eqref{eq:MY:CahnHilliard:first}, tested by $e_w$, subtracting \eqref{eq:FE:CahnHilliard_weak:second} from the weak form of \eqref{eq:MY:CahnHilliard:second}, tested by $e_\varphi$, and adding the resulting equations yields
\begin{align*}
&\tau \|\nabla e_w\|_{\b L^2(\Omega)}^2 + \gamma\epsilon\|\nabla e_\varphi\|^2_{\b L^2(\Omega)}\\
&+ \left(\lambda_s(\varphi_h)- \lambda_s(\varphi),e_\varphi\right)_{L^2(\Omega)} + \left( \left[\alpha_\epsilon^\prime(\varphi_h)-\alpha_\epsilon^\prime(\varphi) \right] \left(\frac{1}{2}|\b u_h|^2 - \b u_h \cdot \b q_h\right),e_\varphi \right)_{L^2(\Omega)}\\
&\leq F^{(1)}((\varphi_h,w_h),e_w) + F^{(2)}((\varphi_h,w_h),e_\varphi)\\
&\quad+\left( \alpha_\epsilon^\prime(\varphi) \left[ \left(\frac{1}{2}|\b u|^2 - \b u \cdot \b q\right) - \left(\frac12|\b u_h|^2 - \b u_h \cdot \b q_h\right) \right] ,e_\varphi\right)_{L^2(\Omega)}.
\end{align*}
For convenience we only detail the treatment of the term $F^{(1)}((\varphi_h,w_h),e_w)$. Since $\mathcal I e_w \in \mathcal{V}^1(\mathcal T^{k+1})$, it is a valid test function for \eqref{eq:FE:CahnHilliard_weak:first}. We obtain
\begin{align*}
F^{(1)}((\varphi_h,w_h),e_w) &= F^{(1)}((\varphi_h,w_h),e_w-\mathcal{I}e_w)\\
&=\tau^{-1}\int_\Omega (\varphi_h-\varphi^k)(e_w-\mathcal{I}e_w)\,\mathrm dx + \int_\Omega \nabla w_h\cdot \nabla (e_w-\mathcal{I}e_w)\,\mathrm dx\\
&= \tau^{-1}\int_\Omega r_h^{(1)}(e_w-\mathcal{I}e_w)\,\mathrm dx + \sum_{E\subset \mathcal E}\int_E \left[ \nabla w_h \right]_E (e_w-\mathcal{I}e_w)\,\mathrm ds,
\end{align*}
where the last step uses element-wise integration by parts and the fact that $\Delta w_h|_T=0$ for the piecewise linear function $w_h$. Applying Corollary \ref{cor:CarstensenInterpolation} now gives
\begin{align*}
F^{(1)}&((\varphi_h,w_h),e_w)\\
&\leq C\|\nabla e_w\|_{\b L^2(\Omega)} \left( \tau^{-2}\sum_{N\in\mathcal{N}} h_N^2\|r_h^{(1)}\|^2_{L^2(\omega_N)} +\sum_{T\in \mathcal T} h_T \|\left[ \nabla w_h \right]_E\|_{L^2(\partial T)}^2 \right)^{1/2}.
\end{align*}
A similar result holds for $F^{(2)}((\varphi_h,w_h),e_\varphi)$. Using Young's inequality we obtain the following theorem.
\begin{theorem}
There exists a constant $C>0$ independent of $\tau,\gamma,\epsilon,s$ and $h:=\max_{T\in\mathcal{T}}h_T$ such that
\begin{align*}
&\tau \|\nabla e_w\|^2_{\b L^2(\Omega)} + \gamma\epsilon\|\nabla e_\varphi\|^2_{\b L^2(\Omega)}\\
&+ \left(\lambda_s(\varphi_h)- \lambda_s(\varphi),e_\varphi\right)_{L^2(\Omega)} + \left( \left[\alpha_\epsilon^\prime(\varphi_h)-\alpha_\epsilon^\prime(\varphi) \right] \left(\frac{1}{2}|\b u_h|^2 - \b u_h \cdot \b q_h\right),e_\varphi \right)_{L^2(\Omega)}\\
&\leq C\left(\eta_\Omega^2 + \eta_{h.o.t.}^2\right),
\end{align*}
where
\begin{align*}
\eta_\Omega^2 := \frac{1}{\tau}\sum_{N\in \mathcal{N}^{k+1}} \left(\eta_N^{(1)}\right)^2 +\frac{1}{\gamma\epsilon}\sum_{N\in \mathcal{N}^{k+1}} \left(\eta_N^{(2)}\right)^2 +\tau\sum_{T\in \mathcal{T}^{k+1}} \left(\eta_{T_E}^{(1)}\right)^2 +\gamma\epsilon\sum_{T\in \mathcal{T}^{k+1}} \left(\eta_{T_E}^{(2)}\right)^2,
\end{align*}
and
\begin{align*}
\eta_{h.o.t.}^2 :=& \frac{1}{\gamma\epsilon}\sum_T \left\|\alpha_\epsilon^\prime(\varphi) \left( \left( \frac{1}{2}|\b u_h|^2-\b u_h\cdot\b q_h\right) - \left(\frac{1}{2}|\b u|^2 - \b u\cdot \b q\right) \right)\right\|_{L^2(T)}^2.
\end{align*}
\end{theorem}
\begin{remark}
\begin{enumerate}
\item Since $\lambda_s$ is monotone, there holds $\left(\lambda_s(\varphi_h)-\lambda_s(\varphi),e_\varphi\right)_{L^2(\Omega)}\geq 0$.
\item We note that due to Assumption \ref{a:uumuq} and the convexity of $\alpha_\epsilon$ we obtain $\left( \left[\alpha_\epsilon^\prime(\varphi_h)-\alpha_\epsilon^\prime(\varphi) \right] \left(\frac{1}{2}|\b u_h|^2 - \b u_h\cdot \b q_h\right),e_\varphi \right)_{L^2(\Omega)} \geq 0$.
\item Due to the use of quadratic elements for both the velocity field $\b u_h$ and the adjoint velocity field $\b q_h$, we expect that the term $\eta_{h.o.t.}$ can be further estimated with higher powers of $h$. It is therefore neglected in our numerical implementation.
\item The values $R_N^{(i)},\,i=1,2$, can be chosen arbitrarily in $\mathbb{R}$. By using the mean values $R_N^{(i)} = \frac{1}{|\omega_N|}\int_{\omega_N} r_h^{(i)}\,\mathrm dx$ and the Poincar\'e--Friedrichs inequality together with estimates on the value of its constant (\cite{Payne_PoincareConstant}), the terms $\eta_N^{(i)},\, i=1,2$, are expected to be of higher order and are thus also neglected in the numerics.
\item Efficiency of the estimator up to terms of higher order can be shown along the lines of \cite[Sec. 7.2]{HintermuellerHinzeTber__AFEM_for_CH} by the standard bubble technique, see e.g. \cite{AinsworthOden_Aposteriori}.
\end{enumerate}
\end{remark}
For the adaptation process we use the error indicators $\eta_{T_E}^{(1)}$ and $\eta_{T_E}^{(2)}$ in the following D\"orfler marking strategy (\cite{Doerfler}), as in \cite{HintermuellerHinzeKahle_AFEM_for_CHNS,HintermuellerHinzeTber__AFEM_for_CH}.
\paragraph{The adaptive cycle}
We define the simplex-wise error indicator $\eta_{T_E}$ as
\begin{align*}
\eta_{T_E} =\eta_{T_E}^{(1)} + \eta_{T_E}^{(2)},
\end{align*}
and the set of admissible simplices
\begin{align*}
\mathcal{A} = \{ T\in \mathcal{T}^{k+1}\,|\, a_{\min}\leq |T| \leq a_{\max} \},
\end{align*}
where $a_{\min}$ and $a_{\max}$ are a priori chosen minimal and maximal sizes of simplices. For adapting the computational mesh we use the following marking strategy, a compact sketch of which is given below:
\begin{enumerate}
\item Fix constants $\theta^r$ and $\theta^c$ in $(0,1).$
\item Find a set $\mathcal{M}^{E} \subset \mathcal{T}^{k+1}$ such that
\[ \sum_{T\in\mathcal{M}^{E}} \eta_{T_E} \geq \theta^r \sum_{T\in\mathcal{T}^{k+1}} \eta_{T_E}. \]
\item Mark each $T\in (\mathcal{M}^{E} \cap \mathcal{A})$ for refinement.
\item Find the set $\mathcal{C}^E \subset \mathcal{T}^{k+1}$ such that for each $T \in \mathcal{C}^E$ there holds
\begin{align*}
\eta_{T_E} & \leq \frac{\theta^c}{N_T} \sum_{T\in\mathcal{T}^{k+1}} \eta_{T_E}.
\end{align*}
\item Mark all $T \in \left ( \mathcal{C}^{E} \cap \mathcal{A}\right )$ for coarsening.
\end{enumerate}
Here $N_T$ denotes the number of elements of $\mathcal{T}^{k+1}$.
\medskip
We note that by this procedure a simplex can be marked both for refinement and for coarsening; in this case it is refined only. We further note that we apply this cycle once per time step and then proceed to the next time instance.
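In Python-style pseudocode the marking cycle might be realized as follows; the element arrays and the default parameter values (taken from Section~\ref{ssec:num:TH}) are assumptions of this sketch, not a prescription of our implementation.
\begin{verbatim}
def mark_elements(eta, areas, theta_r=0.1, theta_c=0.05,
                  a_min=4e-7, a_max=0.01):
    """Doerfler marking with coarsening (sketch).
    eta ..... indicator values eta_{T_E}, one per simplex
    areas ... simplex areas |T|"""
    total = sum(eta)
    n_t = len(eta)
    admissible = {i for i in range(n_t) if a_min <= areas[i] <= a_max}

    # refinement: collect the largest indicators until a fraction
    # theta_r of the total indicator sum is reached
    refine, acc = set(), 0.0
    for i in sorted(range(n_t), key=lambda j: eta[j], reverse=True):
        if acc >= theta_r * total:
            break
        refine.add(i)
        acc += eta[i]

    # coarsening: elements carrying only a small share of the total
    coarsen = {i for i in range(n_t) if eta[i] <= theta_c / n_t * total}
    coarsen -= refine  # a simplex marked for both is refined only
    return refine & admissible, coarsen & admissible
\end{verbatim}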
\section{Numerical examples}
In this section we discuss how to choose the parameter values involved in our porous material -- diffuse interface approach. We note that there are several approaches to topology optimization in Navier--Stokes flow, see e.g. \cite{borrvall,HansenHaberSigmun_TopoOptChannelFlow,KreisslMaute_fluidTopoOptXFEM, OlesenOkelsBruus_TopoOptSteadyStateNS,pingen_TopoOpt_Boltzmann}. On the other hand, it seems that so far no quantitative values describing the optimal shapes are available in the literature. All publications we are aware of give qualitative results, or quantitative results that seem not to be normalized for comparison with other codes. In the following we start with fixing the interpolation function $\alpha_\epsilon$ and the parameters $\tau$, $s$, and $\epsilon$. We thereafter investigate in Section \ref{ssec:num:TH} how the phase field approach can find optimal topologies starting from a homogeneously distributed porous material. In Section \ref{ssec:num:RB} we present numerical experiments for the rugby ball, see also \cite{borrvall}, \cite{pingen_TopoOpt_Boltzmann} and \cite{Schmidt_shape_derivative_NavierStokes}. Here we provide a comparison value for the friction drag of the optimized shape, and as a second comparison value we introduce the circularity, which describes the deviation of the ball from a circle. As a last example, and as an outlook, we address the optimal shape of the embouchure of a bassoon in Section \ref{ssec:num:FG}. In the following numerical examples we always assume the absence of external forces, hence $\b f\equiv\b 0$. The optimization aim is always the minimization of the dissipative energy \eqref{e:TotalPotPower}, which in the absence of external forces is given by
\begin{align*}
F = \int_\Omega \frac{\mu}{2}|\nabla \b u|^2\,\mathrm dx.
\end{align*}
The Moreau--Yosida parameter in all our computations is set to $s=10^6$; we do not investigate its coupling to the other parameters involved. For later reference we state the parabolic in-/outlet boundary data that we use frequently throughout this section:
\begin{align}\label{eq:num:parabolicBoundary}
g(x) =
\begin{cases}
h\left(1-\left(\frac{x-m}{l/2}\right)^2\right) & \mbox{if } |x-m|<l/2, \\
0 & \mbox{otherwise},
\end{cases}
\end{align}
where $m$ denotes the midpoint of the in-/outlet, $l$ its width, and $h$ the maximal flow rate. In the following this function prescribes the normal component of the boundary data at those portions of the boundary where inhomogeneous Dirichlet boundary conditions are imposed. The tangential component is set to zero unless stated otherwise.
\subsection{Time step adaptation}\label{ssec:num:TimeStepLength}
For a faster convergence towards optimal topologies we adapt the lengths of the time steps $\tau^{k+1}$. Here we use a CFL-like condition to ensure that the interface does not move too fast in the direction of the flux $\nabla w_h$. With
\begin{align*}
\tau^*= \min_{T\in \mathcal{T}^k} \frac{h_T}{\|\nabla w^k\|_{L^{\infty}(T)}}
\end{align*}
we set
\begin{align*}
\tau^{k+1} = \min(\tau_{\max},\tau^*),
\end{align*}
where $\tau_{\max}$ denotes an upper bound on the allowed step size and typically is set to $\tau_{\max} = 10^4$. Thus the time step size for the current step is calculated using the variable $w^k$ from the previous time instance. We note that especially for $\nabla w^k \to 0$ we obtain $\tau^*\to \infty$, so that when we approach the final state, we can use large time steps up to the bound $\tau_{\max}$. We further note that, if we choose a constant time step, the convergence towards a stationary point of the gradient flow is very slow in all our examples, and that large time steps close to the equilibrium are indeed required.
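In a mesh-based implementation this rule might be realized as in the following sketch, where \texttt{elements} is assumed to provide the local mesh size $h_T$ and the element-wise maximum of $|\nabla w^k|$ from the previous step.
\begin{verbatim}
def adapt_time_step(elements, tau_max=1e4, eps_div=1e-30):
    """CFL-like step size rule (sketch):
    tau* = min_T h_T / ||grad w^k||_{L^inf(T)},
    capped by the upper bound tau_max."""
    tau_star = min(h_T / max(g_T, eps_div) for h_T, g_T in elements)
    return min(tau_max, tau_star)
\end{verbatim}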
\subsection{The interfacial width}\label{ssec:num:epsilon}
As discussed in Section~\ref{sec:Analysis}, the phase field problem can be shown to approximate the sharp interface shape optimization problem as $\epsilon\searrow 0$ in a certain sense. Hence we assume that the phase field problem yields a reasonable approximation of the solution for fixed but small $\epsilon>0$, and we do not vary its value. Typically, in the following we use the fixed value $\epsilon=0.005$.
\subsection{The interpolation function}\label{ssec:num:alphaepsilon}
We set (see \cite{borrvall})
\begin{align}\label{eq:num:alpha}
\alpha_\epsilon(\varphi) := \frac{\overline \alpha}{2\sqrt{\epsilon}} (1-\varphi)\frac{q}{(\varphi+1+q)},
\end{align}
with $\overline \alpha > 0$ and $q>0$. In our numerics we set $\overline \alpha = 50$. In Figure \ref{fig:num:alpha} the function $\alpha_\epsilon$ is depicted for several values of $q$. We have $\alpha_\epsilon(-1)=\overline{\alpha}_\epsilon =\overline{\alpha}\epsilon^{-1/2}$, and Assumption \ref{a:Alpha} is fulfilled except that the condition $\lim_{\epsilon \searrow 0}\alpha_\epsilon(0) < \infty$ is violated. Nevertheless, the numerical results with this choice of $\alpha_\epsilon$ are reasonable, and we expect that this limit condition has to be posed for technical reasons only.
\begin{figure}
\centering
\epsfig{file=alphaEps_various_q, width=6cm,height=3cm}
\caption{The shape of the interpolation function $\alpha_\epsilon$ for $q = 10^i, i=-2,\ldots 2$ (bottom to top).}
\label{fig:num:alpha}
\end{figure}
To fulfill Assumption~\ref{a:alphaBounded} we cut $\alpha_\epsilon$ off at some $\varphi_c>1$ and use a smooth continuation yielding $\alpha_\epsilon(\varphi)\equiv \mbox{const}$ for $\varphi\geq \varphi_c$. The parameter $q$ controls the width of the transition zone between fluid and porous material. In \cite{borrvall} the authors typically use a rather small value of $q=0.01$; they also show how different values of $q$ might lead to different locally optimal topologies. Since here we also have the parameter $\epsilon$ for controlling the maximal width of the transition zone, we fix $q:=10$. The fluid material is assumed to be located at $\varphi = 1$, where $\alpha_\epsilon(1)=0$ holds. Since we use the Moreau--Yosida relaxation, we allow $\varphi$ to take values larger than $+1$ and smaller than $-1$. The choice of $q=10$ and $s=10^6$ in our setting always guarantees that $\varphi+1+q\gg 0$ holds and that the violation of $\alpha_\epsilon(\varphi)\geq 0$ for $\varphi>1$ is only small. Using an interpolation function that yields a smooth transition to zero at $\varphi=1$, say a polynomial of order 3, leads in our numerics, especially for small values of $\gamma$, to undesired behaviour of the numerical solvers. For example, we obtain that fluid regions disappear, resulting in a constant porous material. The reason is that then $\alpha_\epsilon(\beta)\approx 0$ if $\beta$ lies in the flat region of $\alpha_\epsilon$. If $\beta$ can be chosen small enough, the choice $\varphi\equiv\beta$ yields a constant porous material and hence a very small total potential power; thus $\varphi\equiv\beta$ is at least a local minimizer. The benefit of small values of $q$ described in \cite{borrvall} stays valid, and for large values of $\gamma$, say $\gamma=1$, small values of $q$ can help to find a valid topology when starting from a homogeneous material. This property is the reason to use this function instead of a linear one, although $\alpha_\epsilon$ can be regarded as linear for the value $q=10$ that we use here.
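For illustration, the interpolation function \eqref{eq:num:alpha} with the truncation above $\varphi_c$ may be sketched as follows; the truncation value $\varphi_c$ and the simple constant (instead of smooth) continuation are assumptions of the sketch.
\begin{verbatim}
import numpy as np

def alpha_eps(phi, eps=0.005, alpha_bar=50.0, q=10.0, phi_c=1.01):
    """Interpolation function (sketch):
    alpha_bar/(2 sqrt(eps)) * (1 - phi) * q / (phi + 1 + q),
    held constant for phi >= phi_c."""
    phi = np.minimum(np.asarray(phi, dtype=float), phi_c)
    return alpha_bar / (2.0 * np.sqrt(eps)) * (1.0 - phi) * q / (phi + 1.0 + q)

# endpoint checks: alpha_eps(1) = 0, alpha_eps(-1) = alpha_bar / sqrt(eps)
print(alpha_eps([1.0, -1.0]))
\end{verbatim}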
Since $\alpha_\epsilon$ scales with $\overline{\alpha}_\epsilon$ we next investigate the relative effect of $\overline{\alpha}$ and $\epsilon$ concerning the demixing and thus the width of the resulting interface. This is done for several values of the parameter $\gamma$, which weights the two separating forces. The numerical setup for this test is described in the following. In the computational domain $\Omega = (0,1)^2$ we have a parabolic inlet at $x\equiv 0$ with $m = 0.5, l=0.2$, and $h=1$. At $x\equiv 1$ we have an outlet with the same values. The viscosity is set to $\mu = 1$. We investigate the evolution of the value \begin{equation*} I = \frac{\int_{\{|\varphi|\leq 1\}}\,\mathrm dx}{\int_{\{\varphi=0\}}\,\mathrm ds}, \end{equation*} which is the area of the transition zone between fluid and material, normalized by the length of the interface; this value estimates the thickness of the interfacial region. For this test we fix $\epsilon\equiv 1$ in \eqref{eq:num:alpha} and use the interpolation function \begin{equation*} \alpha(\varphi) = \alpha_1(\varphi) = \frac{\overline \alpha}{2} (1-\varphi)\frac{q}{(\varphi+1+q)}, \end{equation*} so that $\overline \alpha\equiv \alpha(-1)$. For fixed $\gamma$ we calculate the optimal topology for several combinations of $\epsilon$ and $\overline{\alpha}$. In Figure \ref{fig:num:atEps} we depict the value of $I$ depending on $\overline{\alpha}$ for several $\epsilon$. We used $\gamma \in\{0.5,0.05,0.005 \}$ (left to right). \begin{figure} \centering \epsfig{file=I_over_aT_for_several_eps_sigma_0_5,width=0.3\textwidth}\hfill \epsfig{file=I_over_aT_for_several_eps_sigma_0_05,width=0.3\textwidth}\hfill \epsfig{file=I_over_aT_for_several_eps_sigma_0_005,width=0.3\textwidth} \caption{Size of the scaled interfacial area for various combinations of $\overline{\alpha} = \alpha(-1)$ and $\epsilon$, with $\gamma=0.5$ (left), $\gamma=0.05$ (middle) and $\gamma=0.005$ (right).} \label{fig:num:atEps} \end{figure} We observe that there is a regime of values for $\overline \alpha$ where the interfacial width only depends on $\epsilon$. But we also see that, depending on $\gamma$ and $\epsilon$, there is a regime where the interfacial width scales like $\overline \alpha^\kappa$ with some $\kappa\in \mathbb{R}$ which depends on $\gamma$. The change in the behaviour of the interfacial width occurs at $\alpha(-1)\approx C(\gamma)\epsilon^{-1}$, where $C(\gamma)$ is a constant depending linearly on $\gamma$. This is exactly the convergence rate necessary to get analytical convergence results, compare Remark~\ref{r:ConvergenceRateTwoDim} and \cite{hecht}. We recall that this test is run with constant $\mu=1$ and that the results might differ for different values of $\mu$. In particular the value of $\mu$ also has an influence on the interfacial width through the mixing energy $\frac12|\b u_h|^2 - \b u_h \cdot \b q_h$, see Section \ref{ssec:num:MixingEnergy}. \subsection{A treelike structure}\label{ssec:num:TH} In this first example we investigate how our phase field approach is able to find optimal topologies starting from a homogeneous porous material. This example is similar to an example provided in \cite{Hansen_Dissertation}. The setup is as follows. The computational domain is $\Omega = (0,1)^2$. On the boundary we have one parabolic inlet as described in \eqref{eq:num:parabolicBoundary} and four parabolic outlets. The corresponding parameters are given in Table \ref{tab:num:TH:boundary}.
\begin{table} \footnotesize \centering \begin{tabular}{ccccc} direction & boundary & $m$ & $l$ & $h$\\ \hline inflow & $\{x\equiv 0\}$ & 0.80 & 0.2 & 3\\ outflow & $\{y\equiv 0\}$ & 0.80 & 0.1 & 1\\ outflow & $\{y\equiv 1\}$ & 0.65 & 0.1 & 1\\ outflow & $\{x\equiv 1\}$ & 0.70 & 0.2 & 1\\ outflow & $\{x\equiv 1\}$ & 0.25 & 0.2 & 1 \end{tabular} \caption{Boundary data for the treelike structure.} \label{tab:num:TH:boundary} \end{table} We use $\gamma = 0.01$ and $\mu = 0.01$. For $\alpha_\epsilon$ we start with $\overline \alpha = 5$ and increase it later. The phase field is initialized with a homogeneous porous material $\varphi_0 = 0$. We start with a homogeneous mesh with mesh size $2\cdot 10^{-5}$ to obtain a first guess of the optimal topology. After the material demixes, i.e. $\|\nabla w^h\|_{\b L^2(\Omega)}\leq 2$, we start adapting the mesh to the resulting structures using the adaptation procedure described in Section \ref{ssec:adaptConcept}. For the adaptive process we use the parameters $a_{\min}= 4\cdot 10^{-7}$, $a_{\max}=0.01$, $\theta^r = 0.1$, and $\theta^c=0.05$. As soon as $\|\nabla w^h\|_{\b L^2(\Omega)}\leq 1$ holds we start increasing $\overline \alpha$ to $\overline \alpha = 50$, and we stop the overall procedure as soon as $\overline\alpha = 50$ and $\|\nabla w^h\|_{\b L^2(\Omega)}\leq 10^{-5}$ hold (a schematic sketch of this continuation loop is given below). In Figure \ref{fig:num:TH:evolution} we depict the temporal evolution of the optimization process. The images are numbered from top left to bottom right. Starting from a homogeneous distribution of porous material, we see that the inlet and the outlets are found after very few time instances and that the main outlets on the right and the inlet are connected after only a few more time steps. At the bottom left of the computational domain we first obtain finger-like structures that thereafter vanish. We note that, due to the porous material approach, not all outlets are connected with the inlet during the whole computation. At the final stage of the optimization the evolution slows down and we end with the topology depicted at the bottom right after 188 time steps of simulation. \begin{figure} \centering \hfill \epsfig{file=TH_c00,width=0.29\textwidth} \hfill \epsfig{file=TH_c03,width=0.29\textwidth} \hfill \epsfig{file=TH_c06,width=0.29\textwidth} \hfill\\[2ex] % \hfill \epsfig{file=TH_c18,width=0.29\textwidth} \hfill \epsfig{file=TH_c35,width=0.29\textwidth} \hfill \epsfig{file=TH_c94,width=0.29\textwidth} \hfill \caption{The initial phase field $\varphi_0$ for the treelike structure and the phase field after 6, 12, 36, 70, and 188 time steps (top left to bottom right).} \label{fig:num:TH:evolution} \end{figure}
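The continuation strategy just described can be summarized by the following schematic Python skeleton. This is a sketch only: \texttt{solver\_step} is a hypothetical stand-in for one gradient-flow step of the finite element solver (here it merely shrinks a surrogate residual so that the skeleton terminates), and the ramp of $\overline\alpha$ is illustrative.

\begin{verbatim}
import numpy as np

def solver_step(phi, alpha_bar, residual):
    """Hypothetical stand-in for one gradient-flow step; the real step
    solves the coupled flow/phase-field system and returns the new
    phase field together with ||grad w^h||_{L^2}."""
    return phi, 0.9 * residual

phi = np.zeros(1)                 # homogeneous initial state, phi_0 = 0
alpha_bar, residual = 5.0, 10.0   # start with a small alpha_bar
adapting, k = False, 0
while not (alpha_bar >= 50.0 and residual <= 1e-5):
    phi, residual = solver_step(phi, alpha_bar, residual)
    if not adapting and residual <= 2.0:
        adapting = True           # material has demixed: adapt the mesh
    if alpha_bar < 50.0 and residual <= 1.0:
        alpha_bar = 50.0          # increase alpha_bar (schedule illustrative)
    k += 1
print(k, alpha_bar, residual)
\end{verbatim}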
\subsection{A rugby ball}\label{ssec:num:RB} We next investigate the overall behaviour of the adaptive concept and give an example showing the influence of the parameters $\gamma$ and $\mu$ on the interfacial area. The aim is to optimize the shape of a ball in an outer flow as investigated in \cite{borrvall,pingen_TopoOpt_Boltzmann,Schmidt_shape_derivative_NavierStokes}. In the computational domain $\Omega = (0,1)\times(0,5)$ we have a circle located at $M = (0.5,0.5)$ with radius $r = \sqrt{(10\pi)^{-1}}$. On the boundary $\partial \Omega$ we impose Dirichlet data $\b g\equiv(0,1)^T$ for the Navier--Stokes equations. The domain is chosen large enough to neglect the influence of the outflow boundary on the optimized topology. In \cite{borrvall} it is shown that for Stokes flow the optimal topology equals a rugby ball, while in \cite{pingen_TopoOpt_Boltzmann,Schmidt_shape_derivative_NavierStokes} the authors obtain an airfoil-like shape for Navier--Stokes flow and small values of $\mu$. The parameters used here are $\epsilon=0.005$ and $\overline{\alpha}=50$. For the adaptive concept we fix $\theta^r = 0.2$, $\theta^c = 0.05$, $a_{\min}=10^{-7}$ and $a_{\max} = 5 \cdot 10^{-4}$. As initial mesh we use a homogeneous mesh with mesh size $a_{\text{init}} = 1/1600$ and refine the region $|\varphi_0|\leq 1$ to the finest level, where $\varphi_0$ denotes the initial phase field. \subsubsection{Optimal shapes for various $\gamma$ and $\mu$} We start by depicting our numerical findings for various values of $\gamma$ and $\mu$. Here we proceed as follows. We optimize the shape for decreasing values of $\gamma \in [10^{-4},10]$ and $\mu = 1$. The optimal geometry for $\mu=1$ and $\gamma=10^{-4}$ is thereafter used as initial value for decreasing $\mu \in [500^{-1},1]$ while $\gamma=10^{-4}$ is kept fixed. In Figure \ref{fig:num:RB:Results_gamma} we depict the optimal shapes for $\mu=1$ and $\gamma \in \{10,0.1,0.01,0.0001\}$, and in Figure \ref{fig:num:RB:Results_RE} we depict the optimal shapes for $\gamma = 10^{-4}$ and $\mu \in \{10^{-1},100^{-1},300^{-1},500^{-1}\}$. \begin{figure} \centering \fbox{ \epsfig{file=RuBa_0001_sigma_1ep1, width=0.20\textwidth} } \hfill \fbox{ \epsfig{file=RuBa_0009_sigma_1em1, width=0.20\textwidth} } \hfill \fbox{ \epsfig{file=RuBa_0013_sigma_1em2, width=0.20\textwidth} } \hfill \fbox{ \epsfig{file=RuBa_0023_sigma_1em4, width=0.20\textwidth} } \caption{Optimal topologies for the rugby ball example for $\mu=1$ and $\gamma \in \{10, 0.1, 0.01, 0.0001\}$ (left to right).} \label{fig:num:RB:Results_gamma} \end{figure} \begin{figure} \centering \fbox{ \epsfig{file=RuBa_0027_RE_10, width=0.20\textwidth} } \hfill \fbox{ \epsfig{file=RuBa_0032_RE_100, width=0.20\textwidth} } \hfill \fbox{ \epsfig{file=RuBa_0036_RE_300, width=0.20\textwidth} } \hfill \fbox{ \epsfig{file=RuBa_0038_RE_500, width=0.20\textwidth} } \caption{Optimal topologies for the rugby ball example for $\gamma=10^{-4}$ and $\mu \in \{10^{-1}, 100^{-1}, 300^{-1}, 500^{-1}\}$ (left to right).} \label{fig:num:RB:Results_RE} \end{figure} We see that for large values of $\gamma$ the Ginzburg--Landau energy dominates the minimizing problem and thus we obtain an optimal shape which is close to a circle. With $\gamma$ getting smaller we obtain shapes that resemble the rugby-ball-like shapes obtained in \cite{borrvall} for Stokes flow. In particular we see that the top and bottom tips get sharper as we decrease the value of $\gamma$. This can be explained by the Ginzburg--Landau energy: this term penalises the interfacial size, which keeps the optimal shape close to a circle for large values of $\gamma$ and admits sharper tips as $\gamma$ decreases. Note that the optimal shape can be located freely in the computational domain and therefore the optimal shape for $\gamma=10^{-4}$ has a slightly larger distance to the bottom boundary than the optimal shapes for larger $\gamma$. As argued in \cite{pingen_TopoOpt_Boltzmann}, for $\mu$ taking smaller values the optimal shape tends to an airfoil. This is what we observe in our numerics, see Figure \ref{fig:num:RB:Results_RE}. For a quantitative description of the optimal shapes we follow \cite[Rem.~12]{Schmidt_shape_derivative_NavierStokes} and introduce the friction drag of an obstacle in free flow as
\begin{equation}\label{eq:num:drag} F_D = \int_{\{\varphi=0\}} -\mu \left(\left(\nu\cdot\nabla\right)\b u\right)\cdot a + p \nu\cdot a\,\mathrm ds. \end{equation} Here $\nu$ is the unit normal on the boundary of the ball pointing inwards and $a$ is the direction of attack of the flow field. In our example we have $a = (0,1)^T$ since the flow approaches from the bottom. By using the Gauss theorem we write $F_D$ as an integral over the ball given by $\varphi <0$ and obtain \begin{equation}\label{eq:num:drag_gauss} F_D =-\int_{\{\varphi<0\}} \,\mathrm{div}\, \left( -\mu \nabla u_2 + (0,p)^T \right)\,\mathrm dx = -\int_{\{\varphi<0\}}-\mu \Delta u_2 + p_y\,\mathrm dx. \end{equation} Note that the normal $\nu$ in \eqref{eq:num:drag} points into the rugby ball and thus we obtain the minus sign in \eqref{eq:num:drag_gauss}. Here $u_2$ denotes the second component of the velocity field $\b u$ and $p_y$ denotes the derivative of $p$ in $y$-direction. As a second comparison value we define the circularity of the rugby ball. This value is introduced in \cite{Wadell_Circularity} to describe the deviation of circular objects from a circle. It is defined by \begin{equation}\label{eq:num:circularity} \theta = \frac{\mbox{Circumference of circle with same area}} {\mbox{Circumference of object}} = \frac{\sqrt{4\pi\int_{\{\varphi<0\}}\,\mathrm dx}} {\int_{\{\varphi=0\}}\,\mathrm ds} \leq 1, \end{equation} where a value of $\theta \equiv 1$ indicates a circle. In Table \ref{tab:num:RB:dissPow_drag_circ} we give results for our numerical findings. \begin{table} \centering \footnotesize \begin{tabular}[t]{ccccc} $\gamma$ & $\mu$ & $F$ & $\theta$ & $F_D$\\ \hline 10.0000& 1& 7.2266& 0.9996& 21.6140\\ 1.0000& 1& 6.5317& 0.9664& 18.5820\\ 0.1000& 1& 6.1828& 0.8005& 16.6710\\ 0.0100& 1& 6.1494& 0.7722& 16.4640\\ 0.0010& 1& 6.1480& 0.7681& 16.4510\\ 0.0001& 1& 6.1427& 0.7674& 16.4310 \end{tabular} \begin{tabular}[t]{ccccc} $\gamma$ & $\mu$ & $F$ & $\theta$ & $F_D$ \\ \hline 0.0001& 10$^{-1}$& 1.1353& 0.7335& 2.3596\\ 0.0001& 100$^{-1}$& 0.1830& 0.6349& 0.3244\\ 0.0001& 200$^{-1}$& 0.1188& 0.5901& 0.1910\\ 0.0001& 300$^{-1}$& 0.0942& 0.5568& 0.1395\\ 0.0001& 400$^{-1}$& 0.0805& 0.5403& 0.1114\\ 0.0001& 500$^{-1}$& 0.0715& 0.5253& 0.0930 \end{tabular} \caption{Comparison values for the rugby example. $F$ is the dissipative power, $\theta$ denotes the circularity, and $F_D$ the drag force. The optimization aim is the minimization of the dissipative power.} \label{tab:num:RB:dissPow_drag_circ} \end{table} As discussed above, for large values of $\gamma$ the Ginzburg--Landau energy dominates the functional under investigation. This results in optimal shapes that are close to circles, as can be seen for $\gamma=10$ and $\gamma=1$, where we have $\theta\approx 1$ and $\theta\approx 0.97$, respectively. We further see that for $\gamma=0.01$ the optimal shape is determined by the dissipative power, since the results for $\gamma=0.01$ and $\gamma=0.0001$ are very close together. Concerning the dependence on $\mu$ we see how the dissipative energy, which scales with $\mu$, decreases with decreasing $\mu$. We also observe that both the circularity and the drag are reduced for smaller values of $\mu$. For the drag we have approximately $F_D \sim \mu^{0.84}$.
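This scaling can be checked directly from the tabulated values by a least-squares fit in log-log coordinates; the following short sketch uses the $(\mu, F_D)$ pairs of Table \ref{tab:num:RB:dissPow_drag_circ}.

\begin{verbatim}
import numpy as np

# (mu, F_D) pairs from the right half of the table (gamma = 1e-4)
mu  = np.array([1/10, 1/100, 1/200, 1/300, 1/400, 1/500])
F_D = np.array([2.3596, 0.3244, 0.1910, 0.1395, 0.1114, 0.0930])

# slope of log F_D versus log mu gives the exponent
slope, intercept = np.polyfit(np.log(mu), np.log(F_D), 1)
print(f"F_D ~ mu^{slope:.2f}")   # exponent close to the quoted 0.84
\end{verbatim}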
\subsubsection{Behaviour of the adaptive concept} Next we investigate the behaviour of the adaptive concept. Since the error indicators only contain the jump terms of the gradient, we expect the indicators to be located mainly at the borders of the interface, i.e. the isolines $\varphi=\pm 1$. In Figure \ref{fig:num:RB:ErrorEst} we depict the distribution of the error indicator $\eta_{T_E}$ for the optimal topology for $\gamma=10^{-4}$ and $\mu=1$. \begin{figure} \centering \fbox{ \epsfig{file=RuBa_0022_interface_etaTE, width=0.40\textwidth} } \fbox{ \epsfig{file=RuBa_0022_interface_mesh, width=0.40\textwidth} } \caption{The bottom arc of the rugby ball for $\gamma=10^{-4}$ and $\mu=1$. The distribution of $\eta_{T_E}$ across the interface is shown in the left plot, where darker areas indicate larger error. The spatial resolution of the interface is depicted in the right plot. The bold lines indicate the discrete sets $\varphi\equiv \pm1$. } \label{fig:num:RB:ErrorEst} \end{figure} We observe from the left plot that the indicator $\eta_{T_E}$ is concentrated at the discrete isolines $\varphi=\pm 1$. Here the mesh is refined to the finest level, as we see in the right plot. Inside the interface the triangles are only mildly refined; there the phase field tends to be linear, and thus a high spatial resolution is not required to obtain a well resolved phase field. \subsubsection{A view on mixing energy} \label{ssec:num:MixingEnergy} From the point of view of Cahn--Hilliard theory, \eqref{eq:MY:CahnHilliard} alone for fixed vector fields $\b u,\b q$ can be regarded as the Cahn--Hilliard system with a free energy $F$ given by \begin{align} F(\varphi) = \frac{\gamma}{\epsilon}(1-\varphi^2) + \frac{s}{2}\lambda^2(\varphi) + \alpha_\epsilon(\varphi)\left(\frac{1}{2}|\b u|^2 - \b u \cdot \b q\right). \label{eq:num:RB:mixingEnergy} \end{align} The term $\frac{1}{2}|\b u|^2 - \b u \cdot \b q$ is assumed to be non-negative. For $F$ we require two distinct minima located at $\approx \pm 1$. If $|\varphi|>1$ holds it is reasonable to assume that $s \lambda^2(\varphi)$ is the dominating term, and in the following we investigate the distribution of $F$ inside the interface defined by $|\varphi|\leq 1$. We note that this distribution in fact depends on $\gamma,\epsilon,\overline \alpha$, and $\mu$. As in Section \ref{ssec:num:epsilon} we fix $\epsilon$ to be $0.005$. Since both $\overline \alpha$ and $\gamma$ give a weighting of the two energy terms we also fix $\overline \alpha \equiv 50$ as proposed in Section \ref{ssec:num:alphaepsilon}. Thus the free parameters in this investigation are $\gamma$ and $\mu$. In Figure \ref{fig:num:RB:MixingEnergy} we show the distribution of the terms $\alpha_\epsilon(\varphi)(\frac{1}{2}|\b u|^2 - \b u\cdot \b q)$ and $\frac{\gamma}{\epsilon}(1-\varphi^2)$ at the bottom arc of the optimized rugby ball. \begin{figure} \centering \epsfig{file=RuBa_0022_ayymyq,width=0.4\textwidth} \epsfig{file=RuBa_0022_GL_free,width=0.4\textwidth} \caption{The energies $\alpha_\epsilon(\varphi)(\frac{|\b u|^2}{2} - \b u \cdot \b q)$ (left plot) and $\frac{\gamma}{\epsilon}(1-\varphi^2)$ (right plot) at the bottom of the optimized topology for $\gamma=10^{-4}$ and $\mu=1$.} \label{fig:num:RB:MixingEnergy} \end{figure} We see that the term $\alpha_\epsilon(\varphi)(\frac{1}{2}|\b u|^2 - \b u\cdot \b q)$ is larger than $\frac{\gamma}{\epsilon}(1-\varphi^2)$ and thus dominates the demixing. The term admits a maximum inside the interface and takes smaller values outside of the interface.
We note that the term $\frac{\gamma}{\epsilon}(1-\varphi^2)$ is symmetric across the interface, while $\alpha_\epsilon(\varphi)(\frac{1}{2}|\b u|^2 - \b u\cdot \b q)$ takes its maximum near $\varphi=-1$ and in particular also takes large values inside the porous material. \subsection{An optimal embouchure for a bassoon}\label{ssec:num:FG} As an outlook we investigate the optimal shape of an embouchure for a bassoon. In the group of Professor Grundmann at the Technische Universit\"at Dresden, an optimized shape was found experimentally that has a smaller pressure loss along the pipe while only slightly changing the sound of the bassoon, see \cite{grundmann}. We apply our optimization algorithm to the problem of finding an optimal embouchure in order to illustrate possible fields of application of our approach. We note that again we minimize the dissipative energy, and that we do not take further optimization constraints into account. We proceed as described in Section \ref{ssec:num:TH} to find optimal shapes in $\Omega=(0,1)^2$ for the parameters $\gamma=10^{-4}$, $\epsilon=0.005$, $\overline \alpha =50$ and $\mu= 10^{-3}$. We start with a constant initial phase field $\varphi\equiv\beta$ with $\beta=0.1$. The inflow is set to $x\equiv 1$ and we use the parameters $m_i=0.5$, $l_i=0.1$, $h_i=1$ in \eqref{eq:num:parabolicBoundary} both for the $x$ and $y$ direction of the boundary velocity field, resulting in an inflow pointing $45^\circ$ upwards. We set the outflow to $y\equiv 0$ and consider two scenarios. For the first scenario we use the values $m_1=0.8$, $l_1=0.2$ and $h_1=0.5$ in \eqref{eq:num:parabolicBoundary}, and for the second example we use $m_2=0.3$, $l_2=0.2$ and $h_2=0.5$. In Figure \ref{fig:num:FG:results} we show our numerical findings. We obtain a straight and wide pipe that directly connects the inflow and outflow boundaries. This corresponds to our optimization aim, i.e. minimizing the dissipative power. Similar trends for the optimized shape of the embouchure were also observed by the group in Dresden. \begin{figure} \centering \epsfig{file=FG_L_RE1000,width=0.4\textwidth} \hspace{2cm} \epsfig{file=FG_S_RE1000,width=0.24\textwidth} \caption{Optimized shapes for the bassoon example for $\mu=1000^{-1}$. First scenario on the left, second scenario on the right. The inflow is on the right side.} \label{fig:num:FG:results} \end{figure} \bibliographystyle{plain}
\section{Introduction} HERA was a high-energy electron\footnote{Here and in the following the term electron denotes generically both the electron and the positron.}-proton collider, at a centre-of-mass (cms) energy of 320 GeV. It started operating in 1992 and was closed in 2007. Due to the accessible high values of virtuality, $Q^2$, of the exchanged boson (see Fig. 1), reaching values up to about 40 000 GeV$^2$, it could 'look' into the proton with a resolution $\lambda$ of about 10$^{-3}$ fm. \begin{figure}[h!] \begin{minipage}{0.25\linewidth} \centerline{\includegraphics[width=0.7\linewidth]{ep-diag.pdf}} \vspace{-0.5cm} \caption{Diagram describing $ep$ collisions.} \end{minipage} \hfill \begin{minipage}{0.7\linewidth} At HERA many experiments were performed by changing the virtuality of the exchanged photon from almost-real photons ($Q^2 \sim$ 0), the photoproduction region, through the start of the deep inelastic scattering (DIS) region, $Q^2 \sim$ 4 GeV$^2$ ($\lambda$=0.1 fm), to the very high-$Q^2$ region, $Q^2 \sim$ 40 000 GeV$^2$ ($\lambda=10^{-3}$ fm), where electroweak physics could be studied. \end{minipage} \end{figure} In this talk, two of the most recent results concerning the proton structure will be presented. The first is a measurement~\cite{h1fl,zeusfl}, by both collaborations, of the longitudinal structure function, $F_L$. The second, carried out by the ZEUS collaboration~\cite{zeushighx}, is a measurement at high $Q^2$ in the high Bjorken-$x$ region, up to values of $x\cong 1$. \section{Measuring the longitudinal structure function $F_L$} The $F_L$ structure function was measured at HERA only during the last months of its running in 2007. Up to that time, measurements of the $F_2$ structure function were limited~\cite{rmp} to low $y$, where $y$ is the fraction of the lepton energy transferred to the proton in its rest frame. The coefficient in front of the $F_L$ term is $y^2/Y_+$ and thus its contribution to the cross section, compared to that of the $F_2$ structure function, is very small for low-$y$ values. \begin{figure}[h!] \begin{minipage}{0.38\linewidth} \includegraphics[width=0.95\linewidth]{HowtoFL.pdf} \vspace{-0.5cm} \caption{A sketch of the linear dependence of $\sigma_r$ on $y^2/Y_+$. The intercept is $F_2$ and the slope gives $F_L$.} \label{fig:Rosen} \end{minipage} \hfill \begin{minipage}{0.6\linewidth} The reduced cross section, $\sigma_r$, can be expressed by two terms in the region where the $Z$ exchange can be neglected, meaning $Q^2$ values far below the square of the $Z$ mass, \begin{equation} \sigma_r = F_2(x,Q^2) - (y^2/Y_+)F_L(x,Q^2), \end{equation} where $Y_+ = 1 + (1 - y)^2$. Measuring $\sigma_r$ at different $y$ but at the same $x,Q^2$ values gives a linear dependence of $\sigma_r$ on $y^2/Y_+$ and therefore allows a simultaneous determination of the two structure functions $F_2$ and $F_L$. This is shown in Fig.~\ref{fig:Rosen}. Since $y = Q^2 / (x s)$, where $s$ is the squared cms energy of the $ep$ system, the way to vary $y$ is to vary $s$. This has been done by changing the proton-beam energy to 460 and 575 GeV. \end{minipage} \end{figure} The determination of $F_L$ needs the measurement of high-$y$ events. The variable $y$ is a function of the scattered electron kinematics, \begin{equation} y = 1 - \frac{E^\prime}{2E_e}\left(1 - \cos\theta\right), \end{equation} where $E_e$ is the electron-beam energy, $E^\prime$ and $\theta$ are the energy and angle of the scattered electron, respectively. Thus high values of $y$ mean low $E^\prime$ of the scattered electron.
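The straight-line extraction sketched in Fig.~\ref{fig:Rosen} can be written in a few lines; the following is a minimal illustration with synthetic $\sigma_r$ values, not the actual H1/ZEUS analysis.

\begin{verbatim}
import numpy as np

def extract_F2_FL(y, sigma_r):
    """Fit sigma_r = F2 - (y^2/Y+) FL at fixed (x, Q^2).
    Returns (F2, FL): the intercept and minus the slope."""
    t = y**2 / (1.0 + (1.0 - y)**2)          # y^2 / Y_+
    slope, intercept = np.polyfit(t, sigma_r, 1)
    return intercept, -slope

# synthetic example: three beam energies -> three y values at fixed x, Q^2
F2_true, FL_true = 1.20, 0.30
y = np.array([0.4, 0.6, 0.8])
sr = F2_true - y**2 / (1 + (1 - y)**2) * FL_true
print(extract_F2_FL(y, sr))                  # recovers (1.20, 0.30)
\end{verbatim}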
Electron finders of both collaborations, prior to this measurement, were very well trained to identify scattered electrons with energies $E^\prime >$ 10 GeV. For lower energies, the efficiencies and purities of the finders deteriorate because of the photoproduction background. The ZEUS collaboration succeeded in improving their finder so that events with $E^\prime >$ 6 GeV could be included in the $F_L$ measurements. The H1 collaboration, whose detector is better suited for this measurement, could go down to $E^\prime >$ 3 GeV. \begin{figure}[h!] \begin{minipage}{0.45\linewidth} \includegraphics[width=0.55\linewidth]{Ee-H1.pdf} \caption{Comparison of data and Monte Carlo for the scattered electron energy distribution at a proton-beam energy of 460 GeV for the H1 collaboration. The shaded region is the photoproduction background.} \label{fig:ee-H1} \end{minipage} \hfill \begin{minipage}{0.45\linewidth} \includegraphics[width=0.55\linewidth]{Ee-ZEUS.pdf} \caption{Comparison of data and Monte Carlo for the scattered electron energy distribution at a proton-beam energy of 460 GeV for the ZEUS collaboration. The dark-shaded region is the photoproduction background.} \label{fig:ee-ZEUS} \end{minipage} \end{figure} Control plots showing a comparison between data and Monte Carlo for the $E^\prime$ variable for the low-energy run (proton beam of 460 GeV) are shown in Figs.~\ref{fig:ee-H1} and~\ref{fig:ee-ZEUS}. The photoproduction background is shown in the dark-shaded region and is seen to increase sharply for low $E^\prime$ values. \begin{figure}[h!] \begin{minipage}{0.5\linewidth} Following the limitations on the energy of the scattered electron, the ZEUS collaboration measured $F_L$ in the kinematic range $9 < Q^2 < 110$ GeV$^2$ while the H1 collaboration covered the region $1.5 < Q^2 < 800$ GeV$^2$. The results are shown in Fig.~\ref{fig:fl-both}. The uncertainties of the ZEUS results are larger than those of H1. The ZEUS results, though consistently lower than those of H1, are consistent with them because of the correlated uncertainties. Taking into account the correlations between the ZEUS data points and neglecting the correlations between the H1 data points, a $\chi^2$ of 12.2 is obtained for 8 degrees of freedom. The predictions shown by the shaded area are in reasonable agreement with both data sets. \end{minipage} \hfill \begin{minipage}{0.48\linewidth} \includegraphics[width=0.98\linewidth]{Fl_meas_V3.pdf} \caption{$F_L$ as a function of $Q^2$ as measured by the H1 and ZEUS collaborations. The shaded area are predictions based on different parameterisations, as indicated in the figure.} \label{fig:fl-both} \end{minipage} \end{figure} \section{High x, extending to $x\cong 1$} \begin{figure}[h!] \begin{minipage}{0.38\linewidth} \includegraphics[width=0.95\linewidth]{moti.pdf} \vspace{-0.5cm} \caption{Example of the sizable differences between parameterisations of the $u$ valence quark, $u_V$.} \label{fig:motivation} \end{minipage} \hfill \begin{minipage}{0.6\linewidth} The DIS cross sections have been measured by both collaborations with very high precision. These measurements were combined and produced text-book results with even higher precision~\cite{combined}. Nevertheless, the highest $x$ value for which measurements were done was 0.65. There are fixed-target experiments~\cite{pl:b223:485,pl:b282:475,jferson} which measure higher values of $x$ but in a low $Q^2$ region.
In global perturbative quantum chromodynamic fits of parton distribution functions (PDFs), a parameterisation of the form $(1 - x)^\beta$ is assumed in order to extend PDFs to $x$ = 1. Although all fitters use the same parameterisation, sizeable differences are obtained in the high-$x$ region~\cite{allen-eps}, as shown in Fig.~\ref{fig:motivation}. \end{minipage} \end{figure} \begin{figure}[h!] \begin{center} \includegraphics[width=0.75\linewidth]{xedge.pdf} \caption{Left-hand side: a one-jet event with a scattered electron in the BCAL and the jet fully contained in the FCAL. Also seen in the FCAL is the proton remnant. Right-hand side: a zero-jet event where the scattered electron is in the BCAL and the jet remains inside the beam pipe. The proton remnant, and possibly some energy emerging from the jet in the beam pipe, are seen in the FCAL.} \label{fig:xedge} \end{center} \end{figure} The ZEUS collaboration showed in an earlier publication~\cite{epj:c49:523-544} that the kinematics of HERA and the design of the detectors allow extension of the measurements of the neutral current (NC) cross sections up to $x$ = 1. The results presented here are based on a much larger data sample and an improved analysis procedure. A typical NC high-$Q^{2}$ and high-$x$ event consists of the scattered electron and a high-energy collimated jet of particles in the direction of the struck quark. The electron and the jet are balanced in transverse momentum. The proton remnant mostly disappears down the beam pipe. The $x$ and $Q^2$ of events, in which the jet is well contained in the detector, may be determined by various techniques. However, the maximum $x$ value that can be reached is limited by the fact that at the low values of $y$ typical of these events, the uncertainty on $x=Q^2/ys$ increases as $\Delta x\sim \Delta y/y^2$. An improved $x$ reconstruction is achieved by observing that, in the limit of $x\rightarrow 1$, the energy of the struck quark represented by a collimated jet is $E_\mathrm{jet} \cong xE_p$. The expression for $x$ is \begin{equation} x = \frac {E_\mathrm{jet}(1+\cos \theta_\mathrm{jet})}{2 E_p \left( 1- \frac {E_\mathrm{jet}(1-\cos\theta_\mathrm{jet})}{2E_{e}} \right) } \, , \label{eq-xpt} \end{equation} where $\theta_\mathrm{jet}$ is the scattering angle of the jet in the detector. As $x$ increases and the jet associated with the struck quark disappears down the beam-pipe (see Fig.~\ref{fig:xedge}), the ability to reconstruct $x$ is limited by the energy loss. However, in these events, the cross section integrated from a certain limit in $x$, $x_\mathrm{edge}$, up to $x=1$ is extracted. The value of $x_\mathrm{edge}$ below which the jet is fully contained in the detector depends on $Q^2$: the higher the $Q^2$, the higher the value of $x_\mathrm{edge}$. \begin{figure}[h!] \begin{minipage}{0.48\linewidth} \includegraphics[width=0.95\linewidth]{highx_compare_eMp_heraPdf.pdf} \caption{ Ratio of the double-differential cross section for NC $e^-p$ scattering and of the double-differential cross section integrated over $x$ to the Standard Model expectation evaluated using the HERAPDF1.5 PDFs as a function of $x$ at different $Q^2$ values as described in the legend. For HERAPDF1.5, the uncertainty is given as a band. The expectation for the integrated bin is also shown as a hatched box. The error bars show the statistical and systematic uncertainties added in quadrature. The expectations of other commonly used PDF sets normalised to HERAPDF1.5 PDFs are also shown, as listed in the legend. Note that the scale on the $y$ axis changes with $Q^2$. } \label{fig:eMp} \end{minipage} \hfill \begin{minipage}{0.48\linewidth} \includegraphics[width=0.95\linewidth]{highx_compare_ePp_heraPdf.pdf} \caption{ Ratio of the double-differential cross section for NC $e^+p$ scattering and of the double-differential cross section integrated over $x$ to the Standard Model expectation evaluated using the HERAPDF1.5 PDFs as a function of $x$ at different $Q^2$ values as described in the legend. For HERAPDF1.5, the uncertainty is given as a band. The expectation for the integrated bin is also shown as a hatched box. The error bars show the statistical and systematic uncertainties added in quadrature. The expectations of other commonly used PDF sets normalised to HERAPDF1.5 PDFs are also shown, as listed in the legend. Note that the scale on the $y$ axis changes with $Q^2$. } \label{fig:ePp} \end{minipage} \end{figure} The double-differential Born-level cross sections as a function of $Q^2$ and $x$ have been measured in a finer binning in $x$ because of the large data samples in this analysis (53 099 events for the $e^- p$ and 37 361 for the $e^+ p$ sample). For the highest integrated $x$ bin, the respective average cross sections, defined as \begin{equation} I(x) = \frac{1}{1-x_{\rm {edge}}}\int_{x_{\rm {edge}}}^{1}\frac{d^2\sigma(x,Q^2)}{dxdQ^2}dx \;\; , \label{eqn-I(x)} \end{equation} have been obtained and plotted at $x=(x_\mathrm{edge}+1)/2$. The ratios of the measured cross sections to those expected from HERAPDF1.5~\cite{herapdf1.5} are shown in Figs.~\ref{fig:eMp} and~\ref{fig:ePp}. Note that for bins where no events are observed, the limit is quoted at $68$\% probability, neglecting the systematic uncertainty. Also shown are the predictions from a number of other PDF sets (ABM11~\cite{abm11}, CT10~\cite{ct10}, MSTW2008~\cite{mstw2008}, NNPDF2.3~\cite{nnpdf2.3}), normalised to the predictions from HERAPDF1.5. Within the quoted uncertainties, the agreement between measurements and expectations is good. \section{Summary} Final measurements of the $F_L$ structure function are being published by the HERA collaborations. The H1 collaboration covers a large kinematic range in $Q^2$, $1.5 < Q^2 < 800$ GeV$^2$. This is made possible by measuring scattered electrons down to 3 GeV due to good tracking and electromagnetic calorimetry in the rear direction. The results of the ZEUS collaboration in the $Q^2$ region covered by their measurements, $9 < Q^2 < 110$ GeV$^2$, are in general lower than those of H1 but, taking into account correlated uncertainties, are consistent with them. Both results are consistent with expectations, though at low $Q^2$ there are large uncertainties in the theoretical predictions. The ZEUS collaboration measured double-differential cross sections for $e^\pm p$ NC DIS events at $Q^2 >$ 725 GeV$^2$ up to $x\cong 1$. The fine binning in $x$ and the extension of the kinematic coverage up to $x\cong 1$ make the data an important input to fits constraining the PDFs in the valence-quark domain. \section*{Acknowledgments} This activity was partially supported by the Israel Science Foundation. \section*{References}
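A minimal numerical sketch of the jet-based reconstruction of Eq.~\eqref{eq-xpt} is given below; the beam energies are the nominal HERA values, while the jet kinematics in the example call are illustrative.

\begin{verbatim}
import numpy as np

E_e, E_p = 27.5, 920.0   # nominal HERA beam energies in GeV

def x_from_jet(E_jet, theta_jet):
    """High-x reconstruction from the jet energy and its
    scattering angle theta_jet (radians)."""
    num = E_jet * (1.0 + np.cos(theta_jet))
    den = 2.0 * E_p * (1.0 - E_jet * (1.0 - np.cos(theta_jet)) / (2.0 * E_e))
    return num / den

# illustrative forward jet carrying most of the proton momentum
print(x_from_jet(E_jet=600.0, theta_jet=0.05))   # x close to 0.66
\end{verbatim}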
\section{Introduction} The study of low frequency modes often provides a convenient and non-destructive tool to explore the properties of systems of trapped particles. For instance, the measurement of the breathing mode and the center-of-mass (c.o.m.) mode (also called sloshing mode or Kohn mode \cite{Kohn_1961}) frequencies allows one to extract the Debye length \cite{Bonitz_2010} and the particle charge \cite{Melzer_2001, Sheridan_2005} in complex plasmas. Similarly, in \cite{Moritz_2003} the authors characterize a trapped one-dimensional Bose gas by the frequency ratio of these two modes. Numerous papers are devoted to the theoretical understanding of these low-lying modes (see for instance \cite{Kohn_1961, Bonitz_2007, Henning_2008}). However, there is a need for theories covering more complex cases, such as particles experiencing a friction depending on space and/or velocity, or anharmonic traps. For dusty plasmas it has been shown that a negative friction may appear due to ion absorption by grains and create active particles \cite{Trigger_2003, Trigger_2003_2}. These features are also used to describe the so-called active Brownian particles in the context of plasma physics \cite{Dunkel_2004} or biological physics \cite{Mikhailov_1999, Erdmann_2005, Ebeling_2008}. In cold-atom experiments, a space-dependent friction may have important dynamical consequences for the stability of magneto-optical traps \cite{Labeyrie_2006, Pohl_2006}. Finally, let us also mention that trapping anharmonicity may be essential to understand the dynamics of Bose-Einstein condensates \cite{Ott_2003}. Our goal in this paper is to introduce a simple method able to deal qualitatively with the center-of-mass (c.o.m.) motion of systems of trapped particles, in situations where the friction may depend on space and/or velocity, and the trap may be anharmonic. This allows i) avoiding simulations of the full system, which may be numerically costly, and ii) emphasizing the physical phenomena at play in the c.o.m. motion within a simple qualitative analytical model. We then compare our findings with experimental measurements on a Magneto-Optical Trap. This paper is organized as follows. In section \ref{Sec: Scaling Ansatz} we introduce our model and the simplifying ansatz, which yields a prediction for the c.o.m. mode evolution. In section \ref{Sec: Test} we perform numerical tests, confronting the simplified theory with direct simulations of a one-dimensional plasma system. Using different friction profiles and trapping potentials, we assess the validity of the method, and also emphasize its limitations. Finally, in section \ref{Sec: MOT}, we study the c.o.m. mode for an atomic cloud in a Magneto-Optical Trap and predict that, for some parameters, the relaxation may change drastically as the number of atoms in the cloud is increased. This prediction is confirmed by experimental data obtained with a large magneto-optical trap of Rb$^{85}$. \section{Equation of the Sloshing motion}\label{Sec: Scaling Ansatz} Let us consider a system of $N$ particles confined by a trapping potential $\Phi(\mathbf{r})$, subject to binary interaction forces $\mathbf{F}_{bin}$. To derive the motion of the c.o.m.
we consider the continuum limit of that system and write the first equation of the Bogolyubov--Born--Green--Kirkwood--Yvon (BBGKY) hierarchy plus a Fokker--Planck operator: \begin{equation}\label{EQ: BBGKY + FP} \frac{\partial f}{\partial t} + \boldsymbol\nabla_{\mathbf{r}}.\left(\mathbf{v}f\right) - \boldsymbol{\nabla}_{\mathbf{r}}\Phi.\boldsymbol\nabla_{\mathbf{v}}f+C[g] = \Delta_{\mathbf{v}}\left(Df\right)+\boldsymbol\nabla_{\mathbf{v}}.\left(\kappa\mathbf{v}f\right) \end{equation} with $f(\mathbf{r}, \mathbf{v}, t)$ the one-particle distribution, $D$ the diffusion coefficient and $\kappa(\mathbf{r}, \mathbf{v})$ the friction, which may depend on the space and/or velocity coordinates. The interaction term is denoted $C[g]$ and is given by: \begin{equation} C[g](\mathbf{r}, \mathbf{v}, t) = \int \mathbf{F}_{bin}(\mathbf{r}, \mathbf{r}').\boldsymbol{\nabla}_{\mathbf{v}}g(\mathbf{r}, \mathbf{v}, \mathbf{r}', \mathbf{v}', t)d\mathbf{r}'d\mathbf{v}' \end{equation} with $g(\mathbf{r}, \mathbf{v}, \mathbf{r}', \mathbf{v}', t)$ the two-particle distribution. Let us stress that Eq. \eqref{EQ: BBGKY + FP} is equivalent to Langevin equations for the particle dynamics because we have made no assumption about the unknown function $g$. However, solving the BBGKY hierarchy is as difficult as solving the Langevin equations. In order to obtain our approximation of the c.o.m. mode we limit ourselves to the first moments of Eq. \eqref{EQ: BBGKY + FP}. Multiplying equation \eqref{EQ: BBGKY + FP} by $r_j/N$ (resp. $v_j/N$), integrating over $d\mathbf{r}d\mathbf{v}$ and combining the results leads to: \begin{equation}\label{EQ: Center-of-mass evolution} \begin{array}{ll} \displaystyle \frac{\partial^2 \left\langle r_j\right\rangle_{f}}{\partial t^2} = & \displaystyle -\left\langle \frac{\partial \Phi}{\partial r_j}(\mathbf{r})\right\rangle_{f} -\left\langle \kappa(\mathbf{r},\mathbf{v})v_j\right\rangle_{f}\vspace{2mm}\\ \displaystyle &+\frac{1}{N}\int F_{bin}^j(\mathbf{r}, \mathbf{r}') g(\mathbf{r}, \mathbf{v}, \mathbf{r}', \mathbf{v}', t)d\mathbf{r}d\mathbf{v}d\mathbf{r}'d\mathbf{v}', \end{array} \end{equation} where $j$ is a coordinate label, $F_{bin}^j$ is the $j^{th}$ component of $\mathbf{F}_{bin}$, and we have set: \begin{equation} \left\langle \chi\right\rangle_{f} = \frac{1}{N}\int \chi(\mathbf{r},\mathbf{v}) f(\mathbf{r},\mathbf{v})d\mathbf{r}d\mathbf{v}. \end{equation} Thanks to the action--reaction principle, the last term in \eqref{EQ: Center-of-mass evolution} vanishes, because the two-particle distribution is permutation invariant: \begin{equation} g(\mathbf{r}, \mathbf{v}, \mathbf{r}', \mathbf{v}', t) = g(\mathbf{r}', \mathbf{v}', \mathbf{r}, \mathbf{v}, t). \end{equation} We find here the classical result stating that the c.o.m. motion does not depend explicitly on the interaction between particles. Note that this cancellation does not require any mean field hypothesis. However, it is important to remark that for an anharmonic potential and/or non-constant friction, the interaction appears implicitly in the distribution profile $f$, which is unknown. Eq. \eqref{EQ: Center-of-mass evolution} is then not tractable. To deal with the unknown distribution $f$, we drastically simplify the problem by considering a dynamical ansatz that only takes into account the c.o.m.
motion of the particles: \begin{equation}\label{Eq: Ansatz} f(\mathbf{r},\mathbf{v},t)= f_0(\varphi_t(\mathbf{r},\mathbf{v})) \end{equation} with \begin{equation} \varphi_t(\mathbf{r},\mathbf{v}) = \left( \mathbf{r}-\boldsymbol{\eta}(t), \mathbf{v}-\dot{\boldsymbol{\eta}}(t) \right) \end{equation} and $f_0$ a stationary solution of Eq. \eqref{EQ: BBGKY + FP}. We also assume, without loss of generality, that $\langle \mathbf{r}\rangle_{f_0}=0$. With this hypothesis all the time dependence in the dynamics is now included in the function $\boldsymbol{\eta}$, which is simply equal to $\langle \mathbf{r}\rangle_{f}$. When the local mean velocity in the stationary state does not vanish, we expect that one should rather use: \begin{equation}\label{Eq: Ansatz Part2} \left\{ \begin{array}{ll} \varphi_t(\mathbf{r},\mathbf{v}) = \left(\mathbf{r}-\boldsymbol{\eta}(t)\right., \left. \mathbf{u}_0\left(\mathbf{r}-\boldsymbol{\eta}(t)\right)+\mathbf{v}-\dot{\boldsymbol{\eta}}(t)\right)\\ \mathbf{u}_0(\mathbf{r}) = \int \mathbf{v}f_0(\mathbf{r},\mathbf{v})d\mathbf{v} / \int f_0(\mathbf{r},\mathbf{v})d\mathbf{v}, \end{array} \right. \end{equation} In this article, we will stick to cases where $\mathbf{u}_0=0$. Now, using the ansatz \eqref{Eq: Ansatz} and Eq.~\eqref{EQ: Center-of-mass evolution}, we easily obtain: \begin{equation}\label{EQ: Sloshing motion} \ddot{\eta}_j+ \left\langle \frac{\partial \Phi}{\partial r_j}(\mathbf{r} + \boldsymbol{\eta})\right\rangle_{f_0} + \left\langle \kappa\left( \mathbf{r} +\boldsymbol{\eta}, \mathbf{v}+\dot{\boldsymbol{\eta}}\right) \left(v_j+\dot{\eta}_j\right)\right\rangle_{f_0}=0 \end{equation} with $\eta_j$ the $j^{th}$ component of $\boldsymbol{\eta}$. This result gives a generalization of the Kohn theorem \cite{Kohn_1961} where the whole system is spatially shifted. Let us stress that, in contrast with the constant friction case, even if it seems that the interactions do not appear, they are implicitly included in the shape of $f_0$, and thus they modify the evolution of $\boldsymbol{\eta}$. \section{Numerical Test}\label{Sec: Test} In this section we present numerical tests on the validity of Eq. \eqref{EQ: Sloshing motion} for different friction profiles and trapping potentials. The continuous description used so far was convenient to develop the theory; for the numerical simulations, on the other hand, it is easier to go back to particles. Thus, we shall compare the theory~\eqref{EQ: Sloshing motion} to direct $N$-body simulations. We integrate the $N$ Langevin equations using the Euler--Maruyama method. This method is adequate to compute average quantities when one is not interested in the exact trajectories of particles \cite{Talay_1990}. Indeed, this scheme only uses one evaluation of the interaction forces at each time step and saves computation time in the most expensive part: the computation of binary forces, which is $O(N^2)$. We first reach a stationary state, and then at time $t=0$ we spatially shift the whole system, and monitor the c.o.m. dynamics. The benchmark system in this section is a trapped one-dimensional plasma. It consists of $N$ particles confined by an external trap. We use a harmonic trap $\Phi(x)=(1/2)\omega^2x^2$, except in section \ref{Subsection: Anharmonic trap}. The particles interact through a repulsive one-dimensional Coulomb force. In this simple case, the force depends only on the relative position of the particles: $\mathbf{F}_{bin}(x,x')=C\times\text{sgn}(x-x')$, which allows us to perform $N$-body numerical simulations with a high number of particles without considering an approximate algorithmic scheme, such as a tree-code \cite{Barnes_1986}, to compute binary forces.
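A minimal Python sketch of such a simulation is given below (illustrative parameters). Note that for the sign interaction the total force on a particle follows from its rank among the sorted positions, so one sort per step can replace the generic $O(N^2)$ pair sum; for brevity we also start directly from a shifted Gaussian rather than first equilibrating.

\begin{verbatim}
import numpy as np

def simulate(N=10_000, C=1e-2, omega=17.8, kappa0=10.0, D=1.0,
             dt=1e-3, steps=5_000, shift=0.1, seed=1):
    """Euler-Maruyama integration of the N Langevin equations for the
    trapped 1D plasma with constant friction kappa0.  For
    F_bin = C sgn(x - x') the net interaction force on particle i is
    C (2 rank_i - (N - 1))."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 0.1, N) + shift    # shifted initial condition
    v = np.zeros(N)
    com = np.empty(steps)
    for k in range(steps):
        rank = np.argsort(np.argsort(x))   # rank of each particle
        F_int = C * (2.0 * rank - (N - 1))
        a = -omega**2 * x + F_int - kappa0 * v
        x = x + v * dt
        v = v + a * dt + np.sqrt(2.0 * D * dt) * rng.normal(size=N)
        com[k] = x.mean()                  # c.o.m. trajectory eta(t)
    return com

print(simulate()[-1])
\end{verbatim}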
Another advantage of our choice is that it allows us to use an analytical approximation of the stationary space distribution profile $\rho_0$. In the limit of a strongly interacting/cold plasma with constant friction, the distribution profile is: \begin{equation}\label{Eq: Plasma 1D stationnary profile} \rho_0(x) = \left\{ \begin{array}{ll} \frac{N}{2L_h}, & \text{if } |x|\leq L_h\\ 0, & \text{elsewhere}, \end{array} \right. \end{equation} as long as the typical size $L_h = NC/\omega^2$ is larger than $(D/\kappa_0\omega^2)^{1/2}$. Remark that the size of the system varies linearly with the number of particles and the interaction strength. We will now compare the results of the analytical model (Eq.~\eqref{EQ: Sloshing motion}), together with the analytical expression of the stationary solution (Eq.~\eqref{Eq: Plasma 1D stationnary profile}), to the numerical simulations. We will also see how a more precise knowledge of $f_0$ from simulations of the stationary state may increase the accuracy of Eq.~\eqref{EQ: Sloshing motion}. Finally, we note that one expects a mean field description based on a Vlasov--Fokker--Planck equation to be accurate in the regimes explored numerically. Other tests would be needed to test the method in regimes where mean field descriptions break down. \subsection{Space dependent friction}\label{Subsection: Space dependant friction} We start by considering the following friction profile: \begin{equation}\label{Eq: Space Friction profile} \kappa(x,v) = \kappa(x) = \kappa_0\left(1 + \frac{|x|}{l}\right), \end{equation} with $l$ a typical size and $\kappa_0$ the friction value at $x=0$. A priori, there is no reason that the stationary density $\rho_0$ keeps the shape given in Eq.~\eqref{Eq: Plasma 1D stationnary profile}; however, our numerical simulations showed that Eq.~\eqref{Eq: Plasma 1D stationnary profile} remains a good approximation as long as we set parameters such that \begin{equation} L_h\gg \frac{1}{\omega} \left(\frac{D}{\underset{|x|\leq L_h}{\min}\kappa(x)}\right)^{1/2}. \end{equation} Using Eq.~\eqref{Eq: Plasma 1D stationnary profile} to compute the averages in Eq. \eqref{EQ: Sloshing motion}, we obtain a (non-linear) equation for the c.o.m. motion: \begin{equation}\label{Eq: Prediction Friction space variable} \ddot{\eta} + \omega^2 \eta + \kappa_0\left(1+\frac{L_h}{2l}\right)\dot{\eta} + \frac{\kappa_0}{2lL_h}\dot{\eta}\eta^2 = 0. \end{equation} Figures \ref{Fig: Friction space variable} and \ref{Fig: Friction space variable 2} compare simulations and predictions for different perturbation amplitudes. A good agreement is obtained in strongly non-linear cases; it is even better when the local friction felt by the particles decreases. The agreement is less good, however, when the friction varies strongly (see the inset of figure \ref{Fig: Friction space variable 2}). A similar behavior was observed for the breathing oscillation with space-dependent friction in~\cite{Olivetti_2011}, and it is closely related to the ansatz assumption that the profile does not change during the oscillation. To give a schematic view, let us consider two particles with the same velocity but with different positions.
The first one is in a small-friction area while the other one is highly damped. Over the same time step, the two particles do not cover the same distance. The profile then suffers some compression or dilatation, which is not included in the dynamical ansatz. In summary, if the friction varies a lot and its values are not negligible with respect to the trapping constant, one may expect the assumptions behind \eqref{Eq: Ansatz} to be violated. The validity of our approach is thus related to the ratio friction/trapping, and we can summarize our findings as follows: if $\max_{|x|<L_h}(\kappa(x))\ll \omega$, or if the relative fluctuations of $\kappa(x)$ are small in the whole system, then the dynamical ansatz assumptions are well satisfied. \begin{figure}[!htpb] \includegraphics[scale=0.35]{./Figure_1.ps} \caption{(Color online) Comparison between $N$-body simulations and theoretical predictions given by \eqref{Eq: Prediction Friction space variable} for small and large perturbations. Parameters are $\Delta t=0.001$, $N=10^5$, particle interaction $C=10^{-2}$, $l=1.0$, $\omega=17.8$, $\kappa_0=10.0$, $D=1.0$ and $\eta(0)\simeq 0.02\times L_h$. We use the same parameters for the inset except $\eta(0)\simeq 2\times L_h$.\label{Fig: Friction space variable}} \end{figure} \begin{figure}[!ht] \includegraphics[scale=0.35]{./Figure_2.ps} \caption{(Color online) Comparison between $N$-body simulations and theoretical predictions given by \eqref{Eq: Prediction Friction space variable} for small and large perturbations. Parameters are $\Delta t=0.001$, $N=10^5$, $C=1$, $l=1.0$, $\omega=17.8$, $\kappa_0=10.0$, $D=1.0$ and $\eta(0)\simeq 0.02\times L_h$. We use the same parameters for the inset except $\eta(0)\simeq 2\times L_h$.\label{Fig: Friction space variable 2}} \end{figure} Beyond the comparison between predictions and simulations, figures \ref{Fig: Friction space variable} and \ref{Fig: Friction space variable 2} show an interesting phenomenon: when the size of the system increases, the c.o.m. motion changes drastically. Small systems undergo an underdamped relaxation, whereas large systems become overdamped. Clearly, this is related to the ratio between the number of particles experiencing a large friction and those feeling a small friction. Eq.~\eqref{EQ: Sloshing motion} yields an approximate value for the threshold. Indeed, by considering the linear expansion of Eq.~\eqref{EQ: Sloshing motion} we obtain the simple criterion: \begin{equation}\label{Eq: Threshold prediction Over/Underdamped} \left\langle \frac{\partial\kappa(\mathbf{r},\mathbf{v})v_j}{\partial v_j}\right\rangle_{f_0}^2 - 4\left\langle \frac{\partial^2 \Phi}{\partial r_j^2}(\mathbf{r}) \right\rangle_{f_0} \left\{\begin{array}{ll} < 0 & \Rightarrow \text{Underdamped}\\ > 0 &\Rightarrow \text{Overdamped} \end{array}\right.. \end{equation} It leads to the critical number of particles (or equivalently the interaction strength in our specific example, see Eq.~\eqref{Eq: Space Friction profile}): \begin{equation} N_c = 2\frac{l\omega^2}{C}\left(2\frac{\omega}{\kappa_0}-1\right), \end{equation} and we conclude that, for the same range of parameters, the behavior of the c.o.m. of the plasma changes qualitatively when the number of particles increases.
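The criterion is easy to evaluate; the following sketch does so for the flat profile \eqref{Eq: Plasma 1D stationnary profile}, for which $\langle |x| \rangle = L_h/2$, using the parameters of Figure \ref{Fig: Friction space variable} (the half factors and the critical number follow from the formulas above).

\begin{verbatim}
omega, kappa0, l, C = 17.8, 10.0, 1.0, 1e-2   # parameters of Fig. 1

def regime(N):
    """Sign of <kappa>^2 - 4 <Phi''>, with <kappa> = kappa0 (1 + L_h/2l)
    and <Phi''> = omega^2 for the flat stationary profile."""
    L_h = N * C / omega**2
    disc = (kappa0 * (1.0 + L_h / (2.0 * l)))**2 - 4.0 * omega**2
    return "overdamped" if disc > 0 else "underdamped"

N_c = 2 * l * omega**2 / C * (2 * omega / kappa0 - 1.0)
print(N_c, regime(int(0.5 * N_c)), regime(int(2 * N_c)))
\end{verbatim}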
\subsection{Velocity dependent friction} We consider now another test case, where the friction varies linearly with the velocity: \begin{equation}\label{Eq: Velocity Friction profile} \kappa(x, v) = \kappa(v) = \kappa_0\frac{|v|}{v_0}, \end{equation} with $v_0$ a typical velocity and $\kappa_0$ a typical friction. We have no analytical expression for the stationary distribution $f_0$. However, we will see that a numerical estimation of $f_0$ is sufficient to obtain a quite good prediction of the c.o.m. motion. In this case, we consider the discrete distribution of particles $f_0^N$ obtained from a numerical simulation, which approximates the one-particle distribution: \begin{equation} f_0^N(x,v)=\sum_{i=1}^N\delta(x-x_i)\delta(v-v_i)\simeq f_0(x,v). \end{equation} Using $f_0^N$ to approximate $f_0$, Eq.~\eqref{EQ: Sloshing motion} leads to \begin{equation}\label{Eq: Prediction Friction velocity variable} \ddot{\eta} +\omega^2\eta +\frac{\kappa_0}{N v_0}\sum_{i=1}^N |v_i+\dot{\eta}|(v_i+\dot{\eta}) = 0. \end{equation} The evolution of Eq.~\eqref{Eq: Prediction Friction velocity variable} is computed using a fourth-order Runge--Kutta method, and figure~\ref{Fig: Friction velocity variable} shows some simulations using \eqref{Eq: Velocity Friction profile} for different perturbation amplitudes and diffusion coefficients. Agreement is not perfect but very good results are obtained for large perturbations. This shows that the dynamical ansatz method is not limited to small perturbations. \begin{figure}[!htpb] \includegraphics[scale=0.25]{./Figure_3.ps} \caption{(Color online) Comparison between $N$-body simulations (solid line) and theoretical predictions (dotted line) given by \eqref{Eq: Prediction Friction velocity variable}. Figures (a), (c), (e) correspond to a small perturbation $\eta(0)\simeq 0.02\times L_h$. Figures (b), (d), (f) correspond to a large perturbation $\eta(0)\simeq 2\times L_h$. The diffusion coefficient increases from top to bottom. (a)(b): $D=5$; (c)(d): $D=50$; (e)(f): $D=500$. Other parameters are $\Delta t=0.001$, $N=10^5$, $C=10^{-2}$, $v_0=1.0$, $\omega=17.8$ and $\kappa_0=10.0$. We continue to use the parameter $L_h$ because for this set of parameters the hypothesis of a strongly interacting/cold plasma is well satisfied. \label{Fig: Friction velocity variable}} \end{figure} In a similar manner as for the space-dependent friction case, we observe interesting features in figure~\ref{Fig: Friction velocity variable}. Varying the diffusion may change the dynamics of the c.o.m. This phenomenon can be understood as follows: when $D$ increases, particles explore a larger region in phase space, including parts with larger velocity. Considering Eq.~\eqref{Eq: Velocity Friction profile}, the particles are then more damped and the global evolution changes from underdamped to overdamped. Such a switching behavior between two qualitatively different evolutions has already been studied in different models with velocity-dependent friction \cite{Mikhailov_1999, Erdmann_2005, Ebeling_2008}. In these papers the authors do not consider the c.o.m. motion, and in their case the value of the diffusion coefficient implies a transition between a translation and a rotation mode. Nevertheless, we point out that the dynamical features are closely related to the nature of the friction and to the shape of the one-particle distribution, which depends on the diffusion coefficient $D$.
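The Runge--Kutta integration of Eq.~\eqref{Eq: Prediction Friction velocity variable} is straightforward; the sketch below uses a Gaussian velocity sample as a stand-in for the simulated stationary sample $\{v_i\}$, and all parameter values are illustrative.

\begin{verbatim}
import numpy as np

omega, kappa0, v0 = 17.8, 10.0, 1.0
rng = np.random.default_rng(2)
v_i = rng.normal(0.0, 1.0, 10_000)   # stand-in for the stationary sample

def rhs(state):
    """RHS of the c.o.m. equation with kappa(v) = kappa0 |v|/v0; the
    friction term is an empirical average over the sampled v_i."""
    eta, deta = state
    fric = kappa0 / v0 * np.mean(np.abs(v_i + deta) * (v_i + deta))
    return np.array([deta, -omega**2 * eta - fric])

def rk4_step(state, dt):
    k1 = rhs(state)
    k2 = rhs(state + 0.5 * dt * k1)
    k3 = rhs(state + 0.5 * dt * k2)
    k4 = rhs(state + dt * k3)
    return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

state, dt = np.array([2.0, 0.0]), 1e-3   # initial shift eta(0) = 2
for _ in range(5_000):
    state = rk4_step(state, dt)
print(state)
\end{verbatim}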
We now investigate a friction profile presenting large variations and a negative part: \begin{equation}\label{Eq: Negative Friction} \kappa(x,v)=\kappa(v)=-\kappa_0\left[1 - \left(\frac{v}{v_0}\right)^2\right], \end{equation} with $v_0$ a typical velocity and $-\kappa_0$ the friction value at $v=0$. This friction profile is negative for $|v|<v_0$, and particles increase their energy in this region. With such a friction profile, the hypotheses underlying \eqref{Eq: Ansatz} are expected to be completely violated. Figure~\ref{Fig: Negative Friction} shows the comparison between direct numerical simulations and the prediction using \eqref{EQ: Sloshing motion} with the discrete density $f_0^N$. \begin{figure} \includegraphics[scale=0.25]{./Figure_4} \caption{(Color online) Comparison between $N$-body simulations (solid line) and theoretical predictions (dotted line) given by the friction profile \eqref{Eq: Negative Friction}. (a): $\eta(0)\simeq 0.02\times L_h$; (b): $\eta(0)\simeq 2\times L_h$. Other parameters are $\Delta t=0.001$, $N=10^5$, $C=10^{-2}$, $v_0=1.0$, $\omega=17.8$, $\kappa_0=10.0$ and $D=1.0$. We continue to use the parameter $L_h$ because for this set of parameters the hypothesis of a strongly interacting/cold plasma is well satisfied. \label{Fig: Negative Friction}} \end{figure} In this case, as could be anticipated, the dynamical ansatz fails to predict the c.o.m. motion. Indeed, a negative friction induces some local dilation/compression which is not included in the dynamical ansatz. For large perturbations we obtain a better result, because the whole system starts in a positive-friction region and the local dilation/compression effects become smaller. However, when the particles again reach the negative-friction region, a shift appears between prediction and simulation. \subsection{Anharmonic trap}\label{Subsection: Anharmonic trap} In this section we consider a one-dimensional plasma with constant friction $\kappa$ in an anharmonic trap. The trapping force used is: \begin{equation}\label{Eq: Anharmonic Force} \mathbf{F}_{trap}(x)=\frac{\omega^2}{1+4(\delta-\mu x)^2} - \frac{\omega^2}{1+4(\delta+\mu x)^2} \end{equation} with $\boldsymbol{\nabla}_{x}\Phi(x)=-\mathbf{F}_{trap}(x)$, $\delta<0$ and $\mu>0$. This kind of anharmonic trap appears for instance in some models of cold atoms in magneto-optical traps (see section \ref{Sec: MOT}). Figure~\ref{Fig: Anharmonic} shows a comparison between numerical simulation and prediction when the number of particles lying in the strongly anharmonic region becomes more and more important. To highlight this aspect, the figure represents the absolute trapping force $|F_{trap}(x)|$ and the linear trapping force obtained by expanding $F_{trap}(x)$ around $x=0$. \begin{figure} \includegraphics[scale=0.25]{./Figure_5.ps} \caption{(Color online) Figures (a), (c), (e): spatial stationary state from $N$-body simulations and absolute trapping force $|F_{trap}(x)|$ (solid line). The associated absolute harmonic trapping force (dotted line) is included for reference purposes. Figures (b), (d), (f): comparison between $N$-body simulations (solid line) and theoretical prediction (dotted line) given by the trapping force~\eqref{Eq: Anharmonic Force}. The interaction strength increases from top to bottom: $C \in \{2.5, 5, 10\} (\times 10^{-5})$. Other parameters are $\Delta t=0.001$, $N=10^5$, $\eta(0)=2.0$, $\delta=-6.0$, $\mu=1.0$, $\omega=17.8$, $\kappa=10.0$ and $D=0.1$. \label{Fig: Anharmonic}} \end{figure}
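For reference, the trapping force \eqref{Eq: Anharmonic Force} used in Figure \ref{Fig: Anharmonic} reads in code as follows (a minimal sketch; the comparison at $x=3$ is an illustrative probe of the anharmonicity).

\begin{verbatim}
import numpy as np

omega, delta, mu = 17.8, -6.0, 1.0   # parameters of Fig. 5

def F_trap(x):
    """Anharmonic trapping force of this section."""
    x = np.asarray(x, dtype=float)
    return (omega**2 / (1.0 + 4.0 * (delta - mu * x)**2)
            - omega**2 / (1.0 + 4.0 * (delta + mu * x)**2))

# linear (harmonic) approximation around x = 0 ...
h = 1e-6
k_lin = (F_trap(h) - F_trap(-h)) / (2.0 * h)
# ... and the deviation from it further out, signalling anharmonicity
print(k_lin, F_trap(3.0) / (k_lin * 3.0))
\end{verbatim}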
\label{Fig: Anharmonic}} \end{figure} Clearly the dynamical ansatz fails to describe the whole c.o.m. dynamics when to number of particles in the anharmonic region becomes too important. We obtain the same results as for a negative velocity dependent friction: the dynamical ansatz hypothesis are not satisfied any more. As an example, \cite{Ott_2003} shows for a Bose-Einstein condensate that a non harmonic trap may lead to several new features. In their cases, they have identified several nonlinear effects due to anharmonicity, such as nonlinear mode mixing. In conclusion, the dynamical ansatz method must be used with care when the number of particles in the anharmonic region is important in comparison with those in the harmonic one. On the other hand, the dynamical ansatz yields satisfactory results for the c.o.m. motion in a weakly anharmonic trap. \section{Application: Large Magneto-optical trap}\label{Sec: MOT} Recently it has been shown that increasing the number of atoms in a magneto-optical traps (MOT) may trigger dynamical instabilities, that are absent in smaller clouds \cite{Pohl_2006, Labeyrie_2006}. In this section, we apply the previous results to the case of the magneto-optical trap. We start by briefly introducing one of the simplest ways to model a MOT with Rb$^{85}$ atoms using the transition $F=3\rightarrow F'=4$ of the $D2$ line. We show that a the model naturally induces a space dependent friction, which leads to a qualitative change in the dynamics for the center-of-mass relaxation. Finally, we provide some experimental evidence confirming this prediction. \subsection{Dynamics of the center-of-mass of a MOT}\label{Sec: MOTTh} We use the so-called low-intensity Doppler model which is based on a velocity confinement due to the Doppler cooling and a spatial confinement due to the Zeeman effect \cite{Metcalf_1999}. Let us stress that we neglect the sub-Doppler effect. We assume that the contribution of the sub-Doppler cooling does not change qualitatively the behavior of a large MOT with Rubidium atoms. The average force $\mathbf{F}(\mathbf{r},\mathbf{v})$ acting on a single atom comes from the radiative pressure force: \begin{equation} \begin{array}{lcr} F^i(r_i,v_i) &=& \displaystyle \frac{\hbar k_L \Gamma}{2M}\frac{I_0}{I_{sat}}\frac{1}{1+\frac{4(\delta -\mu_ir_i-k_Lv_i)^2}{\Gamma^2}}\\ &-& \displaystyle \frac{\hbar k_L \Gamma}{2M}\frac{I_0}{I_{sat}} \frac{1}{1+\frac{4(\delta +\mu_ir_i + k_Lv_i)^2}{\Gamma^2}}, \end{array} \end{equation} with $F^i$ the $i^{th}$ component of $\textbf{F}$, $M$ the mass of Rb$^{85}$ atoms, $I_0$ the laser intensity of lasers beams along the six directions, $I_{sat}$ the saturation intensity, $\delta$ the detuning of the laser frequency $\omega_L$ with respect to the atomic resonance $\omega_A$ ($\delta=\omega_L-\omega_A<0$), $\Gamma$ the natural linewidth of the transition used, $k_L$ the laser wave number and $\mu_i r_i$ the Zeeman shift where $\mu_i$ depends on the applied magnetic field. One usually considers the limit $k_Lv_i/\delta\ll 1$ and $\mu_i r_i/\delta\ll 1$, which allows to extract a friction and a trapping force. In the case of a large low temperature MOT, we assume that linearization in space is less reasonable than a linearization velocity. 
The radiative pressure force becomes \begin{equation}\label{Eq: Radiative Pressure Force Linearize} \left\{\begin{array}{l} F^i(r_i,v_i) = F_{trap}^i(r_i) - \kappa(r_i)v_i + \mathcal{O}\left(v_i\right)\\ F_{trap}^i(r_i)=\frac{\hbar k_L \Gamma}{2M}\frac{I_0}{I_{sat}}\left[ \frac{\Gamma^2}{\Gamma^2+4(\delta -\mu_i r_i)^2} - \frac{\Gamma^2}{\Gamma^2+4(\delta +\mu_i r_i)^2} \right]\\ \kappa(r_i)=- \frac{\hbar k_L \Gamma}{2M}\frac{I_0}{I_{sat}}\left[ \frac{8k_L\Gamma^2 (\delta -\mu_i r_i)}{\left(\Gamma^2+4(\delta -\mu_i r_i)^2\right)^2} + \frac{8k_L\Gamma^2 (\delta +\mu_i r_i)}{\left(\Gamma^2+4(\delta +\mu_i r_i)^2\right)^2} \right]. \end{array}\right. \end{equation} In order to simplify our theoretical consideration and apply the dynamical ansatz method described in the section \ref{Sec: Scaling Ansatz}, we consider the symmetric approximation of the friction part of Eq.\eqref{Eq: Radiative Pressure Force Linearize} (in particular $\mu_i=\mu$): \begin{equation}\label{Eq: Radiative Pressure Force Friction Approximate} \begin{array}{ll} \kappa(\mathbf{r})=& \displaystyle - \frac{\hbar k_L \Gamma}{2M}\frac{I_0}{I_{sat}} \times \\ & \displaystyle \left[ \frac{8k_L\Gamma^2 (\delta -\mu|\mathbf{r}|)}{\left(\Gamma^2+4(\delta -\mu|\mathbf{r}|)^2\right)^2} + \frac{8k_L\Gamma^2 (\delta +\mu|\mathbf{r}|)}{\left(\Gamma^2+4(\delta +\mu|\mathbf{r}|)^2\right)^2} \right], \end{array} \end{equation} which coincides with expression \eqref{Eq: Radiative Pressure Force Linearize} along the axis. This approximate friction \eqref{Eq: Radiative Pressure Force Friction Approximate} should preserve the important features of the system, albeit in a simplified way. It is commonly known in the cold-atoms community that this model is restricted to small MOT \cite{Sesko_1991}, \textit{i.e.} small number of particles. For large MOT we have to consider two other contributions: an attractive force $\mathbf{F}_A$ which comes from a screening effect, and a repulsive force which comes from multiple scattering. In the small optical width region, these two forces satisfy \cite{Sesko_1991}: \begin{equation} \boldsymbol{\nabla}.\left[\mathbf{F}_A(\mathbf{r})\right] = -\frac{\sigma_L^2 I_0}{c} \rho(\mathbf{r}) \end{equation} and \begin{equation} \boldsymbol{\nabla}.\left[\mathbf{F}_R(\mathbf{r})\right] = \frac{\sigma_R\sigma_L I_0}{c} \rho(\mathbf{r}), \end{equation} where $\sigma_L$ (resp. $\sigma_R$) is the laser absorption (resp. atom scattering) cross section, $c$ the light velocity. Assuming that $\mathbf{F}_A$ and $\mathbf{F}_R$ derive from a potential (which is not true for the former), we can use the Gauss theorem to consider these forces as a Coulombian binary interaction force: \begin{equation} \mathbf{F}_{bin}(\mathbf{r}, \mathbf{r}')=C\times\frac{\mathbf{r}-\mathbf{r}'}{|\mathbf{r}-\mathbf{r}'|^3}. \end{equation} The MOT is then described as a non-neutral plasma. When the temperature is low enough and the anharmonicity is weak, we obtain the same property as the toy-model used in our numerical test, \textit{i.e.} a stationary state: \begin{equation} \rho_0(\mathbf{r}) = \int f_0(\mathbf{r},\mathbf{v})d\mathbf{v} = \left\{\begin{array}{cl} \frac{3\omega^2}{C}& \text{, if } |\mathbf{r}|<L_h\\ 0 & \text{, elsewhere} \end{array},\right. \end{equation} with $L_h^3= NC/(4\pi \omega^2)$. 
Exactly as in section \ref{Subsection: Space dependant friction}, we show that the friction profile given in Eq.~\eqref{Eq: Radiative Pressure Force Friction Approximate} leads to transitions between overdamped or underdamped relaxation when the number of particles change. To predict the transitions between these two behaviors we linearize equation Eq.~\eqref{EQ: Sloshing motion}. If we consider a perturbation along the $i^{th}$ direction, we obtain the condition: \begin{equation} \left\langle \kappa(\mathbf{r}) \right\rangle_{f_0}^2 - 4 \left\langle\frac{\partial^2 \Phi}{\partial r_i^2}(\mathbf{r}) \right\rangle_{f_0} \left\{ \begin{array}{ll} <0 & \Rightarrow\text{Underdamped}\\ >0 & \Rightarrow\text{Overdamped} \end{array} \right. \end{equation} We represent on figure~\ref{Fig: Sloshing mode for MOT} the different behaviors, depending on the detuning $\delta$ and the magnetic field $\nabla B$ when the size of the system increases \textit{i.e.} when the number of particles increase. It is possible to obtain three different behaviors depending on the parameter values: underdamped relaxation; overdamped relaxation; or the stationary state is unstable. This latter possibility corresponds to $\langle \kappa(\mathbf{r})\rangle_{f_0}<0$. This implies that $\kappa(\mathbf{r})$ can be negative and that a lot of particles lies in those regions. We will not discuss this case anymore since our numerical tests show that the method is unreliable when negative friction plays a role. An important feature shown in figure~\ref{Fig: Sloshing mode for MOT} is that for the same value of detuning and magnetic field, it is possible to observe a modification of the relaxation dynamics by increasing the number of atoms. \begin{figure} \includegraphics[scale = 0.40]{./Figure_6.ps} \caption{(Color on line) Theoretical behavior of the c.o.m. mode relaxation in the parameters plane ($ \nabla B; \delta/\Gamma$) for different system sizes. The diamond shows the dynamical changes experienced by the c.o.m. motion as the MOT's size increases, at fixed parameter values ($15; -1$). (a): $L_h=0.5$~mm; (b): $L_h=1$~mm; (c): $L_h=2$~mm; (d): $L_h=4$~mm. Other parameters are $k_L = 2\pi/\lambda$ with $\lambda = 780\times10^{-9}$~m and $\mu = 2\pi\mu_0g\nabla B$ with $\mu_0=2.1\times10^6$~G$^{-1}$ and $g=1.0$ an effective Land\'e g-factor of the transition used. \label{Fig: Sloshing mode for MOT}} \end{figure} For example, increasing $L_h$ from $0.5$~mm to $4$~mm with $\delta/\Gamma=-2.5$ and $\nabla B = 15$~G/cm, the relaxation of the center-of-mass changes from underdamped to overdamped. Considering the friction profile obtained for those parameters (see figure~\ref{Fig: Friction Profile MOT}) and using the same consideration as in section \ref{Sec: Test}, we are able to understand this behavior: $\left\langle \kappa(\mathbf{r}) \right\rangle_{f_0}$ is just the average friction felt by the whole system; the threshold separating overdamped and underdamped is obtained by comparing this average friction with $\left\langle \partial^2 \Phi /\partial r_i^2(\mathbf{r}) \right\rangle_{f_0}$. Figure~\ref{Fig: Friction Profile MOT} represent these quantities for harmonic trap to simplify as much as possible the discussion. When the size of the system is roughly between $2.5-3$~mm, the local friction at the edge of the system is higher than the critical friction, nevertheless the system is underdamped because the average friction stays below the threshold. For $|\mathbf{r}|\geq 3$~mm the system becomes overdamped. 
For an even larger cloud, the friction decreases, the system may become underdamped again (not seen in the figure). \begin{figure} \includegraphics[scale = 0.35]{./Figure_7.ps} \caption{(Color on line) Representation of the different friction depending on the size of the system for $\delta/\Gamma= -2.5$ and $\nabla B=15$~G/cm. Dotted line: friction profile $\kappa(\mathbf{r})$ given by \eqref{Eq: Radiative Pressure Force Friction Approximate}; dashed line: average friction $\langle \kappa(\mathbf{r})\rangle_{f_0}$; straight line: critical friction considering the harmonic approximation of the trapping potential $\Phi$. \label{Fig: Friction Profile MOT}} \end{figure} Let us stress that we used a simple model based on Doppler cooling to describe a MOT and many phenomena are not included in our study: sub-Doppler cooling is neglected (this can be important for the c.o.m. relaxation in small systems); the friction profile is assumed to be independent on the number of particles; the ``effective charge'' in the interaction force is taken as a constant. We do not expect this model to predict exactly the transition between the different regimes; however, in some cases the Doppler model should be sufficient to describe qualitatively the system. Note that \cite{Xu_2002} shows that atomic properties may dramatically changes the relaxation: for alkaline-earth-metal atoms it is possible to observe underdamped oscillations \cite{Xu_2002}, while the same regime with alkali-metal atoms shows a strongly overdamped relaxation \cite{Steane_1991}. \subsection{Experimental results}\label{Sec: MOTExp} We present here some experimental evidence supporting the analysis developed in the previous section. Our experimental setup has been described elsewhere \cite{Labeyrie_2006}. We load a large magneto-optical trap containing up to 2$\times 10^{10}$ Rb$^{85}$ atoms from a dilute room-temperature vapor. The MOT employs 6 large (waist = 4 cm) independent laser beams, tuned slightly below (typically by $-3\Gamma$) the $F = 3 \rightarrow F' = 4$ transition of the $D2$ line. An additional repumping beam is also applied to the atoms, whose intensity is used to control the number of atoms in the $F = 3$ hyperfine level. To study the dynamics of the MOT's center-of-mass, we image the fluorescence light of the cloud on a photodiode, with a mask blocking half of the MOT's image (see figure~\ref{Fig:damping}(a)). This setup is thus sensitive to any displacement of the center-of-mass of the cloud in the plane orthogonal to the line of sight of the imaging optics. \begin{figure} \includegraphics[scale = 0.30]{./Figure_8.ps} \caption{(Color on line) Experimental observation of number-dependent MOT center-of-mass dynamics. (a) Detection setup. A fluorescence image of the MOT is formed on a photodiode (Ph) using lens (L). A mask (M) blocks half of the MOT image, rendering the photodiode signal sensitive to a lateral displacement of the MOT. (b) N-dependent dynamics. We record the sloshing motion of the cloud following a small displacement of the trap at $t = 0$. The different curves correspond to an increasing number of atoms from top to bottom.} \label{Fig:damping} \end{figure} Figure~\ref{Fig:damping}(b) shows the measured impact of the atom number on the dynamics of the MOT's center-of-mass. Using an offset magnetic field, we displace the MOT's center-of-mass. When this offset field is switched off at $t=0$, the MOT is free to evolve in the initial trapping conditions and the MOT then returns to its equilibrium position. 
As shown on figure~\ref{Fig:damping}(b), for low atom number the dynamics of the center of mass is overdamped, typical for standard MOTs. However we observe a clear transition from an overdamped behavior at low atom number, to an underdamped one when the atom number is large. This cross-over is heralding the instability regime observed in \cite{Labeyrie_2006}. This transition to an underdamped motion of the center of mass of the MOT before reaching the instability regime is similar to the narrow region indicated in blue in figure~\ref{Fig: Sloshing mode for MOT} separating the overdamped region from the instability region. \section{Conclusion} In the present work we have introduced a dynamical ansatz to obtain an approximate evolution of the center-of-mass mode considering a trapped system of interacting particles, assuming a global translation of the whole system. This approach allows to describe numerous problems with arbitrary perturbation amplitudes of the center-of-mass mode. This includes systems with space and/or velocity dependent friction as well as anharmonic trapping potential. We have confronted the predictions of the simplified approach with direct $N$-body simulations of Langevin equations, considering a one dimensional plasma with different friction and trap profiles as test case. The main conclusion is that the agreement is satisfactory as long as the hypotheses underlying the ansatz are well enough satisfied. Using this approach on a model for a magneto-optical trap, we predict transitions between overdamped and underdamped motion for the center of mass of the cloud, as the number of trapped atoms increases. Finally, we provide some experimental evidence for such a transition, thus confirming some of the predictions made. Note that we assume in this work a constant diffusion coefficient and an external force which derives from a potential. However it is straightforward to extend the dynamical ansatz method. For instance in the case of a space and/or velocity dependent diffusion or with rotational forces. Finally a combination of the dynamical ansatz introduced in this paper and the scaling ansatz method detailed in \cite{Olivetti_2009, Olivetti_2011} may be useful to describe simultaneously the center-of-mass motion and the breathing mode oscillation \cite{Guery_Odelin_2002}. \acknowledgments This work is partially supported by the F\'ed\'eration W. D\"oblin (FR 2800). Alain Olivetti is grateful to J. Barr\'e, D. Broizat and C. Garc\'ia for fruitful discussions.
proofpile-arXiv_067-11077
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Introduction} Computational cognitive modeling is an approach in cognitive sciences which explores human cognition by implementing detailed computational models. This enables researchers to execute their models and simulate human behavior \cite{sun_introduction_2008}. Due to their executability, computational models have to be defined precisely. Thereby ambiguities appearing in verbal-conceptual models can be eliminated. By conducting the same experiments with humans and an executable cognitive model, the plausibility of a model can be verified and gradually improved To implement cognitive models, it is helpful to introduce \emph{cognitive architectures} which bundle well-investigated research results from several disciplines of psychology to a unified theory. On the basis of such an architecture, researchers are able to implement domain-specific computational models without having to deal with the remodeling of fundamental psychological results. Additionally, cognitive architectures ideally constrain modeling to plausible models which facilitates the modeling process \cite{taatgen_modeling_2006}. One of the most popular cognitive architectures is \emph{Adaptive Control of Thought -- Rational} (ACT-R), a production rule system introduced by John R. Anderson \cite{AndersonLe98,anderson_integrated_2004}. It has been used to model cognitive tasks like learning the past tense \cite{taatgen_why_2002}, but is also used in human-computer interaction or to improve educational software by simulating human students \cite[p. 1045 sqq.]{anderson_integrated_2004}. Although providing a theory of the psychological foundations, ACT-R lacks a formal definition of its underlying concepts from a mathematical-computational point of view. This led to a reference implementation full of assumptions and technical artifacts beyond the theory making it difficult to overlook and inhibiting adaptability and extensibility. The situation improved with the modularization of the psychological theory, but it is still difficult to exchange more central parts of the implementation like conflict resolution \cite{stewart_deconstructing_2007}. To overcome these drawbacks, we have formalized parts of the implementation closing the gap between the psychological theory and the technical implementation. We describe an implementation of ACT-R which has been derived from our formalization using Constraint Handling Rules (CHR). Due to the power of logic programming, our implementation is very close to the formalization and leads to short and concise code covering the fundamental parts of the ACT-R theory. For the compilation of ACT-R models to CHR programs, source-to-source transformation is used. Our implementation is highly adaptable. In this paper, this is demonstrated by integrating four different conflict resolution strategies. Despite its proximity to the theory, the implementation can reproduce the results of the original implementation as exemplified in the evaluation of our work. The formalization may support the understanding of the details of our implementation, hence we refer to \cite{gall_rule_based_2013} and and the online appendix (\ref{sec:formalization}). In section~\ref{sec:actr}, we give an overview of the fundamental concepts of ACT-R and shortly describe their implementation in CHR. Section~\ref{sec:conflict_resolution} describes the general conflict resolution process of ACT-R. Then the implementation of four different conflict resolution strategies proposed in the literature is presented. 
To evaluate our implementations, we use an example to compare the results of our implementation with those of the reference implementations where available in section~\ref{sec:evaluation}. Eventually, in section~\ref{sec:related_work} some related work is presented and a conclusion is given in section~\ref{sec:conclusion}. \section{A CHR implementation of ACT-R} \label{sec:actr} In the following, a short overview of the fundamental concepts of the ACT-R theory and their transfer to CHR is given. For reasons of space, we refer to the literature for an introduction to CHR \cite{fru_chr_book_2009}. For a more detailed introduction to ACT-R, see \cite{anderson_integrated_2004} and \cite{taatgen_modeling_2006}. The reference implementation of ACT-R is written in Lisp and can be obtained from the ACT-R website \cite{actr_homepage}. Details of our implementation including the formalization it is based on can be found in~\cite{gall_rule_based_2013}. Parts of the formalization are located in the online appendix (\ref{sec:formalization}). \subsection{Architecture} \label{sec:modular_architecture} ACT-R is a production rule system which distinguishes two types of knowledge: \emph{declarative knowledge} holding static facts and \emph{procedural knowledge} representing processes controlling human cognition. For example, in a model of the game \emph{rock, paper, scissors}, a declarative fact could be ``The opponent played scissors'', whereas a procedural information could be that a round is won, if we played rock and the opponent played scissors. Declarative knowledge is represented as \emph{chunks}. Each chunk consists of a symbolic name and labeled slots which hold symbolic values. The values can refer to other chunk names, i.e. chunks can be connected. Chunks are typed, i.e. the number and names of the slots provided by a chunk are determined by a type. As usual for production rule systems, procedural knowledge is represented as rules of the form IF \textit{conditions} THEN \textit{actions}. Conditions match values of chunks, actions modify them. \looseness=-1 The psychological theory of ACT-R is modular: There are modules for each function of the human mind like a declarative module holding the declarative facts, a goal module taking track of the current goal of a task and buffering information and a procedural module holding the procedural information and controlling the cognitive process. There are also modules to interact with the environment like a visual module perceiving the visual field. The modules are independent from each other, i.e. there is no direct communication between them. Each module has a fixed number of \emph{buffers} associated with it. The buffers can hold at most one single piece of information a time, i.e. one chunk. Modules can put chunks into their associated buffers. The core of the system is the procedural module which can access the buffers of all other modules but does not have an own buffer. It consists of a \emph{procedural memory} with a set of production rules. The conditions of a production rule refer to the contents of the buffers, i.e. they match the values of the chunk's slots. The formal applicability condition of rules can be found in the online appendix (\ref{sec:formalization}). There are three types of actions whose arguments are encoded as chunks as well: First of all, \emph{buffer modifications} change the content of a buffer, i.e. the values of some of the slots of a chunk in a buffer. 
Secondly, the procedural module can state \emph{requests} to other modules which then change the contents of their buffers. Eventually, \emph{buffer clearings} remove the chunk from a buffer. Although our implementation can handle requests and clearings, we only regard buffer modifications in this work for the sake of simplicity. \begin{example} \label{ex:simple_rule} Consider the following rule: \begin{verbatim} (p recognize-win =goal> isa game me rock opponent scissors ==> =goal> result win) \end{verbatim} It recognizes a win situation in the game \emph{rock, paper, scissors} if the model has realized that the opponent played scissors and the agent played rock (which could be accomplished by a corresponding production rule interacting with the visual module). The situation is represented by a chunk of type \verb|game| providing the slots \verb|me|, \verb|opponent| and \verb|result|. As a result, it adds the information that the round has been won by modifying the \verb|result|-slot of the goal buffer. \end{example} Furthermore, the procedural module controls the \emph{match-select-apply} cycle of the production rule system. It searches for matching rules. As soon as a matching rule has been selected to fire, it takes 50\,ms for the rule to fire based on theories of human cognition \cite[p. 54]{anderson_how_2007}. During this time, the matching process is inhibited and no other rule can be selected until the selected rule is applied. Hence, the productions are executed serially. The production system is called \emph{free}, if no rule is selected and waiting for execution. As long as the procedural module is free, it searches for matching rules. The modules act in parallel. When a request is sent to a module by a production, the procedural module becomes free while the request is completed. Hence, new production rules can match while other modules might be busy with requests. ACT-R can be extended by arbitrary modules communicating through buffers with the procedural system. However, to exchange more fundamental parts of the architecture it needs more than only architectural modules as shown in section~\ref{sec:conflict_resolution}. \subsection{The Procedural Module in CHR} The procedural module is the core of ACT-R's production rule system. Our implementation is based on the translation of production rule systems to CHR as presented in \cite[chapter 6.1]{fru_chr_book_2009}. However, we have to account for the concepts of chunks and buffers, since ACT-R differs in those particular points from other production systems. Details of the implementation can be found in \cite{gall_rule_based_2013}. The set of chunks can be represented in CHR by a constraint \verb|chunk(C,T)|, where \verb|C| is the name of the chunk and \verb|T| its type. The slots provided by this chunk and their values can be stored in constraints \verb|chunk_has_slot(C,S,V)| denoting that chunk \verb|C| has the value \verb|V| in slot \verb|S|. With special consistency rules it can be assured, that no chunk has two values in its slots and that it only provides the slots allowed by its type. Analogously, a buffer is represented by a constraint \verb|buffer(B,M,C)| denoting that the buffer \verb|B| is affiliated with the module \verb|M| and holds chunk \verb|C|. The formal definitions of chunks and buffers can be found in the online appendix (\ref{sec:formalization}). A production rule can now match and modify the information of the buffer system. 
The actions are implemented by trigger constraints \verb|buffer_action(B,C)| which get the name of the buffer \verb|B| and a chunk description \verb|C| represented by a term \verb|chunk(C,T,[(S,V),...])| which describes a chunk with name \verb|C|, type \verb|T| and a list of slot-value pairs representing the values of the chunk's slots. Note that such chunk descriptions can be incomplete in some arguments by simply letting them unspecified. \begin{example} \label{ex:simple_rule_chr} The rule from example~\ref{ex:simple_rule} can be translated to the following CHR rule: \begin{verbatim} buffer(goal,_,C), chunk(C,game), chunk_has_slot(C,me,rock), chunk_has_slot(C,opponent,scissors) ==> buffer_modification(goal,chunk(_,_,[(result,win)])). \end{verbatim} The name and type of the chunk in the modification are not specified in the original rule and therefore left blank as well as the \verb|me| and \verb|opponent| slots. \end{example} \subsection{Timing and Phases} \label{sec:timing} As mentioned before, the production system of ACT-R is occupied for 50\,ms after a rule has been selected. To model such latencies, an event queue has to be added. It keeps track of the current time and holds an ordered set of events which can be dequeued one after another according to their scheduled times. In our implementation, the event queue is implemented as a priority queue sorting its elements after the time and a priority determining the order of application for simultaneous events. Events are arbitrary Prolog goals and can be added by \verb|add_q(Time,Priority,Event)|. The current time can be queried by \verb|get_time(Now)|. To ensure that a production rule only matches when the module is free, we replace each CHR rule of the form \verb!C ==> A! according to the following scheme consisting of two rules: \begin{verbatim} C \ match <=> add_q(Now + 0.05,0,apply_rule(rule(r,C))). C \ apply_rule(rule(r,C)) <=> A, get_time(Now), add_q(Now,-10,match). \end{verbatim} The constraint \verb|match| indicates that the procedural module is free and searches for a matching rule. For the matching rule, an \verb|apply_rule| event is scheduled 50\,ms from the current time. This event will actually fire the rule. The actions \verb|A| schedule their effects on the buffers at the current time with different priorities. Requests are only sent to the corresponding module. Its effects on the requested buffer are scheduled at a later time. Finally, a new \verb|match| event is scheduled at the current time \verb|Now| but with low priority of $-10$. This ensures that all current actions are performed before the next rule is scheduled to fire.\looseness=-1 Otherwise, if no rule matches and the procedural module is free (i.e. a \verb|match| constraint is present), a rule can only become matching if the content of the buffers change. Hence, a new \verb|match| constraint is added directly after the next event in the queue. This models the fact that the procedural module is searching permanently for matching rules when it is free without adding unnecessary \verb|match| events. \section{Conflict Resolution} \label{sec:conflict_resolution} Only one matching production rule can fire at a time. Hence, if there are multiple applicable productions, the system has to decide which to fire. This process is called \emph{conflict resolution} \cite{mcdermott_1977}. In most implementations, CHR simply chooses the rule to fire by textual order, which is a valid conflict resolution mechanism. 
However, in ACT-R a more advanced approach using subsymbolic concepts is needed to faithfully model human cognition. \subsection{General Conflict Resolution Process} In \cite[p. 151]{fru_chr_book_2009} a general method to implement different conflict resolution mechanisms in CHR is given. This method is adapted to our CHR implementation of ACT-R. The first rule of each CHR rule pair from section~\ref{sec:timing} can be replaced by: \begin{verbatim} match, C ==> G | conflict_set(rule(r,C)). \end{verbatim} Hence, the application of a matching production is delayed by adding the rule to the conflict set instead of choosing the first matching rule to be applied by scheduling \verb|apply_rule/1| as explained in section~\ref{sec:timing}. Thereby all matching rules are collected in \verb|conflict_set/1| constraints which then can be reduced to one single constraint containing only the rule to be applied according to an arbitrary strategy. As a last production rule, the rule \verb!match <=> select.! occurs in the program. This rule will always be applied last (since rules are applied in textual order in CHR). It removes the remaining \verb|match| constraint and adds a constraint \verb!select! which triggers the selection process. This means that the conflict resolution is performed by choosing one rule from the conflict set constraints and removing all other such constraints. If no rule matches, a new \verb|match| constraint is scheduled after the next event. With the introduction of the \verb|select| constraint, the system commits to the rule to be applied by scheduling the corresponding \verb!apply_rule/1! event as explained in section~\ref{sec:timing}. This leads the chosen production to perform its actions since its second CHR rule is applicable. After the actions are performed, the next matching phase is scheduled. The strategy of how the conflict set is eliminated to one single rule which will be applied may vary and is exchangeable. In the following section, several strategies are presented and implemented. \subsection{Conflict Resolution Strategies} There have been several conflict resolution strategies proposed for ACT-R over time. To demonstrate the adaptability of our CHR implementation, we implement some of those strategies. In the reference implementation of ACT-R, such adaptations might need a lot of knowledge about its internal structures \cite{stewart_deconstructing_2007}. In general, ACT-R conflict resolution strategies usually use the subsymbolic concept of \emph{production utilities}. The production utility for a production $i$ is the function $U_i: \mathbb{N} \rightarrow \mathbb{R}$ which expresses the value of utility of a particular production at its $n$th application which may be adapted according to a learning strategy. In the conflict resolution process, the current utility values are compared for all matching functions and the production with the highest utility is chosen. The production utility can therefore be seen as a dynamic rule priority which is adapted according to a certain strategy. In the following, we present some different learning strategies to adapt the utility of a production. Eventually, the concept of rule refraction is introduced, which is a general conflict resolution concept and can be applied for all of the presented learning strategies. 
\subsubsection{Reinforcement-Learning-Based Utility Learning} The current implementation of ACT-R~6.0 uses a conflict resolution mechanism which is motivated by the Rescorla-Wagner learning equation \cite{rescorla_wagner_1972}. The basic concept is that there are special production rules which recognize a successful state (by some model-specific definition) and then trigger a certain amount of reward measured in units of time as a representation of the effort a person is willing to spend to receive a certain reward \cite[p. 161]{anderson_how_2007}. All productions which lead to the successful state, i.e. all productions which have been applied, receive a part of the triggered amount of reward which demounts the more time lies between the application of the production rule and the triggering of the reward. The utility $U_i$ of a production $i$ then is adapted as follows: \begin{equation} \label{eq:utility_learning:reinforcement} U_i(n) = U_i(n-1) + \alpha (R_i(n) - U_i(n-1)) \end{equation} The reward $R_i(n)$ for the $n$th application of the rule $i$ is the difference of the external reward and the time between the selection of the rule and the triggering of the reward. The utility adapts gradually to the average reward a rule receives. Its calculation can be extended by noise to enable rules with initally low utilities to fire. This then may boost their utility values. In CHR, this strategy can be implemented as follows: For each production rule, a \verb|utility/2| constraint is stored holding its current utility value. For rules marked with a reward, a \verb|reward/2| constraint holds the amount of reward. When a production rule is applied, this information is stored with the application time in a constraint by the rule \verb!apply_rule(rule(P,_,_)) ==> get_time(Now), applied([(P,Now)]).! With a corresponding rule, the \verb|applied/1| constraints are merged respecting the application time of the rules, since the adaptation strategy depends on the last utility value of a rule and rules might be applied more than once until they receive a reward. This leads to one \verb|applied/1| constraint containing a sorted list of rules and their application time. If a rule which is marked with a reward is going to be applied, the reward can be triggered by \verb!apply_rule(rule(P,_)), reward(P,R) ==> trigger_reward(R).! The triggering of the reward simply adapts the utilities according to equation~\ref{eq:utility_learning:reinforcement} for all productions which have been applied indicated by the \verb!applied/1! constraint respecting the order of application. Afterwards, this constraint is deleted because after a reward has been received, the rule is not considered in the next adaptation. \subsubsection{Success-/Cost-Based Utility Learning} \label{sec:success_cost_based_utility_learning} In prior implementations of ACT-R, the utility learning is based on a success-/cost approach \cite{anderson_integrated_2004,taatgen_modeling_2006}. A detailed description can be found in \cite[unit~6]{actr5_tutorial}. Each production rule $i$ is associated to the values $P_i$ denoting the success probability of the production and $C_i$ denoting its costs. In this approach, the utility of a production rule is defined as: \begin{equation} U_i(n) = P_i(n) G - C_i(n) \end{equation} Note that the current utility does not depend on the value of the last utility, but can be calculated by the current values of the parameters instead. Hence, the order of application does not play a role. 
Usually, $C_i$ is measured in units of time to achieve a goal whereas $G$ -- the goal value -- is an architectural parameter and usually set to 20\,s. The parameters $P$ and $C$ are obtained by the following equations: \begin{align} P_i(n) &= \frac{\mathit{\#sucesses_i}}{\mathit{\#successes_i + \#failures_i}} & C_i(n) &= \frac{\mathit{efforts_i}}{\mathit{\#successes_i + \#failures_i}} \end{align} The values $\mathit{\#sucesses}$ and $\mathit{\#failures}$ count all applications of a rule which have been identified as a success or a failure respectively. Similarly to the reinforcement-based learning, some productions which identify a success or failure trigger an event which adapts the counters of successes or failures of all production rules which have been applied since the last triggering. The efforts are estimated by the difference of the time of the triggering and the selection of a rule. The values are initialized with $\mathit{\#sucesses} = 1, \mathit{\#failures} = 0$ and $\mathit{efforts} = 0.05\,s$ which is the selection time of one firing. Analogously to the reward-based strategy, utilities can be extended by noise. Similarly to the implementation of the reinforcement learning rule, the triggering of a success or failure can be achieved by a constraint \verb|success(P)| or \verb|failure(P)|, which encode that a production \verb|P| is marked as success or failure respectively. Combined with the \verb|apply_rule/2| constraint, a \verb|success/0| or \verb|failure/0| constraint can be propagated which trigger the utility adaptation. The following rules show the adaptation of $\mathit{\#successes_i}$ and $\mathit{efforts_i}$ when a success is triggered and rule $i$ has been applied before: \begin{verbatim} success \ applied(P,T), efforts(P,E), successes(P,S) <=> get_time(Now), efforts(P,E+Now-T), successes(P,S+1). success <=> true. \end{verbatim} The number of successes or failures are stored in the respective binary constraints and if a success is triggered, they are incremented for all applied production rules and efforts are adjusted. The rules for failures are analogous. The adaptation of one of those parameters triggers the rules which replace the constraints holding the old $P_i$ and $C_i$ values by new values. When a $P_i$ or $C_i$ constraint is replaced, the calculation of the new utility value is triggered. To ensure that only one utility value is in the store, a destructive update rule is used. \subsubsection{Random Estimated Costs} In \cite{BelavkinR04}, a conflict resolution strategy motivated by research results in decision-making is presented. The current implementation varies slightly from this description \cite{belavkin_optimist_impl} and we stick to this most recent approach for a better comparability of the results. The strategy is based on the success-/cost-based utility learning from section~\ref{sec:success_cost_based_utility_learning} and uses the same subsymbolic information (the counts of successes and failures and the efforts). However, instead of calculating the average cost $C_i$, the expected costs $\theta_i$ of achieving a success by a rule are estimated: \begin{equation} \theta_i := \mathrm{E}(C_i) \approx \frac{\mathit{efforts_i}}{\mathit{\#sucesses_i}} \end{equation} From the expected costs $\theta_i$ of a rule $i$, the \emph{random estimated costs} $\zeta_i$ are derived by by drawing a random number $r_i$ from a uniform distribution $U(0,1)$ and setting $\zeta_i = -\theta_i \cdot \mathrm{log}(1 - r_i)$. 
Eventually, production utilities are calculated analogously to the success-/cost-based strategy: $U_i = P_i G - \zeta_i$. The influence of the random estimated costs can be varied by adapting the parameter $G$. If $G = 0$, the production rule with minimal random estimated costs will be fired (as suggested in \cite{BelavkinR04}). Since this method uses the same parameters as the success-/cost-based variant, almost all of the code can be reused for an implementation. However, instead of the costs, the expected costs $\theta_i$ are computed and saved in a constraint whenever the success/failure ratio changes. Additionally, the random costs must be calculated in every conflict resolution step and not only when the parameters change since they vary each time due to randomization. Hence, a rule must be added which calculates the utility value as soon as a production rule enters the conflict set: \begin{verbatim} conflict_set(rule(P,_)), theta(P,T), succ_prob(P,SP) ==> random(R), Z is -T * log(1 - R), U is SP*20-Z, set_utility(P,U). \end{verbatim} The rest of the implementation like the calculation of the success/failure counters, efforts or the pruning of the conflict set is identical to the success-/cost-based strategy. \subsubsection{Production Rule Refraction} In contrast to the previous strategies which only exchange the utility learning part, production rule refraction adapts the general conflict resolution mechanism and can be combined with all of the other presented strategies. It was first suggested in \cite{young_refraction_2003} to avoid over-programming of models in the sense that the order of application of a set of rules is fixed in advance by adding artificial signals to ensure the desired order. Rule refraction can avoid such operational concepts by inhibiting the application of the same rule instantiation more than once. To the best of our knowledge, our implementation is the first of its kind for ACT-R. Refraction can be implemented by saving the instantiation of each applied production using the rule \verb!apply_rule(R) ==> instantiation(R).! When building the conflict set, the following rule eliminates all productions which already have been applied from the set: \verb!instantiation(R) \ conflict_set(R) <=> true.! This pruning rule must be performed before the rule selection process, so that such productions are never considered as fire candidates. \section{Evaluation} \label{sec:evaluation} After having implemented some different conflict resolution strategies, we test their validity with an example model of the game \emph{rock, paper, scissors}. The idea is that the model simulates a player playing against three opponents with different preferences on the three choices in the game. We then want to observe, how the model adapts its strategy under the different conflict resolution mechanisms and test if the results of the ACT-R implementation and our CHR implementation match. \subsection{Setup} The player is basically modeled by the production rules \verb|play-rock|, \verb|play-paper| and \verb|play-scissors| standing for the three choices a player has in the game. At the beginning, the production rules have equal utilities which are then adapted by the utility learning mechanisms of the three conflict resolution strategies. Since we only want to test our conflict resolution implementations, we try to rule out all other factors which could influence the behavior of our model. 
Hence, we only use the procedural module with the goal buffer and do not simulate any declarative knowledge or even perceptual and motor modules. I.e. the model is not a realistic psychological hypothesis of the game play, but only a test of our implementation. Furthermore, we disable noise where possible to better compare our results. In ACT-R, the canonical parameter setting is not recommended to change without justification \cite[sec. 1.1]{stewart_deconstructing_2007}. For our experiment, we used this setting. The moves of the opponents are randomly generated in advance according to their defined preferences: Player~1 simply chooses rock for every move, player 2 chooses only between rock and paper and player 3 chooses equally between all three possibilities. For each player, we produced 20~samples of 20~moves (except for player~1 with only one sample of 20~moves). Their choices are put into the goal buffer one after another by host-language instructions (Lisp and Prolog/CHR). The game is played for 20~rounds until a restart with a new sample which corresponds to 2\,s simulation time. Finally, the utility values $U_{\{r,p,s\}}$ at the end of each run (for rock, paper and scissors respectively) are collected and compared to the reference implementation. We use the notation $\overline{U}_{\{r,p,s\}}$ to denote the average of those values over all 20 samples. In the following the implementation of the production rule \verb|play-rock|: \begin{verbatim} (p play-rock =goal> isa game me nil opponent nil ==> =goal> me rock opponent =x !output! (rock =x) ) \end{verbatim} This rule simply puts the symbol \verb|rock| into the goal buffer indicating that the model chose rock. The variable \verb|=x| is set by built-in functions of the host language (omitted in the listing) modeling the choice of the opponent derived from a given list of moves. The rules for \emph{paper} and \emph{scissors} can be defined analogously. The model has been translated to CHR by our compiler. We performed the translation of Lisp built-ins to Prolog built-ins by hand. \looseness=-1 Furthermore, the model contains production rules detecting a win, draw or defeat situation (similar to example~\ref{ex:simple_rule}) and resetting the choices of the two players in the goal buffer to indicate that the next round begins. Those rules are marked with a reward (positive or negative) or as a success/failure respectively. In the case of a draw, no reward, success or failure will be triggered. Hence, the utility learning algorithms will adapt the values of the fired rules depending on their success. If the highest utilities in the conflict set are equal, the strategy of ACT-R is undocumented. It depends on the order of the rules in the source code and may vary between the implementations (e.g. the strategy of ACT-R~6.0 differs from ACT-R 5.0 as we found in our experiments). We adapted the order of rules in our translated CHR model to match the strategy of ACT-R. Usually, noise would rule out such differences. For the reference implementations, we used Clozure Common Lisp version 1.9-r15757. The CHR implementation has been run on SWI-Prolog version 6.2.6. The relevant data collected in our experiments can be found in the online appendix (\ref{sec:evaluation_results}). 
\subsection{Availability of the Strategies} Our approach enables the user to exchange the complete conflict resolution strategy without relying on provided interfaces and hooks except for the very basic information that a rule is part of the conflict set or about to be applied. This information relies on the fundamental concept of the match-select-apply cycle of ACT-R. In the reference implementations of the strategies, there are deeper dependencies and assumptions on when and how subsymbolic information is adapted and stored. This leads to incompatibilities: The reinforcement-learning-based strategy is only available for ACT-R~6.0. Although the success-/cost-based strategy is shipped with ACT-R~6.0, it was not executable for us and hence we had to use ACT-R 5.0 to run it. This leads to further incompatibility problems when using modules not available for ACT-R 5.0 (which is in general difficult to extend due to the lack of architectural modules). Since the method of random-estimated costs relies on the success-/cost-based strategy, it is also only available for ACT-R 5.0. Our implementation of the refraction-based method is to the best of our knowledge the only existing implementation for ACT-R, although it has been suggested in \cite{young_refraction_2003}. \subsection{Reinforcement-Learning-Based Utility Learning} For the reinforcement-learning-based strategy, we marked the win-detecting production rules with a reward of 2 and the defeat-detecting rules with 0 which leads to negative rewards for all applied rules when a defeat is detected. Draws do not lead to adjustments of the strategy in our configuration. We executed the model on ACT-R~6.0 version~1.5-r1451 and our CHR implementation. Our implementation matches the results of the reference implementation exactly when rounded to the same decimal precision (see online appendix~\ref{sec:evaluation_results_reinf}). Differences of floating point precision did not influence the results, since ACT-R does round the final results to the one-thousandths. As expected, the model usually rewards the paper rule most when playing against player~1~and~2 (average utility at end of round for player 1: $(\overline{U_r}, \overline{U_p}, \overline{U_s}) = (0, 1.87, -0.02)$; player 2: (0, 0.81, 0.49)). Exceptions are rounds where the opponent chooses paper above average especially as first moves (e.g. sample 10: 75\% rate of paper; first 9 moves; $U_p = 0, U_s = 1.329$). In such cases, scissors has the highest utility. This is reinforced by the relatively high reward of successes compared to the punishment of defeats. However, the winning rate is still very high (15 wins, 5 defeats, no draws). Overall, the behavior of the model is very successful (average: 10.4 wins, 3.9 draws and 5.7 defeats in each sample). For player~3 -- as expected -- no unique result can be learned; wins, draws and defeats are very close in average (6.6 wins, 6.7 draws, 6.7 defeats). \subsection{Success-/Cost-Based Utility Learning} \label{sec:evaluation:success_cost_based} For the success-/cost-based strategy, the production rules recognizing a win situation are marked as a success and analogously the production rules for the defeat situations as a failure. We used ACT-R 5.0 to test our implementation against the reference implementation, since it is not available for ACT-R~6.0. Again, noise is disabled for better comparability. Because the selection mechanism for rules with same utility differs from ACT-R~6.0, we adapted the order in which the rules appear in the source code. 
Our implementation matches the results of the reference implementation exactly (see online appendix~\ref{sec:evaluation_results_succ}). It can be seen that this strategy is not able to detect the optimal moves for player~1. Analyses showed that due to the order of the rules, the model first selects to play \emph{rock}. This leads to a draw and hence no adaptation of the utilities. Hence, rock is played repeatedly. In real-world models, noise would help to overcome such problems. For player 2, the model correctly chose to play \emph{paper} in average even for the samples where the opponent chooses paper more often than rock. However, in average, the model did only win 8.9 out of 20 rounds in a sample and produced 9.1 draws. For each of the samples, only two rounds were lost. \subsection{Random Estimated Costs} Due to the randomness of this strategy, no exact matches of results can be expected. Hence, we executed the models on 3 samples (the first of each opponent) with 50~runs for each sample. The reference implementation has been run on ACT-R~5.0. The average utilities are close to the reference implementation (error squares of average utilities player~1: ($\Delta\overline{U_r}^2,\Delta\overline{U_p}^2,\Delta\overline{U_s}^2) = (0.145, 0.000, 0.000)$; player~2: (0.850, 0.000, 0.098); player~3: (2.823, 0.503, 0.003), see online appendix~\ref{sec:evaluation_results_rand} for details). It can be seen that for most runs the production with the highest, medium and lowest utility value coincide. For player 1, the random estimated costs overcome the problem of the success-/cost-based implementation as discussed in section~\ref{sec:evaluation:success_cost_based}. \section{Related Work} \label{sec:related_work} There are several implementations of the ACT-R theory in different programming languages. First of all, there is the official ACT-R implementation in Lisp \cite{actr_homepage} which we used as a reference. There are a lot of extensions to this implementation which partly have been included to the original package in later versions like the ACT-R/PM extension included in ACT-R 6.0 \cite[p. 264]{actr_reference}. The implementation comes with an experiment environment offering a graphical user interface to load, execute and observe models.\looseness=-1 In \cite{stewart_deconstructing_2006,stewart_deconstructing_2007}, a Python implementation is presented which also has the aim to simplify and harmonize parts of the ACT-R theory by finding the central components of the theory. The architecture has been reduced to only the procedural and the declarative memory which are used to build other models combining and adapting them in different ways. However, there is no possibility to translate traditional ACT-R models automatically to Python code since the way of modeling differs too much from the original implementation. Furthermore, there are two different implementations in Java: \emph{jACT-R} \cite{jactr} and \emph{ACT-R: The Java Simulation \& Development Environment} \cite{java_actr}. The latter one is capable of executing original ACT-R models and offers an advanced graphical user interface. The focus of the project was to make ACT-R more portable with the help of Java \cite{java_actr_benefits}. In jACT-R, the focus was to offer a clean and exchangeable interface to all the components, so different versions of the ACT-R theory can be mixed \cite{jactr_benefits} and models are defined using XML. There is no compiler from original ACT-R models to XML models of jACT-R. 
Due to the modular design defining various interfaces which can be exchanged, jACT-R is highly adaptable to personal needs. However, both approaches are missing the proximity to a formal representation. \section{Conclusion} \label{sec:conclusion} In this work, we have presented an implementation of ACT-R using Constraint Handling Rules which is capable of closing the gap between the theory of ACT-R and its technical realization. Our implementation abstracts from technical artifacts and is near to the theory but can reproduce the results of the reference implementation. Furthermore, the formalization itself enables implementations to check against this reference. The implementation of the different conflict resolution strategies has shown the adaptability of our approach. Most of the implemented strategies are not available for the current implementation of ACT-R and our implementation of production rule refraction is unique. For the future, the implementation can be extended by other modules like the perceptive/motor modules provided by ACT-R. Currently, there is a running student project on implementing a temporal module which may be used to investigate time perception. The formalization and CHR translation pave the way to develop analysis tools (e.g. a confluence test) on the basis of the results for CHR programs. \newpage
proofpile-arXiv_067-11091
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section*{Introduction} Machine learning has become ubiquitous in modern data analysis, decision-making, and optimization. A prominent subset of machine learning is the artificial deep neural network (DNN), which has revolutionized many fields, including classification~\cite{krizhevsky_imagenet_2012}, translation and prediction~\cite{esteva_dermatologist-level_2017,lecun_deep_2015}. An important step toward unlocking the full potential of DNNs is improving the energy consumption and speed of DNN tasks. To this end, emerging DNN-specific hardware optimizes data access, reuse and communication for mathematical operations: most importantly, general matrix-matrix multiplication (GEMM) and convolution~\cite{sze_efficient_2017}. One approach is to use a specialized memory hierarchy to store and reuse data near an array of computation units, which minimizes reliance on expensive large-scale data distribution networks~\cite{chen_eyeriss:_2017, chen2019eyerissv2, yin_thinker}. Another option for GEMM is a large array of electronic multipliers with fewer intermediate memory tiers~\cite{jouppi_-datacenter_2017}. The large multiplier array reduces overhead, and if the DNN is sizable enough to keep all the processing elements occupied, this design can be more efficient in energy consumption and throughput, thanks to the ability to perform more parallel operations. However, despite these advances, a central challenge in the field is scaling hardware to keep up with exponentially-growing DNN models (see Fig.~\ref{fig:f0} and Ref.~\cite{xu_scaling_2018}). Many popular DNN models comprise matrices exceeding the GEMM capacity of leading DNN processors (e.g., Google's Tensor Processing Unit (TPU)~\cite{jouppi_-datacenter_2017}), and therefore, matrices must be computed in multiple `tiles'. Tiling requires many inputs and intermediate values to be stored rather than streamed, which increases data movement. Thus, tiling restricts the use of DNNs in high-throughput applications such as the observation of new phenomena in fundamental physics~\cite{duarte_fast_2018, cosmology1, iprijanovi2020deepmerge, neutrino, huerta_enabling_2019}, and reduces the throughput of large DNN models such as recommender systems~\cite{gupta_facebook2020}, vision~\cite{jouppi_-datacenter_2017} and natural language processing~\cite{lan2019albert}. Though the current trend is to scale up conventional electronic hardware, these efforts are impeded by communication~\cite{horowitz_1.1_2014}, clocking~\cite{grs2019}, thermal management~\cite{heat_2012} and power delivery~\cite{gupta_power2007}. Parallel processing with multiple chips~\cite{brainwave} or partitioned chips~\cite{shao_simba:_2019, chiplet_isca} can ease these constraints and improve performance over a monolithic equivalent through greater mapping flexibility~\cite{scalesim}, at the cost of increased communication energy. \begin{figure}[htbp] \begin{center} \includegraphics[width=.95\textwidth]{num_params_new.pdf} \caption{Number of parameters, i.e., weights, in recent landmark neural networks~\protect\cite{krizhevsky_imagenet_2012, Simonyan15, szegedy2014going, mnih2015humanlevel, szegedy2016rethinking, heresnet50, chollet2017xception, NIPS2017_7181, nasnet2018, senet2018, devlin2018bert, radford2018gpt2, lan2019albert, dai-etal-2019-transformer} (references dated by first release, e.g., on arXiv). 
The number of multiplications (not always reported) is not equivalent to the number of parameters, but larger models tend to require more compute power, notably in fully-connected layers. The two outlying nodes (pink) are AlexNet and VGG16, now considered over-parameterized. Subsequently, efforts have been made to reduce DNN sizes, but there remains an exponential growth in model sizes to solve increasingly complex problems with higher accuracy.} \label{fig:f0} \end{center} \end{figure} In this Article, we introduce an optical DNN accelerator that encodes data into reconfigurable on-off optical pulses for transmission and passive copying (or \textit{fan-out}) to large-scale electronic multiplier arrays. The near length-independence of optical data routing enables freely scalable systems, where single transmitters are fanned out to many arbitrarily arranged receivers with fast and energy-efficient links. Optics has previously been proposed for analog DNN accelerators, with potential orders-of-magnitude reductions in energy consumption and improved throughput~\cite{hamerly_large-scale_2019, tait_neuromorphic_2017, lin_all-optical_2018, shen_deep_2017,feldmann2020parallel}. In contrast, we propose an entirely digital system, where we replace electrical on-chip interconnects with optical paths for data transmission, but not computation, and thus preserve accuracy. This `digital optical neural network' (DONN) performs large-scale data distribution from memory to an arbitrary set of electronic multipliers. We first illustrate the DONN architecture and discuss possible implementations. Then, in a proof-of-concept experiment, we demonstrate that digital optical transmission and fan-out with cylindrical lenses have little effect on the classification accuracy of the MNIST handwritten digit dataset (a drop of $<$0.6\%). Crosstalk is the primary cause of this drop in accuracy, and because it is deterministic, it can be compensated for: with a simple crosstalk correction scheme, we reduce our bit error rates by two orders of magnitude. Alternatively, crosstalk can be greatly reduced through optimized optical design. Since shot and thermal noise are negligible (see Discussion), the accuracy of the DONN can therefore be equivalent to that of an all-electronic DNN accelerator. We also compare the energy consumption of optical interconnects (including light source energy) against that of electronic interconnects over distances representative of logic, memory, and multi-chiplet interconnects in a 7~nm CMOS node. Our calculations show an advantage in data transmission costs for distances $\geq 5$~$\upmu$m (roughly the size of the basic computation unit: an 8-bit multiply-and-accumulate (MAC), with length 5-8~$\upmu$m). Moreover, the DONN scales favorably with respect to very large DNN accelerators that require partitioning into multiple chiplets: the DONN's optical communication cost remains nearly constant at $\sim$0.2~fJ/bit, whereas multi-chiplet systems have much higher electrical interconnect costs ($\sim$90~fJ/bit). Thus, the efficient optical data distribution provided by the DONN architecture will become critical for continued growth of DNN performance through increased model sizes and greater connectivity. \section*{Results} \subsection*{Problem statement} A DNN consists of a sequence of layers, in which input activations from one layer are connected to the next layer via weighted paths (weights), as shown in Fig.~\ref{fig:f1}a.
We focus on inference tasks in this paper (where weights are known from prior training), which, in addition to posing an energy consumption problem, place stringent requirements on latency and throughput. Modern inference accelerators expend the majority of energy ($>90\%$) on memory access, data movement, and computation in fully-connected (FC) and convolutional (CONV) layers~\cite{chen_eyeriss:_2017}. \begin{figure}[htbp] \begin{center} \includegraphics[width=1.00\textwidth]{donn_schematic_abc_v3.pdf} \caption{Digital fully-connected neural network (FC-NN) and hardware implementations. (a) FC-NN with input activations (red, vector length $K$) connected to output activations (vector length $N$) via weighted paths, i.e., weights (blue, matrix size $K\times N$). (b) Matrix representation of one layer of an FC-NN with $B$-sized batching. (c) Example bit-serial multiplier array, with output-stationary accumulation across $k$. Fan-out of \textbf{X} across $n \in \left\{1...N\right\}$; fan-out of \textbf{W} across $b \in \left\{1...B\right\}$. Bottom panel: all-electronic version with fan-out by copper wire (for clarity, fan-out of \textbf{W} not illustrated). Top panel: digital optical neural network version, where \textbf{X} and \textbf{W} are fanned out passively using optics, and transmitted to an array of photodetectors. Each pixel contains two photodetectors, where the activations and weights can be separated by, e.g., polarization or wavelength filters. Each photodetector pair is directly connected to a multiplier in close proximity.} \label{fig:f1} \end{center} \end{figure} Parallelized vector operations, such as matrix-matrix multiplication or successive vector-vector inner products, are the largest energy consumers in CONV and FC layers. In an FC layer, a vector $\boldsymbol{x}$ of input values (`input activations', of length $K$) is multiplied by a matrix \textbf{W}$_{K\times N}$ of weights (Fig.~\ref{fig:f1}b). This matrix-vector product yields a vector of output activations ($\boldsymbol{y}$, of length $N$). Most DNN accelerators process vectors in $B$-sized batches, where the inputs are represented by a matrix \textbf{X}$_{B\times K}$. The FC layer then becomes a matrix-matrix multiplication (\textbf{X}$_{B\times K}\cdot$\textbf{W}$_{K\times N}$). CONV layers can also be processed as matrix multiplications, e.g., with a Toeplitz matrix~\cite{sze_efficient_2017}. In matrix multiplication, fan-out, where data is read once from main memory (DRAM) and used multiple times, can greatly reduce data movement and memory access. This amortization of read cost across numerous operations is critical for overall efficiency, since retrieving a single matrix element from DRAM requires two to three orders of magnitude more energy than a MAC operation~\cite{horowitz_1.1_2014}. A simple input-weight product illustrates the benefit of fan-out, since activation and weight elements appear repeatedly, as highlighted by the repetition of $X_{11}$ and $W_{11}$: \begin{align} \label{eq:matmult} &\textcolor{white}{W} \begin{bmatrix} \textcolor{red}{X_{11}} & X_{12} \\ X_{21} & X_{22} \end{bmatrix} \begin{bmatrix} \textcolor{blue}{W_{11}} & W_{12} \\ W_{21} & W_{22} \end{bmatrix} = \begin{bmatrix} \textcolor{red}{X_{11}}\textcolor{blue}{W_{11}}+X_{12}W_{21} & \textcolor{red}{X_{11}}W_{12}+X_{12}W_{22} \\ X_{21}\textcolor{blue}{W_{11}}+X_{22}W_{21} & X_{21}W_{12}+X_{22}W_{22} \end{bmatrix} \end{align} Consequently, DNN hardware design focuses on optimizing data transfer and input and weight matrix element reuse.
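For illustration, the following minimal Python sketch (not taken from any accelerator codebase; all names are ours) makes this reuse explicit: in a naive triple loop, each activation element $X_{bk}$ is read $N$ times and each weight element $W_{kn}$ is read $B$ times, which is exactly the fan-out that hardware can exploit to amortize memory reads.
\begin{verbatim}
import numpy as np

def gemm_with_reuse_counts(X, W):
    """Naive GEMM that also counts operand reads, to show that X[b, k]
    is fanned out N times and W[k, n] is fanned out B times."""
    B, K = X.shape
    _, N = W.shape
    Y = np.zeros((B, N))
    x_reads = np.zeros((B, K), dtype=int)
    w_reads = np.zeros((K, N), dtype=int)
    for b in range(B):
        for n in range(N):
            for k in range(K):
                Y[b, n] += X[b, k] * W[k, n]  # one MAC
                x_reads[b, k] += 1            # X element reused across n
                w_reads[k, n] += 1            # W element reused across b
    return Y, x_reads, w_reads

X = np.random.rand(4, 3)                      # B=4, K=3
W = np.random.rand(3, 5)                      # K=3, N=5
Y, xr, wr = gemm_with_reuse_counts(X, W)
assert np.allclose(Y, X @ W)
assert (xr == 5).all() and (wr == 4).all()    # N reads of X, B reads of W
\end{verbatim}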
Accelerators based on conventional electronics use efficient memory hierarchies, a large array of tightly packed processing elements (PEs, i.e., multipliers with or without local storage), or some combination of these approaches. Memory hierarchies optimize temporal data reuse in memory blocks near the PEs to boost performance under the constraint of chip area~\cite{sze_efficient_2017}. This strategy can enable high throughput in CONV layers~\cite{chen_eyeriss:_2017}. With fewer intermediate memory levels, a larger array of PEs (e.g., TPU v1~\cite{jouppi_-datacenter_2017}) can further increase throughput and lower energy consumption on workloads with a high-utilization mapping due to potentially reduced overall memory accesses and a greater number of parallel multipliers (spatial reuse). Therefore, for workloads with large-scale matrix multiplication such as those mentioned in the Introduction, maximizing the number of available PEs improves efficiency. \subsection*{Digital optical neural network architecture} \label{sec:architecture} Our DONN architecture replaces electrical interconnects with optical links to relax the design constraints of reducing inter-multiplier spacing or colocating multipliers with memory. Specifically, optical elements transfer and fan out activation and weight bits to electronic multipliers to reduce communication costs in matrix multiplication, where each element $X_{bk}$ is fanned out $N$ times, and $W_{kn}$ is fanned out $B$ times. The DONN scheme shown in Fig.~\ref{fig:f1}c spatially encodes the first column of \textbf{X}$_{B\times K}$ activations into a column of on-off optical pulses. At the first time step, the activation matrix transmitters fan out the first bit of each of the matrix elements $X_{b1}, \forall b \in \left\{1...B\right\}$ to the PEs (here, $k=1$). Simultaneously, a row of weight matrix light sources transmits the corresponding weight bits $W_{1n}$ to each PE. The photons from these activation and weight bits generate photoelectrons in the detectors, producing the voltages required at the inputs of electronic multipliers (either 0~V for a `0' or 0.8~V for a `1'). After 8 time steps, a multiplier has received $2\times8$~bits (8 bits for the activation value and 8 bits for the weight value), and the electronic multiplication occurs as it would in an all-electronic system. The completed activation-weight product is then added to the locally stored partial sum. The entire matrix-matrix product is therefore computed in $8\times K$ time steps; this dataflow is commonly called `output stationary'. Instead of this bit-serial implementation, bits can be encoded spatially, using a bus of parallel transmitters and receivers. The trade-off between added energy and latency in bit-serial multiplication versus increased area from photodetectors for a parallel multiplier can be analyzed for specific applications and CMOS nodes. \begin{figure}[htbp] \begin{center} \includegraphics[width=1\textwidth]{donn_implementation_f.pdf} \caption{Possible implementations of digital optical neural network. (a)~Free-space version. Digital inputs and weights are transmitted electronically to an array of light sources (red and blue, respectively, illustrating different paths). Single-mode light from a source is collimated by a spherical lens (Lens), then focused to a 1D spot array by a diffractive optical element (DOE). A 50:50 beamsplitter brings light from the inputs and weights into close proximity on a custom CMOS receiver.
(b)~Waveguide or chip-integrated implementation with scatterers above each processing element (PE). (c)~Example circuit with 2 photodetectors per PE: 1 for activations; 1 for weights. Received bits proceed to multiplier, then memory or next layer.} \label{fig:f2} \end{center} \end{figure} We illustrate two exemplary experimental DONN implementations in Fig.~\ref{fig:f2}. In the free-space version (Fig.~\ref{fig:f2}a), each source in a linear array of vertical-cavity surface-emitting lasers (VCSELs) or $\upmu$LEDs emits a cone of light into free space, which is collimated by a spherical lens. A diffractive optical element (DOE) focuses the light to a 1D spot array on a 2D receiver, where the activations and weights are brought into close proximity using a beamsplitter. Figure~\ref{fig:f2}b shows a waveguide or chip-integrated alternative, where each light source is coupled into an optical waveguide. The waveguides are low-loss, except at the scattering elements above each detector pixel. These scattering elements are tuned such that, along one row, an equal amount of light enters each photodetector for a `1' (similar concepts have been experimentally demonstrated~\cite{sun2013large}). In both the free-space and integrated implementations, `receiverless' photodiodes~\cite{miller_attojoule_2017} convert the optical signals to the electrical domain (Fig.~\ref{fig:f2}c). An electronic multiplier then multiplies the values. The output is either saved to memory, or routed directly to another DONN that implements the next layer of computation. Note that the data distribution pattern is not confined to regular rows and columns. A spatial light modulator (SLM), an array of micromirrors, scattering waveguides or a DOE can route and fan out bits to arbitrary locations. There will be some length-dependent optical loss that will vary based on the implementation. Since free-space propagation is lossless and mirrors, SLMs and diffractive elements are highly efficient (>~95\%), most length- or receiver-number-dependent losses can be attributed to imperfect focusing, e.g., from optical aberrations far from the optical axis. These effects can be mitigated through judicious optical design. There are also waveguide losses in integrated photonics, but these can be very low, e.g., 3~dB/m with mm-scale bend radii in silicon nitride~\cite{bowers_sin2010}. (The DONN does not require any active components, which makes silicon nitride a good choice here.) Therefore, even if we design a meter-length chip with mm-scale bends in the waveguides, we can compensate for optical losses by increasing the number of photons generated at the sources (for example, in silicon nitride, by a factor of 2). We assume for the remainder of our analysis that energy is length-independent.
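The bit-serial, output-stationary dataflow described in the previous subsection can also be modeled in a few lines of Python (a behavioral sketch only; in hardware these steps are carried out by the optical fan-out, the receiverless detectors, and the CMOS multipliers):
\begin{verbatim}
import numpy as np

def bitserial_output_stationary(X, W, bits=8):
    """Software model of the bit-serial dataflow: for each k, stream the
    bit-planes of X[:, k] and W[k, :] one per time step, reassemble the
    8-bit values at each (b, n) PE, multiply electronically, and add to
    the locally stored partial sum (output stationary)."""
    B, K = X.shape
    _, N = W.shape
    acc = np.zeros((B, N), dtype=np.int64)     # partial sums stay at the PE
    for k in range(K):                          # 8*K time steps in total
        x_bits = np.zeros(B, dtype=np.int64)
        w_bits = np.zeros(N, dtype=np.int64)
        for t in range(bits):                   # one optical pulse per bit
            x_bits = (x_bits << 1) | ((X[:, k] >> (bits - 1 - t)) & 1)
            w_bits = (w_bits << 1) | ((W[k, :] >> (bits - 1 - t)) & 1)
        acc += np.outer(x_bits, w_bits)         # fan-out: X over n, W over b
    return acc

rng = np.random.default_rng(0)
X = rng.integers(0, 256, size=(4, 3))
W = rng.integers(0, 256, size=(3, 5))
assert (bitserial_output_stationary(X, W) == X @ W).all()
\end{verbatim}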
Using a region of $357\times477$ superpixels on the camera, we calculated bit error rates (in a single shot) of $1.2\times 10^{-2}$ and $2.6\times10^{-4}$ for the blue and red channels, respectively. When we confined the region of interest to $151\times191$ superpixels, the bit error rate (averaged over 100 different trials, i.e., 100 pairs of input vectors) was $4.4\times10^{-3}$ and $4.6\times10^{-5}$ for the blue and red arms. See Supplementary Note~1 for more details on bit error rate and error maps. Because crosstalk is deterministic, and not a source of random noise, we can compensate for it. We applied a simple crosstalk correction scheme that assumes uniform crosstalk on the detector and subtracts a fixed fraction of an element's nearest neighbors from the element itself (see Supplementary Note~2). The bit error rates for the blue and red channels then respectively dropped to $2.9\times10^{-3}$ and 0 for the $357\times477$-pixel, single-shot image and $2.6\times10^{-5}$ and 0 for the $151\times191$-pixel, 100-image average. In other words, after crosstalk correction, there were no errors in the red channel, and the errors in the blue channel dropped significantly. \begin{figure}[htbp] \begin{center} \includegraphics[width=1.00\textwidth]{experimental_data3.pdf} \caption{Background-subtracted and normalized receiver output from free-space digital optical neural network experiment with random vectors of `1's and `0's displayed on DMDs. (a) Full 2D image. (b) One column: pixels received as `1' in red and `0' in black.} \label{fig:f3} \end{center} \end{figure} \begin{figure} \begin{center} \includegraphics[width=1.00\textwidth]{probability_matrix_image.pdf} \caption{Experimentally measured 3-layer FC-NN output scores (also known as a confusion matrix) for 500 MNIST images from the test dataset. The values along the diagonal represent correct classification by the model. Each column is an average of $\sim$50 vectors. (a) DONN output scores. (b) Ground-truth (all-electronic) output scores. (c)-(d) Box plot of the diagonals of (a)-(b). (e) Difference in diagonals of DONN versus ground-truth output scores.} \label{fig:out_score} \end{center} \end{figure} Next, we experimentally tested the DONN's effect on the classification accuracy of 500 MNIST images using a three-layer (i.e., two-hidden-layer), fully-connected neural network (FC-NN), with the dataset and training steps described in Supplementary Note~3. We compared our experimental classification results with inference performed entirely on CPU (ground truth) in two ways. The simplest analysis, reported in Table~\ref{tab:t2}, shows a 0.6\% drop in classification accuracy for the DONN versus the ground truth values (or 3 additional incorrectly classified images). Figure~\ref{fig:out_score} illustrates more detailed results, where we analyzed the network output scores. An output score is roughly equivalent to the assigned likelihood that an input image belongs to a given class, and is defined as the normalized (via the softmax function) output vector of a DNN. We found that, along the matrix diagonal, the first and third quartiles in the difference in output scores between the DONN and the ground truth have a magnitude $<$3\%. The absolute difference in average output scores is also $<$3\%. We also performed this experiment with a single hidden layer (`2-layer' case), and achieved similar results (a 0.4\% drop in accuracy, or 2 misclassified images). No crosstalk error correction was applied to these results.
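For completeness, the correction scheme used in the bit error rate analysis can be sketched in a few lines (illustrative Python on synthetic data with an assumed uniform crosstalk fraction; the tridiagonal matrix formulation is given in Supplementary Note~2):
\begin{verbatim}
import numpy as np

def correct_crosstalk(img, xi):
    """Subtract a fraction xi of each pixel's nearest neighbors along the
    fan-out axis (assumed uniform crosstalk), clipping at zero."""
    out = img.copy()
    out[:, 1:]  -= xi * img[:, :-1]   # left neighbor
    out[:, :-1] -= xi * img[:, 1:]    # right neighbor
    return np.clip(out, 0.0, None)

def bit_error_rate(received, sent, threshold=0.5):
    """Threshold the background-subtracted image and compare with the
    known transmitted bits."""
    return np.mean((received > threshold) != sent)

# Toy demonstration with an assumed crosstalk fraction, not measured data:
rng = np.random.default_rng(1)
sent = rng.integers(0, 2, size=(64, 64)).astype(bool)
img = sent + 0.35 * np.pad(sent, ((0, 0), (1, 0)))[:, :-1] \
           + 0.35 * np.pad(sent, ((0, 0), (0, 1)))[:, 1:]
print(bit_error_rate(img, sent))                           # crosstalk errors
print(bit_error_rate(correct_crosstalk(img, 0.35), sent))  # errors removed
\end{verbatim}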
\begin{table}[htb] \centering \caption{MNIST classification accuracy of DONN versus all-electronic hardware with custom fully-connected neural network models} \begin{tabular}{ccc} \hline & 2 layers & 3 layers \\ \hline Electronic (ground truth) & 95.8\% & 96.4\% \\ DONN & 95.4\% & 95.8\% \\ \hline \end{tabular} \label{tab:t2} \end{table} \subsection*{Energy analysis: DONN compared with all-electronic hardware} \label{sec:energy} In this section, we compare the theoretical interconnect energy consumption of the DONN with its all-electronic equivalent, where interconnects are illustrated in green in Fig.~\ref{fig:f4}. The interconnect energy, which must include any source inefficiencies, is the energy required to charge the parasitic wire, detector, and inverter capacitances, where a CMOS inverter is representative of the input to a multiplier. See Methods for full energy calculations. In the electronic case, a long wire transports data to a row of multipliers using low-cost (0.06~fJ/bit) repeaters (see Supplementary Note~6). The wire has a large parasitic capacitance, but also produces an effective electrical fan-out. In the DONN, the energetic requirements of the detectors contrast with those of conventional optical receivers, which aim to maximize sensitivity to the optical input field, rather than minimize the energetic cost of the system as a whole. The values for electronic and optical components are summarized in Table~\ref{tab:t1}, where the photon energy $h\nu$ must be greater than or equal to the bandgap $E_\text{g}$ of the detector material (here, we have chosen silicon as an example, and set $h\nu = E_\text{g}$). $C_\text{det}$ is a theoretical approximation for a $(1\times1\times1)~\upmu\text{m}^3$ cubic detector~\cite{miller_attojoule_2017} and the optical source power conversion efficiency (wall-plug efficiency, i.e., WPE) is a measured value for VCSELs \cite{iga2008vertical,jager199757}. $C_\text{T}$ is an approximation for the capacitance of an inverter in a state-of-the-art node~\cite{zheng_p_finfet_2015, miller_attojoule_2017} and $L_\text{wire}$ is the distance between MAC units in various scenarios. We find that the optical interconnect energy is independent of length at 0.2~fJ/bit, while the electrical interconnect energy ranges from 0.2-0.3~fJ/bit for inter-multiplier communication between abutted MAC units to $90$~fJ/bit for inter-chiplet interconnects. The crossover point where the optical interconnect energy drops below the electrical energy occurs when $L_{\text{wire}} \geq 5~\upmu \text{m}$. The DONN therefore provides an improvement in the interconnect energy for data transmission and can scale to greatly decrease the energy consumption of data distribution with regular distribution patterns. Additionally, advanced technologies are emerging which could lower its energy consumption, such as plasmonic photodetectors with ultra-low capacitance~\cite{tang2008nanometre} and more efficient VCSELs. \begin{figure}[htbp] \begin{center} \includegraphics[width=1.00\textwidth]{electrical_vs_optical_v4.pdf} \caption{Fan-out of one bit from memory (Mem) to multiple processing elements (PEs). (a) Fan-out by electrical wire to a row of PEs in a monolithic chip. (b) DONN equivalent of monolithic chip, where green wire is replaced by optical paths. (c) Fan-out by electrical wire to blocks of PEs divided into chiplets, or separated by memory and logic.
(d) DONN equivalent of fan-out to PEs in multiple blocks (energetically equivalent to (b)).} \label{fig:f4} \end{center} \end{figure} \begin{table}[htb] \footnotesize \caption{Interconnect energies over three distances: inter-MAC, inter-SRAM, and inter-chiplet} \begin{center} \begin{tabular}{|c|c|} \multicolumn{2}{c}{Global parameters} \\ \hline $C_{\text{wire}}/\upmu \text{m}$ & $\sim$0.2~fF$/\upmu$m~\cite{miller_attojoule_2017, keckler_gpus_2011, dally_hardwareenabled2018} \\ $C_{\text{T}}$ & $\sim$0.1~fF~\cite{miller_attojoule_2017, zheng_p_finfet_2015} \\ $C_{\text{det}}$ & 0.1~fF~\cite{miller_attojoule_2017} \\ $h\nu$ & 1.12~\text{eV}\\ WPE & $\sim$0.5~\cite{iga2008vertical,jager199757}\\ \hline \end{tabular} \end{center} \vfill \begin{tabular}{|c|c|} \multicolumn{2}{c}{Inter-MAC (8-bit MAC unit)} \\ \hline $L_\text{wire}$ & 5-8~$\upmu\text{m}^\dagger$ \\ $V_{DD}$ & 0.80~V~\cite{stillmaker2017scaling} \\ $E_\text{elec}/\text{bit}$ & 0.2-0.3~fJ/bit \\ $E_\text{DONN}/\text{bit}$ & 0.2~fJ/bit \\ \hline \end{tabular} \hfill \begin{tabular}{|c|c|} \multicolumn{2}{c}{Inter-SRAM (7~nm SRAM macro)} \\ \hline $L_\text{wire}$ & 60~$\upmu$m ~\cite{chang201712} \\ $V_{DD}$ & 0.75~V*~\cite{chang201712} \\ $E_\text{elec}/\text{bit}$ & 2~fJ/bit \\ $E_\text{DONN}/\text{bit}$ & 0.2~fJ/bit \\ \hline \end{tabular} \hfill \begin{tabular}{|c|c|} \multicolumn{2}{c}{Inter-chiplet~\cite{shao_simba:_2019}} \\ \hline $L_\text{wire}$ & $\sim$2500~$\upmu$m \\ $V_{DD}$ & 0.85~V*\\ $E_\text{elec}/\text{bit}$ & $90$~fJ/bit\\ $E_\text{DONN}/\text{bit}$ & 0.2~fJ/bit \\ \hline \end{tabular} \caption*{$^\dagger$We assume a square multiplier and scale reported 8-bit multiplier areas~\cite{saadat_minimally_2018, shoba_energy_2017, ravi_design_2015} from a 45~nm to a 7~nm node (the current state of the art) with the scaling factors from Ref.~\cite{stillmaker2017scaling}. A MAC unit comprises both an 8-bit multiplier and a 32-bit adder, so we are placing a lower bound on the minimum length of $L_\text{wire}$. Recent work~\cite{johnson_mac} optimizes MAC units for DNNs, and reports a $337~\upmu\text{m}^2$ area in a 28~nm node, where the MAC unit comprises an 8-bit multiplier and a 32-bit adder. Extrapolated to a 7~nm node with a fourth-order polynomial fit of the scaling factors from Ref.~\cite{stillmaker2017scaling}, the MAC unit is of size $(7~\upmu\text{m})^2$, which falls within the 5-8~$\upmu$m range.} \caption*{*Input-output voltage and core logic voltage can differ in CMOS. In optics, however, since the data delivery mechanism does not vary with distance travelled, we assume $V_{DD}$ remains constant at 0.80~V.} \label{tab:t1} \end{table} \section*{Discussion} \label{sec:discussion} With minimal impact on accuracy, the DONN yields an energy advantage over all-electronic accelerators with long wire lengths. In our proof-of-concept experiment, we performed inference on 500 MNIST images with 2- and 3-layer FC-NNs and found a $<$0.6\% drop in accuracy and a $<$3\% absolute difference in average output scores with respect to the ground truth implementation on CPU. We attributed these errors to crosstalk due to imperfect alignment and blurring from the camera's Bayer filter. In fact, a simple crosstalk correction scheme lowered measured bit error rates by two orders of magnitude.
We could thus transmit bits with 100\% measured fidelity in the activation arm (better aligned than the weight arm), which illustrates that crosstalk can be mitigated and possibly eliminated either through post-processing, charge sharing at the transmitters, greater spacing of receivers, or optimized design of optical elements and receiver pixels. In the hypothetical regime where error due to crosstalk is negligible, the remaining noise sources are shot and thermal noise. Intuitively, shot and thermal noise are also present in an all-electronic system, and the number of photoelectrons at the input to an inverter in the DONN is equal to the number of electrons at the input to an inverter in electronics. Therefore, if these noise sources do not limit accuracy in the all-electronic case, the same can be said for the DONN~\cite{miller_attojoule_2017}. For mathematical validation that shot and thermal noise have a negligible impact on bit error rate in the DONN, see Supplementary Note~7. These analyses demonstrate that the fundamental limit to the accuracy of the DONN is no different from that of electronics, and thus, we do not expect accuracy to hinder DONN scaling in an optimized system. In our theoretical energy calculations, we compared the nearly length-independent data delivery costs of the DONN with those of an all-electronic system. We found that in the worst case, when multipliers are abutted in a multiplier array, optical transmitters have an interconnect energy cost similar to that of copper wires in a 7~nm node ($\sim$0.2~fJ/bit versus $\sim$0.2-0.3~fJ/bit). The regime where the DONN shows important gains over copper interconnects is in architectures with increased spacing between computation units, e.g., with locally-packed memory and logic ($\sim$0.2~fJ/bit versus $\sim$2~fJ/bit), or with multiple chiplets ($\sim$0.2~fJ/bit versus $\sim$90~fJ/bit). In the multi-chiplet case, the cost to transmit two 8-bit values in electronics ($\sim$1,400~fJ) is therefore significantly larger than that of an 8-bit MAC (25~fJ)~\cite{stillmaker2017scaling,horowitz_1.1_2014}. On the other hand, in optics, the interconnect cost ($\sim$3~fJ for 2$\times$8~bits, including source energy) remains an order of magnitude smaller than the MAC cost. Since multi-chiplet and multi-chip systems offer a promising approach to increasing throughput on large DNN models, optical connectivity can further these scaling efforts by reducing inter-chiplet communication energy by orders of magnitude. In addition, because length-independent data distribution is a tool currently unavailable to digital system designers, relaxing electronic constraints on locality can open new avenues for DNN accelerator architectures. For example, memory can be devised such that numerous small pieces of memory are located far away from the point of computation and reused many times spatially, with a small fixed cost for doing so. Designers can then lay out smaller memory blocks with higher bandwidth, lower energy consumption, and higher yield. If we keep memory and computation spatially distinct, we have the added benefit of allowing for more compact memories that consume less energy and area, e.g., DRAM, which is fabricated with a different process than typical CMOS to achieve higher density than on-chip memories.
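For reference, the interconnect energies quoted in this comparison follow directly from equations~(\ref{eq:electronic})--(\ref{eq:num_phot}) in Methods with the parameters of Table~\ref{tab:t1}; the following sketch (values hard-coded from the tables, not an independent measurement) is one way to reproduce them:
\begin{verbatim}
# Interconnect energy per bit, per equations (3)-(5) and Table 2 parameters.
E_CHARGE = 1.602e-19      # elementary charge [C]

C_WIRE_PER_UM = 0.2e-15   # wire capacitance [F/um]
C_T = 0.1e-15             # inverter capacitance [F]
C_DET = 0.1e-15           # photodetector capacitance [F]
H_NU = 1.12               # photon energy [eV] (silicon bandgap)
WPE = 0.5                 # source wall-plug efficiency

def e_elec_per_bit(l_wire_um, v_dd):
    """Electrical interconnect energy per bit, eq. (3): 1/4 activity factor."""
    return 0.25 * (C_WIRE_PER_UM * l_wire_um + C_T) * v_dd**2

def e_donn_per_bit(v_dd=0.80):
    """Optical interconnect energy per bit, eqs. (4)-(5): 1/2 activity factor."""
    n_photons = (C_DET + C_T) * v_dd / E_CHARGE   # ~1000 photons/bit
    return 0.5 * H_NU * E_CHARGE * n_photons / WPE

for name, length_um, v_dd in [("inter-MAC", 8, 0.80),
                              ("inter-SRAM", 60, 0.75),
                              ("inter-chiplet", 2500, 0.85)]:
    print(f"{name}: electrical {e_elec_per_bit(length_um, v_dd)/1e-15:.2f} "
          f"fJ/bit, optical {e_donn_per_bit()/1e-15:.2f} fJ/bit")
\end{verbatim}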
Furthermore, due to its massive fan-out potential, the DONN can, first, reduce overhead by minimizing a system's reliance on a memory hierarchy and, second, amortize the cost of weight delivery to multiple clients running the same neural network inference on different inputs. Additionally, some newer neural network models require irregular connectivity (e.g., graph neural networks, which show state-of-the-art performance on recommender systems, but are restricted in size due to insufficient compute power~\cite{graph_survey_wu2020, graph_survey_zhang2020}). These systems have arbitrary connections with potentially long wire lengths between MAC units, representing different edges in the graph. The DONN can implement these links without incurring additional costs in energy from a complex network-on-chip in electronics. Yet another instance of greater distance between multipliers is in higher-bit-precision applications, as in training, which require larger MAC units. Lastly, the DONN could facilitate thermal management in chips, with the option to increase spacing between compute units at no extra cost. In future work, we plan to assess the performance of the DONN on state-of-the-art DNN workloads, such as the models described in MLPerf~\cite{mlperf_micro2020}. First, we will benchmark the DONN against all-electronic state-of-the-art accelerators by using Timeloop~\cite{parashar_timeloop:_2019}. Through a search for optimal mappings (ways to organize data and computation), this software can simulate the total energy consumption and latency of running various workloads on a given hardware architecture, including computation and memory access. Timeloop therefore enables us to perform an in-depth comparison of all-electronic accelerators against the proposed instances of the DONN, including variable data transmission costs for different electronic wire lengths, and waveguide losses in the chip-integrated DONN. Second, we will design an optical setup and receiver to reduce experimental crosstalk, power consumption and latency. We can then test larger workloads on this optimized hardware. Finally, beyond neural networks, there are many applications of matrix multiplication that a DONN-style architecture can accelerate, such as optimization, Ising machines and statistical analysis, and we plan to investigate these applications as well. In summary, the DONN implements arbitrary transmission and fan-out of data with an energy cost per bit that is nearly independent of data transmission length and number of receivers. This property is key to scaling deep neural network accelerators, where increasing the number of processing elements for greater throughput in all-electronic hardware typically implies higher data communication costs due to longer electronic path length. In contrast to other proposed optical neural networks~\cite{hamerly_large-scale_2019, tait_neuromorphic_2017, lin_all-optical_2018, shen_deep_2017,feldmann2020parallel}, the DONN does not require digital-to-analog conversion and is therefore less prone to error propagation. The DONN is also reconfigurable, in that the weights and activations can be easily updated. Our work indicates that the nearly length-independent communication enabled by optics is useful for digital neural network system design, for example to simplify memory access to weight data. We find that optical data transfer begins to save energy when the spacing of MAC computation units exceeds $\sim$5~$\upmu$m.
More broadly, further gains can be expected through the relaxation of electronic system architecture constraints. \section*{Methods} \subsection*{Digital optical neural network implementation for bit error rate and inference experiments} We performed bit error rate and inference experiments with optical data transfer and fan-out of point sources using cylindrical lenses. Two digital micromirror devices (DMDs, Texas Instruments DLP3000, DLP4500) illuminated by spatially-filtered and collimated LEDs (Thorlabs M625L3, M455L3) acted as stand-ins for the two linear source arrays. For the input activations/weights, each 10.8~$\upmu$m-long mirror in one DMD column/row either reflected the red/blue light toward the detector (`1') or a beam dump (`0'). Then, for each of the DMDs, an $f=100~\text{mm}$ spherical lens followed by an $f=100~\text{mm}$ cylindrical achromatic lens imaged one DMD pixel to an entire row/column of superpixels of a color camera (Thorlabs DCC3240C). Each camera superpixel is made up of four pixels of size (5.3~$\upmu$m)$^2$: two green, one red and one blue. The camera acquisition program applies a `de-Bayering' filter to automatically extract color information for each sub-pixel; this filter caused blurring and therefore increased crosstalk in our system. In a future version of the DONN, a specialized receiver will reduce this crosstalk and also operate at a higher speed. To process the image received on the camera, we subtracted the background, normalized, then thresholded. (We acquired normalization and background curves with all DMD pixels in the `on' and `off' states, respectively. This background subtraction and normalization could be implemented on-chip by precharacterizing the system, and biasing each receiver pixel by some fixed voltage.) If the detected intensity was above the threshold value, it was labeled a `1'; below threshold, a `0'. For the bit error rate experiments, we compared the parsed values from the camera with the known values transmitted by the DMDs, and defined the bit error rate as the number of incorrectly received bits divided by the total number of bits. In the inference experiments, the DMDs displayed the activations and pre-trained weights, which propagated through the optical system to the camera. After background subtraction and normalization, the CPU multiplied each activation with each weight, and applied the nonlinear function (ReLU after the hidden layers and softmax at the output). We did not correct for crosstalk here, to illustrate the worst-case scenario of impact on accuracy. The CPU then fed the outputs back to the input activation DMD for the next layer of computation. We used a DNN model with two hidden layers with 100 activations each and a 10-activation output layer. We also tested a model with a single hidden layer with 100 activations. \subsection*{MNIST preprocessing} For the inputs to the network, a bilinear interpolation algorithm transformed the $28\times28$-pixel images into $7\times7$-pixel images, which were then flattened into a 1D 49-element vector.
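This resizing step can be sketched in a few lines (using the Pillow library for the bilinear interpolation as an illustrative stand-in; our exact pipeline differed in implementation details):
\begin{verbatim}
import numpy as np
from PIL import Image

def preprocess(img28):
    """Downsize a 28x28 MNIST image to 7x7 by bilinear interpolation,
    then flatten to the 49-element network input vector. (The 8-bit
    quantization described next is applied afterwards.)"""
    im = Image.fromarray(img28.astype(np.uint8))
    small = im.resize((7, 7), resample=Image.BILINEAR)
    return np.asarray(small, dtype=np.float32).reshape(49)

x = preprocess(np.random.randint(0, 256, (28, 28)))
assert x.shape == (49,)
\end{verbatim}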
The following standard mapping quantized both input and weight matrices into 8-bit integer representations: \begin{align} \mathrm{Quantized} = \mathrm{QuantizedMin} + \frac{(\mathrm{Input} - \mathrm{FloatingMin})}{\mathrm{Scale}} \end{align} \noindent where Quantized is the returned value, QuantizedMin is the minimum value expressible in the quantized datatype (here, always 0), Input is the input data to be quantized, FloatingMin is the minimum value in Input, and Scale is the scaling factor to map between the two datatype ranges $\left(\frac{\mathrm{FloatingMax} - \mathrm{FloatingMin}}{\mathrm{QuantizedMax} - \mathrm{QuantizedMin}}\right)$. See the gemmlowp documentation \cite{gemmlowp} for more information on implementations of this quantization. (In practice, 8-bit representations are widely used in DNNs, since 8-bit MACs are generally sufficient to maintain accuracy in inference~\cite{judd2016stripes, albericio2017bitpragrmatic, jouppi_-datacenter_2017}). \subsection*{Electronic and optical interconnect energy calculations} When an electronic wire transports data over a distance $L_\text{wire}$ to the gate of a CMOS inverter (representative of the input of a full adder, the building block of multipliers), the energy consumption per bit is: \begin{align} \label{eq:electronic} E_\text{elec}/\text{bit} = \tfrac{1}{4}\left(\tfrac{C_{\text{wire}}}{\upmu \text{m}} \cdot L_{\text{wire}}+C_\text{T}\right) \cdot V_{DD}^2 \end{align} where $V_{DD}$ is the supply voltage, $L_{\text{wire}}$ is the wire length between two multipliers, and $C_\text{T}$ is the inverter capacitance. Interconnects consume energy predominantly when a load capacitance, such as a wire, is charged from a low (0~V) to a high ($\sim$1~V) voltage, i.e., in a $0\rightarrow1$ transition. If we assume a low leakage current, maintaining a value of `1' (i.e., $1\rightarrow1$) consumes little additional energy. To switch a wire from a `1' to a `0', the wire is discharged to ground for free (Supplementary Note~4). Lastly, maintaining a value of `0' simply keeps the voltage at 0~V, at no cost. Assuming a random distribution of `0' and `1' bits, we therefore include a factor of 1/4 in equation~(\ref{eq:electronic}) to account for this dependence on switching activity. In the DONN, a light source replaces the wire for fan-out. The low capacitances of the receiverless detectors in the DONN allow for the removal of receiving amplifiers~\cite{miller_attojoule_2017}. Thus, the DONN's minimum energy consumption corresponds to the optical energy required to generate a voltage swing of 0.8~V on the load capacitance (i.e., the photodetector ($C_\text{det}$) and an inverter ($C_\text{T}$)), all divided by the source's power conversion efficiency (called wall-plug efficiency, WPE). Subsequent transistors in the multiplier are powered by the off-chip voltage supply, as in the all-electronic architecture. Assuming a detector responsivity of $\sim$1~A/W~\cite{miller2013responsivity}, the DONN interconnect energy cost is: \begin{align} \label{eq:optical} E_\text{DONN}/\text{bit} = \tfrac{1}{2\cdot \text{WPE}}\cdot h\nu \cdot n_\text{p} \end{align} \noindent where $h\nu$ is the photon energy and the number of photons per bit, $n_\text{p}$, is determined by: \begin{align} \label{eq:num_phot} n_\text{p} =\frac{\left(C_{\text{det}}+C_\text{T}\right) \cdot V_{DD}}{e} \end{align} \noindent As in the all-electronic case, we assume low leakage on the receiverless photodetector.
Photons are received for every `1' and therefore, to avoid charge buildup, the charge on the output capacitor must be reset after every clock cycle. In Supplementary Note~5, we propose a CMOS discharge circuit that actively resets the receiver. (Another possible method is a dual-rail encoding scheme \cite{miller_attojoule_2017}.) Thus, the switching activity factor is 1/2 instead of 1/4: as in the all-electronic case, we assume a random distribution of bits, but here, both $1\rightarrow1$ and $0\rightarrow1$ transitions have a nonzero cost. \section*{Supplementary Note 1: Bit error rate due to crosstalk} Here, we show experimental bit error rate maps for both the blue and red channels. Each DMD pixel is fanned out to a row (column) of superpixels on the camera for the input activations (weights). The Bayer filter allows the discrimination of the input activations from the weights via the red and blue channels, respectively. Since the camera has four sub-pixels per superpixel, we bin the sub-pixels into $2\times2$ blocks. As described and shown in Fig.~4 of the main text, random vectors of `1's and `0's were displayed on the DMDs to assess bit error rates in data transmission from two 1D source arrays to the camera. Figure~\ref{fig:error} shows the resulting bit error rate maps. Images~\ref{fig:error}(a) and (c) show the error from a single shot (one random vector pair displayed on the DMDs). Images~\ref{fig:error}(b) and (d) show the error averaged over 100 frames (100 different random vector pairs displayed on the DMDs) in the low-error region of interest used for the proof-of-concept experiment. The error is larger on the edges of each image due to optical aberrations, and is larger in the blue channel than in the red channel due to misalignment. \begin{figure} \centering \includegraphics[scale=.8]{transmission_error.pdf} \caption{Bit error rates in the proof-of-concept experiment. (a) Blue channel: errors when a random vector of `0's and `1's is displayed on the DMD (single shot). Blue: incorrectly transmitted bit; white: correct. (b) Region of interest selected for the experiment. Error in the blue channel averaged over 100 frames (different vectors displayed on the DMD at each frame). (c)-(d) Same as (a)-(b), but for the red channel.} \label{fig:error} \end{figure} \section*{Supplementary Note 2: Crosstalk correction} The bit error rate described in the previous section is mainly attributable to optical crosstalk at the detector, due to imperfect lenses and alignment. Since this error is deterministic (as opposed to random fluctuations), it can be compensated for by post-processing. To illustrate this principle, we performed a simple crosstalk correction: we multiplied each line of an image detected on the camera by a tridiagonal crosstalk reduction matrix, per equation~(\ref{eq:xtalk}) (where $\overline{I_{:n}}$ is the corrected line of the camera image). $\xi$ was estimated to be $\sim$0.19 and $\sim$0.18 for the red and blue arms, respectively, from a calibration image of alternating `1's and `0's transmitted by the DMDs. $\overline{I_{:n}}$ is renormalized after this matrix multiplication. We show the effects of crosstalk reduction in Fig.~\ref{fig:xtalk_corr}.
\begin{align} \begin{bmatrix} \overline{I_{1n}} \\ \overline{I_{2n}} \\ \vdots \\ \overline{I_{Mn}} \end{bmatrix} = \begin{bmatrix} 1 & -\xi & 0 & & \\ -\xi & 1 & -\xi & & \\ 0 & -\xi & 1 & \ddots & \\ & & \ddots & \ddots & -\xi \\ & & & -\xi & 1 \end{bmatrix} \begin{bmatrix} I_{1n} \\ I_{2n} \\ \vdots \\ I_{Mn} \end{bmatrix} \label{eq:xtalk} \end{align} (with $M$ the number of superpixels in the line). To maximize energy efficiency and throughput, the final version of this system (with a custom CMOS chip that integrates detection with digital MAC computation) will not perform any post-processing. Instead, we might use a charge-sharing scheme at the transmitters to implement a version of equation~(\ref{eq:xtalk}). Alternatively, we could simply reduce crosstalk by changing the system design; for example, we could space the PEs further apart or shrink the active region of the detectors to improve the ratio of signal at the current pixel to noise from neighboring pixels. \section*{Supplementary Note 3: Training and test sets} In our proof-of-concept experiment, we performed inference on 500 images using a two-hidden-layer, fully-connected neural network, where each hidden layer had 100 activations. We used the built-in Keras dataset importer in TensorFlow~2 to download the first 500 images of the test set of the MNIST handwritten digit dataset \cite{lecun-mnist}. Relevant code can be found in the GitHub repository of user Alexander Sludds (alexsludds): \noindent\url{https://github.com/alexsludds/Digital-Optical-Neural-Network-Code} The model's weights were pre-trained on an NVIDIA K40 GPU using the entire MNIST training set. Categorical cross-entropy was used as the loss function. Dropout regularized the model's weights in each layer to prevent overfitting. Input images were downsized from $28\times28$ to $7\times7$ pixels using bilinear interpolation. \begin{figure}[htbp] \centering \includegraphics[scale=.9]{pre_post_corr.pdf} \caption{One line of the receiver image after background subtraction and normalization, with random vectors of `1's and `0's displayed on the DMDs. (a) Column 100 in the red channel (same as Fig. 4b in the main text). (b) Same as (a), after crosstalk correction. (c) Row 100 in the blue channel. (d) Same as (c), after crosstalk correction.} \label{fig:xtalk_corr} \end{figure} \section*{Supplementary Note 4: Electronic interconnect switching energy in 0 to 1 transitions} The dynamic switching energy of CMOS devices is the energy required to charge the output capacitance of a CMOS gate. Energy is only consumed in CMOS inverters for low-to-high transitions on the outputs of these gates. Consider the toy circuit model shown in Fig. \ref{fig:switching}. On the left is a CMOS inverter, and on the right are a low-to-high and a high-to-low transition, respectively. In the low-to-high transition, the PMOS switches on, connecting the output to the supply rail and charging the load capacitance. In the high-to-low transition, the NMOS already has a sufficient drain-to-source voltage from the charge on the load capacitance, so it can discharge the output without consuming any power from the supply. To summarize, in an output which switches from low to high and back to low again, the PMOS initially turns on, taking $CV_{DD}^2$ of energy from the supply, then the NMOS turns on, discharging $\frac{1}{2}CV_{DD}^2$ from the charged load capacitor (the other $\frac{1}{2}CV_{DD}^2$ is dissipated as heat in the resistance of the charging path).
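As a numerical check of this bookkeeping (a back-of-the-envelope sketch using the representative inverter capacitance and supply voltage from Table~2 of the main text):
\begin{verbatim}
# Energy bookkeeping for one 0->1->0 cycle of a CMOS node, per the text above.
C = 0.1e-15      # representative load capacitance C_T [F] (Table 2)
V_DD = 0.80      # supply voltage [V]

e_supply = C * V_DD**2          # drawn from the supply on the 0->1 transition
e_stored = 0.5 * C * V_DD**2    # ends up stored on the capacitor ...
e_heat_up = e_supply - e_stored # ... the rest heats the charging path
e_heat_down = e_stored          # discharged through the NMOS on 1->0

print(f"supply: {e_supply/1e-18:.1f} aJ, "
      f"dissipated: {(e_heat_up + e_heat_down)/1e-18:.1f} aJ")
# 0.1 fF at 0.8 V: 64 aJ per full cycle, all eventually dissipated as heat.
\end{verbatim}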
\begin{figure} \centering \includegraphics[scale=0.4]{output_transition_energy.png} \caption{A demonstration of where dynamic energy goes during switching of a CMOS inverter. The circuit, shown left, consists of a stacked NMOS and PMOS device. During an output low-to-high transition, shown center, charge is deposited on the lumped output capacitance. During an output high-to-low transition, shown right, that charge on the lumped output capacitance is discharged through the NMOS into ground.} \label{fig:switching} \end{figure} \section*{Supplementary Note 5: Resetting a `receiverless' circuit} There are several circuit methods by which the accumulated charge on the input capacitor can be reset. In the method shown in Fig. \ref{fig:reset_circuit}, we place the NMOS device $\mathrm{NMOS}_{Discharge}$ between the photodetector and ground and drive the gate with an external reference voltage $V_\text{ref}$. The benefit of this solution is that it consumes no dynamic energy when there is no optical input power. However, it has the tradeoff that it requires additional area on chip and, because it is ratioed logic, requires careful design to ensure functionality. The width of $\mathrm{NMOS}_{Discharge}$ is set small enough that the accumulated charge on the capacitor still generates a voltage high enough to overcome the input threshold of the load (modeled here as a CMOS inverter), but large enough that the charge can be dissipated within a single clock cycle. One problem that arises from receiverless photodetection is that a constant stream of `1's arriving at a photodetector without a strong enough $\mathrm{NMOS}_{Discharge}$ will cause additional charge to slowly build up on the load capacitance. To compensate, we propose adding a P-N junction diode ($\mathrm{Clamp~Diode}$). \begin{figure} \centering \includegraphics[scale=0.6]{receiver_reset_circuit.png} \caption{A proposed circuit for resetting the receiver's lumped capacitor model.} \label{fig:reset_circuit} \end{figure} \section*{Supplementary Note 6: Electronic Repeaters} A naive implementation of a repeater is a double inverter. The energy required is $C_\text{T} V_{\text{DD}}^2$, since in any transition, one inverter must make a low-to-high transition and the other a high-to-low transition. As a result, in any `flip' of a repeater, one inverter does not consume energy. Using the values in Table~2 of the main text, the cost of a repeater is 0.06~fJ/bit for an output low-to-high transition. Therefore, even in the worst-case scenario where we place a repeater between every multiplier in an array of abutted 8-bit MAC units, the inter-multiplier interconnect energy cost is larger than that of the repeater. \section*{Supplementary Note 7: Shot and thermal noise} In a hypothetical crosstalk-free DONN, the remaining noise sources are thermal (Johnson) and shot noise. To gain insight into whether they would affect classification accuracy, we estimate the ensuing bit error rates (BERs). The detector registers a `1' when $q \geq q_\text{D}$ photoelectrons are generated, and a `0' when $q<q_\text{D}$, where we assume the threshold charge is set at $q_\text{D} = n_\text{p}/2$ electrons. Fig.~\ref{fig:BER} illustrates the probability distributions of the number of photoelectrons, as well as the probabilities that a `0' is received when a `1' is sent (BER$_1$), and vice-versa (BER$_0$).
\begin{figure}[htbp] \centering \includegraphics[scale=.9]{BER_for_paper.pdf} \caption{Schematic representation of the probability density function of received charge (curves) and bit error rate (shaded region); not to scale.} \label{fig:BER} \end{figure} In a receiverless photodetector scheme, thermal noise can be approximated as `kT/C' noise~\cite{miller_attojoule_2017}, with: \begin{align} \sigma_{\text{V}}=\sqrt{k_BT/(C_\text{det}+C_\text{T})} \end{align} \noindent where $\sigma_\text{V}$ is the standard deviation of voltage, $k_B$ is the Boltzmann constant, $T$ is the temperature in Kelvin, $C_{\text{det}}$ is the capacitance of the photodetector, and $C_\text{T}$ is the capacitance of the inverter. The temperature depends on the quality of heat sinking and proximity to hot spots; from Ref.~\cite{heat_2012}, we assume it is in the range $T\in\left[300, 500\right]$~K. Using the values from Table 2 of the main text, we find $\sigma_\text{V}\approx5-6~\text{mV}\ll V_{DD}$. We can further verify whether thermal noise is likely to cause bit errors by approximating the probability distribution due to thermal noise, $p_\text{J}(q)$, by a Gaussian: \begin{align} p_\text{J}(q) = \frac{1}{\sigma_\text{J}\sqrt{2\pi}}e^{-\tfrac{q^2}{2\sigma_\text{J}^2}} \end{align} \noindent with $\sigma_{\text{J}}=\sqrt{k_BT(C_\text{det}+C_\text{T})}/e \approx 6-7$~electrons. \noindent To first order, shot noise will not affect the transmission of `0's (BER$_0$) since the number of transmitted photons is $n_\text{p}=0$. Thus: \begin{align} \text{BER}_0 = \sum_{q=q_D}^{\infty}p_0(q) = \sum_{q=q_D}^{\infty}p_\text{J}(q) &= \sum_{q=q_D}^{\infty} \frac{1}{\sigma_\text{J}\sqrt{2\pi}}e^{-\tfrac{q^2}{2\sigma_\text{J}^2}} \\ &\approx \frac{1}{2}\text{erfc}\left( \frac{q_D}{\sqrt{2}\sigma_J} \right) \end{align} \noindent Values of BER$_0$ for different $n_\text{p} = 2q_\text{D}$ are reported in Table~\ref{tab:BER}. We assume shot noise follows a Poissonian probability distribution: \begin{align} p_\text{shot}(q) = \frac{e^{-n_\text{p}}\left(n_\text{p}\right)^q}{q!} \end{align} \noindent where $n_\text{p}$ is the number of photons per detector per clock cycle. \noindent For ease of computation with large $n_\text{p}$, we take the natural logarithm: \begin{align} \text{ln}\left(p_\text{shot}(q)\right) &= \text{ln}\left(\frac{e^{-n_\text{p}}\left(n_\text{p}\right)^q}{q!}\right) \\ &= \text{ln}\left(e^{-n_\text{p}}\right)+q\text{ln}\left(n_\text{p}\right)-\text{ln}\left(q!\right)\\ &= -n_\text{p}+q\text{ln}\left(n_\text{p}\right)-\sum_{m=1}^q \text{ln}\left(m\right) \\ &\Downarrow \\ p_\text{shot}(q) &= \text{exp}\left(-n_\text{p}+q\text{ln}\left(n_\text{p}\right)-\sum_{m=1}^q \text{ln}\left(m\right)\right) \end{align} \noindent BER$_1$ due to shot noise is therefore: \begin{align} \text{BER}_1^\text{shot} = \sum_{q=1}^{q_\text{D}-1}p_\text{shot}(q) \label{eq:BER1} \end{align} \noindent Results of this computation for various $n_\text{p}$ are shown in Table~\ref{tab:BER}. \begin{table}[ht] \centering \caption{Expected values for BER$_1$ due to shot noise for different numbers of transmitted photons/bit} \begin{tabular}{c|c|c|c} $n_\text{p}$ & BER$_0^*$ & BER$_1^\text{shot}$ & BER$_1^\text{total}$ \\ \hline 10 & $10^{-1}$ & $ 10^{-2}$ & $10^{-1}$ \\ 100 & $10^{-18}-10^{-12}$ & $10^{-8}$ & $10^{-6}-10^{-5}$ \\ 1000 & small$^\dagger$ & $10^{-69}$ & $10^{-65}-10^{-63}$ \end{tabular} \label{tab:BER} \caption*{*BER$_0=\text{BER}_1^\text{thermal}$ \\ $^\dagger$Too small for MATLAB to compute.
\\ Note: We report a range since thermal noise, and therefore the BER, depends on the quality of heat sinking.} \end{table} Thermal noise will also contribute to BER$_1$; we convolve the probability distributions to find the total bit error rate: \begin{align} \text{BER}_1^\text{total} = \sum_{q=1}^{q_\text{D}-1}p_1(q) = \sum_{q=1}^{q_\text{D}-1}p_\text{shot}(q)\circledast p_\text{J}(q) \end{align} From equation~(5) in the main text, we find $n_\text{p}\approx 1000$~photons/bit to generate a voltage swing of 0.8~V on the load capacitance; therefore, the expected BER is negligible, per Table~\ref{tab:BER}.
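These tail probabilities are straightforward to evaluate numerically; the sketch below follows the log-space formulation above, which keeps the Poisson terms representable in double precision even for $n_\text{p}=1000$ (the assumed $\sigma_\text{J}$ is the 6-7 electron estimate derived earlier):
\begin{verbatim}
import math

def ber0_thermal(q_d, sigma_j):
    """BER_0: Gaussian (kT/C) noise pushing a `0' above the threshold q_D."""
    return 0.5 * math.erfc(q_d / (math.sqrt(2.0) * sigma_j))

def ber1_shot(n_p):
    """BER_1 due to shot noise: Poisson probability of receiving fewer than
    q_D = n_p/2 photoelectrons, summed in log space to avoid underflow."""
    q_d = n_p // 2
    total, log_fact = 0.0, 0.0
    for q in range(1, q_d):
        log_fact += math.log(q)                  # running ln(q!)
        total += math.exp(-n_p + q * math.log(n_p) - log_fact)
    return total

sigma_j = 6.5                                    # electrons, for T ~ 300-500 K
for n_p in (10, 100, 1000):
    print(n_p, ber0_thermal(n_p / 2, sigma_j), ber1_shot(n_p))
\end{verbatim}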
\section*{Introduction} One of the key challenges in plant breeding and crop production is to predict performance (seed yield) in unseen and new environments. This active research area is complicated by the time and expense of generating an extensive dataset to represent a wide range of genotypes and environments. Among different crops, soybean has a long history of cultivation in North America, with the first reported production in Georgia in 1766 \cite{hymowitz1983introduction}. Over the years, production in the US and Canada has expanded longitudinally as far west as the Kansas-Colorado border and latitudinally from southern Texas to Canada \cite{websitereference_1,websitereference_2}. North American annual soybean yield trials (known as Uniform Soybean Tests (UST)) have been coordinated in the United States and Canada through the United States Department of Agriculture (USDA) between public breeders in university and government settings since 1941 \cite{websitereference_4, websitereference_5}. These trials are used to evaluate current and experimental varieties in multiple environments within their range of adaptation. Therefore, these trials are valuable sources of historical and current data to improve prediction performance with the assimilation of genetic and environmental variables. Management and permanent environmental effects have been examined primarily at small scales due to the labor required for managing large numbers of plots \cite{zhang2016warming, puteh2013soybean}. With each added layer of environmental characterization, less of the observed variation needs to be ascribed to a generic ``environmental'' component, and more can instead be examined individually in combination with plant genetics. The nexus of genetic and non-genetic variables forms the cornerstone of plant breeding strategies, irrespective of crop species, for meeting crop production challenges in the future\cite{lenaerts2019improving,Wulff2019breed}. Climatic resiliency in cultivars is an important objective for plant breeders and farmers seeking high seed yield across a myriad of environments\cite{ICARDA2018resil}. Climatic variability can be associated with changes in temperature and rainfall events (including patterns and magnitude) and other weather variables. In addition to spatial variability, temporal variability of weather variables \cite{websitereference_3} is equally important but generally less understood or not included in yield prediction studies. It is important to understand how agricultural production is affected by the variability of weather parameters in the presence of global climate change, especially with the higher occurrence of extreme weather events. Therefore, prediction of the effects of changing environments on performance can help in making informed plant breeding decisions, making marketing decisions, optimizing production and comparing results over multiple years \cite{jagtap2002adaptation}. Traditionally, crop growth models have been proposed to simulate and predict crop production in different scenarios including climate, genotype, soil properties, and management factors~\cite{blanc2017statistical}. These models provide a reasonable explanation of biophysical mechanisms and responses but have deficiencies related to input parameter estimation and prediction in complex and unforeseen circumstances~\cite{roberts2017comparing}.
Previous attempts at yield prediction across environments have relied on crop models generated by quantifying the response of a limited number of lines while altering a single environmental variable, limiting the inference scope~\cite{bishop2014seasonal}. To bypass the limitations of crop growth models, linear models have also been used to predict yield with some success~\cite{jewison2013USDA}. However, these low-capacity models typically rely on a rather small subset of factors, therefore failing to capture the complexity of biological interactions and more site-specific weather complexities. Traditional linear methods such as the Autoregressive Integrated Moving Average (ARIMA) have been used for time series forecasting problems \cite{petricua2016limitation}, but these methods are mainly effective at predicting future steps within the same time series. For time series prediction tasks, deep neural networks show robustness to noisy inputs and also have the capability to approximate arbitrary non-linear functions~\cite{dorffner1996neural}. Deep learning models can provide solutions in the presence of such complex data comprising different weather variables, maturity groups and zones, and genotype information. Long Short-Term Memory (LSTM) networks are very useful for time series modeling as they can capture the long-term temporal dependencies in complex multivariate sequences \cite{malhotra2015long}. LSTMs have shown state-of-the-art results in various applications including off-line handwriting recognition \cite{doetsch2014fast}, natural language processing \cite{sutskever2014sequence} and engineering systems \cite{gangopadhyay2020deep}. LSTMs have also been used effectively for multivariate time series prediction tasks \cite{jiang2018predicting,gangopadhyay2018temporal, shook2018integrating}. Considering the importance of climate extremes for agricultural predictions, random forests have been utilized to predict grid-cell yield anomalies (deviations of yields) \cite{vogel2019effects}. Previous work \cite{you2017deep} using deep learning for yield prediction has utilized multi-spectral images to predict yield (instead of leveraging only multivariate time series as input) without considering model interpretability. Khaki et al. \cite{khaki2019crop} applied deep neural networks for yield prediction of maize hybrids using environmental data, but their model is not capable of explicitly capturing temporal correlations and also lacks explainability. An LSTM-based model has been used for corn yield estimation \cite{jiang2019deep}, but it lacks interpretability; moreover, that study is based on geospatial data without field-scale farming management data and lacks temporal resolution in the absence of daily weather data. Attention-based LSTMs have been used along with multi-task learning (MTL) output layers \cite{lin2020deepcropnet} for county-level corn yield anomaly prediction based only on meteorological data (maximum and minimum daily temperature), without field-scale farming data. Other approaches rely on sensors to identify the most informative set of variables for yield prediction\cite{parmley2019tpp,parmley2019scirep}, which is very useful in multiple applications; however, there is still a need to integrate weather parameters in a time series approach involving multiple genotypes.
Motivated by these gaps, we developed a model that can capture the temporal variability of different weather variables across the growing season in an explainable manner to predict soybean yield from the UST dataset of field trials spanning 13 years across 28 states and provinces. \begin{figure*}[tbhp] \begin{center} \setlength{\unitlength}{0.012500in}% \includegraphics[width=12cm, height=8 cm]{fig/map_locations.PNG} \end{center} \caption{Map showing the different locations in the USA and Canada included in our dataset. The dataset comprises different maturity groups (MGs), some of which are labeled in the figure. The relative size of a yellow dot (representing a location) indicates the size of the dataset for that particular location. The dataset includes observations from the National Uniform Soybean Tests for the years 2003-2015 and is split into North (MG 0 to 4) and South (MG 4 to 8) regions \cite{UST2018S, UST2018N}, consisting of 103,365 performance records over 13 years and 150 locations. These records are matched to weekly weather data for each location throughout the growing season (30 weeks). This generated a dataset with 35,000 plots having phenotype data for all agronomic traits.} \label{map_details} \end{figure*} We propose a framework based on LSTM and temporal attention to predict crop yield with 30 weeks of weather data per year (over 13 years) provided as input, along with a reduced representation of the pedigree to capture differences in the response of varieties to the environment. We vary the number of input time-steps and compare the performance of our proposed Temporal Attention model with the Stacked LSTM model for two variations of each model. We also compare against the results of random forest (RF), LASSO regression, and the data-driven state-of-the-art USDA model. The temporal attention mechanism highlights the significant time periods during the growing season leading to high or low yield prediction, in concurrence with domain knowledge. In this paper, we report higher-fidelity interpretation of the prediction outcomes without sacrificing accuracy for multivariate time-series prediction. Our proposed framework can have widespread applications in plant breeding, crop science research, and agricultural production. \section*{Methods} \subsection*{Preparation of Performance Records} Files from the 2003-2015 USTs were downloaded as PDFs \cite{websitereference_4, websitereference_5}. Using the online utility Zamzar (zamzar.com), all 26 PDFs from this period were converted to .xlsx files, with each tab corresponding to a single page in the file. In this way, the vast majority of tables were recovered with no errors or need for human translation. Nevertheless, random manual error checks were performed to verify accuracy. These tables were manually curated to align all performance records for a given genotype/location combination into a single row. Records that did not have yield data (due to a variety not being planted in a specific location or dying prior to production of seed) were removed from the file. Following data cleaning, the final dataset comprised 103,365 performance records over 13 years representing 5839 unique genotypes, along with all available management information. After compilation, we imported the performance records into Python for further data analysis. \subsection*{Acquisition and Sub-Sampling of Weather Records} Daily weather records for all location/year combinations were compiled based on the nearest available weather station (25km grid) on Weather.com. 
We downsampled the dataset to include maximum, minimum, and average conditions over different time frames throughout the growing season (defined as April 1 through October 31), and this information was appended to the performance records. \subsection*{Genotype Clustering} We included genotype-specific criteria so that the model can be applied both to specific genotypes and to mean location yield across genotypes. Due to the nature of the UST program, most of the genotypes tested in this period do not have molecular marker data available, preventing the use of a G matrix. To circumvent these restrictions, we developed a completely connected pedigree for all lines with available parentage information, resulting in the formation of a 5839 $\times$ 5839 correlation matrix. To improve the model performance, genotypes were clustered based on the organization which developed them, providing additional control over relatedness. We clustered genotypes into 5 clusters using the K-means clustering technique applied to the correlation matrix to extract information about relatedness. With a specified number of clusters ($n$), the K-means algorithm finds $n$ groups of equal variance by choosing centroids of the clusters to minimize a criterion known as \textit{inertia} (also called the within-cluster sum-of-squares). This algorithm is effective for a large number of samples and finds application across different domains. With this hard clustering technique, each genotype belongs to exactly one of the 5 clusters. The clustering is used to represent each line as a function of membership in 5 groups, which is fed into the model to allow differentiation of lines. \subsection*{Model Development} To leverage the temporal sequence of variables, a modeling approach based on recurrent neural networks (RNNs) was developed to capture correlation across time. Gradient descent of an error criterion may be inadequate to train RNNs, especially for tasks involving long-term dependencies \cite{bengio1994learning}. To overcome these challenges, long short-term memory (LSTM) was used, which is an RNN architecture designed to overcome the error backflow problems \cite{hochreiter1997long}. By learning long-range correlations in a sequence, LSTM can accurately model complex multivariate sequences~\cite{malhotra2015long}. \begin{figure*} \begin{center} \setlength{\unitlength}{0.012500in}% \includegraphics[width=16cm, keepaspectratio]{fig/stacked_lstms_without_attention.PNG} \end{center} \caption{The Stacked LSTM Model. The input feature vector is $x^{<t>}$ at time-step $t$. Depending on whether the maturity group and genotype cluster information are incorporated in the model or not, the vector $x^{<t>}$ can be 9-dimensional or 7-dimensional. We included 7 weather variables in our study. The embedding vector $a^{<T_x>}$ encodes the entire input sequence and summarizes the sequential dependencies from time-step 0 to time-step $T_x$. We designed two variants of our proposed model based on the input information, with the time series encoding part remaining the same for both variants. This model (when including MG and cluster) had 106,511 learnable parameters, and the training time per epoch was 60 seconds.} \label{stacked_lstms_model} \end{figure*} \begin{figure*} \begin{center} \setlength{\unitlength}{0.012500in}% \includegraphics[width=13cm, keepaspectratio]{fig/stacked_lstms_with_attention.PNG} \end{center} \caption{The Temporal Attention Model. 
The LSTM encoding part is the same as that of the Stacked LSTM Model, where we obtain the annotations $a^{<t>}$ for each time-step. Instead of only using $a^{<T_x>}$, this model utilizes all annotations, which act as inputs for the temporal attention mechanism. Based on the computed context vector, the two variants of this model are designed depending on the input information. This model (when including MG and cluster) had 106,562 learnable parameters, and the training time per epoch was 60 seconds.} \label{temporal_attention_model} \end{figure*} We developed two models based on LSTM: (a) the Stacked LSTM Model (without using any attention) (Fig.~\ref{stacked_lstms_model}), and (b) the Temporal Attention Model (using a temporal attention mechanism) (Fig.~\ref{temporal_attention_model}). The output of both models is yearly seed yield, as this is a many-to-one prediction problem. For each model, we formulated the model variants depending on whether the performance records comprise maturity group and genotype cluster data. The same modeling approach was used to compute the time-step-wise encoding for both models. Two stacked LSTM layers were used to encode the $T_x$ time-steps of the input sequence, as shown in Fig.~\ref{stacked_lstms_model}. Depending on the variant, for both models, we concatenated the MG and genotype cluster values with the compressed time-series information. In the Stacked LSTM Model, the last hidden state of the encoding part is assumed to be the compressed representation of the entire input sequence. This fixed-dimensional representation was used for predicting the output value of seed yield (Fig.~\ref{stacked_lstms_model}). For the Temporal Attention Model, the compressed information (context) is computed by aggregating the information from the sequence of hidden states using the attention mechanism. The concept of soft temporal attention \cite{bahdanau2014neural} was first proposed in the context of neural machine translation to overcome the bottleneck of the encoder-decoder model \cite{cho2014learning, sutskever2014sequence} for long sequences: compressing all information from the input time-steps into a single fixed-length vector was the major bottleneck of that model. Temporal attention can be applied to many-to-many time series prediction \cite{gangopadhyay2018temporal} and many-to-one prediction \cite{gangopadhyayexplainable,gangopadhyay2019deep}. The proposed approach (Fig.~\ref{temporal_attention_model}) does not incorporate a decoder LSTM, as we are solving a many-to-one prediction problem. Taking the annotations of all time-steps as input, the attention block aggregates the information and computes the context vector. A greedy search method was utilized to empirically determine the weather variables most influential on seed yield prediction, considering data from both the northern and southern U.S. regions. In the first step of the greedy search, the Stacked LSTM model was trained separately on each of the 7 variables, and the variable with the lowest RMSE was chosen. With this variable added, in the second step, the model was trained with each of the remaining 6 variables in turn. In this way, variables were added one at a time. More information is provided in the supplementary materials (Supplementary Tables 5, 6 and 7). All input features were scaled to the range (-1, 1), with the scaler fitted on the training set. We compute the Root Mean Square Error (RMSE) after inverting the applied scaling so that forecasts and actual values are compared on the original scale. 
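To make the model description above concrete, the following is a minimal sketch of the many-to-one Temporal Attention variant in Keras (the framework we used); the layer sizes, tensor names, and the exact form of the attention scoring are illustrative assumptions, not necessarily the configuration used in this study.
\begin{verbatim}
# Minimal sketch of the many-to-one Temporal Attention Model
# (illustrative layer sizes; not the exact configuration used here).
import tensorflow as tf
from tensorflow.keras import layers, Model

T_x, n_weather = 30, 7   # 30 weekly time-steps, 7 weather variables
n_static = 2             # maturity group + genotype cluster (variant 2)

weather_in = layers.Input(shape=(T_x, n_weather), name="weather_sequence")
static_in = layers.Input(shape=(n_static,), name="mg_and_cluster")

# Two stacked LSTM layers; return_sequences=True keeps one annotation
# a<t> per time-step instead of only the final hidden state a<T_x>.
a = layers.LSTM(64, return_sequences=True)(weather_in)
a = layers.LSTM(64, return_sequences=True)(a)

# Soft temporal attention: score each annotation, normalize the scores
# over the time axis, and form the context as the weighted sum.
scores = layers.Dense(1)(a)              # shape: (batch, T_x, 1)
alphas = layers.Softmax(axis=1)(scores)  # attention weights over weeks
context = layers.Lambda(
    lambda t: tf.reduce_sum(t[0] * t[1], axis=1))([a, alphas])

# Concatenate static inputs with the context and regress seed yield.
h = layers.Concatenate()([context, static_in])
yield_out = layers.Dense(1, name="seed_yield")(h)

model = Model(inputs=[weather_in, static_in], outputs=yield_out)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
              loss="mse")
\end{verbatim}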
Data were randomly split into training (80\%), validation (10\%), and test (10\%) sets. Models were evaluated by computing the RMSE on the test set. Both models were trained for 200 epochs to obtain the optimal RMSE scores. For training, the Adam optimizer was used \cite{kingma2014adam} (learning rate of 0.001) and the mean squared error loss function was minimized. Models were developed using Keras \cite{chollet2015keras} with the TensorFlow backend \cite{abadi2016tensorflow} and were trained on NVIDIA GPUs. \section*{Results} To select hyper-parameters (i.e., to determine the appropriate temporal sampling of weather information for yield prediction with our proposed frameworks), the test-set RMSE was used to determine the optimal (lowest-RMSE) number of time points for predicting seed yield. Using a step-wise approach building from monthly, to bi-weekly, weekly, and finally daily data, similar performance was observed in each scenario (approximate test RMSE = 7.206) except for daily data. The intermediate scenario of weekly data was chosen for all subsequent analyses, both to facilitate faster training of the LSTMs and to avoid downsampling so aggressively that the long-range temporal dependencies are obscured. \begin{figure*}[tbhp] \begin{center} \setlength{\unitlength}{0.012500in}% \includegraphics[width=17cm, keepaspectratio]{fig/prediction_results.PNG} \end{center} \caption{Results for different inputs to the Stacked LSTM model. The vertices of the triangle show results including only the maturity group, only the genotype cluster, and only the weather variables in the input. The edges show the results with a combination of inputs from the respective vertices. The results improved when the genotype cluster was included with the weather variables. The coefficient of determination increased further when the maturity group was included with the weather variables. The best results were observed when information from all sources was incorporated (shown at the center of the triangle). The best performance (RMSE = 7.130) is about 14\% of the average seed yield for the test set (50.745) and 44.5\% of the standard deviation (16.019).} \label{prediction_results} \end{figure*} Using weekly weather aggregate data in our model, the prediction models were built starting from a heuristic importance ranking of the variables. For example, precipitation was deemed to be the most important, followed by average surface temperature, and so on. However, the largest drop in test RMSE was observed for the maturity group when it was used as a predictor in the model, and adding the MG classification after the 2nd LSTM as well caused a further improvement in model performance. No perceptible change in performance was observed when varying the number of clusters (5, 10, 15, 20, 25) in the hard clustering technique (K-means clustering). Therefore, subsequent analyses were done using 5 clusters in the proposed models for prediction and variable search. Adding the genotype cluster information at every time-step, and also after the 2nd LSTM, showed better results. From our greedy search, we observed that average relative humidity had the lowest test RMSE. With average relative humidity included in the prediction model, average direct normal irradiance was the next most important variable. In sequence, the remaining weather variables were: maximum direct normal irradiance, maximum surface temperature, minimum surface temperature, average surface temperature, and average precipitation. 
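The greedy forward-selection loop that produced this ranking can be sketched as follows; the \texttt{evaluate} callback stands in for training the Stacked LSTM on a candidate variable subset and returning its test RMSE (the toy callback shown is a placeholder for illustration only, and the variable names are assumptions).
\begin{verbatim}
# Sketch of the greedy forward selection over the 7 weather variables.
# `evaluate` is a stand-in: in the study it would train the Stacked
# LSTM on the candidate subset and return the test-set RMSE.
from typing import Callable, List

WEATHER_VARS: List[str] = [
    "avg_relative_humidity", "avg_direct_normal_irradiance",
    "max_direct_normal_irradiance", "max_surface_temperature",
    "min_surface_temperature", "avg_surface_temperature",
    "avg_precipitation",
]

def greedy_forward_selection(candidates: List[str],
                             evaluate: Callable[[List[str]], float]
                             ) -> List[str]:
    """Add one variable at a time, always the one with lowest RMSE."""
    selected: List[str] = []
    remaining = list(candidates)
    while remaining:
        best = min(remaining, key=lambda v: evaluate(selected + [v]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy placeholder objective, for illustration only.
print(greedy_forward_selection(WEATHER_VARS,
                               lambda s: float(len(s[-1]))))
\end{verbatim}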
A second greedy search, initiated with the inclusion of maturity group and pedigree-based clustering, revealed minimum surface temperature as the most important weather variable (lowest RMSE). This greedy search revealed the following sequence of weather variable importance from the forward selection approach: average direct normal irradiance, average surface temperature, maximum direct normal irradiance, average precipitation, average relative humidity, and maximum surface temperature. Notably, the ranking of the variables was different, but the absolute change in RMSE scores was minimal. Overall, a correlation of 0.894 between predicted and observed yields in the testing and validation sets was attained, largely capturing the differences in performance between environments and years. However, the model remains somewhat limited in its ability to generate genotype-specific yield predictions due to the limited complexity of relationships which can be modeled using LSTM and a lack of genomic information on each genotype. Since the lack of molecular marker data for each line precludes us from leveraging genomic prediction, its integration with the LSTM model is the next step for the approach presented in this paper. As currently implemented, the model's average absolute error is 5.4 bu/acre, which is reasonable given the levels of variability within a given environment/year combination. For example, in Ames, IA, during 2003, yields ranged from 33.3-55.3 bu/acre. Despite this wide range, an average error of only 4.5 bu/ac was observed for this environment. No perceptible trends were observed when we looked at statewide results combined over years. We also looked at the originating breeding state as well as private company entries, and no geographical trends were noticeable. Both proposed models (Stacked LSTM, Temporal Attention) showed similar performance, and results improved when more information was included (Fig.~\ref{prediction_results}). The coefficient of determination was highest (0.802) when information from all the sources (maturity group, genotype cluster, weather variables) was incorporated. The best model performance (test RMSE = 7.130) was approximately 14\% of the average yield for the test set (50.745) and 44.5\% of the standard deviation (16.019) (Fig.~\ref{prediction_results}). Comparatively, a test RMSE of 12.779 was obtained from least absolute shrinkage and selection operator (LASSO) regression, while the Random Forest test RMSE was 9.889 with the same input features. Therefore, both the Stacked LSTM and Temporal Attention models outperform the LASSO and RF models. In comparison with the data-driven state-of-the-art USDA model \cite{jewison2013USDA}, our deep learning approach performs significantly better, demonstrating much lower absolute errors. The USDA approach uses linear regression with coefficients based on historical statewide yields and weather averages. However, the USDA model does not predict performance for individual locations. Due to this limitation, we compare the results of our model with the USDA model using year-wise averages across states for the test set. In comparison with the USDA model, the absolute errors of our model are lower in 12 of the 13 years (all years except 2011). For 2014 and 2015, the absolute errors of the deep learning model were 0.03 and 0.35 (compared to 1.32 and 1.70 for the USDA model), respectively. Detailed comparison results are provided in the supplementary material (Supplementary Table 10). 
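As a quick arithmetic check of the normalized error figures quoted above, the best test RMSE can be related to the test-set mean and standard deviation as follows (values taken from the text):
\begin{verbatim}
# Reproducing the normalized-error figures reported above.
rmse, mean_yield, std_yield = 7.130, 50.745, 16.019
print(f"RMSE / mean yield = {rmse / mean_yield:.1%}")  # ~14.1%
print(f"RMSE / std dev    = {rmse / std_yield:.1%}")   # ~44.5%
\end{verbatim}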
\begin{figure*}[tbhp] \begin{center} \setlength{\unitlength}{0.012500in}% \includegraphics[width=17cm, keepaspectratio]{fig/attention_results.PNG} \end{center} \caption{Results showing the distribution of attention weights across the entire input sequence (spanning the growing season). Considering different ranges of actual yield, the results are shown for two different maturity groups (MG = 1, MG = 7), representing starkly different geo-climatic regions (Fig.~\ref{map_details}). Early-season variables were observed to be comparatively less important for prediction of the highest yielding genotypes.} \label{attention_results} \end{figure*} In addition to accurate yield prediction, the Temporal Attention Model provided insights (Fig.~\ref{attention_results}) about how early-season variables were less important for yield prediction in the highest yielding genotypes for two geographically distinct maturity groups: MG1 (Northern US adaptation) and MG7 (Southern US adaptation). We observed mild sigmoid curves for the highest yielding group in the case of both MG1 and MG7. However, we note that while MG1 had a large number of plots ($\approx 550$) in the highest yielding group, MG7 had only about $30$ such plots. The attention weights point to the increasing importance of features in the August--September time frame for both the northern and southern US regions. These time frames coincide with crop reproductive phases, emphasizing their importance for the final yield; this observation needs functional validation, which is outside the scope of our research. Nevertheless, it is an example of the hypothesis-generation advantage of these models, motivating future research. \section*{Discussion} We establish the potential of a long short-term memory-based method for yield prediction, allowing models to account for temporal differences in the occurrence of weather events. Predictions using this system can be made reasonably accurate due to the large amount of training data made available through the mining of historical records. Our approach (using LSTM and attention) is an efficient modeling scheme for analyzing soybean crop growth interaction with the weather, for identifying hypotheses for plasticity, and for identifying key physio-environmental features that are important to include in any predictive model. For example, differences in the timing of extreme heat events, as well as drought periods, would affect soybean plants in various ways depending on the stage of plant development. In particular, heat stress during flowering is especially damaging, while heat during vegetative stages of development may not significantly reduce harvested yield~\cite{westgate1993flower}. With a larger, more encompassing dataset, breeders and researchers can be empowered to parse out the most informative time periods, weather variables, and crop responses. This information sets up the framework for breeding strategies to develop climate-resilient and climate-responsive varieties. Our results -- via our hypothesis generation approach -- show a potential mismatch in the heuristic/empirical results for the importance of weather variables. The finding of minimum surface temperature as the most significant weather variable suggests that nighttime temperatures play a larger role in yield prediction than previously suggested~\cite{gibson1996influence}. Our study has a retrospective design and cannot conclude definitively that this is the case; however, these findings necessitate further empirical investigations and can be used to formulate the next set of hypotheses. 
Our findings are significant, as minimum temperatures have been reported to be increasing at a faster rate than maximum temperatures~\cite{karl1993asymmetric}. More studies are needed to ascertain the relative importance of these variables; such studies can motivate morpho-physiologically attentive breeding approaches for assembling sturdier varieties for future scenarios. A large-capacity machine learning approach, such as the LSTM-RNN presented in this paper, is robust in incorporating weather changes and adjusting performance predictions accordingly. Additional information that may improve the results of this approach includes any supplemental irrigation provided, soil fertility levels, disease pressure and resistance levels, and direct genetic markers for the tested varieties, all of which would further strengthen predictive ability. Therefore, future implementations may be expanded to include genomic data; additional factors such as preceding crop, row spacing, planting date, and soil texture; or additional temporal data in the form of soil sensor measurements and remote sensing data for morphological and physiological traits. The approach presented in this work will further enhance phenomics-assisted breeding that collects in-season data using different sensors and payloads~\cite{parmley2019scirep,parmley2019tpp,gao2018novel}, using machine and deep learning approaches suitable for plant science applications~\cite{singh2016machine,singh2018deep,ghosal2018explainable}. Our work shows a unique strategy to assimilate and utilize complex data for seed yield prediction. For comparative purposes, we compared our models with the RF, LASSO, and data-driven USDA models. The USDA model is limited in the type of data it can utilize and in its application. For example, as the USDA model computes predictions at the state level, the finer resolution available with our model may help in making regional marketing decisions, as well as in creating yield predictions which can capture intra-state variation due to factors such as differences in rainfall across different areas of a state. Since our results are built on more than a decade of data, they also suggest that early-season weather variables are less useful for seed yield prediction; empirical evidence is needed to confirm the genetic variability in the plasticity of soybean genotypes during the earlier stages of growth and development. Importantly, we emphasize that the utilization of the attention module within an LSTM framework allows us to tease out potentially important features for further testing. This alleviates the disadvantage of DL models -- which otherwise serve as purely blackbox predictive models -- by allowing for hypothesis generation that enables scientific insight via targeted follow-up analyses and experiments. The advantages of LSTM-based models have recently been established for maize yield prediction at the county level \cite{jiang2019deep}, but the model lacked interpretability. An attention-based LSTM with multi-task learning (MTL) output layers has also been used for maize yield prediction using county-level meteorological data (maximum daily temperature, minimum daily temperature, and daily precipitation) \cite{lin2020deepcropnet}. These studies are important for addressing the yield prediction challenge; however, the models are based on geospatial data without field-scale farming management data, variety information is indiscernible, and only a limited set of weather variables is used. 
In our soybean study, we included seven weather variables and detailed field-scale farming data with multiple maturity groups spanning the continental U.S. and full variety representation. We have shown that an LSTM-based approach can improve seed yield prediction accuracy due to its ability to identify both the temporal effects of weather events and the relative importance of various weather variables for crop yield prediction. Developing an explainable yield prediction model using an attention mechanism is an attractive advance. The basic LSTM framework for phenotypic prediction can be applied to any crop with weather-dependent variability in order to better understand the genotype $\times$ environment effects found in the course of multi-environment testing. As such, this approach can be immediately useful for researchers in a variety of crops and environments and may prove to be exceptionally powerful when used in collaborative efforts between researchers operating in contrasting climatic zones, and in conjunction with sensor data for prescriptive breeding \cite{parmley2019scirep}, including for root traits \cite{falk2020computer}. The insights provided by our model can help in understanding the impact of weather variability on agricultural production in the presence of climate change, and in devising breeding strategies for variety plasticity to circumvent these climatic challenges. The ability to make accurate predictions of crop performance can lead to optimization across many different levels of organization. At the federal level, improved crop insurance recommendations can be made based on weather forecasts before planting, and be continually updated throughout the season as more data are recorded and forecasts are updated. Railroads, grain cooperatives, and end-users can streamline the logistics of handling the desired quantities of grain if they have a better understanding of how much grain (and of what quality) will be produced in a given region. Farmers can make better marketing decisions if they have an accurate, high-confidence prediction of their production for the year, allowing them to sell their crops at the most opportune time. We envision that similar work on other crops and over a longer time span will generate invaluable insights for cultivar development and for plant breeding and production-related research in a challenging climate. \section*{Conclusion} Unraveling causality would be a substantial step forward in understanding the impact of climate change on varietal plasticity. Viewed through the lens of causality, DL-based and process-based predictive models have distinct pros and cons. Process-based models have clear causal relationships (by construction); however, causality is limited to the confines of the model parameters, and it is non-trivial to assimilate additional data to extract broader causal trends. On the other hand, incorporating causality into DL-based models is an open and active problem in the AI/ML community, and no principled approaches yet exist to accomplish this. However, DL-based models (in contrast to process-based models) have the ability to seamlessly assimilate additional data. Our vision is therefore to evaluate whether systematically augmenting DL-based predictive models with increasing amounts of physio-morphologically informative features provides a way towards unraveling causal relationships. We accomplish this by deploying our DL framework as a ``hypothesis generation tool''. 
We build DL models using a large volume of data and a variety of information incorporating domain-based knowledge. We then systematically probe the impact of various physio-morphological and environmental parameters on yield (via sensitivity analysis and ``what if'' scenario evaluation), and establish a framework to generate hypotheses for different crop species and physio-morphological characteristics under different climatic conditions. Until causality-based DL becomes feasible, such hypothesis-generating DL models will have the greatest impact in meeting the needs of climate change scenarios and in incorporating plasticity responses into future varieties. \section*{Acknowledgements} Funding for this project was provided by the Iowa Soybean Association (AKS), the Monsanto Chair in Soybean Breeding (AKS), the RF Baker Center for Plant Breeding (AKS), the Plant Sciences Institute (SS, BG and AKS), USDA (SS, BG, AKS), NSF NRT (graduate fellowship to JS), and ISU's Presidential Interdisciplinary Research Initiative (AKS, BG, SS). The authors thank Vikas Chawla for his assistance with querying weather data for this project. \section*{Author contributions statement} A.K.S., J.S., S.S. and B.G. conceived the research; all authors contributed to the design of the analysis and interpretation; J.S. compiled the UST performance and pedigree data; T.G. and J.S. performed the statistical analysis; T.G. and L.W. built the machine learning models, and the results were interpreted by T.G. and J.S. with inputs from S.S., A.K.S., and B.G.; J.S. and T.G. wrote the first draft with inputs from A.K.S. and S.S.; all authors contributed to the development of the manuscript.
\section{Introduction}\label{sec_intro} \vspace{-0.05in} From 1999 to 2019, almost 500,000 people died from overdose in the USA \cite{cdc19992019}; overdose is the leading cause of death for 25- to 64-year-old US citizens. Opioid overdose is a life-threatening condition that, if not treated within minutes, can lead to severe neurological damage and death \cite{BucklandDesignConsider}. The chance of survival of an opiate overdose victim decreases by ten percent with each minute that passes before resuscitation is attempted \cite{2019Burroughs,ornato2020feasibility}. Although emergency medical services (EMS) are located to optimize access to the population, the median arrival time of US EMS is between 7 and 8 minutes and can exceed 14 minutes in rural, geographically challenged, or high-traffic urban areas \cite{Johnson2021impact}. In that respect, \citeauthor{gao2020dynamic} report that lessening the response time by one minute traditionally requires adding ambulances, each costing around \$200,000, whereas a \$10,000 drone can decrease the response time by two minutes. Consequently, medical drones are viewed as a promising way to reduce response times and to enhance EMS performance \cite{PULSIRI19}. This motivates us to design a drone-based network to provide timely medical treatment and to study its potential to increase the chance of survival of overdose victims. \subsection{Background}\label{BACK} The opioid overdose epidemic is ramping up, as the number of incidents has quadrupled in the last 15 years. In the midst of this crisis, there are about 130 opioid overdose deaths each day \cite{ornato2020feasibility}. The first wave of overdoses began in the 1990s with an increase in cases related to prescription opioids. The second wave, in 2010, was due to heroin, while the third one, in 2013, was caused by synthetic opioids, in particular illicitly manufactured Fentanyl \cite{cdcopioid}. The ongoing COVID-19 pandemic has further exacerbated the opioid health crisis \cite{SLAVOVA}. According to the Centers for Disease Control and Prevention (CDC) \cite{cdc81000}, over 81,000 drug overdose deaths occurred in the USA between May 2019 and May 2020. Synthetic opioids are the primary driver behind the surge in overdose deaths: the death counts related to synthetic opioids increased by 38.4\% from the 12-month period ending in June 2019 to the 12-month period ending in May 2020. In addition to the substantial human toll, opioid overdose poses a great burden on the economy. The Society of Actuaries \cite{soa2019} estimated that the total economic cost of opioid misuse amounted to at least \$631 billion from 2015 to 2019 in the USA, including the costs associated with healthcare services, premature mortality, criminal justice activity, and related educational and assistance programs. As an illustration, the White House Council of Economic Advisers (CEA) estimated the 2015 total cost of opioid overdoses at \$504 billion, which includes \$431.7 billion due to mortality costs and \$72.3 billion due to health, productivity, and criminal justice costs \cite{2018Brill}. Many overdose-related deaths are preventable if naloxone, an overdose-reversal medication, can be administered within minutes following the breath cessation of the overdose victim \cite{BucklandDesignConsider}. 
Naloxone can be safely administered by a layperson witnessing an overdose incident \cite{whowebiste2020}, and its increased utilization has contributed to reducing the mortality rate of opioid overdoses, as stressed in \cite{LynnOverdoseStats}. As opioids can slow or even entirely stop breathing, permanent brain damage can occur within only four minutes of oxygen deprivation. This underscores the need to administer naloxone in a timely manner to prevent brain damage or death and explains why naloxone access is one of the US Department of Health and Human Services' three priority areas to combat the opioid crisis~\cite{ToddAlexOpioid}. In order to mitigate the opioid crisis, \citeauthor{ornato2020feasibility} advocate the development of a system that allows 911 dispatchers, trained as drone pilots, to swiftly deliver naloxone with drones to bystanders and to guide them in administering naloxone. Drones, also known as unmanned aerial vehicles (UAVs), are an emerging technology that can accomplish time-sensitive medical delivery tasks by shortening the time-to-scene response, particularly in areas lacking an established transportation network. A test in Detroit showed that the DJI `Inspire 2' drones were systematically faster than ambulances -- the traditional first responders -- in delivering a naloxone nasal spray toolkit \cite{TukelTimeToScene}. In order to make the drone-based naloxone delivery strategy a reality and to leverage its full capability, drones need to be strategically deployed and assigned in a timely manner to the randomly arriving overdose-triggered requests (OTRs) to account for the spatial and temporal uncertainty of opioid overdose incidents. The design of drone networks that sets up drone bases, prepositions drones, and determines their assignment to overdose emergencies remains an open problem in the literature. In that regard, \citeauthor{BucklandDesignConsider} stress the need to develop drone network optimization models for opioid overdoses so that dispatchers ``\textit{understand when and where to dispatch drones}''. This study aims to fill this gap in the academic literature and in EMS practice. \vspace{-0.05in} \subsection{Structure and Contributions} \label{STRU} \vspace{-0.05in} The contributions of this study are fourfold, spanning modeling, reformulation, algorithmic development, and emergency medical service practice. \noindent {\bf Modeling:} We develop a novel network design queueing-optimization model to help EMS operators respond quickly and efficiently to an opioid overdose via the drone-based delivery of naloxone. The proposed model is a stochastic service system design model with congestion \cite{BERMAN-KRASS2019} and determines the location of drone bases (DBs) and drones as well as the assignment and dispatching of drones to overdose incidents. The model represents the drone network as a collection of $M/G/K$ queuing systems in which the capacity $K$ of each system (i.e., the number of drones at each opened DB) is unknown ex-ante and modelled as a decision variable. To our knowledge, there is no such model in the literature. \noindent {\bf Reformulation:} The base formulation of the proposed drone network design model is an NP-hard nonlinear integer problem which includes fractional, polynomial, exponential, and factorial terms. We derive in Theorem \ref{T2} a lifted mixed-integer linear programming (MILP) reformulation. 
Theorem \ref{TH-GEN} shows that the reformulation method is highly generalizable and that optimization models minimizing the average response time in networks organized as a series of interdependent $M/G/K$ queuing systems with an ex-ante unknown number of servers $K$ are always MILP-representable. \noindent {\bf Algorithm:} We design two methods to solve the lifted MILP reformulation. The first one is an outer approximation algorithm based on the concept of lazy constraints to attenuate the challenges posed by the large dimension of the constraint space in the reformulation. The second and most efficient algorithm is an outer approximation branch-and-cut algorithm that involves the dynamic incorporation of valid inequalities and optimality-based cuts. \noindent {\bf Data-driven EMS insights:} Using real-life overdose data, we first demonstrate the extent to which the drone network contributes to reducing the response time, increasing the survival chance of overdose victims, and thereby saving lives. A cross-validation analysis underlines the applicability~and~robustness of the network and its performance. Additional tests attest to the low cost of implementing a drone network and its flexibility in adjusting to the spatio-temporal uncertainty of overdose incidents. The quality-adjusted life year (QALY) analysis underscores the largely increased quality of life and the related low incremental QALY cost. Second, we show the computational efficiency and scalability of the reformulation and algorithms. The remainder of this paper is organized as follows. The literature review of the various fields intersecting with this study is in Section \ref{litrev}. Section \ref{sec_model} derives the closed-form formulas for the queueing metrics used to assess the performance of the drone response network, presents the base formulation of the proposed model, and analyzes its complexity. Section \ref{sec_reformulation} is devoted to the reformulation method, while Section \ref{sec_ALGO} describes the proposed algorithms. Section \ref{sec_TESTS} describes the data-driven study based on real-life overdose data from Virginia Beach, presents the computational efficiency of the method, and analyzes the healthcare benefits obtained by using drones to respond to overdose incidents. Section \ref{sec_conclusion} provides concluding remarks. \section{Literature Review and Gaps}\label{litrev} This study proposes contributions spanning three fields -- drone-based response to opioid overdose, network design queueing models with mobile servers, and fractional 0-1 programming -- which we review succinctly below. \subsection{Drone-based Response to Opioid Overdose} EMSs struggle to provide a timely response to medical emergencies, leading to patients passing away due to the delayed arrival of ground ambulances \cite{HART,NIMILAN}. Medical drones are increasingly viewed as an efficient solution to assist EMS professionals, including for overdoses. They can quickly deliver medicine to patients and take on-site imagery to support EMS personnel and operations before the arrival of ambulances \cite{PULSIRI19}. We review next the scant literature on the use of drones for responding~to~overdoses. \citeauthor{ye2019optdronenetwork} propose a drone-delivery optimization model for overdoses. The model is solved with a genetic algorithm and is tested on suspected opioid overdose data from Durham County, NC. 
Their model reduces the response time to 4 minutes 38 seconds (instead of 10 minutes 46 seconds with ambulances) by using four drone bases that provide 64.2\% coverage of the county. \citeauthor{gao2020dynamic} propose an assignment Markov decision model to select which drone should be dispatched to a reported overdose incident and to relocate the drone afterwards. Confronted with the curse of dimensionality, the authors develop a state aggregation heuristic to derive a lookup table policy. A simulation based on EMS data from Indiana reveals that the state aggregation approach outperforms the myopic policy currently used in practice, in particular when the overall request intensity gets higher. The authors report the difficulty of properly estimating the value function used within their approach. In a pilot feasibility study based on 30 simulated opioid overdose events, \citeauthor{ornato2020feasibility} report that all participants were able to administer the intranasal naloxone medication to the manikin within about two minutes of the 911 contact and that 97\% of them felt confident that they could do so successfully in a real event. None of the above models jointly consider the location of the fixed (drone bases) and mobile (drones) servers and the assignment decisions. Additionally, these models are all covering models in which the objective is to minimize the total costs or the number of drones and/or stations. Our model contributes to filling these two gaps in this literature. Our strategic network model 1) concurrently determines the location of the drone bases, the positioning of the drones, and the drone-dispatching policy for OTRs, and 2) is a survival network design model, as it minimizes the response time, which is the primary driver for increasing the survival chance of overdosed patients. \subsection{Queueing-optimization Models for Stochastic Network Design} Queueing models can be categorized into two types: {\it descriptive} queuing models, which conduct an after-the-fact analysis of how a predefined system configuration has performed, and {\it optimization-based} ({\it prescriptive}) queuing models, which deal with congestion and are used for decision-making purposes (e.g., number of servers, location) \cite{MARIANOV1994QueuingProb}. We propose a new optimization-based queueing model that belongs to the class of stochastic network design queueing models with congestion \cite{BERMAN-KRASS2019}. This model class can be further decomposed into two sub-categories depending on whether the servers are fixed (immobile), requiring the customers to travel to the facility, or mobile, traveling to the demand location. Since we consider the delivery of naloxone with drones, our literature review focuses on mobile servers, for which the literature is much less extensive than for immobile servers (see, e.g., \cite{ANJOS}). The first queuing optimization model with a mobile server \cite{berman1985optimal} determines the optimal location of a single facility in order to either maximize the service coverage or minimize the response cost. Modelling the network as an M/G/1 system, the authors consider that, if the server is busy when the demand arrives, the demand can either be rejected (the demand is lost and covered by backup servers at a cost) or enter a queue managed under the first-come-first-served discipline. 
Building on this study, Chiu and Larson \cite{CHIU1985LocANSev} consider a single facility operating as an M/G/k loss queuing system and show that, under a mild assumption, the optimal location reduces to a Hakimi median, which is the location minimizing the average travel time to a client \cite{Hakimi1964}. Batta and Berman \cite{batta1989location} study an M/G/k system that allows for queues with a fixed number of servers $K$ and develop an approximation-based approach to minimize the average response time. While the above studies consider a single facility, the queuing probabilistic location set-covering model proposed in \cite{MARIANOV1994QueuingProb} considers several facilities, each operating as an M/M/k queuing system. The authors determine the minimum number of facilities needed so that the probability of at least one server being available is greater than or equal to a certain threshold. Later, in the setting of an M/G/k system, \citeauthor{MARIANOV1996QueuingMax} \cite{MARIANOV1996QueuingMax} study the probabilistic version of the maximal covering problem to show how to site a limited number of emergency vehicles with the objective of maximizing availability when a call arrives. Considering an M/M/k system, \citeauthor{Aboolian2008LocAlloc} determine the optimal number of servers needed to minimize a cost function encompassing the setup cost to open facilities and operate servers, the travel costs, and the queuing delay costs. In the context of EMSs, \citeauthor{boutilier2017optimizing} propose a decoupled two-stage approach to set up the drone-based delivery of automated external defibrillators (AEDs) in Toronto. The first stage involves the heuristic solution of a set covering model to determine the number of DBs to open. In the second stage, each DB is assumed to operate as an M/M/k system and the objective, given the locations of the opened DBs set up in stage 1, is to minimize the number of drones needed to meet a specified response time goal. The expected service time of each drone and the arrival rates are fixed parameters determined ex-ante in an exogenous manner. An ongoing study \cite{LEMA22} considers an M/G/1 queuing system for the delivery of AEDs to cardiac arrests and proposes new optimality-based bound tightening techniques. In contrast to some of the above studies (see, e.g., \cite{boutilier2017optimizing}), this work considers that the service time and arrival rate of (overdose) requests assigned to drones are random variables whose parameters (means) are endogenized. In addition, the proposed model treats the capacity of the DBs as a decision variable; that is, the number of drones positioned at each DB is a decision variable. To our knowledge, this study is the first one that considers a system built as a collection of M/G/K$_j$ servers in which the capacity, or number of mobile servers, $K_j$ positioned at each facility is a bounded variable whose value can vary across the open DBs. We are not aware of any study that proposes a queuing-based optimization model to design a network of drones to deliver naloxone by locating and allocating drones in a stochastic environment that accounts for system congestion. \subsection{Fractional Nonlinear 0-1 Programming} \vspace{-0.07in} As will be shown in Section \ref{sec_formu}, the optimization problem studied is a nonlinear fractional integer problem in which the objective function is a sum of ratio terms, each being a ratio of two nonlinear functions of integer variables. 
The class of problems closest to ours is that of fractional linear 0-1 (binary) problems, for which significant improvements have been made over the last 5-6 years \cite{BGP16,MGP}. Such problems pose serious computational challenges due to the pseudo-convexity and the combinatorial nature of the fractional objective. They minimize a single ratio or a sum of ratios in which the denominator and numerator are both linear functions of binary variables. When more than one ratio term appears in the objective function, the problem (even unconstrained) has been shown to be NP-hard \cite{28}. One approach to tackling fractional linear 0-1 programs is to move the fractional terms to the constraint set and to then derive MILP reformulations, which requires the introduction of continuous auxiliary variables and big-M constraints (see, e.g., \cite{22}). Such MILP reformulations can however struggle -- in particular when the number of ratio terms increases -- due to the significant lifting of the decision and constraint spaces and the looseness of the continuous relaxation induced by the big-M constraints. Within this family, an MILP reformulation \cite{BGP16} based on the binary expansion of the integer-valued expressions appearing in the ratio terms permits a significant reduction in the number of bilinear terms and hence in the number of linearization variables and constraints. This formulation scales much better but can also generate weak continuous relaxations, which hurts the convergence of the branch-and-bound algorithm. Mixed-integer second-order cone reformulations have also been proposed (e.g., \cite{33}). They are based on the submodularity concept and the derivation of extended polymatroid cuts \cite{5} to obtain tighter continuous relaxations. However, it is reported \cite{MGP} that state-of-the-art optimization solvers still struggle to solve moderate-size mixed-integer conic problems and that their performance degrades quickly as the size increases. Building upon the links between MILP and mixed-integer conic reformulations, \citeauthor{MGP} derive tight convex continuous relaxations while limiting the lifting of the decision and constraint spaces. The above methods, while providing very valuable insights, cannot be directly applied to the problem tackled here, as the latter differs from the fractional 0-1 problem along several dimensions. First, the denominator and the numerator of each ratio term are nonconvex functions, and each involves binary as well as general integer decision variables. Second, the objective function is the sum of a very large number (several hundred) of ratio terms. Third, the integer variables are multiplied by fractional parameters, which prohibits the use of the MILP method of \cite{BGP16}. Fourth, the nonlinear terms are not simply bilinear: the numerator of each ratio term includes exponential and higher-degree polynomial terms, while each denominator includes polynomial and factorial terms. \vspace{-0.05in} \section{Drone Network Design Problem for Opioid Overdose Response} \label{sec_model} \subsection{Problem Description and Notations}\label{subsec_description} The problem studied in this paper is called the Drone Network Design Problem for Opioid Overdose Response (DNDP). Consider an EMS provider that seeks to design a drone network to deliver naloxone, i.e., an opioid antagonist, to opioid overdose incidents with the objective of minimizing the response time and thereby maximizing the chance of survival of the overdosed patient. 
Given a set of candidate locations (e.g., fire, police, and EMS stations), some are selected to be set up as drone bases where the available medical drones can be deployed. The DNDP model is a bilocation-allocation problem that simultaneously determines the locations of the DBs, the capacity of each (i.e., the number of drones at each DB), and the response policy determined by the assignment of drones to OTRs, in order to minimize the response time. The response time is defined as the sum of the queuing delay experienced if drones are busy when requested for service and the drone flight time from a DB to an overdose location. The following notation is used in the formulation of the model (see Appendix \ref{sec:notations} for a description of all notations). Let $I$ be the set of opioid overdose locations and $J$ be the set of potential DB locations. Let $A_j$ be a vector of parameters $(a_j, b_j, c_j)$ representing the coordinates of DB $j$ in the earth-centered, earth-fixed coordinate system, while $A_i$ is a vector of parameters $(a_i, b_i, c_i)$ representing the location $i$ of an overdose. The parameter $d_{ij} = \Vert A_i - A_j \Vert$ is the Euclidean distance between DB $j$ and overdose location $i$. A number $p$ of drones with speed $v$ can be deployed at up to $q$ ($q\leq p$) open DBs across the network. Due to battery and autonomy limitations, each drone has a limited coverage defined by its catchment area with radius $r$. A drone can only service OTRs at locations that are within its catchment radius \cite{boutilier2017optimizing,chauhan2019maximum}, since the drone must have enough power (i.e., battery coverage) to return to its base. Accordingly, we define the sets $J_i, i \in I$ (resp., $I_j, j \in J$) which include all the DBs (resp., possible overdose locations) that are within $r$ (i.e., the catchment area) of overdose location $i$ (resp., DB $j$). The binary decision variable $x_j$ is equal to 1 if a DB is set up at location $j$, and is 0 otherwise. The binary variable $y_{ij}$ takes value 1 if an OTR at $i$ is assigned to DB $j$, and is equal to 0 otherwise. The general integer decision variable $K_j$ represents the number of drones deployed at DB $j$. By convention, we denote the upper and lower bounds of any decision variable $x$ by $\overline{x}$ and $\underline{x}$, respectively. \subsection{DNDP Model: Collection of $M/G/K_j$ Queueing Systems with Unknown~Capacity}\label{subsec_queue} The stochastic nature of the occurrence of overdoses and the resulting uncertainty about the arrival rates of OTRs at DBs, along with the uncertain service times, can cause delays, requests being queued, and waiting times until a drone can be dispatched. In order to account for these, we model the network as a collection of $M/G/K$ queues in which each DB $j$ operates as an $M/G/K_j$ queueing system where the capacity $K_j$ of DB $j$ is a bounded general integer decision variable representing the number of drones to be deployed at DB $j$. The occurrence of opioid overdoses at any location $i$ follows a Poisson distribution with arrival rate $\lambda_i$. The service time of drones is a random variable with general distribution and with known first and second moments. The arrival rate $\eta_j$ of OTRs at DB $j$ can be inferred to be a Poisson process since it is a linear combination of independent Poisson variables (i.e., a weighted sum of assigned OTRs; see \eqref{ARRIVAL}). 
The arrival rate of OTRs at DBs is unknown ex-ante and is defined endogenously via the solution of the optimization problem. The same applies to the drone service time, whose expected value is also endogenized. It follows that the arrival process (i.e., the demand at DBs) and the service times of drones are endogenous sources of uncertainty with decision-dependent parameter (expected value) uncertainty, as coined by \cite{hellemo2018decision}. If a drone cannot be dispatched on the spot after reception of an OTR, the request is placed in a queue depleted in a first-come-first-served manner. After providing service, drones travel back to the DB to be cleaned, recharged, and prepared for the next trip \cite{ornato2020feasibility}. Having presented the structure of the network, we now derive closed-form expressions for the queueing metrics -- response time, queuing delay, and service time -- needed to formulate the DNDP problem. We use the notations $S_{j}$, $R_i$, and $Q_j$ to represent the random variables respectively denoting the total service time at DB $j$, the response time for an overdose at location $i$, and the queueing delay at DB $j$. Unlike for immobile servers, the travel time to and from the scene must be included in the service time for mobile servers \cite{berman2007multiple}. The service time for any OTR at $i$ serviced by a drone placed at DB $j$ is \cite{berman1985optimal} \begin{align} \label{SERV-TIME} S_{ij} & = \frac{d_{ij}}{v} + \alpha_i + (\beta - 1)\frac{d_{ij}}{v} + \epsilon_i \end{align} where $\beta$ is a constant that allows for different travel speeds to and from the scene, and $\alpha_i$ and $\epsilon_i$ are independent and identically distributed (i.i.d.) random variables representing the on-scene service time and the drone's reset time (to be charged and prepared for the next demand), respectively. Let $\xi_i = \alpha_i + \epsilon_i$ with expected value $\mathbb{E}[\xi_i]$. The expected value $\mathbb{E}[S_{ij}]$ of the service time conditional on DB $j$ responding to an OTR at $i$ is: \begin{align} \label{e_s_ij} \mathbb{E}[S_{ij}] & = \beta \frac{d_{ij}}{v} + \mathbb{E}[\xi_i] \ . \end{align} The expected value of the total service time at DB $j$ is the weighted average of the expected service times over all the OTRs serviced (i.e., $y_{ij}=1$) by the drones stationed at $j$ and depends on the dispatching variables $y_{ij}$: \begin{align} \label{e_s_j} \mathbb{E}[S_{j}] & = \frac{\sum_{i \in I_j} \lambda_i y_{ij}\mathbb{E}[S_{ij}]}{\sum_{i \in I_j} \lambda_i y_{ij}} \end{align} while the second moment of the total service time at DB $j$ is: \begin{align} \label{e_s_j_sqr} \mathbb{E}[S_{j}^2] & = \frac{\sum_{i \in I_j} \lambda_i y_{ij}\mathbb{E}[S_{ij}^2]}{\sum_{i \in I_j} \lambda_i y_{ij}} \ . \end{align} The service time $S_j$ follows a general unspecified distribution with known first and second moments, and the opioid overdoses occur at each location $i$ according to a Poisson process with rate $\lambda_i$. Accordingly, each DB $j$ is modelled as an M/G/K$_j$ queueing system in which a variable, upper-bounded number $K_j$ of drones can be stationed and whose expected queueing delay is given by \cite{nozaki1978approxi, ROSS2014IntroToProb} \begin{align} \label{e_q_j} \mathbb{E}[Q_j] \approx \frac{\eta_j^{K_j} \mathbb{E}[S_j^2] \mathbb{E}[S_j]^{K_j-1}}{2(K_j-1)! 
which represents the expected waiting time until a drone is ready to be dispatched after reception of an OTR. The arrival rate of OTRs at DB $j$ is given by \begin{equation} \label{ARRIVAL} \eta_j = \sum_{i \in I_j} \lambda_i y_{ij} \ , \ j \in J \end{equation} which shows that the arrival rate is unknown ex-ante and depends on the assignment decisions $y_{ij}$, thereby highlighting the decision-dependent parameter uncertainty of the OTR arrivals at DBs. The expected response time for an OTR at location $i$ is: \begin{align} \label{e_r_i} \mathbb{E}[R_i] = \sum_{j \in J_i} \left(\mathbb{E}[Q_j] + \frac{d_{ij}}{v}\right)y_{ij} \ . \end{align} The metric used in the proposed optimization problem is the average (over all overdoses) of the response time. We denote it by $\bar{R}$ and refer to it as the average response time. \begin{thm}\label{thm_avg_resp} The functional form of the average response time is a fractional expression with nonlinear numerator and denominator: \begin{equation} \bar{R} = \sum_{i \in I} \sum_{j \in J_i} \left[ \frac{\eta_j^{K_j} \mathbb{E}[S_j^2] \mathbb{E}[S_j]^{K_j-1}}{2(K_j-1)! (K_j-\eta_j \mathbb{E}[S_j])^2 \big[ \sum_{n = 0}^{K_j-1} \frac{(\eta_j \mathbb{E}[S_j])^n}{n!} + \frac{(\eta_j \mathbb{E}[S_j])^{K_j}}{(K_j - 1)!(K_j - \eta_j \mathbb{E}[S_j])}\big]} + \frac{d_{ij}}{v}\right]\frac{\lambda_i y_{ij}}{\sum_{l \in I} \lambda_l} \end{equation} \end{thm} \begin{proof} The average response time is the weighted average of the expected response times for each OTR $i$. Therefore, we have: \begin{equation} \label{R1} \bar{R} = \sum_{i \in I} \frac{\lambda_i}{\sum_{l \in I} \lambda_l} \mathbb{E}[R_i] \end{equation} Using the definition \eqref{e_r_i} of $\mathbb{E}[R_i]$, \eqref{R1} becomes: \begin{align} \bar{R} \; = \; \sum_{i \in I} \frac{\lambda_i}{\sum_{l \in I} \lambda_l} \sum_{j \in J_i} \left(\mathbb{E}[Q_j] + \frac{d_{ij}}{v}\right)y_{ij} \; = \; \sum_{i \in I} \sum_{j \in J_i} \label{r_bar_q_j} \left(\mathbb{E}[Q_j] + \frac{d_{ij}}{v}\right) \frac{\lambda_i y_{ij}}{\sum_{l \in I} \lambda_l} \end{align} Expanding $\mathbb{E}[Q_j]$ using \eqref{e_q_j} yields the expression given in Theorem \ref{thm_avg_resp}. \hfill$\Box$ \end{proof} \subsection{DNDP Base Model: Fractional Nonlinear Integer Programming Problem}\label{sec_formu} We now present the base formulation of problem DNDP and analyze its complexity. The base formulation of DNDP belongs to the family of nonlinear fractional integer problems. Before presenting its formulation, we recall three of the most distinctive features of the proposed DNDP model. First, the minimization of the average response time is a survival objective function as it contributes to increasing the survival chance of the victims of opioid overdoses. Indeed, the probability of surviving an overdose is a monotonically decreasing function of the response time. Second, the capacity -- the number of drones positioned at each DB -- of each queuing system is not fixed ex-ante, but is a decision variable. Third, the queueing-optimization model accounts for the presence of congestion and decision-dependent uncertainty as some of the parameters characterizing the Poisson arrival process of OTRs at DBs and their service times are determined endogenously.
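Before turning to the formulation, it is instructive to evaluate the delay term \eqref{e_q_j} numerically. The following minimal Python sketch -- ours, with hypothetical arrival rate and service moments rather than data from the paper -- implements the $M/G/K$ approximation that appears in Theorem \ref{thm_avg_resp}.
\begin{verbatim}
import math

def expected_queueing_delay(eta, ES, ES2, K):
    # Approximate E[Q_j] for an M/G/K queue, cf. equation (e_q_j):
    # eta: OTR arrival rate at the DB, ES / ES2: first and second
    # moments of the service time, K: number of drones at the DB.
    rho = eta * ES
    assert K > rho, "steady-state condition: arrival below capacity"
    erlang_sum = sum(rho**n / math.factorial(n) for n in range(K))
    denom = 2 * math.factorial(K - 1) * (K - rho)**2 * (
        erlang_sum + rho**K / (math.factorial(K - 1) * (K - rho)))
    return eta**K * ES2 * ES**(K - 1) / denom

# Hypothetical DB: 2 drones, 0.05 OTR/min, E[S]=30 min, E[S^2]=1000.
print(expected_queueing_delay(eta=0.05, ES=30.0, ES2=1000.0, K=2))
\end{verbatim}
Such a routine is useful for sanity-checking the closed-form queueing delay terms that appear in the objective function of the formulation below.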
\noindent The fractional nonlinear integer base formulation $\mathbf{B-IFP}$ of problem DNDP is: \begin{subequations}\label{M-BF} \begin{align} \mathbf{B-IFP:} \min & \; \frac{1}{\sum_{l \in I}\lambda_l} \Bigg[ \sum_{i \in I} \sum_{j \in J_i} \frac{\lambda_i d_{ij}y_{ij}}{v} + \notag \\ &\hspace{-2.6cm} \sum_{i \in I} \sum_{j \in J_i} \frac{\lambda_i y_{ij}\sum_{l \in I_j} (\lambda_l y_{lj}\mathbb{E}[S_{lj}^2]) (\sum_{l \in I_j} \lambda_l y_{lj} \mathbb{E}[S_{lj}])^{K_j-1}} {2(K_j-1)! (K_j- \sum_{l \in I_j} \lambda_l y_{lj}\mathbb{E}[S_{lj}] )^2 \big[ \sum_{n = 0}^{K_j-1} \frac{ (\sum_{l \in I_j} \lambda_l y_{lj}\mathbb{E}[S_{lj}])^n}{n!} + \frac{(\sum_{l \in I_j} \lambda_l y_{lj}\mathbb{E}[S_{lj}])^{K_j}} {(K_j - 1)!(K_j - \sum_{l \in I_j} \lambda_l y_{lj}\mathbb{E}[S_{lj}])}\big]} \Bigg] \label{D1_obj} \\ \text{s.to} \ \ & \sum_{i \in I_j} \lambda_i y_{ij} \le K_j \frac{\sum_{i \in I_j} \lambda_i y_{ij}}{ \sum_{i \in I_j} \lambda_i y_{ij}\mathbb{E}[S_{ij}]} , \ \ j \in J \label{steady-state}\\ & \sum_{j \in J_i} y_{ij} = 1, \ \ i \in I \label{assignment}\\ & y_{ij} \le x_j, \ \ i \in I, j \in J_i \label{eq_lim}\\ & \sum_{j \in J} x_j \le q \label{eq_q}\\ & K_j \le M x_j, \ j \in J \label{open_drone}\\ & \sum_{j \in J} K_j = p \label{eq_p}\\ &x_j \in \{0,1\}, \ \ j \in J, \label{binary} \\ & y_{ij} \in \{0, 1\}, \ \ i \in I, j \in J_i \label{y_binary} \\ & K_j \in \mathbb{Z}_+, \ \ j \in J \label{k_j_integer} \end{align} \end{subequations} The objective function \eqref{D1_obj} minimizes the average response time of the network, a key feature given the time-critical nature of opioid overdoses. Given that the queuing formulas used to represent the response time assume that the system is in steady state, we introduce \eqref{steady-state} to ensure the steady-state condition and stability of the queuing system \cite{medhi2002stochastic}. The nonlinear constraints \eqref{steady-state} prevent the arrival rate at any DB from exceeding its service rate. Constraint \eqref{assignment} ensures that each OTR is serviced by exactly one drone. Constraint \eqref{eq_lim} requires that an OTR can only be assigned to an open DB within the drone catchment area. Constraint \eqref{eq_q} limits the number of open DBs from above by the parameter $q$. Constraint \eqref{open_drone} ensures that drones can only be placed at an opened DB. The parameter $M$ defines the maximum number (upper bound) of drones that can be stationed at any DB $j$: $K_j\leq M,j\in J$. As suggested in \cite{MOSHREF} and as implemented in \cite{BAUER,pulver2018optimizing, gao2020dynamic} for medical drone networks, we set $M$ equal to 2 as drones are scarce resources and are typically spread out in order to expand the coverage of the network. Constraint \eqref{eq_p} ensures that all available drones are deployed. The binary and general integer nature of the decision variables is enforced by \eqref{binary}, \eqref{y_binary}, and \eqref{k_j_integer}, respectively, with $\mathbb{Z}_+$ denoting the set of nonnegative integers. Proposition \ref{PROP1} follows immediately from the above discussion. \begin{prop} \label{PROP1} Problem $\mathbf{B-IFP}$ is an NP-hard optimization problem in which: \newline (i) The objective function is neither convex nor concave: the denominator and the numerator in each ratio term of the objective function are nonlinear and nonconvex functions. \newline (ii) The ratio terms can involve division by 0 and be indeterminate.
\newline (iii) The ratio term corresponding to a location where no DB is set up is undefined. \newline (iv) There is a mix of binary and bounded general integer variables. \newline (v) The nonlinear constraints \eqref{steady-state} include fractional and polynomial terms, can be indeterminate, and define a nonconvex feasible area. \newline (vi) The continuous relaxation of $\mathbf{B-IFP}$ is a nonconvex programming problem. \end{prop} \begin{proof} It has been shown \cite{28} that the unconstrained 0-1 linear-fractional problem is NP-hard when the objective function sums two or more ratio terms. The NP-hardness of $\mathbf{B-IFP}$ follows immediately since it adds additional complexity sources (constraints, nonlinear denominator and numerator in ratios, etc.). \newline (i) The numerator and denominator include polynomial and exponential terms. The denominator also has factorial and fractional terms. Both are nonconvex functions. It can be easily shown that the Hessian matrix of the fractional objective function is neither positive semidefinite nor negative semidefinite. \newline (ii) Any term $(K_j- \sum_{l \in I_j} \lambda_l y_{lj}\mathbb{E}[S_{lj}])$ can be equal to 0, which leads to a division by 0, if 1) the service rate is equal to the arrival rate at a DB $j$ or 2) no DB is set up at location $j$, thereby implying $K_j=y_{ij}=0, \forall i \in I_j$. \newline (iii) If no DB is set up at the potential location $j$, no drone can be stationed at $j$, which implies that $K_j=0$. In that case, the denominator includes a term involving the factorial of a negative number: $(K_j-1)! = (-1)!$ \newline (iv) Obvious: see constraints \eqref{binary}-\eqref{k_j_integer}. \newline (v) The right-hand side is fractional with a bilinear numerator and linear denominator. \newline (vi) See part (i). \end{proof} We present in Appendix \ref{APP-ILLU} an illustration for a small network together with the corresponding formulation of the objective function of $\mathbf{B-IFP}$. \vspace{0.025in} \section{Reformulation Framework} \label{sec_reformulation} \vspace{-0.075in} Proposition \ref{PROP1} highlights that the base formulation $\mathbf{B-IFP}$ is an NP-hard fractional nonlinear integer problem and is extremely difficult to solve numerically even for problem instances of small size. We propose in this section an equivalent and computationally tractable reformulation of the problem that takes the form of a mixed-integer linear programming (MILP) problem. We first demonstrate (Theorem \ref{T2}) that an MILP reformulation can be derived when the number of servers is two or less, as relevant to the considered medical drone network problem (see \cite{BAUER,MOSHREF,pulver2018optimizing}), before generalizing this result (Theorem \ref{TH-GEN}) to any $M/G/K$ queueing system with variable number of servers $K_j$ at each facility $j$. The derivation of the reformulation is relatively complex and we split it into two main steps -- presented in Proposition \ref{T1} and Theorem \ref{T2} -- to ease the exposition. Proposition \ref{T1} proposes an equivalent reformulation taking the form of a fractional nonlinear binary problem {\bf R-BFP} with nonconvex continuous relaxation. \begin{prop} \label{T1} Let $\gamma_j^m \in \{0, 1\}, j \in J, m=1,\ldots,M$.
The fractional nonlinear binary~problem \begin{subequations}\label{F-R-BFP} \begin{align} \mathbf{R-BFP}: & \; \min \sum_{i \in I} \sum_{j \in J_i} \frac{y_{ij}d_{ij} \lambda_i}{v \sum_{l \in I} \lambda_l} \; + \; \sum_{i \in I} \sum_{j \in J_i} \sum_{m=1}^{M} \ \frac{y_{ij}\lambda_i}{\sum_{l \in I} \lambda_l} \label{OBJ2} \\ & \hspace{-0.8in} \left[ \frac{\gamma_j^m \sum_{l \in I_j} \lambda_l y_{lj} \mathbb{E}[S_{lj}^2] (\sum_{l \in I_j} \lambda_l y_{lj} \mathbb{E}[S_{lj}])^{m-1}}{2(m-1)! (m-\sum_{l \in I_j} \lambda_l y_{lj} \mathbb{E}[S_{lj}])^2 \big[ \sum_{n = 0}^{m-1} \frac{(\sum_{l \in I_j} \lambda_l y_{lj} \mathbb{E}[S_{lj}])^n}{n!} + \frac{(\sum_{l \in I_j} \lambda_l y_{lj} \mathbb{E}[S_{lj}])^{m}}{(m - 1)!(m - \sum_{l \in I_j} \lambda_l y_{lj} \mathbb{E}[S_{lj}])}\big]} \right] \notag\\ \text{s.to} \; & \eqref{assignment}- \eqref{eq_q} ; \eqref{binary}-\eqref{y_binary} \notag \\ & \sum_{i \in I_j} \lambda_i y_{ij}\mathbb{E}[S_{ij}] \le \sum_{m=1}^{M} m \gamma_j^m , \ \ j \in J \label{steady-state-2}\\ & \sum\limits_{m=1}^{M} m \gamma_j^m \le M x_j, \ j \in J \label{open_drone-2}\\ & \sum\limits_{j \in J}\sum\limits_{m=1}^{M} m\gamma_j^m = p \label{eq_p-2}\\ &\sum\limits_{m=1}^{M} \gamma_j^m \leq 1 \ , \ j \in J \label{NEW1} \\ & \gamma_j^m \in \{0,1\}, \ \ j \in J, m =1, \ldots, M \label{binary2} \end{align} \end{subequations} is equivalent to the fractional nonlinear integer problem $\mathbf{B-IFP}$. \end{prop} \begin{proof} Part (i): We first replace each bounded general integer variable $K_j$ by a weighted sum of $M$ binary variables $\gamma_j^m, m=1,\ldots,M$ in the set of constraints. For this variable substitution to work, the following relationship must be enforced: \begin{equation} \label{SUB1} K_j := \sum\limits_{m=1}^{M} m \gamma_j^m \ . \end{equation} This is accomplished in two steps. First, we introduce the linear constraints \vspace{-0.1in} \begin{subequations}\label{SUBSTI} \begin{align} \label{SUB2} \sum\limits_{m=1}^{M} \gamma_j^m \leq 1 \ & \\ \label{SUB3} \gamma_j^m \in \{0,1\} \ & \quad m=1,\ldots,M \end{align} \end{subequations} for each $j \in J$ (see \eqref{NEW1} and \eqref{binary2}) to ensure that $\sum\limits_{m=1}^{M} m \gamma_j^m$ can take any integer value in $\{0,1,\ldots,M\}$, which is the restriction imposed on each $K_j$ via \eqref{open_drone} and \eqref{k_j_integer}. Accordingly, we substitute \eqref{NEW1}-\eqref{binary2} for \eqref{k_j_integer} and then replace $K_j$ by the right-side term of \eqref{SUB1} in \eqref{steady-state}, \eqref{open_drone}, and \eqref{eq_p}. The constraints \eqref{steady-state}, \eqref{open_drone}, and \eqref{eq_p} can then be replaced by \eqref{steady-state-2}, \eqref{open_drone-2}, and \eqref{eq_p-2} in the constraint set of $\mathbf{R-BFP}$. We also need to divide both sides of \eqref{steady-state} by $\sum_{i \in I_j} \lambda_i y_{ij}$ to obtain its linear equivalent \eqref{steady-state-2}. This yields the mixed-integer feasible set given in the statement of Proposition \ref{T1}. \noindent Part (ii): We now remove the general integer variables $K_j$ from the objective function. In each ratio term \begin{equation} \label{INTERM0} \frac{\sum_{l \in I_j} \lambda_l y_{lj} \mathbb{E}[S_{lj}^2] (\sum_{l \in I_j} \lambda_l y_{lj} \mathbb{E}[S_{lj}])^{K_j-1}}{2(K_j-1)! (K_j-\sum_{l \in I_j} \lambda_l y_{lj} \mathbb{E}[S_{lj}])^2 \big[ \sum_{n = 0}^{K_j-1} \frac{(\sum_{l \in I_j} \lambda_l y_{lj} \mathbb{E}[S_{lj}])^n}{n!} + \frac{(\sum_{l \in I_j} \lambda_l y_{lj} \mathbb{E}[S_{lj}])^{K_j}}{(K_j - 1)!(K_j - \sum_{l \in I_j} \lambda_l y_{lj} \mathbb{E}[S_{lj}])}\big]} \end{equation}
of the objective function, we first replace the general integer decision variable $K_j$ by the index parameter $m$, which gives: \begin{equation} \label{INTERM1} \frac{\sum_{l \in I_j} \lambda_l y_{lj} \mathbb{E}[S_{lj}^2] (\sum_{l \in I_j} \lambda_l y_{lj} \mathbb{E}[S_{lj}])^{m-1}}{2(m-1)! (m-\sum_{l \in I_j} \lambda_l y_{lj} \mathbb{E}[S_{lj}])^2 \big[ \sum_{n = 0}^{m-1} \frac{(\sum_{l \in I_j} \lambda_l y_{lj} \mathbb{E}[S_{lj}])^n}{n!} + \frac{(\sum_{l \in I_j} \lambda_l y_{lj} \mathbb{E}[S_{lj}])^{m}}{(m - 1)!(m - \sum_{l \in I_j} \lambda_l y_{lj} \mathbb{E}[S_{lj}])}\big]} \end{equation} Next, we multiply each expression \eqref{INTERM1} by the corresponding binary variable $\gamma_j^m$ \begin{equation} \label{INTERM10} \frac{\gamma_j^m \ \sum_{l \in I_j} \lambda_l y_{lj} \mathbb{E}[S_{lj}^2] (\sum_{l \in I_j} \lambda_l y_{lj} \mathbb{E}[S_{lj}])^{m-1}}{2(m-1)! (m-\sum_{l \in I_j} \lambda_l y_{lj} \mathbb{E}[S_{lj}])^2 \big[ \sum_{n = 0}^{m-1} \frac{(\sum_{l \in I_j} \lambda_l y_{lj} \mathbb{E}[S_{lj}])^n}{n!} + \frac{(\sum_{l \in I_j} \lambda_l y_{lj} \mathbb{E}[S_{lj}])^{m}}{(m - 1)!(m - \sum_{l \in I_j} \lambda_l y_{lj} \mathbb{E}[S_{lj}])}\big]} \end{equation} and sum the resulting ratio terms \eqref{INTERM10} over $m$ ($m=1,\ldots,M$), which represents the possible number of drones positioned at any open DB \begin{equation} \label{INTERM2} \hspace{-0.2cm} \sum_{m=1}^{M} \frac{\gamma_j^m \sum_{l \in I_j} \lambda_l y_{lj} \mathbb{E}[S_{lj}^2] (\sum_{l \in I_j} \lambda_l y_{lj} \mathbb{E}[S_{lj}])^{m-1}}{2(m-1)! (m-\sum_{l \in I_j} \lambda_l y_{lj} \mathbb{E}[S_{lj}])^2 \big[ \sum_{n = 0}^{m-1} \frac{(\sum_{l \in I_j} \lambda_l y_{lj} \mathbb{E}[S_{lj}])^n}{n!} + \frac{(\sum_{l \in I_j} \lambda_l y_{lj} \mathbb{E}[S_{lj}])^{m}}{(m - 1)!(m - \sum_{l \in I_j} \lambda_l y_{lj} \mathbb{E}[S_{lj}])}\big]} \ . \end{equation} Due to \eqref{NEW1}, at most one of the binary variables $\gamma_j^m$ for each $j\in J$ takes a non-zero value (i.e., 1) and, therefore, at most one of the terms in the summation in \eqref{INTERM2} is non-zero. This, combined with the fact that the feasible set of $\mathbf{R-BFP}$ ensures that \eqref{SUB1} holds true for each $j$ (see Part (i)), shows that \eqref{INTERM2} is equivalent to \eqref{INTERM0}. \newline Two cases are now to be considered. First, if for any arbitrary $j \in J$, we have $K_j=0$, then each $\gamma_j^m, m=1,\ldots,M$ is equal to 0 since the constraints in $\mathbf{R-BFP}$ imply \eqref{SUB1}, and the expression in \eqref{INTERM2} is equal to 0. Second, if $K_j\neq 0$ with $K_j \leq M$, then exactly one of the $\gamma_j^m, m=1,\ldots,M$ is equal to 1. More precisely, due to \eqref{SUB1}, we have $\gamma_j^{K_j}=1$ and $\gamma_j^m=0, m=1,\ldots,M, m \neq K_j$. This means that the only term in \eqref{INTERM2} taking a non-zero value is the one for which $m=K_j$, which in turn implies that \eqref{INTERM2} is equal to \eqref{INTERM0}, and provides the result that we set out to prove. \hfill$\Box$ \end{proof} The following comments are worth noting. A first difference with $\mathbf{B-IFP}$ is that $\mathbf{R-BFP}$ has only binary decision variables and does not include any general integer variables $K_j$.
The second difference and advantage of $\mathbf{R-BFP}$ over $\mathbf{B-IFP}$ is that the polynomial terms in the reformulation $\mathbf{R-BFP}$ are of lower degree than those in $\mathbf{B-IFP}$. Third, in $\mathbf{R-BFP}$, there is no decision variable $K_j$ appearing in the upper limits of the summation operations, such as $ \sum_{n = 0}^{K_j-1} (\sum_{l \in I_j} \lambda_l y_{lj} \mathbb{E}[S_{lj}])^n$ in $\mathbf{B-IFP}$. Fourth, there is no exponential term with a variable exponent, such as $(\sum_{l \in I_j} \lambda_l y_{lj} \mathbb{E}[S_{lj}])^{K_j}$, in $\mathbf{R-BFP}$. Fifth, the factorial terms $(K_j-1)!$ in $\mathbf{B-IFP}$ which, besides being nonlinear, can also be undefined if $K_j=0$, are no longer present in $\mathbf{R-BFP}$, in which they are replaced by the fixed parameters $(m-1)!$. The indeterminacy issue is resolved since $m$ is at least equal to 1. The reformulation $\mathbf{R-BFP}$ is valid for any value assigned to $M$. The constraints \eqref{assignment}-\eqref{eq_q}, \eqref{binary}-\eqref{y_binary}, and \eqref{steady-state-2}-\eqref{binary2} representing the feasible area of problem \textbf{R-BFP} collectively define a linear binary feasible set which, to ease the notation, will thereafter be referred to with the notation $\mathcal{BL}$: \begin{equation} \label{FEASIBLESET2} \mathcal{BL} = \left\{(x,y,\gamma) \in \{0,1\}^{|J|+\sum_{i\in I} |J_i|+M|J|} : \eqref{assignment}-\eqref{eq_q}; \eqref{binary}-\eqref{y_binary};\eqref{steady-state-2}-\eqref{binary2}\right\} \ . \end{equation} We derive in Appendix \ref{APP-ILLU} the objective function of problem $\mathbf{R-BFP}$ for a small network. Table \ref{T01} in Appendix \ref{SIZE} specifies the number of constraints and integer decision variables of each type in the base formulation $\mathbf{B-IFP}$ and its reformulation $\mathbf{R-BFP}$. Although simpler than $\mathbf{B-IFP}$, $\mathbf{R-BFP}$ is still a challenging optimization problem as it involves polynomial and fractional terms and its continuous relaxation is nonconvex. We shall now demonstrate in Theorem \ref{T2} that the MILP problem $\mathbf{R-MILP}$ is equivalent to $\mathbf{R-BFP}$ and $\mathbf{B-IFP}$. The proof is split into two main steps: the first moves the fractional terms from the objective function into the constraint set, and the second linearizes the polynomial terms. To shorten the notation in the proof, we replace $\mathbb{E}[S_{lj}]$ and $\mathbb{E}[S_{lj}^{2}]$ by $\tilde{S}_{lj}$ and $\tilde{S}^{2}_{lj}$, respectively. We first consider the medical drone network design problem DNDP in which $M=2$ (i.e., up to two servers per DB). Theorem \ref{TH-GEN} will later demonstrate that an MILP reformulation can be derived for any value of the parameter $M$, or, in other words, for any M/G/K queuing system in which the capacity of each queueing system (i.e., number of servers) is unknown and variable. \begin{thm} \label{T2} Define the index sets $D_j = \{(l,t): l,t \in I_j, l <t \}, j\in J$.
Let $z^j_{lt} \in [0,1]$, $\mu^m_{ij} \in [0,\bar{U}_j^m]$, $\tau^j_{lt} \in [0,\bar{U}^2_j]$, and $\omega^m_{ij} \in [0,\bar{\mu}_{ij}^m]$, $i,l,t \in I_j, (l,t) \in D_j, j\in J, m=1,\ldots,M$, be continuous auxiliary variables defined by the linearization sets: \vspace{-0.2in} \begin{subequations}\label{SZE2} \begin{empheq}[left=\hspace{-0.3in} \mathcal{M}_{z_{lt}^j}\coloneqq \empheqlbrace\ {(y,z)\in \{0,1\}^2\times [0,1]:},right= \empheqrbrace] {align} \label{MAC_z1} &\ z_{lt}^{j} \ge 0 \\ \label{MAC_z2} &\ z_{lt}^{j} \ge y_{tj} + y_{lj}-1 \\ \label{MAC_z3} &\ z_{lt}^{j} \le y_{tj} \\ \label{MAC_z4} &\ z_{lt}^{j} \le y_{lj} \end{empheq} \end{subequations} \begin{subequations}\label{SZE1} \begin{empheq}[left=\hspace{0in}\mathcal{M}_{\mu_{lj}^m}\coloneqq \empheqlbrace\ {(y,U,\mu)\in \{0,1\}\times \mathbb{R}^2_+:}, right=\empheqrbrace]{align} \label{MAC1} & \mu_{lj}^m \ge \underline{U}_{j}^m y_{lj} \\ \label{MAC2} & \mu_{lj}^m \ge \overline{U}_{j}^m(y_{lj} - 1) + U_{j}^m \\ \label{MAC3} & \mu_{lj}^m \le \overline{U}_{j}^m y_{lj} \\ \label{MAC4} & \mu_{lj}^m \le \underline{U}_{j}^m(y_{lj} - 1) + U_{j}^m \end{empheq} \end{subequations} \begin{subequations}\label{SZE3} \begin{empheq}[left=\hspace{0in}\mathcal{M}_{\tau_{lt}^j}\coloneqq \empheqlbrace\ {(z,U,\tau)\in \{0,1\} \times \mathbb{R}^2_+:}, right=\empheqrbrace]{align} \label{MAC_psi1} &\ \tau_{lt}^{j} \ge \underline{U}_{j}^2 z_{lt}^{j} \\ \label{MAC_psi2} &\ \tau_{lt}^{j} \ge \overline{U}_{j}^2(z_{lt}^{j} - 1) + U_j^2 \\ \label{MAC_psi3} &\ \tau_{lt}^{j} \le \overline{U}_{j}^2 z_{lt}^{j} \\ \label{MAC_psi4} &\ \tau_{lt}^{j}\le \underline{U}_{j}^2(z_{lt}^{j} - 1) + U_j^2 \end{empheq} \end{subequations} \begin{subequations}\label{SZE4} \begin{empheq}[left=\hspace{-0.03in}\mathcal{M}_{\omega_{lj}^m}\coloneqq \empheqlbrace\ {(\gamma,\mu,\omega)\in \{0,1\}\times \mathbb{R}^2_+:}, right=\empheqrbrace]{align} \label{MAC_omega11} & \omega_{ij}^{m} \ge \underline{\mu^m_{ij}}\gamma_{j}^m \\ \label{MAC_omega12} & \omega_{ij}^{m} \ge \overline{\mu^m_{ij}} (\gamma_j^m-1)+\mu^m_{ij} \\ \label{MAC_omega13} & \omega_{ij}^{m} \le \overline{\mu^m_{ij}}\gamma_{j}^m \\ \label{MAC_omega14} & \omega_{ij}^{m} \le \underline{\mu^m_{ij}}(\gamma_j^m-1) + \mu^m_{ij} \end{empheq} \end{subequations} The MILP problem $\mathbf{R-MILP}$ \begin{subequations} \label{RDN3} \begin{align} \min & \ \sum_{i \in I} \sum_{j \in J_i} \frac{y_{ij}d_{ij} \lambda_i}{v \sum_{l \in I} \lambda_l} + \sum_{i \in I} \sum_{j \in J_i} \Bigg[ \frac{\omega_{ij}^{1}}{2} + \frac{\omega_{ij}^{2}}{2} \Bigg] \frac{\lambda_i}{\sum_{l \in I} \lambda_l} & \label{obj_lin} \\ \text{s.to} \ & (x,y,\gamma) \in \mathcal{BL} & \notag \\ & U_j^1 = \sum_{l \in I_j}\lambda_l \mu^1_{lj} \tilde{S}_{lj} + \sum_{l \in I_j}\lambda_l y_{lj} \tilde{S}_{lj}^{2} & j \in J \label{U} \\ &\hspace{-0.06in} 4U^2_{j} = \sum_{l \in I_j} (\lambda_l)^2 \tilde{S}_{lj}^{2} (\mu^2_{lj} + y_{lj} \tilde{S}_{lj}) + 2 \sum_{(l,t) \in D_j} \lambda_l \lambda_t \tilde{S}_{tj} (z_{lt}^{j} \tilde{S}_{lj}^{2} + \tilde{S}_{lj} \tau_{lt}^{j}) & j \in J \label{V} \\ &(y,z) \in \mathcal{M}_{z_{lt}^j} & (l,t) \in D_j, j \in J \notag \\ &(y,U,\mu) \in \mathcal{M}_{\mu_{lj}^m} & \hspace{-2cm} l \in I_j, j \in J, m=1,\ldots,M \notag \\ &(z,U,\tau) \in \mathcal{M}_{\tau_{lt}^j} & (l,t) \in D_j, j \in J \notag \\ &(\gamma,\mu,\omega) \in \mathcal{M}_{\omega_{lj}^m} & \hspace{-2cm}i \in I, j \in J_i, m=1,\ldots,M \notag \end{align} \end{subequations} is equivalent to the nonlinear integer problems $\mathbf{B-IFP}$ and $\mathbf{R-BFP}$ for $M=2$.
\end{thm} \begin{proof} For $M = 2$, the fractional terms \begin{equation} \label{FT1} \sum_{m=1}^{M} \left[ \frac{\gamma_j^m \sum_{l \in I_j} \lambda_l y_{lj} \tilde{S}_{lj}^2 (\sum_{l \in I_j} \lambda_l y_{lj} \tilde{S}_{lj})^{m-1}}{2(m-1)! (m-\sum_{l \in I_j} \lambda_l y_{lj} \tilde{S}_{lj})^2 \big[ \sum_{n = 0}^{m-1} \frac{(\sum_{l \in I_j} \lambda_l y_{lj} \tilde{S}_{lj})^n}{n!} + \frac{(\sum_{l \in I_j} \lambda_l y_{lj} \tilde{S}_{lj})^{m}}{(m - 1)!(m - \sum_{l \in I_j} \lambda_l y_{lj} \tilde{S}_{lj})}\big]} \right] \end{equation} in \eqref{OBJ2} can be rewritten as \begin{align*} &\frac{1}{2} \Bigg[ \frac{\gamma_j^1\sum_{l \in I_j}\lambda_l y_{lj}\tilde{S}_{lj}^{2}}{(1 - \sum_{l \in I_j}\lambda_l y_{lj}\tilde{S}_{lj})} + \\ & \frac{ \gamma_j^2 \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj}^2 \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj}} {(2- \sum\limits_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj})^2 + (2- \sum\limits_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj})^2 \sum\limits_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj} + (2- \sum\limits_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj})( \sum\limits_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj})^{2}} \Bigg] \end{align*} In order to remove the fractional terms from the objective function, we introduce the nonnegative auxiliary variables $U_j^m, m=1,2$ defined as: {\small \begin{align} U_j^1 &= \frac{\sum_{l \in I_j}\lambda_l y_{lj}\tilde{S}_{lj}^{2}}{1 - \sum_{l \in I_j}\lambda_l y_{lj}\tilde{S}_{lj}} \label{1drone_cons} \\ U_j^2 &= \frac{ \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj}^2 \sum\limits_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj}}{(2- \sum\limits_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj})^2 + (2- \sum\limits_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj})^2 \sum\limits_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj} + (2- \sum\limits_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj}) (\sum\limits_{l\in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj} )^{2}} \label{2drone_cons} \end{align} } Problem {\bf R-BFP} can now be equivalently rewritten as: \begin{align} \min & \sum_{i \in I} \sum_{j \in J_i} \frac{y_{ij}d_{ij} \lambda_i}{v \sum_{l \in I} \lambda_l} + \sum_{i \in I} \sum_{j \in J_i} \Bigg[ \frac{\gamma_j^1 y_{ij}U_j^1}{2} + \frac{\gamma_j^2 y_{ij}U_j^2}{2} \Bigg] \frac{\lambda_i}{\sum_{l \in I} \lambda_l} \label{OBJ1}\\ \; \text{s.to} \ & \eqref{1drone_cons} - \eqref{2drone_cons} \notag \\ & \ (x,y,\gamma) \in \mathcal{BL} \notag \end{align} The second step is the linearization of the nonconvex equality constraints \eqref{1drone_cons} and \eqref{2drone_cons} and the~polynomial terms of the objective function \eqref{OBJ1}. Multiplying both sides of \eqref{1drone_cons} by $(1 - \sum_{l \in I_j}\lambda_l y_{lj}\tilde{S}_{lj})$ gives \begin{equation} U_j^1 = \sum_{l \in I_j}\lambda_l y_{lj} U^1_{j} \tilde{S}_{lj} + \sum_{l \in I_j} \lambda_l y_{lj} \tilde{S}_{lj}^{2} \; , \; j \in J \label{multi1} \end{equation} which can be linearized as \eqref{U} by introducing the linearization auxiliary variables $\mu^1_{lj}$ and the McCormick inequalities \eqref{MAC1}-\eqref{MAC4} in the set $\mathcal{M}_{\mu_{lj}^1}$ to ensure that $\mu^1_{lj} = y_{lj} U^1_{j}, l\in I_j, j\in J$.
\noindent Similarly, multiplying both sides of \eqref{2drone_cons} by $$(2- \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj})^2 + (2- \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj})^2 \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj} + (2- \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj})( \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj} )^{2}$$ gives \begin{align} & \; U_j^2 \left((2-\sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj})^2+(2-\sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj})^2 \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj} + (2-\sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj})( \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj} )^{2} \right) \notag \\ = & \; \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj}^2 \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj} \label{V_mul_denomi} \end{align} The left-hand side of \eqref{V_mul_denomi} can be rewritten as \begin{subequations} \label{SIMPL} \begin{align} \notag & \; U_j^2 \Bigg((2- \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj}) \times \left[ 2- \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj} + (2- \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj}) \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj} + ( \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj})^{2} \right] \Bigg) \\ \notag = & \; U_j^2 \Bigg( (2- \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj}) \times \left[ 2- \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj} +2 \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj} - (\sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj})^2 + ( \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj})^{2} \right] \Bigg) \\ \notag = & \; U_j^2 \Bigg( (2- \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj}) (2 + \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj}) \Bigg) \; = \; U_j^2 \Bigg( 4- \Big(\sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj}\Big) \Big(\sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj}\Big) \Bigg) \\ \label{V_LHS} = & \; U_j^2 \Bigg( 4- \sum_{l \in I_j} \sum_{t \in I_j} \lambda_{l} \lambda_{t} y_{lj} y_{tj} \tilde{S}_{lj} \tilde{S}_{tj} \Bigg) \ . \end{align} \end{subequations} The double summation expression in \eqref{V_LHS} can be simplified. First, using the idempotency of binary variables, we have $(y_{lj})^2 = y_{lj}$. Second, we split the summands in \eqref{V_LHS} into two parts including respectively the separable bilinear terms $(y_{lj})^2$ and the nonseparable ones $y_{lj}y_{tj}, l \neq t$: \begin{equation} \sum_{l \in I_j} \sum_{t \in I_j} \lambda_{l} \lambda_{t} y_{lj} y_{tj} \tilde{S}_{lj} \tilde{S}_{tj} = \sum_{l \in I_j} (\lambda_l)^2 y_{lj} (\tilde{S}_{lj})^{2} + 2\sum_{(l,t) \in D_j} \lambda_l \lambda_t y_{lj} y_{tj}\tilde{S}_{lj} \tilde{S}_{tj} \ . \label{SIMPLI} \end{equation} Combining \eqref{SIMPL} and \eqref{SIMPLI}, we reformulate the left side of \eqref{V_mul_denomi} as: \begin{equation} U_j^2 \Big( 4- \sum_{l \in I_j} (\lambda_l)^2 y_{lj} (\tilde{S}_{lj})^{2} - 2\sum_{(l,t) \in D_j} \lambda_l \lambda_t y_{lj} y_{tj}\tilde{S}_{lj} \tilde{S}_{tj} \Big) \label{SIMPLI2} \end{equation} Using the same reasoning, the right-hand side of \eqref{V_mul_denomi} is equal to $$ \sum_{l \in I_j} (\lambda_l)^2 y_{lj} \tilde{S}_{lj}^{2} \tilde{S}_{lj} +2\sum_{(l,t) \in D_j} \lambda_l \lambda_t y_{lj} y_{tj}\tilde{S}_{lj}^{2} \tilde{S}_{tj} \ .
$$ Thus, \eqref{2drone_cons} and \eqref{V_mul_denomi} are equivalent to \begin{align} & \ U_j^2 \Big(4 - \sum_{l \in I_j} (\lambda_l)^2 y_{lj} (\tilde{S}_{lj})^{2} - 2\sum_{(l,t) \in D_j} \lambda_{l} \lambda_{t} y_{lj} y_{tj}\tilde{S}_{lj} \tilde{S}_{tj}\Big) \notag \\ = & \ \sum_{l \in I_j} (\lambda_l)^2 y_{lj} \tilde{S}_{lj}^{2} \tilde{S}_{lj} + 2\sum_{(l,t) \in D_j} \lambda_l \lambda_t y_{lj} y_{tj}\tilde{S}_{lj}^{2} \tilde{S}_{tj} \ , \label{2cons_r1} \end{align} which defines a nonlinear equality constraint including bilinear terms $U_j^2 y_{lj}$, $y_{lj} y_{tj}$ and trilinear terms $U_j^2 y_{lj} y_{tj}$, and has therefore a nonconvex feasible area. Next, we linearize the binary bilinear term $y_{lj}y_{tj}$ by introducing the variables $z_{lt}^{j}$ and the linear inequalities \eqref{MAC_z1}-\eqref{MAC_z4} in the set $\mathcal{M}_{z^j_{lt}}$, which ensures that $z_{lt}^{j}:= y_{lj}y_{tj}$ and gives the equality: {\footnotesize \begin{align} \label{2cons_R2} 4U_j^2 - \sum_{l \in I_j} (\lambda_l)^2 U_j^2y_{lj} (\tilde{S}_{lj})^{2} - 2\sum_{(l,t) \in D_j}\lambda_{l} \lambda_{t} U^2_{j}z_{lt}^{j}\tilde{S}_{lj} \tilde{S}_{tj} &= \sum_{l \in I_j} (\lambda_l)^2 y_{lj} \tilde{S}_{lj}^{2} \tilde{S}_{lj} + 2\sum_{(l,t) \in D_j} \lambda_l \lambda_t z_{lt}^{j} \tilde{S}_{lj}^{2} \tilde{S}_{tj} \ , \ j \in J \end{align} } To linearize the remaining bilinear terms $U_j^2y_{lj}$ and $U_j^2z_{lt}^{j}$ in \eqref{2cons_R2}, we respectively use the inequalities \eqref{MAC1}-\eqref{MAC4} and \eqref{MAC_psi1}-\eqref{MAC_psi4} in $\mathcal{M}_{\mu^2_{lj}}$ and $\mathcal{M}_{\tau^j_{lt}}$ to ensure $\mu^2_{lj}:= U_j^2 y_{lj}$ and $\tau^j_{lt}:= U_j^2 z_{lt}^{j}$, which gives us, in fine, a mixed-integer linear feasible set equivalent to \eqref{2drone_cons}. Having linearized the bilinear terms in \eqref{1drone_cons} and \eqref{2drone_cons}, we now do the same for the trilinear terms $\gamma_j^1 y_{ij}U_j^1$ and $\gamma_j^2 y_{ij}U_j^2$ in the objective function \eqref{OBJ1}. First, since $\mu^m_{ij} = U_{j}^m y_{ij}$, the trilinear terms can be reduced to the bilinear terms $\gamma_j^1 \mu^1_{ij}$ and $\gamma_j^2 \mu^2_{ij}$. Introducing the variables $\omega_{ij}^{m}$ such that $\omega_{ij}^{m}:= \gamma_{j}^m\mu^m_{ij}$ due to the McCormick inequalities \eqref{MAC_omega11}-\eqref{MAC_omega14} in the set $\mathcal{M}_{\omega^m_{ij}}$, we have $\omega_{ij}^{m} = \gamma_{j}^m\mu^m_{ij} = \gamma_{j}^m y_{ij} U_{j}^m$, and substituting $\omega_{ij}^{m}$ for $\gamma_{j}^m y_{ij} U^m_j$ gives a linear objective function and completes the proof. \hfill$\Box$ \end{proof} In Appendix \ref{APP-ILLU}, we give the objective formulation of problem $\mathbf{R-MILP}$ for a small network. We shall now demonstrate in Theorem \ref{TH-GEN} that the linearization approach proposed above for an $M/G/K$ queueing system with variable number of servers (capacity) $K$ limited from above to 2 can be extended to any $M/G/K$ system with finite variable capacity $K$. We first introduce two propositions that will be used in the proof of Theorem \ref{TH-GEN}. \begin{prop} \label{P1} Let $y \in \{0,1\}^n$. The polynomial set $\{(y,z) \in \{0,1\}^n \times [0,1]: z = \prod_{i=1}^n y_i \}$ can be equivalently represented with the MILP set defined by: \[ \Big\{ (y,z): z \geq 0, \; z \geq \sum_{i=1}^n y_i - n + 1, \; z \leq y_i, \; i=1,\ldots,n \Big\} \ . \] \end{prop} \begin{prop} \label{P2} Let $x \in [0,\bar{x}]$, $y \in \{0,1\}^n$.
The polynomial set $\{(x,y,z) \in [0,\bar{x}] \times \{0,1\}^n \times [0,\bar{x}]: z = x \prod_{i=1}^n y_i \}$ can be equivalently represented with the MILP set defined by: \[ \Big\{ (x,y,z): z \geq 0, \; z \geq x - \bar{x} \Big(n-\sum_{i=1}^n y_i\Big), \; z \leq \bar{x} y_i, \; z \leq x, \; i=1,\ldots,n \Big\} \ . \] \end{prop} \begin{thm} \label{TH-GEN} A stochastic network design model of form $\mathbf{R-BFP}$ that minimizes the average response time of a network of $M/G/K_j$ queuing systems with variable and finitely bounded number of servers $K_j$ is always MILP-representable. \end{thm} \begin{proof} Each fractional term \eqref{FT1} ($m = 1,\ldots,M$) in the objective function of $\mathbf{R-BFP}$ can be equivalently reformulated as: \begin{equation} \label{FT2} \frac{\gamma_j^m \sum_{l \in I_j} \lambda_l y_{lj} \tilde{S}_{lj}^2 (\sum_{l \in I_j} \lambda_l y_{lj} \tilde{S}_{lj} )^{m-1}} {2\big( (m-1)! (m-\sum_{l \in I_j} \lambda_l y_{lj} \tilde{S}_{lj})^2 \sum_{n = 0}^{m-1} \frac{(\sum_{l \in I_j} \lambda_l y_{lj} \tilde{S}_{lj})^n}{n!} + (m - \sum_{l \in I_j} \lambda_l y_{lj} \tilde{S}_{lj}) (\sum_{l \in I_j} \lambda_l y_{lj} \tilde{S}_{lj})^m \big)} . \end{equation} Let us introduce an auxiliary continuous variable $V_j^m \in [0, \bar{V}_{j}^m]$ for each term ($m$) in \eqref{FT2}: \begin{equation} \label{FT3} V_j^m = \frac{\gamma_j^m \sum_{l \in I_j} \lambda_l y_{lj} \tilde{S}_{lj}^2 (\sum_{l \in I_j} \lambda_l y_{lj} \tilde{S}_{lj} )^{m-1}} {2\big( (m-1)! (m-\sum_{l \in I_j} \lambda_l y_{lj} \tilde{S}_{lj})^2 \sum_{n = 0}^{m-1} \frac{(\sum_{l \in I_j} \lambda_l y_{lj} \tilde{S}_{lj})^n}{n!} + (m - \sum_{l \in I_j} \lambda_l y_{lj} \tilde{S}_{lj}) (\sum_{l \in I_j} \lambda_l y_{lj} \tilde{S}_{lj})^m \big)} \end{equation} Substituting $V_j^m$ for \eqref{FT2} in the objective function \eqref{OBJ2}, problem $\mathbf{R-BFP}$ becomes: \begin{subequations} \label{RDN4} \begin{align} \min & \ \sum_{i \in I} \sum_{j \in J_i} \frac{y_{ij}d_{ij} \lambda_i}{v \sum_{l \in I} \lambda_l} + \sum_{i \in I} \sum_{j \in J_i} \sum_{m=1}^{M} \ \frac{y_{ij}\lambda_i V_j^m}{\sum_{l \in I} \lambda_l} \\ \text{s.to} \ & (x,y,\gamma) \in \mathcal{BL} & \notag \\ & \underbrace{V_j^m (m-1)! (m-\sum_{l \in I_j} \lambda_l y_{lj} \tilde{S}_{lj})^2 \sum_{n = 0}^{m-1} \frac{(\sum_{l \in I_j} \lambda_l y_{lj} \tilde{S}_{lj})^n}{n!}}_{T1} + \underbrace{V_j^m(m - \sum_{l \in I_j} \lambda_l y_{lj} \tilde{S}_{lj}) (\sum_{l \in I_j} \lambda_l y_{lj} \tilde{S}_{lj})^m}_{T2} \notag \\ = \ & \underbrace{1/2 \ \gamma_j^m \sum_{l \in I_j} \lambda_l y_{lj} \tilde{S}_{lj}^2 (\sum_{l \in I_j} \lambda_l y_{lj} \tilde{S}_{lj} )^{m-1}}_{T3} \; , \; j\in J, m = 1,\ldots,M \label{SUBST1} \end{align} \end{subequations} It can be seen from the above that: \begin{itemize} \item The objective function includes bilinear terms involving the product of a continuous variable $V_j^m$ and a binary variable $y_{ij}$. These bilinear terms can be linearized using the approach described in Proposition \ref{P2}. \item The expressions T1 and T2 in \eqref{SUBST1} both include polynomial terms of degree $(m+2)$, with monomials involving the product of a continuous variable and up to $(m+1)$ binary variables. These polynomial terms can be linearized using the approach described in Proposition \ref{P2}. \item The expression T3 on the right side of \eqref{SUBST1} includes polynomial terms of degree $(m+1)$ involving products of $m+1$ binary variables. These polynomial terms can be linearized using the approach described in Proposition \ref{P1}.
\end{itemize} This shows that both the objective function and each constraint \eqref{SUBST1} can be linearized regardless of the value of $M$, which is the result that we set out to prove. \hfill$\Box$ \end{proof} \vspace{-0.1in} \section{Algorithmic Method} \label{sec_ALGO} Although \textbf{R-MILP} is an MILP problem, its solution remains a challenge owing to its combinatorial nature and size. Linearization methods suffer from two possible drawbacks: first, they require the introduction of many decision variables and constraints; second, the resulting continuous relaxations can be very loose (i.e., some of the added constraints are big-M ones). To mitigate these issues, we use the concepts of lazy constraints, valid inequalities, and optimality-based cuts to devise outer~approximation methods whose efficiency and scalability are demonstrated in Section~\ref{sub_sec_compute_e}. \subsection{Outer Approximation Algorithm with Lazy Constraints} \label{sub_sec_lazy_c} While instances of the special variant of model \textbf{R-MILP} in which the maximal number $M$ of drones at each DB is fixed and equal to 1 (i.e., each open DB is an M/G/1 system) can be solved, preliminary experiments reveal that state-of-the-art solvers are unable to solve, within 1 hour, even the root-node continuous relaxation of moderate-sized \textbf{R-MILP} problem instances for $M\geq 2$. To overcome this issue, we derive an MILP relaxation for problem $\mathbf{R-MILP}$ using {\it lazy constraints} (see, e.g., \cite{kleinert2021,lundell2019}) and embed it in an outer approximation algorithm. The motivation is to alleviate the issue caused by the significantly lifted decision and constraint space due to the linearization method. A {\it lazy constraint} is an integral part of the actual constraint set (its removal could lead to invalid solutions) but is unlikely to be binding at the optimal solution. Instead of incorporating all lazy constraints in the formulation, they are grouped in a pool and are at first removed from the constraint set before being (possibly) iteratively and selectively reinstated on an as-needed basis. The targeted benefit is to obtain a reduced-size relaxation or outer approximation problem, which is quicker to solve and tight. One should err on the side of caution when deciding which constraints are set up as lazy. Indeed, the inspection of whether a lazy constraint is violated is carried out each time a new incumbent solution for the outer approximation problem is found, and the computational overhead caused by the possible need to reintroduce violated lazy constraints in the constraint set can be significant. Within this approach, the reduced-size relaxation is solved at each node of the tree. Each time a new incumbent solution is found, a verification is made to check whether any lazy constraint is violated. If so, the incumbent integer solution is discarded and the violated lazy constraints are (re)introduced in the constraint set of all unprocessed nodes of the tree, thereby cutting off the current~solution. We define here the linearization constraints \eqref{MAC_z1}-\eqref{MAC_z4} and \eqref{MAC_psi1}-\eqref{MAC_psi4} in the sets $\mathcal{M}_{z_{lt}^j}$ and $\mathcal{M}_{\tau_{lt}^j}$ as lazy constraints.
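As a minimal illustration of the mechanism -- a sketch of ours using the {\sc Gurobi} Python interface, not the authors' implementation; the model handles and index list are placeholders -- inequalities such as \eqref{MAC_z2} can be registered lazily as follows.
\begin{verbatim}
import gurobipy as gp
from gurobipy import GRB

def lazy_mccormick(model, where):
    # Check the pooled inequalities z^j_lt >= y_lj + y_tj - 1 only when
    # a new integer incumbent is found, and reinstate the violated ones.
    if where != GRB.Callback.MIPSOL:
        return
    yv = model.cbGetSolution(model._y)   # incumbent values of y
    zv = model.cbGetSolution(model._z)   # incumbent values of z
    for (l, t, j) in model._pairs:       # pairs (l,t) in D_j, for j in J
        if zv[l, t, j] < yv[l, j] + yv[t, j] - 1 - 1e-6:
            model.cbLazy(model._z[l, t, j]
                         >= model._y[l, j] + model._y[t, j] - 1)

# Usage sketch (model building omitted):
# model._y, model._z, model._pairs = y_vars, z_vars, pairs
# model.Params.LazyConstraints = 1
# model.optimize(lazy_mccormick)
\end{verbatim}
In the algorithm {\tt OA} below, the same mechanism is applied to the full pool of inequalities \eqref{MAC_z1}-\eqref{MAC_z4} and \eqref{MAC_psi1}-\eqref{MAC_psi4}.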
The motivation for designating these constraints as lazy is threefold: the results of preliminary numerical tests, the very large number $\sum_{j\in J}|I_j|\cdot (|I_j|-1) / 2 $ of such constraints, and the fact that \eqref{MAC_psi1}-\eqref{MAC_psi4} are not always needed, i.e., they only play a role if two drones are placed at the same DB. The following notations are used. Let $\mathcal O$ denote the set of open nodes in the tree. We refer to a node of the branch-and-bound tree at which the optimal solution of the continuous relaxation is integer-feasible as an iteration. Let $\mathcal{C}$ be the entire constraint set of problem $\mathbf{R-MILP}$ (see Theorem \ref{T2}), $\mathcal{L}_k$ be the set of lazy constraints at node $k$, $\mathcal{V}^L_k$ be the set of violated lazy constraints at $k$, and $\mathcal{A}_k:= \mathcal{C} \setminus \mathcal{L}_k$ be the set of active constraints at $k$, i.e., the set of constraints of the reduced-size outer approximation problem $\mathbf{OA-MILP}_k$. The composition of these sets varies across the algorithmic process. The outer approximation algorithm {\tt OA} is designed as follows. At the root node ($k=0$), we have: \begin{align} & \mathcal{L}_0:= \{\eqref{MAC_z1}-\eqref{MAC_z4} ; \eqref{MAC_psi1} - \eqref{MAC_psi4}\}. \label{S1} \\ & \mathcal{A}_0:= \{ \mathcal{BL} ; \eqref{U}-\eqref{V} ; \mathcal{M}_{\mu^m_{lj}} ; \mathcal{M}_{\omega_{lj}^m} \}. \label{S2} \\ &\mathcal{V}^L_0:= \emptyset. \label{S3} \end{align} \vspace{-0.1in} At any node $k$, the reduced-size relaxation (outer approximation) problem $\mathbf{OA-MILP}_k$ is solved: \[ \mathbf{OA-MILP}_k: \; \min \eqref{obj_lin} \quad \text{s.to} \quad (x,y,\gamma,z,U,\mu,\tau,\omega) \in \mathcal{A}_k \ . \] Two cases are distinguished depending on the optimal solution $X^*_k$ of the continuous relaxation of $\mathbf{OA-MILP}_k$: \begin{enumerate} \item If $X^*_k$ is fractional, we introduce branching linear inequalities to cut off the fractional nodal optimal solution and we continue the branch-and-bound process. \item If $X^*_k$ is an integer-valued solution with a better objective value than that of the current incumbent, we check for possible violation of the current lazy constraints: \begin{itemize} \item If some constraints are violated by $X^*_k$, they are inserted in $\mathcal{V}^L_k \subseteq \mathcal{L}_k$ and $X^*_k$ is discarded. The lazy and active constraint sets of each open node $o \in \mathcal {O}$ are updated as follows: \[ \mathcal{L}_o \leftarrow \mathcal{L}_o \setminus \mathcal{V}^L_k \quad \text{and} \quad \mathcal{A}_o \leftarrow \mathcal{A}_o \cup \mathcal{V}^L_k. \] \item If no lazy constraint is violated, $X^*_k$ becomes the incumbent solution and the node is pruned. \end{itemize} \end{enumerate} The above process terminates when all nodes are pruned. The verification of the possible violation of the lazy constraints is carried out within a callback function, which is not performed at each node of the tree, but only when a better integer-valued feasible solution is found. The use of the lazy constraints within the outer approximation procedure is pivotal in the proposed method as shown in Section \ref{sub_sec_compute_e}. The pseudo-code of the algorithm is given below in Algorithm \ref{algo_oa}. \begin{algorithm}[] \caption{Outer Approximation (OA)} \label{algo_oa} {\small \begin{algorithmic} \STATE \textbf{Part 1 (Initialization)}: $\mathcal{L}_0:= \{\eqref{MAC_z1}-\eqref{MAC_z4} ; \eqref{MAC_psi1} - \eqref{MAC_psi4}\}$; \ $\mathcal{A}_0:= \mathcal{C} \setminus \mathcal{L}_0$; $\mathcal{V}^L_0 = \emptyset$.
\STATE \textbf{Part 2 (Iterative Procedure): At node $k$: } \STATE \quad \textbf{Step 1: Solution of nodal relaxation problem} $\mathbf{OA-MILP}_k$: $$ \mathbf{OA-MILP}_k: \; \min \eqref{obj_lin} \quad \text{s.to} \quad (x,y,\gamma,z,U,\mu,\tau,\omega) \in \mathcal{A}_k \ . $$ \STATE \quad \textbf{Step 2: Set Update:} \\ \begin{itemize} \item \textbf{If the objective value corresponding to $X^*_k$ is not better than that of the incumbent}, the node is pruned. \item \textbf{If the objective value corresponding to $X^*_k$ is better than that of the incumbent}, then: \begin{itemize} \item \textbf{If $X^*_k$ is fractional}, introduce branching inequalities cutting off $X^*_k$ and move to the next node. \item \textbf{If $X^*_k$ is integer-valued}, check for possible violation of lazy constraints: \begin{itemize} \item If $X^*_k$ violates any constraint in $\mathcal{L}_k$: \begin{itemize} \item Move violated lazy constraints to $\mathcal{V}^L_k$ and discard $X^*_k$. \item Update sets of lazy and active constraints for each open node $o \in \mathcal {O}$: \[ \mathcal{L}_o \leftarrow \mathcal{L}_o \setminus \mathcal{V}^L_k \qquad \text{and} \qquad \mathcal{A}_o \leftarrow \mathcal{A}_o \cup \mathcal{V}^L_k. \] \end{itemize} \item If $X^*_k$ does not violate any constraint in $\mathcal{L}_k$, $X^*_k$ becomes the incumbent and node $k$ is pruned. \end{itemize} \end{itemize} \end{itemize} \STATE \textbf{Part 3 (Termination):} The algorithm stops when $\mathcal{O} = \emptyset$. \end{algorithmic} } \end{algorithm} \vspace{-0.15in} \subsection{Outer Approximation Branch-and-Cut Algorithm} \label{sub_sec_B&C} \vspace{-0.05in} The linearization approach involves the introduction of big-M constraints which typically lead to loose continuous relaxations. To tighten the continuous relaxation, we derive valid inequalities and optimality-based cuts. Combining them with the outer approximation method {\tt OA}, we obtain an outer approximation branch-and-cut algorithm {\tt OA-B\&C}. A valid inequality does not rule out any feasible integer solutions but cuts off fractional solutions feasible for the continuous relaxation problem. In essence, it pares away at the space between the linear and integer hulls, thereby providing a tighter formulation. In contrast to lazy constraints, a valid inequality is inserted in the formulation if the optimal solution of the continuous relaxation at any node is fractional and violates this valid inequality. We shall now derive two types of valid inequalities.
The valid inequalities \eqref{VI3} reflect the fact that if either one of the variables $U_j^1$ or $U_j^2$ representing the expected delay time is positive, then at least one drone is positioned at DB $j$, and that if DB $j$ is not active, then the variables $U_j^m, m=1,2$ are equal to 0. \begin{prop} \label{valid_c_3} Let $U_{j}^m \in [0,\bar{U}_j^m], m=1,2$. The linear constraints \begin{equation} \label{VI3} \gamma_j^1 + \gamma_j^2 \geq U_j^m / \bar{U}_{j}^m , \quad j \in J, m=1,2 \end{equation} are valid inequalities for problem \textbf{R-MILP}. \end{prop} \begin{proof} If, for any $m \in \{1,2\}$, we have $U_j^m > 0$, then $0 < U_j^m / \bar{U}_j^m \leq 1$ since $U_j^m \in [0, \bar{U}_j^m]$, and \eqref{VI3} forces $\gamma^{1}_{j} + \gamma^{2}_{j} = 1$ (recall \eqref{NEW1}), i.e., either $\gamma^1_j$ or $\gamma^2_j$ takes value 1, which does not cut off any integer solution. If there is no DB open at $j$, i.e., $x_j=0 =\gamma_j^1 + \gamma_j^2$, then $y_{ij}=0, i \in I_j$ due to \eqref{eq_lim} and $U^1_j=U^2_j=0$ due to \eqref{1drone_cons} and \eqref{2drone_cons}, which is valid for \eqref{VI3}. \hfill$\Box$ \end{proof} The valid inequalities \eqref{VI4} reflect that the queueing delay at a DB $j$ is a decreasing function of the number of drones positioned at this DB. \begin{prop} \label{valid_c_6} The linear constraints \begin{equation} \label{VI4} U_j^1 \geq U_j^2 , \quad j \in J \end{equation} are valid inequalities for problem \textbf{R-MILP}. \end{prop} \begin{proof} If no DB is set up at location $j$, we have $U_j^1 = U_j^2 = 0$ due to \eqref{1drone_cons} and \eqref{2drone_cons}, and \eqref{VI4} holds. \newline If a DB is set up at location $j$, we have from \eqref{2drone_cons} the first equality below: \begin{align} U_j^2 &= \frac{ \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj}^2 \sum\limits_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj}}{(2- \sum\limits_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj})^2 + (2- \sum\limits_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj})^2 \sum\limits_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj} + (2- \sum\limits_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj}) (\sum\limits_{l\in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj} )^{2}} \label{INEQ1} \\ &\leq \frac{ \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj}^2 \sum\limits_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj}}{(2- \sum\limits_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj})^2 \sum\limits_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj}} = \frac{ \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj}^2}{(2- \sum\limits_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj})^2} \label{INEQ2} \\ & \leq \frac{ \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj}^2}{2- \sum\limits_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj}} \label{INEQ3} \\ &\leq \frac{ \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj}^2}{1- \sum\limits_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj}} = U^1_j.
\label{INEQ4} \end{align} The validity of the first inequality is implied by the steady-state requirement \eqref{steady-state} according to which we have either $1 - \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj} \geq 0$ when $\gamma_j^1=1$ or $2 - \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj} \geq 0$ when $\gamma_j^2=1$. It follows immediately that the denominator of \eqref{INEQ1} is larger than the one in \eqref{INEQ2}. Since the numerators are the same in \eqref{INEQ1} and \eqref{INEQ2}, we have \eqref{INEQ2} $\geq$ \eqref{INEQ1}. Next, observe that \eqref{1drone_cons} implies that we always have $1 \geq \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj}$ since otherwise the nonnegative auxiliary variable $U_j^1$ would be negative. This in turn implies $2- \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj} \geq 1$ and $(2- \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj})^2 \geq 2- \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj} \geq 1$. Therefore, the denominator of \eqref{INEQ2} is larger than the one of \eqref{INEQ3}, and \eqref{INEQ3} $\geq$ \eqref{INEQ2}. Similarly, the third inequality is valid since $2- \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj} > 1- \sum_{l \in I_j} \lambda_{l}y_{lj}\tilde{S}_{lj}$, which implies \eqref{INEQ4} $>$ \eqref{INEQ3} and allows us to conclude that $U^2_j \leq U^1_j$. \hfill$\Box$ \end{proof} \vspace{0.05in} Besides valid inequalities, we also derive optimality-based cuts (see Proposition \ref{valid_c_5}), which cut off integer feasible solutions that are not optimal, or do so in a way that {\it not all} optimal solutions are removed. The proposed optimality-based cuts \eqref{VI5} state that the opening of a DB at location $j$ is required if and only if either one or two drones are positioned at $j$. \begin{prop} \label{valid_c_5} The linear constraints \begin{equation} \label{VI5} \gamma_j^1 + \gamma_j^2 = x_{j} , \quad j \in J \end{equation} are optimality-based cuts for problem \textbf{R-MILP}. \end{prop} \begin{proof} The sum $\gamma^1_j+\gamma^2_j$ can never exceed 1 due to \eqref{NEW1}. If $\gamma^1_j+\gamma^2_j =1$, it follows from \eqref{open_drone-2} that $x_j=1$. If $\gamma^1_j+\gamma^2_j =0$, \eqref{open_drone-2} allows $x_j$ to be equal to 0 or 1. However, \eqref{VI5} cuts off the integer solutions $(x_j,\gamma_j^1,\gamma_j^2) = (1,0,0), j \in J$, which are not optimal and do not give a better objective value than $(x_j,\gamma_j^1,\gamma_j^2) = (0,0,0), j \in J$. \hfill$\Box$ \end{proof} While deceptively simple, the incorporation of the proposed valid inequalities and optimality-based cuts in the outer approximation branch-and-cut algorithm {\tt OA-B\&C} has a very significant computational impact as shown in Section \ref{PER_EVA}. The set $\mathcal{B}$ of optimality-based cuts \eqref{VI5} is added to the pool of lazy constraints \eqref{S1}: \begin{equation} \label{S4} \mathcal{L}'_0:= \mathcal{L}_0 \cup \mathcal{B}. \end{equation} The valid inequalities are grouped in a user cut pool. Let $\mathcal{U}_k$ be the user cut set (of valid inequalities) at node $k$; at the root node ($k=0$), $\mathcal{U}_0 := \{\eqref{VI3}-\eqref{VI4} \}$, while $\mathcal{V}^U_k$ is the set of valid inequalities violated by the fractional optimal solution of the continuous relaxation at node $k$. The outer approximation branch-and-cut algorithm {\tt OA-B\&C} is structured as follows.
The two families of valid inequalities \eqref{VI3} and \eqref{VI4} are derived up-front, incorporated into a pool of user cuts, and applied and checked dynamically each time the nodal optimal solution $X^*_k$ is fractional through a user cut callback implemented in {\sc Gurobi}. If $X^*_k$ is fractional, the user callback is applied and the violated -- if any -- valid inequalities in the current user cut pool $\mathcal{U}_k$ are added to the active constraint set of each open node $o \in \mathcal {O}$ (thereby cutting off $X^*_k$) and are removed from the user cut pool: \[ \mathcal{U}_o \leftarrow \mathcal{U}_o \setminus \mathcal{V}^U_k \quad \text{and} \quad \mathcal{A}_o \leftarrow \mathcal{A}_o \cup \mathcal{V}^U_k. \] If no inequality in $\mathcal{U}_k$ is violated by $X^*_k$, then two branching constraints are entered to cut off $X^*_k$ and the next open node is processed. Note that, if $X^*_k$ is integer feasible, the user cut callback is not applied. The algorithm stops when the set of unprocessed nodes becomes empty. The pseudo-code of the algorithmic method {\tt OA-B\&C} follows. \begin{algorithm}[] \caption{Outer Approximation Branch-and-Cut Algorithm (OA-B\&C)} \label{algo_oa-bc} {\small \begin{algorithmic} \STATE \textbf{Part 1 (Initialization)}: $\mathcal{L}_0:= \{\eqref{MAC_z1}-\eqref{MAC_z4} ; \eqref{MAC_psi1} - \eqref{MAC_psi4}\}$; $\mathcal{B} = \{\eqref{VI5}\};$ \ $\mathcal{L'}_0:= \mathcal{L}_0 \cup \mathcal{B};$ $\mathcal{U}_0:= \{\eqref{VI3};\eqref{VI4} \}$; $\mathcal{A}_0:= \mathcal{C} \setminus \mathcal{L}_0$; $\mathcal{V}^L_0 = \emptyset $; $\mathcal{V}^U_0 = \emptyset$. \STATE \textbf{Part 2 (Iterative Procedure): At node $k$} \STATE \quad \textbf{Step 1: Solution of nodal relaxation problem:} $$ \mathbf{OA-MILP}_k: \; \min \eqref{obj_lin} \quad \text{s.to} \quad (x,y,\gamma,z,U,\mu,\tau,\omega) \in \mathcal{A}_k \ . $$ \STATE \quad \textbf{Step 2: Set Update:} \\ \begin{itemize} \item \textbf{If the objective value corresponding to $X^*_k$ is not better than that of the incumbent}, the node is pruned. \item \textbf{If the objective value corresponding to $X^*_k$ is better than that of the incumbent}, then: \begin{itemize} \item \textbf{If $X^*_k$ is fractional}, apply the user callback: \begin{itemize} \item If $X_k^{*}$ violates any constraint in $\mathcal{U}_k$: \begin{itemize} \item Move violated constraints to $\mathcal{V}_k^{U}$ and discard $X_k^{*}$. \item Update sets of user cuts and active constraints for each open node $o \in \mathcal {O}$ \[ \mathcal{U}_o \leftarrow \mathcal{U}_o \setminus \mathcal{V}^U_k \quad \text{and} \quad \mathcal{A}_o \leftarrow \mathcal{A}_o \cup \mathcal{V}^U_k. \] \end{itemize} \item If no valid inequality in $\mathcal{U}_k$ is violated by $X^*_k$, then branching constraints are entered to cut off $X^*_k$ and the next open node is processed. \end{itemize} \item \textbf{If $X^*_k$ is integer-valued}, update the sets of lazy and active constraints according to Step 2 (Part 2) of Algorithm \ref{algo_oa}. \end{itemize} \end{itemize} \STATE \textbf{Part 3 (Termination):} The algorithm stops when $\mathcal{O} = \emptyset$. \end{algorithmic} } \end{algorithm} \vspace{-0.25in} \section{Data-Driven Tests and Insights} \label{sec_TESTS} \vspace{-0.1in} To demonstrate the benefits and applicability of the proposed approach and validate its computational efficiency, we conduct extensive numerical tests using the real-life opioid overdose data described in Section \ref{sub_data}.
\section{Data-Driven Tests and Insights} \label{sec_TESTS} \vspace{-0.1in} To demonstrate the benefits and applicability of the proposed approach and validate its computational efficiency, we conduct extensive numerical tests using the real-life opioid overdose data described in Section \ref{sub_data}. Section \ref{sub_delivery} provides practical insights related to response time, chance of survival, quality-adjusted life year (QALY), and costs, and attests to the applicability and robustness of the approach through a cross-validation analysis. Section \ref{sub_sec_compute_e} evaluates the computational efficiency and tractability of the reformulation and algorithmic framework. \subsection{Real-life Opioid Overdose Data} \label{sub_data} The dataset used in the tests describes the opioid overdose incidents in the city of Virginia Beach and is publicly available \cite{CustodioLejeune2021}. The data were collected through multiple sources, including the OpenVB data portal, Freedom of Information Act requests to the government of Virginia Beach, and public reports. The data collection process was validated by Virginia Beach officials. The dataset contains all dispatch records for OTRs from the second quarter of 2018 to the third quarter of 2019, which amounts to a total of 733 data points (overdoses). Each record has four fields, including the time at which the request was received by the EMS, the response time, i.e., the time between the reception of the request and the arrival of the EMS personnel on the scene, and the location (i.e., latitude and longitude) of the request. The dataset also provides the location of the twenty-six established EMS facilities (i.e., fire, police, and EMS stations) that can be selected as drone bases. As in \cite{boutilier2017optimizing}, we consider that drones can travel at a speed of $27.8$ meters per second (m/s) and take 10 seconds to take off and to land. We use 25 minutes as the expected non-travel service time, which includes the time to administer the naloxone at the overdose location and to recharge and prepare the drone for the next assignment. We have conducted a grid search to identify the minimal size of the drone network needed to respond to all OTRs. This revealed that the drone network should include at least $p=11$ drones and does not require more than $q=10$ DBs. Thereafter, we refer to the case in which the drone response network allows for the opening of ten DBs and the deployment of eleven drones as the ``base scenario''. \subsection{Interplay Between Response Time, Survival Chance, QALY, and Delivery Mode} \label{sub_delivery} Section \ref{sub_response} analyzes the reduction in the response time attributable to the drone network, the applicability and robustness of the proposed approach, and the spatio-temporal adjustments in the OTRs and drone network. Section \ref{sub_SURVIVAL} investigates the increase in the chance of survival of an overdose victim and the expected number of lives saved. Section \ref{QALY} quantifies the impact on the quality-adjusted life year and performs a cost analysis. \subsubsection{Response Time Reduction and Network Robustness} \label{sub_response} We analyze here how the response time, a critical metric for EMSs, can be improved by using drones. We first carry out an in-sample analysis before cross-validating the results and assessing, using out-of-sample data, the stability of the results and the robustness of the designed networks. \noindent {\bf In-sample analysis:} We consider quarterly data sets of opioid overdose incidents which we denote 2018Q2 (2018's second quarter), 2018Q3, 2018Q4, 2019Q1, and 2019Q2.
We consider the base scenario in which one can open up to 10 DBs and deploy 11 drones ($q=10, p=11$). To estimate the in-sample performance and response times of the drone network, we solve the network design problem $\mathbf{R-MILP}$ associated with a quarterly training dataset, retrieve its optimal solution and value, and calculate the average response time across all opioid overdoses in the corresponding quarter. Table \ref{test_result} shows the average response times on the training sets with the proposed drone network and those obtained in Virginia Beach with the current ambulance-based EMS network, and highlights the significant decrease in the average response time enabled by the drone network (i.e., 1 minute and 19 seconds versus 8 minutes and 56 seconds for the ambulance network). As compared to the current ambulance network, the drone network reduces the response time by $85.35\%$ on average. The reduction of the response time is stable across all quarters and is not due to chance. \begin{table}[H] \centering \setlength\extrarowheight{1pt} \begin{tabular}{P{3cm}|P{3cm}|P{4.5cm}| P{3cm}} \hline \multirow{2}{*}{Training Set} & \multicolumn{3}{c}{Response Time (minutes)} \\ \cline{2-4} \multirow{2}{*}{} & Drone Networks & VB Ambulance Networks & Time Reduction\\ \hline 2018Q2 & 1.34 & 8.77 & 84.72\% \\ 2018Q3 & 1.36 & 9.02 & 84.92\% \\ 2018Q4 & 1.25 & 9.42 & 86.73\% \\ 2019Q1 & 1.29 & 8.56 & 84.93\% \\ \hline Quarter Average & 1.31 & 8.94 & 85.35\% \\ \hline \end{tabular} \caption{\label{test_result} Response Times in Quarterly Training Sets ($q = 10, p = 11$)} \end{table} \noindent {\bf Out-of-sample analysis:} We now carry out a cross-validation analysis to assess how the training set-based networks perform on out-of-sample data that were not used in the design of the network. Using the five sets of quarterly data, we create four pairs of consecutive quarterly sets, e.g., (2018Q2 and 2018Q3), where the first one is the training set (2018Q2) and the second one is the testing set (2018Q3). Table \ref{data_summary} describes the size of the data sets. The number of OTRs in each quarterly dataset is $|I|$. \begin{table}[H] \centering \setlength\extrarowheight{1pt} \begin{tabular}{P{3.5cm} |P{2.5cm}|P{2.5cm}|P{2.5cm}|P{2.5cm}} \hline Pairs of Sets & Training Sets & $|I|$ - Training & Testing Sets & $|I|$ - Testing\\ \hline (2018Q2, 2018Q3) & 2018Q2 &173 & 2018Q3 & 131\\ (2018Q3, 2018Q4) & 2018Q3 &131 & 2018Q4 & 120\\ (2018Q4, 2019Q1) & 2018Q4 &120 & 2019Q1 & 138\\ (2019Q1, 2019Q2) & 2019Q1 &138 & 2019Q2 & 171\\ \hline \end{tabular} \caption{\label{data_summary} Data Summary} \end{table} For each pair of training and testing sets, we use the training set only to build the network: we solve the network design problem $\mathbf{R-MILP}$ associated with the training set and retrieve its optimal solution. This provides us with the training set-based optimal configuration of the network, namely the optimal location $x^*$ of the opened DBs and the optimal number of drones (which can be inferred from the variables $\gamma^{*}$) to be stationed at each DB. Using the optimal location decisions for the training set-based network, we then solve an assignment problem corresponding to the testing set, which is a ``reduced'' form of problem $\mathbf{R-MILP}$ in which the variables $x$ and $\gamma$ are fixed to their optimal values for the training set: $x:=x^*$ and $\gamma:= \gamma^*$.
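A minimal sketch of this fix-and-resolve procedure with the {\sc Gurobi} Python interface follows; {\tt build\_model} is a hypothetical helper standing for the construction of problem $\mathbf{R-MILP}$ for a given quarter, and is not part of the authors' code.
\begin{verbatim}
# Train: solve R-MILP on the training quarter
train, x, gamma, y = build_model(train_quarter)
train.optimize()
x_star = {j: round(x[j].X) for j in x}          # optimal DB locations
g_star = {k: round(gamma[k].X) for k in gamma}  # optimal DB capacities

# Test: "reduced" assignment problem in which the network design is
# frozen at its training-optimal configuration (x := x*, gamma := gamma*)
test, x_t, gamma_t, y_t = build_model(test_quarter)
for j, val in x_star.items():
    x_t[j].LB = x_t[j].UB = val
for k, val in g_star.items():
    gamma_t[k].LB = gamma_t[k].UB = val
test.optimize()  # yields the out-of-sample drone-dispatching decisions
\end{verbatim}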
The optimal solution of the testing set-based assignment provides the optimal drone-dispatching decisions for OTRs based on the training set-based network configuration, which allows us to calculate the individual and average out-of-sample (testing set) response times to OTRs. The results are displayed in Table \ref{test_result_testing}. The cross-validation analysis confirms the results of the in-sample analysis. The network configurations obtained from the training sets reduce the out-of-sample response time in a striking manner. Across all quarters, the average out-of-sample response time is 1 minute and 26 seconds while it amounts to 9 minutes and 19 seconds for the ambulance network in Virginia Beach. This corresponds to an $84.57\%$ reduction in the average response~time. The quarterly average response times are stable and similar to those obtained in the in-sample analysis. \begin{table}[H] \centering \setlength\extrarowheight{1pt} \begin{tabular}{P{4cm}|P{3cm}|P{4.5cm}| P{3cm}} \hline \multirow{2}{*}{Training/Testing Data} & \multicolumn{3}{c}{Testing Quarter Response Time (min.) } \\ \cline{2-4} \multirow{2}{*}{} & Drone Network & VB Ambulance Network & Time Reduction\\ \hline 2018Q2 / 2018Q3 & 1.51 & 9.02 & 83.26\% \\ 2018Q3 / 2018Q4 & 1.38 & 9.42 & 85.35\% \\ 2018Q4 / 2019Q1 & 1.48 & 8.56 & 82.71\% \\ 2019Q1 / 2019Q2 & 1.38 & 10.27 & 86.56\% \\ \hline Quarter Average & 1.43 & 9.32 & 84.57\% \\ \hline \end{tabular} \caption{\label{test_result_testing} Out-of-Sample Response Times for Testing Quarters ($q = 10, p = 11$)} \end{table} \vspace{-0.1in} The last part of this subsection illustrates the spatio-temporal variability of the OTRs and its impact on the optimal configuration of the drone network. Figure \ref{fig:dnn_network} presents the distribution of the OTRs and the configurations of the drone-based networks for two consecutive quarters, i.e., 2018's second (2018Q2) and third (2018Q3) quarters. In both networks, ten DBs are opened; one drone is located at nine of the DBs, which operate as M/G/1 systems, while the last one operates as an M/G/2 system with two available drones. Comparing the two networks, one can see that the location of most DBs remains unchanged over time. One noticeable change is that, in the 2018Q3 network, a new DB is opened in the southwest of the map to cover an increased number of OTRs in this remote area. Another change is the new DB in the north. In Figure \ref{fig:subim2}, the two new DBs are surrounded by a rectangle while the two closed ones are circled. The aforementioned location changes reflect the impact of the spatial uncertainty on the optimal configuration of the network. In that regard, the flexibility, ease, and limited cost of opening (or closing) a DB \cite{van2017drone} are important to be able to adapt to geographical changes in the occurrence of OTRs. \begin{figure}[H] \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=\textwidth, height=9.2cm]{2018Q2.JPG} \caption{2018 Quarter 2} \label{fig:subim1} \end{subfigure} \hfill \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=\textwidth, height=9.1cm]{2018Q3.JPG} \caption{2018 Quarter 3} \label{fig:subim2} \end{subfigure} \caption{OTR Demand and Drone Network Configuration} \label{fig:dnn_network} \vspace{-0.25in} \end{figure} \subsubsection{Impact on Probability of Survival} \label{sub_SURVIVAL} The goal of this section is to quantify the benefits of the reduced response time afforded by the drone network (Section \ref{sub_response}) on the survival chance of overdose victims.
Since we are not aware of any functional relationship between the response time and the survival probability of an opioid overdose victim, we employ an indirect two-step approach using known survival functions for cardiac arrests. The main reason for this is that many overdoses lead to cardiac arrests. Indeed, the increase in cardiac arrests caused by opioid medications is said to be ``{\it the most dramatic manifestation of opioid use disorder}''~\cite{dezfulian2021opioid}. \noindent \underline{Step 1:} We calculate the number of out-of-hospital cardiac arrests (OHCA) resulting from an opioid overdose using the statistics reported by \citeauthor{dezfulian2021opioid} \cite{dezfulian2021opioid}, which indicate that 15\% of opioid overdoses lead to an OHCA. As shown in the last column of Table \ref{data_summary}, 560 overdose incidents were reported in Virginia Beach over one year (i.e., from 2018Q3 to 2019Q2), leading to an estimated (see \cite{dezfulian2021opioid}) number of 84 overdose-associated OHCAs. \noindent \underline{Step 2:} Using three known survival functions for OHCAs (see Table \ref{survival}) that define the probability $f(x)$ of surviving an OHCA as either a semi-continuous or a logistic function of the response time $x$, we calculate the estimated probability of survival for overdose-associated OHCAs with the drone network and with the EMS ambulance network in use in Virginia Beach. This in turn allows us to derive the differences in the survival chance and in the expected number of saved lives between the drone and ambulance networks. \begin{table}[h] \centering \vspace{-0.1in} \setlength\extrarowheight{4.75pt} \begin{tabular}{c|c|c} Author & Function Type & $f(x)$ \\ \hline Bandara et al. \cite{Bandara:2014} & Semi-continuous & $\max \left[ 0.594-0.055x, 0\right]$ \\ \hline De Maio et al. \cite{DEMAIO} & Logistic & $\left(1+e^{0.679+0.262x}\right)^{-1}$\\ \hline Chanta et al. \cite{Chanta2014} & Logistic & $\left(1+e^{-0.015+0.245x}\right)^{-1}$ \\ \end{tabular}% \caption{Survival Functions for OHCAs} \label{survival} \vspace{-0.175in} \end{table} As shown in the last row of Table \ref{test_result_testing}, the out-of-sample average response time (over one year) for the drone network is 1.43 minutes whereas the current EMS network takes 9.32 minutes over the same period. Figure \ref{fig:survival} shows the survival probability -- for the three survival functions described in Table \ref{survival} -- of the overdose-associated OHCAs obtained with the drone network and with Virginia Beach's ambulance EMS network. The difference between the two networks is striking. The drone network significantly increases the patients' survival probability for each survival function. The chance of survival with the drone network is indeed 5.37 (resp., 5.5 and 3.66) times larger than the one with Virginia Beach's current ambulance network, as estimated by the survival function of \cite{Bandara:2014} (resp., \cite{DEMAIO} and \cite{Chanta2014}). \begin{figure}[h] \centering \includegraphics[width=\textwidth, height=6cm]{survival.jpg} \vspace{-0.35in} \caption{Survival Probability for Drone Network and Virginia Beach Ambulance Network} \label{fig:survival} \end{figure} Table \ref{casualty} displays the survival probability and the expected number of patients surviving an overdose-associated OHCA in Virginia Beach for the four quarters considered. These statistics are provided for the drone network (column 4) and for Virginia Beach's current ambulance network (column 5) as a benchmark.
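The survival probabilities underlying Figure \ref{fig:survival} and Table \ref{casualty} can be checked directly from the functional forms in Table \ref{survival}. The short Python sketch below (ours, for illustration only) evaluates each function at the average response times of 1.43 and 9.32 minutes; it recovers the tabulated probabilities up to rounding, the reported ratios being presumably computed per incident rather than at the averages.
\begin{verbatim}
import math

# Survival functions reported in the paper (response time x in minutes)
survival = {
    "Bandara et al.": lambda x: max(0.594 - 0.055 * x, 0.0),
    "De Maio et al.": lambda x: 1.0 / (1.0 + math.exp(0.679 + 0.262 * x)),
    "Chanta et al.":  lambda x: 1.0 / (1.0 + math.exp(-0.015 + 0.245 * x)),
}
for name, f in survival.items():
    print(f"{name}: drone {f(1.43):.1%}, ambulance {f(9.32):.1%}")
# Bandara et al.: drone 51.5%, ambulance 8.1%
# De Maio et al.: drone 25.9%, ambulance 4.2%
# Chanta et al.:  drone 41.7%, ambulance 9.4%
\end{verbatim}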
One can see that the drone network is extremely beneficial and increases the number of survivors by 637.5\% (resp., 650\% and 466.7\%) using the survival function of \cite{Bandara:2014} (resp., \cite{DEMAIO} and \cite{Chanta2014}). \begin{table}[H] \centering \begin{tabular}{P{2.4cm} | P{2.8cm} | P{3cm} | P{2.65cm} | P{2.65cm} } \hline Total Number & Overdose-related & Survival & \multicolumn{2}{c}{ Survival Prob. / No. of Survivors} \\ \cline{4-5} of Overdoses& OHCAs & Function & Drone Network & EMS Network \\ \hline \multirow{3}{*}{560} & \multirow{3}{*}{84} & Bandara et al. \cite{Bandara:2014} & 51\% / 42 & 8\% / 6 \\ & & De Maio et al. \cite{DEMAIO} & 26\% / 21 & 4\% / 3 \\ & & Chanta et al. \cite{Chanta2014} & 42\% / 35 & 9\% / 7 \\ \hline \end{tabular}% \caption{\label{casualty} Expected Saved Lives with Drone Network} \end{table} Table \ref{casualty} shows that, depending on the survival function, the number of {\it additional lives} saved thanks to the drone network would vary between 18 and 36 in Virginia Beach over a one-year period. This is quite an astonishing result, and it is most likely an underestimate of the actual number of lives that could be saved with the drone network. Indeed, the above results only take into account the benefits of the drone-based reduced response time for the estimated percentage (15\%) of opioid overdoses leading to a cardiac arrest. It is reasonable to assume that the reduced response time will also affect the outcomes of the overdoses not leading to a cardiac arrest. This is corroborated by \citeauthor{ornato2020feasibility}, who state that: ``{\it Every minute that goes by before paramedics or others can attempt resuscitation, an opiate overdose victim's chance of survival decreases by 10 percent}'' \cite{2019Burroughs,ornato2020feasibility}. \subsubsection{Impact on Quality-adjusted Life Year and Cost Analysis} \label{QALY} Quality-adjusted life year (QALY) is a concept commonly used in healthcare economic analyses to evaluate how a healthcare issue impacts a survivor's quality and duration of future life (see, e.g., \cite{BogleQALY,SASSI}). A QALY equal to 1 is indicative of one year in perfect health. As shown in \eqref{qaly_formula}, the total QALY (T-QALY) for a patient surviving a healthcare problem is the sum of the discounted QALYs of each year of one's mean life expectancy after the healthcare incident \cite{BogleQALY}: \begin{equation} \text{T-QALY} = \sum_{t = 1 }^{T} \frac{\alpha}{(1 + c)^{t}} \ \ , \label{qaly_formula} \end{equation} where $T$ is the number of years of remaining life expectancy, $\alpha \in [0,1]$ is a coefficient accounting for the reduced quality of life consecutive to the healthcare incident, and $c$ is a discount rate reflecting that people prefer good health sooner rather than later (i.e., a QALY in earlier years is more valuable than in later~ones). To conduct the QALY analysis, we assume, as in \cite{BogleQALY}, that a patient surviving an overdose-associated OHCA has a mean life expectancy of 11.4 years ($T$ = 11.4), that each year corresponds to $0.85$ QALY ($\alpha = 0.85$), and that the discount rate $c$ is 3\%. Using \eqref{qaly_formula}, the T-QALY is equal to 8.47 years for each OHCA survivor (see Table \ref{qaly} in Appendix~\ref{DATA-COST}). Table \ref{add_qaly} presents the {\em additional QALY} gained by using drones instead of ambulances for the survival functions of \cite{Bandara:2014}, \cite{DEMAIO}, and \cite{Chanta2014} (see Table \ref{survival}).
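As a sanity check (our own computation, under one plausible reading of \eqref{qaly_formula} in which the fractional year of life expectancy is rounded up, i.e., summing over 12 discounted years): \[ \text{T-QALY} = \sum_{t=1}^{12} \frac{0.85}{1.03^{t}} = 0.85 \times \frac{1-1.03^{-12}}{0.03} \approx 8.46 , \] which is consistent with the reported value of 8.47 years. The entries of Table \ref{add_qaly} then follow directly; for instance, the 36 additional survivors obtained with the survival function of \cite{Bandara:2014} yield $36 \times 8.47 \approx 305$ additional QALYs.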
The additional QALY attributable to the drone network for the overdose-associated OHCAs in Virginia Beach over one year is very high, varying between 152 and 305 years among the three survival functions. \vspace{-0.03in} \begin{table}[H] \centering \resizebox{\columnwidth}{1cm}{% \begin{tabular}{P{1.5cm} | P{3cm} | P{2.4cm} | P{2.4cm} |P{3.5cm} | P{2.9cm}} \hline \multirow{2}{*}{T-QALY} & Survival & \multicolumn{2}{c|}{Number of Survivors} & Additional Survivors & Additional QALY \\ \cline{3-4} & Function & Drone Network & EMS Network & with Drone Network& over One Year\\ \hline \multirow{3}{*}{8.47} & Bandara et al. \cite{Bandara:2014} & 42 & 6 & 36 & 305 \\ & De Maio et al. \cite{DEMAIO} & 21 & 3 & 18 & 152 \\ & Chanta et al. \cite{Chanta2014} & 35 & 7 & 28 & 237 \\ \hline \end{tabular}% } \vspace{-0.06in} \caption{\label{add_qaly} Additional QALY with Drone Network over One Year (from 2018Q3 to 2019Q2)} \end{table} \vspace{-0.15in} A drone and the accompanying drone station are estimated to cost $\$15,000$, to have a four-year lifespan, and to have an annual maintenance cost of $\$3,000$ \cite{BogleQALY}. Using these statistics (see Table \ref{dronecost} in Appendix \ref{DATA-COST}), the total discounted cost of the eleven drones used in the base scenario amounts to $\$287,664$ (i.e., using a 3\% discount rate for the annual maintenance cost), and the drone-based network allows a reduction in the average response time of about 7 minutes and 37 seconds as compared to the ambulance network. That is quite a difference with the option of buying ambulances, each costing between \$150,000 and \$200,000, which, as stated in \cite{2019Burroughs,ornato2020feasibility}, would cost ``{\it millions and millions of dollars just to reduce the response time by a minute}'' (i.e., from eight to seven). Assuming that the number of overdoses in Virginia Beach remains stable over four years (i.e., the lifespan of a drone), the total additional QALY in four years attributable to the drone network reaches 1220 (resp., 608 and 948) years with the survival function of \cite{Bandara:2014} (resp., \cite{DEMAIO} and \cite{Chanta2014}), which in turn implies that the proposed drone network only costs \$235 (resp., \$473 and \$303) per incremental QALY. As a benchmark, \citeauthor{BogleQALY} \cite{BogleQALY} report a \$3,143 cost per incremental QALY for a network of drones delivering defibrillators to OHCAs in Durham, NC. More generally, it is considered that a medical intervention with a \$50,000 per QALY ratio is cost-effective \cite{NEUMANN}. \vspace{-0.03in} \begin{table}[H] \centering \begin{tabular}{P{3.5cm} | P{3.5cm} | P{3.5cm} | P{3.5cm} } \hline \multirow{2}{*}{Drone Network Cost} & \multirow{2}{*}{Survival Function} & Total Additional & Cost per\\ & & QALY in Four Years &Additional QALY \\ \hline \multirow{3}{*}{\$287,664} & Bandara et al. \cite{Bandara:2014} & 1220 & \$235 \\ & De Maio et al. \cite{DEMAIO} & 608 & \$473\\ & Chanta et al. \cite{Chanta2014} & 948 & \$303 \\ \hline \end{tabular}% \vspace{-0.06in} \caption{\label{QALY_4yr} Cost Analysis for Drone Network per Incremental QALY ($p$=11)} \end{table}
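For transparency, the drone network cost used in Table \ref{QALY_4yr} can be reconstructed as follows (our own back-of-the-envelope computation, assuming the annual maintenance cost of the eleven drones is discounted at the 3\% rate over the four-year lifespan): \[ 11 \times \$15{,}000 + \sum_{t=1}^{4} \frac{11 \times \$3{,}000}{1.03^{t}} = \$165{,}000 + \$33{,}000 \times 3.7171 \approx \$287{,}664 . \]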
\subsection{Computational Efficiency} \label{sub_sec_compute_e} In this section, we conduct a battery of tests to assess the computational efficiency and tractability of the proposed reformulation and algorithms. Section \ref{PER_EVA} compares the two proposed algorithmic methods and the direct solution of the reformulated problem with {\sc Gurobi}. Performance profile plots \cite{PROFILE} highlight the increased benefits gained with our approaches as the size of the problem and the volume of OTRs increase. Section \ref{SENS} assesses the sensitivity of the proposed method with respect to the available resources. All the optimization problems are coded in Python 3.7 and solved with the {\sc Gurobi} 9.1.2 solver on a Linux machine with an Intel Core i7-6700 CPU at 3.40GHz and 64 GB of installed physical memory. For each problem instance, the optimality tolerance is set to 0.01\% for each solver, the maximum solution time is set to one hour, and we use one thread only. \subsubsection{Computational Efficiency and Scalability} \label{PER_EVA} We evaluate here the computational efficiency and scalability of the proposed reformulation \textbf{R-MILP} and algorithms with respect to the size of the problem, more precisely the number $|I|$ of OTRs to which the network must respond. Using the data described in Section \ref{sub_data} and considering the base network scenario with up to 10 DBs and 11 drones, we have created ten problem types that differ in the number of OTRs, ranging from 50 to 500 in increments of 50: $|I| \in \{50,100,150,200,250,300,350,400,450,500 \}$. Each instance type is identified by the tuple $(q,p,|I|)=(10,11,|I|)$ and we have generated five problem instances for each instance type. This gives a total of 50 problem instances which we solve with the following approaches: \begin{itemize} \item {\tt REFO}: Direct solution of problem $\mathbf{R-MILP}$ with the default settings of {\sc Gurobi}. \item {\tt OA}: Outer approximation algorithm (Section \ref{sub_sec_lazy_c}). \item {\tt OA-B\&C}: Outer approximation branch-and-cut algorithm (Section \ref{sub_sec_B&C}). \end{itemize} The three approaches are compared in terms of solution times and the number of instances for which optimality can be proven in one hour. We note that none of the 50 problem instances formulated with the base formulation $\textbf{B-IFP}$ can be solved in one hour with the state-of-the-art solver {\sc Baron} specialized for nonconvex MINLP problems. We use the performance profile plots displayed in Figure \ref{fig:p_base_pp} to compare the efficiency of the solution methods. The horizontal axis indicates the running time in seconds while the vertical axis represents the number of instances solved to optimality within the corresponding running time. The solution time for each instance is given in Table \ref{table_compute} in the Appendix. \begin{figure}[H] \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width = \textwidth, height = 9cm]{performance_profile.jpg} \caption{Performance Profile} \label{fig:p_base_pp} \end{subfigure} \hfill \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=\textwidth, height=9cm]{avg_sol_time.JPG} \caption{Average Solution Time} \label{fig:p_base_avg} \end{subfigure} \caption{Computational Efficiency for $(10,11, |I|)$: Performance Profile in Figure \ref{fig:p_base_pp} and Average Solution Time by Demand Size $|I|$ in Figure \ref{fig:p_base_avg}} \label{fig:p_base} \end{figure} The performance profile reveals without any possible ambiguity that the two proposed algorithmic methods {\tt OA} and {\tt OA-B\&C} are much faster, scale much better, and allow the solution of many more instances than the direct solution method {\tt REFO}. While the direct solution approach {\tt REFO} only solves five (i.e.
the five smallest instances with $|I| = 50$ OTRs) of the 50 instances in one hour, {\tt OA} and {\tt OA-B\&C} solve and prove the optimality of the solution for 46 and 49 instances, respectively. Comparing now the algorithms {\tt OA} and {\tt OA-B\&C}, the dynamic incorporation of valid inequalities and optimality-based cuts permits further efficiency and scalability gains. This can be seen from the {\tt OA-B\&C} line in the performance plot lying above the {\tt OA} line at any time, thereby indicating that the {\tt OA-B\&C} method solves more instances to optimality in any amount of time. As an illustration, Table \ref{Num_solved} in the Appendix shows that {\tt OA-B\&C} solves 96\% of the instances in less than 40 minutes while {\tt OA} needs one hour to solve 92\% of the instances. Figure \ref{fig:p_base_avg} shows the average (across five instances) solution times for each instance type and associated size $|I|$. When an instance cannot be solved to optimality within one hour, the solution time is recorded as 3600 seconds, which explains why the solution time for {\tt REFO} is 3600 seconds for all instances of size $|I| \in [100,500]$. Figure \ref{fig:p_base_avg} highlights the added benefits of {\tt OA-B\&C} over {\tt OA} for larger instances. While {\tt OA-B\&C} (blue bar) and {\tt OA} (orange bar) perform similarly for $|I| \in [50, 300]$, {\tt OA-B\&C} solves the instances of larger size $|I| \in [350, 500]$ much faster than {\tt OA}. For example, {\tt OA} takes on average about 80\% more time than {\tt OA-B\&C} to solve the 450-sized instances. \subsubsection{Sensitivity Analysis with Respect to Network Resources} \label{SENS} In this section, we evaluate the sensitivity of the solution times with respect to the available network resources and, in particular, the numbers of DBs ($q$) and drones ($p$). We use the algorithm {\tt OA-B\&C} since it enjoys the quickest solution times, as shown in Section \ref{PER_EVA}. \begin{figure}[H] \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=\textwidth, height=8cm]{base_sensitivity.jpg} \caption{Sensitivity to Number $q$ of DBs ($p = 11$)} \label{fig:p_sense_base} \end{subfigure} \hfill \begin{subfigure}[b]{0.5\textwidth} \includegraphics[width=\textwidth, height=8cm]{drone_sensitivity.jpg} \caption{Sensitivity to Number $p$ of Drones ($q = 10$)} \label{fig:p_sense_drone} \end{subfigure} \caption{Sensitivity with respect to Number of DBs (Figure \ref{fig:p_sense_base}) and Number of Drones (Figure \ref{fig:p_sense_drone})} \label{fig:p_sense} \end{figure} We consider four possible values $q \in \{6, 10, 15, 20 \}$ for the number of DBs and three values $|I| \in \{400, 450, 500\}$ for the demand size (number of OTRs), and generate five problem instances for each combination of $q$ and $|I|$. We assume that $p=11$ drones are available. This gives 12 instance types $(q,11,|I|)$ for a total of 60 problem instances, and we solve each of them with the {\tt OA-B\&C} algorithm. Figure \ref{fig:p_sense_base} shows the performance profile associated with each considered number $q$ of DBs. It appears that the computational time of the {\tt OA-B\&C} method is not particularly sensitive to the number of open DBs since the performance profiles for the considered values of $q$ intersect, thereby indicating that the solution time is not systematically higher (or lower) for any considered number of DBs. We proceed similarly to evaluate the sensitivity of the computational time with respect to the number of drones.
We assume that the maximum number of DBs that can be opened is $q=10$. We consider nine possible numbers of available drones $p \in \{12, 13,14,15,16,17,18,19,20\}$ and three possible demand sizes $|I| \in \{400, 450, 500\}$, and generate five instances for each of the 27 instance types $(10, p, |I|)$, for a total of 135 instances that we solve with the {\tt OA-B\&C} method. The performance profile corresponding to each considered $p$ is displayed in Figure \ref{fig:p_sense_drone}. It shows that the computation tends to be quicker (i.e., more instances are solved in a given amount of time) when the number of available drones is larger. We notice in particular that the average solution time is much lower for $p = 20$ than for any other (smaller) value assigned to $p$. A likely explanation is that, with $p = 20$ drones and $q = 10$ DBs, each open DB automatically houses two drones. \section{Conclusions} \label{sec_conclusion} Opioid use disorder affects about 2 million Americans and costs about \$78.5 billion in annual health care expenses \cite{dezfulian2021opioid}. In this context, it has been argued that the drone-based delivery of naloxone has the ``{\it potential to be a transformative innovation due to its easily deployable and flexible nature}'' \cite{gao2020dynamic}. The efficacy of naloxone depends on how quickly it is administered, which is a priority of the US Food and Drug Administration \cite{ornato2020feasibility}. \citeauthor{BucklandDesignConsider} \cite{BucklandDesignConsider} argue that further research is needed ``{\it to develop a decision support system to aid the 911 dispatchers to efficiently assign the drones in a time-sensitive environment}'' such as an opioid overdose. This study responds to the above pressing needs and proposes a new queueing-optimization model to design an EMS network in which drones deliver naloxone to overdose victims in a timely fashion. The objective is to increase the chance of survival of overdose victims by reducing the time needed to deliver and administer naloxone; this is accomplished by determining the location of drones and DBs, the capacity of DBs, and the dispatching of drones to overdose incidents. Some new and distinctive features of the model are its survival objective function, the explicit modeling of congestion and of the decision-dependent (parameter) uncertainty that affects the performance of the network, and its flexibility, as the capacity of DBs is not fixed ex ante but is instead determined by the model. Besides modeling, the contributions of this study are threefold. On the reformulation side, we propose a tractable MILP reformulation of the stochastic network design problem, which, in its original form, is an NP-hard fractional nonlinear integer optimization problem. We further demonstrate the generalizability of the reformulation approach and prove that the problem of minimizing the average response time in a network organized as a collection of interdependent $M/G/K$ queueing systems in which the capacity $K$ of each system is variable is MILP-representable. On the algorithmic side, we derive two algorithmic methods which are significant upgrades over the direct solution of the MILP reformulation. The outer approximation branch-and-cut algorithm proves to be the best performing one, significantly reducing solution times, increasing the number of instances solved, and enabling the solution of problems of larger size than those encountered in our case study based on real-life data.
On the EMS practice side, the data-driven tests based on overdose data from Virginia Beach reveal that the use of drones to deliver naloxone in response to overdoses could be a game-changer. Besides showing the out-of-sample robustness and applicability of the proposed approach, the tests indeed reveal that, with the drone network: 1) the response time is on average close to seven times smaller (1 min and 26 sec) than the one (9 min and 19 sec) with the ambulance network currently in use in Virginia Beach; 2) the estimated chance of survival to an overdose is more than 4.6 times larger than the one with the Virginia Beach ambulance network; 3) many more lives (of overdose victims) could be saved as compared to the Virginia Beach ambulance network; 4) the total QALY per patient is 8.47 years, amounting to up to 305 additional QALYs per year across all overdose victims in Virginia Beach; and 5) the cost per additional QALY is very low, varying between \$235 and \$473. This study showcases the potential of using drones to alleviate the devastating consequences of the opioid overdose crisis and could pave the way for the practical implementation of drone-based EMS systems. While very promising, we must however be aware of a number of obstacles to the widespread use of drones to respond to overdoses or other time-critical medical emergencies. These barriers include, for example, regulation, flying conditions and zones, safety, data privacy, and operations related to the design and maintenance of a medical drone network \cite{Johnson2021impact,pulsiri2021drones}. While not the focus of this study, future research will be needed on these hindrances as well as on user perceptions and acceptance. \printbibliography \newpage \setcounter{page}{1} \begin{appendices} \section{Notations} \label{sec:notations} \addcontentsline{toc}{section}{Appendix A. Notations} \subsection{Sets and Indices} \begin{itemize} \item[$i \in I$:] Index and set of OTR locations. \item[$j \in J$:] Index and set of candidate DBs. \item[$i \in I_j$:] Index and set of OTRs that are within the catchment area of a drone at DB $j$. \item[$j \in J_i$:] Index and set of DBs that can cover OTR $i$. \end{itemize} \vspace{-0.2in} \subsection{Parameters and Constants} \begin{itemize} \item[$A_j$:] Vector of parameters $(a_j, b_j, c_j)$ representing the coordinates of DB $j$ in the earth-centered, earth-fixed coordinate system. \item[$A_i$:] Vector of parameters $(a_i, b_i, c_i)$ representing the location of OTR $i$ in the earth-centered, earth-fixed coordinate system. \item[$M$:] Maximum number of drones that can be stationed at any DB. \item[$d_{ij}$:] Distance between OTR $i$ and candidate DB $j$. \item[$v$:] Flight speed of drones. \item[$\lambda_i$:] OTR arrival rate from location $i$. \item[$p$:] Maximal number of available drones. \item[$q$:] Maximal number of drone bases that can be established. \item[$\beta$:] Coefficient modulating the travel speed to and from an OTR location. \item[$r$:] Flight radius of a drone. \end{itemize} \subsection{Decision Variables} \begin{itemize} \item[$x_j$:] Binary variable determining if a DB is open at candidate location $j$ ($x_j = 1$) or not ($x_j = 0$). \item[$y_{ij}$:] Binary variable defining if the OTR at location $i$ is assigned to open DB $j$ ($y_{ij} = 1$) or not ($y_{ij} = 0$).
\item[$\gamma_j^m$:] Binary variable indicating the number $m$ of drones deployed at DB $j$: $\gamma_j^1 = 1$ if and only if one drone is positioned at DB $j$, and $\gamma_j^2 = 1$ if and only if two drones are positioned at DB~$j$. \item[$K_j$:] General integer variable defining the number of drones stationed at DB $j$. \item[$\eta_j$:] Arrival rate of OTRs serviced by DB $j$. \item[$U_j^m$:] Auxiliary variable set equal to the fractional terms in \eqref{1drone_cons} and \eqref{2drone_cons}, respectively. \item[$\mu_{ij}^{m}$:] Auxiliary variable introduced to linearize the bilinear terms $U_j^{m} y_{ij}$. \item[$z_{il}^{j}$:] Auxiliary variable introduced to linearize the bilinear terms $y_{ij}y_{lj}$. \item[$\tau_{il}^{j}$:] Auxiliary variable introduced to linearize the bilinear terms $U_{j}^{2}z_{il}^{j}$. \item[$\omega_{ij}^{m}$:] Auxiliary variable introduced to linearize the bilinear terms $\gamma_j^m\mu_{ij}^{m}$. \end{itemize} \subsection{Random Variables} \begin{itemize} \item[$S_{ij}$:] Service time for the OTR at $i$ if serviced with a drone stationed at DB $j$. \item[$S_{j}$:] Service time of a drone at DB $j$. \item[$Q_{j}$:] Queueing delay time for DB $j$. \item[$R_{i}$:] Response time between the reception of the delivery request and the arrival of a drone for the OTR at $i$. \item[$\alpha_i$:] On-scene service time (e.g., naloxone toolkit unloading time) for location $i$. \item[$\epsilon_i$:] Drone reset time needed to recharge and load a new naloxone toolkit for location $i$. \item[$\xi_i$:] Non-travel drone service time, equal to $\alpha_i + \epsilon_i$, for the OTR at location $i$. \end{itemize} \vspace{0.1in} \section{Drone-delivery Network for Opioid Overdoses -- Illustration}\label{APP-ILLU} We use a small network to illustrate the formulations presented in this study. We consider two candidate locations for drone bases (DBs $j=1$ and $j=2$), which can each house at most $M=2$ drones. There are three OTRs ($i=1,2,3$). Requests $i=1$ and $i=2$ are within the catchment area of a drone positioned at DB $j=1$ while requests $i=2$ and $i=3$ are within the catchment area of a drone at DB $j=2$. Figure \ref{fig-illustration} provides a visualization of the network. The solid circles represent the DBs while the dotted circles represent the catchment areas of a DB and its drones. The diamonds are the locations of the overdose incidents.
\begin{figure}[H] \centering \includegraphics[width=12cm, height=8cm]{illustration.jpg} \caption{Small Drone Network and OTRs} \label{fig-illustration} \end{figure} The objective function \eqref{D1_obj} of problem \textbf{B-IFP} can be written as: \begin{subequations} \begin{align} \min & \; \frac{1}{\lambda_1 + \lambda_2 + \lambda_3} \Bigg[ \frac{\lambda_1 d_{11}y_{11}}{v} + \frac{\lambda_2 d_{21}y_{21}}{v} + \frac{\lambda_2 d_{22}y_{22}}{v} + \frac{\lambda_3 d_{32}y_{32}}{v} + \notag \\ &\hspace{-0.6cm} \frac{\lambda_1 y_{11} (\lambda_1 y_{11}\mathbb{E}[{S}_{11}^2]+ \lambda_2 y_{21}\mathbb{E}[{S}_{21}^2]) (\lambda_1 y_{11} \mathbb{E}[S_{11}] + \lambda_2 y_{21} \mathbb{E}[S_{21}])} {2(2- (\lambda_1 y_{11}\mathbb{E}[S_{11}] + \lambda_2 y_{21}\mathbb{E}[S_{21}] ))^2 \big[ 1 + (\lambda_1 y_{11}\mathbb{E}[S_{11}] + \lambda_2 y_{21}\mathbb{E}[S_{21}] ) + \frac{(\lambda_1 y_{11} \mathbb{E}[S_{11}] + \lambda_2 y_{21} \mathbb{E}[S_{21}])^2} {2- (\lambda_1 y_{11}\mathbb{E}[S_{11}] + \lambda_2 y_{21}\mathbb{E}[S_{21}] )}\big]} + \notag \\ &\hspace{-0.6cm} \frac{\lambda_2 y_{21} (\lambda_1 y_{11}\mathbb{E}[{S}_{11}^2]+ \lambda_2 y_{21}\mathbb{E}[{S}_{21}^2]) (\lambda_1 y_{11} \mathbb{E}[S_{11}] + \lambda_2 y_{21} \mathbb{E}[S_{21}])} {2(2- (\lambda_1 y_{11}\mathbb{E}[S_{11}] + \lambda_2 y_{21}\mathbb{E}[S_{21}] ))^2 \big[ 1 + (\lambda_1 y_{11}\mathbb{E}[S_{11}] + \lambda_2 y_{21}\mathbb{E}[S_{21}] ) + \frac{(\lambda_1 y_{11} \mathbb{E}[S_{11}] + \lambda_2 y_{21} \mathbb{E}[S_{21}])^2} {2- (\lambda_1 y_{11}\mathbb{E}[S_{11}] + \lambda_2 y_{21}\mathbb{E}[S_{21}] )}\big]} + \notag \\ &\hspace{-0.6cm} \frac{\lambda_2 y_{22} (\lambda_2 y_{22}\mathbb{E}[{S}_{22}^2]+ \lambda_3 y_{32}\mathbb{E}[{S}_{32}^2]) (\lambda_2 y_{22} \mathbb{E}[S_{22}] + \lambda_3 y_{32} \mathbb{E}[S_{32}])} {2(2- (\lambda_2 y_{22}\mathbb{E}[S_{22}] + \lambda_3 y_{32}\mathbb{E}[S_{32}] ))^2 \big[ 1 + (\lambda_2 y_{22}\mathbb{E}[S_{22}] + \lambda_3 y_{32}\mathbb{E}[S_{32}] ) + \frac{(\lambda_2 y_{22} \mathbb{E}[S_{22}] + \lambda_3 y_{32} \mathbb{E}[S_{32}])^2} {2- (\lambda_2 y_{22}\mathbb{E}[S_{22}] + \lambda_3 y_{32}\mathbb{E}[S_{32}] )}\big]} + \notag \\ &\hspace{-0.6cm} \frac{\lambda_3 y_{32} (\lambda_2 y_{22}\mathbb{E}[{S}_{22}^2]+ \lambda_3 y_{32}\mathbb{E}[{S}_{32}^2]) (\lambda_2 y_{22} \mathbb{E}[S_{22}] + \lambda_3 y_{32} \mathbb{E}[S_{32}])} {2(2- (\lambda_2 y_{22}\mathbb{E}[S_{22}] + \lambda_3 y_{32}\mathbb{E}[S_{32}] ))^2 \big[ 1 + (\lambda_2 y_{22}\mathbb{E}[S_{22}] + \lambda_3 y_{32}\mathbb{E}[S_{32}] ) + \frac{(\lambda_2 y_{22} \mathbb{E}[S_{22}] + \lambda_3 y_{32} \mathbb{E}[S_{32}])^2} {2- (\lambda_2 y_{22}\mathbb{E}[S_{22}] + \lambda_3 y_{32}\mathbb{E}[S_{32}] )}\big]} \Bigg] \notag \end{align} \end{subequations} \vspace{0.1in} The objective function \eqref{OBJ2} of problem \textbf{R-BFP} can be written as: \begin{subequations} \begin{align} \min & \; \frac{1}{\lambda_1 + \lambda_2 + \lambda_3} \Bigg[ \frac{\lambda_1 d_{11}y_{11}}{v} + \frac{\lambda_2 d_{21}y_{21}}{v} + \frac{\lambda_2 d_{22}y_{22}}{v} + \frac{\lambda_3 d_{32}y_{32}}{v} + \notag \\ &\hspace{-0.6cm} \frac{\lambda_1 y_{11} \gamma_{1}^{1} (\lambda_1 y_{11}\mathbb{E}[{S}_{11}^2]+ \lambda_2 y_{21}\mathbb{E}[{S}_{21}^2])} {2(1- (\lambda_1 y_{11}\mathbb{E}[S_{11}] + \lambda_2 y_{21}\mathbb{E}[S_{21}] ))^2 \big[ 1 + \frac{\lambda_1 y_{11} \mathbb{E}[S_{11}] + \lambda_2 y_{21} \mathbb{E}[S_{21}]} {1- (\lambda_1 y_{11}\mathbb{E}[S_{11}] + \lambda_2 y_{21}\mathbb{E}[S_{21}] )}\big]} + \notag \\ &\hspace{-0.6cm} \frac{\lambda_1 y_{11} \gamma_{1}^{2} (\lambda_1 
y_{11}\mathbb{E}[{S}_{11}^2]+ \lambda_2 y_{21}\mathbb{E}[{S}_{21}^2]) (\lambda_1 y_{11} \mathbb{E}[S_{11}] + \lambda_2 y_{21} \mathbb{E}[S_{21}])} {2(2- (\lambda_1 y_{11}\mathbb{E}[S_{11}] + \lambda_2 y_{21}\mathbb{E}[S_{21}] ))^2 \big[ 1 + (\lambda_1 y_{11}\mathbb{E}[S_{11}] + \lambda_2 y_{21}\mathbb{E}[S_{21}] ) + \frac{(\lambda_1 y_{11} \mathbb{E}[S_{11}] + \lambda_2 y_{21} \mathbb{E}[S_{21}])^2} {2- (\lambda_1 y_{11}\mathbb{E}[S_{11}] + \lambda_2 y_{21}\mathbb{E}[S_{21}] )}\big]} + \notag \\ &\hspace{-0.6cm} \frac{\lambda_2 y_{21} \gamma_{1}^{1} (\lambda_1 y_{11}\mathbb{E}[{S}_{11}^2]+ \lambda_2 y_{21}\mathbb{E}[{S}_{21}^2])} {2(1- (\lambda_1 y_{11}\mathbb{E}[S_{11}] + \lambda_2 y_{21}\mathbb{E}[S_{21}] ))^2 \big[ 1 + \frac{(\lambda_1 y_{11} \mathbb{E}[S_{11}] + \lambda_2 y_{21} \mathbb{E}[S_{21}])} {1- (\lambda_1 y_{11}\mathbb{E}[S_{11}] + \lambda_2 y_{21}\mathbb{E}[S_{21}] )}\big]} + \notag \\ &\hspace{-0.6cm} \frac{\lambda_2 y_{21} \gamma_{1}^{2} (\lambda_1 y_{11}\mathbb{E}[{S}_{11}^2]+ \lambda_2 y_{21}\mathbb{E}[{S}_{21}^2]) (\lambda_1 y_{11} \mathbb{E}[S_{11}] + \lambda_2 y_{21} \mathbb{E}[S_{21}])} {2(2- (\lambda_1 y_{11}\mathbb{E}[S_{11}] + \lambda_2 y_{21}\mathbb{E}[S_{21}] ))^2 \big[ 1 + (\lambda_1 y_{11}\mathbb{E}[S_{11}] + \lambda_2 y_{21}\mathbb{E}[S_{21}] ) + \frac{(\lambda_1 y_{11} \mathbb{E}[S_{11}] + \lambda_2 y_{21} \mathbb{E}[S_{21}])^2} {2- (\lambda_1 y_{11}\mathbb{E}[S_{11}] + \lambda_2 y_{21}\mathbb{E}[S_{21}] )}\big]} + \notag \\ &\hspace{-0.6cm} \frac{\lambda_2 y_{22} \gamma_{2}^{1} (\lambda_2 y_{22}\mathbb{E}[{S}_{22}^2]+ \lambda_3 y_{32}\mathbb{E}[{S}_{32}^2]) } {2(1- (\lambda_2 y_{22}\mathbb{E}[S_{22}] + \lambda_3 y_{32}\mathbb{E}[S_{32}] ))^2 \big[ 1 + \frac{\lambda_2 y_{22} \mathbb{E}[S_{22}] + \lambda_3 y_{32} \mathbb{E}[S_{32}]} {1- (\lambda_2 y_{22}\mathbb{E}[S_{22}] + \lambda_3 y_{32}\mathbb{E}[S_{32}] )}\big]} + \notag \\ &\hspace{-0.6cm} \frac{\lambda_2 y_{22} \gamma_{2}^{2} (\lambda_2 y_{22}\mathbb{E}[{S}_{22}^2]+ \lambda_3 y_{32}\mathbb{E}[{S}_{32}^2]) (\lambda_2 y_{22} \mathbb{E}[S_{22}] + \lambda_3 y_{32} \mathbb{E}[S_{32}])} {2(2- (\lambda_2 y_{22}\mathbb{E}[S_{22}] + \lambda_3 y_{32}\mathbb{E}[S_{32}] ))^2 \big[ 1 + (\lambda_2 y_{22}\mathbb{E}[S_{22}] + \lambda_3 y_{32}\mathbb{E}[S_{32}] ) + \frac{(\lambda_2 y_{22} \mathbb{E}[S_{22}] + \lambda_3 y_{32} \mathbb{E}[S_{32}])^2} {2- (\lambda_2 y_{22}\mathbb{E}[S_{22}] + \lambda_3 y_{32}\mathbb{E}[S_{32}] )}\big]} + \notag \\ &\hspace{-0.6cm} \frac{\lambda_3 y_{32} \gamma_{2}^{1} (\lambda_2 y_{22}\mathbb{E}[{S}_{22}^2]+ \lambda_3 y_{32}\mathbb{E}[{S}_{32}^2]) } {2(1- (\lambda_2 y_{22}\mathbb{E}[S_{22}] + \lambda_3 y_{32}\mathbb{E}[S_{32}] ))^2 \big[ 1 + \frac{\lambda_2 y_{22} \mathbb{E}[S_{22}] + \lambda_3 y_{32} \mathbb{E}[S_{32}]} {1- (\lambda_2 y_{22}\mathbb{E}[S_{22}] + \lambda_3 y_{32}\mathbb{E}[S_{32}] )}\big]} + \notag \\ &\hspace{-0.6cm} \frac{\lambda_3 y_{32} \gamma_{2}^{2} (\lambda_2 y_{22}\mathbb{E}[{S}_{22}^2]+ \lambda_3 y_{32}\mathbb{E}[{S}_{32}^2]) (\lambda_2 y_{22} \mathbb{E}[S_{22}] + \lambda_3 y_{32} \mathbb{E}[S_{32}])} {2(2- (\lambda_2 y_{22}\mathbb{E}[S_{22}] + \lambda_3 y_{32}\mathbb{E}[S_{32}] ))^2 \big[ 1 + (\lambda_2 y_{22}\mathbb{E}[S_{22}] + \lambda_3 y_{32}\mathbb{E}[S_{32}] ) + \frac{(\lambda_2 y_{22} \mathbb{E}[S_{22}] + \lambda_3 y_{32} \mathbb{E}[S_{32}])^2} {2- (\lambda_2 y_{22}\mathbb{E}[S_{22}] + \lambda_3 y_{32}\mathbb{E}[S_{32}] )}\big]} \Bigg] \notag \end{align} \end{subequations} The objective function \eqref{obj_lin} of problem \textbf{R-MILP} can be written as: 
\begin{subequations} \begin{align} \min & \; \frac{1}{\lambda_1 + \lambda_2 + \lambda_3} \Bigg[ \frac{\lambda_1 d_{11}y_{11}}{v} + \frac{\lambda_2 d_{21}y_{21}}{v} + \frac{\lambda_2 d_{22}y_{22}}{v} + \frac{\lambda_3 d_{32}y_{32}}{v} + \notag \\ & \lambda_1(\frac{\omega_{11}^{1}}{2} + \frac{\omega_{11}^{2}}{2}) + \lambda_2( \frac{\omega_{21}^{1}}{2} + \frac{\omega_{21}^{2}}{2} + \frac{\omega_{22}^{1}}{2} + \frac{\omega_{22}^{2}}{2}) + \lambda_{3}( \frac{\omega_{32}^{1}}{2} + \frac{\omega_{32}^{2}}{2}) \notag \Bigg] \notag \end{align} \end{subequations} \newpage \section{Size of Formulations} \label{SIZE} \begin{table}[h!t] \centering \begin{adjustbox}{width=0.85\textwidth} \begin{tabular}{c|c|c} \hline & $\mathbf{B-IFP}$ & $\mathbf{R-BFP}$ \\ \hline Number of constraints & $\sum_{i \in I}|J_i| + |I|+2|J|+2$ & $ \sum_{i \in I}|J_i| + |I|+3|J|+2 $\\ \hline Number of binary variables & $\sum_{i \in I}|J_i| + |J|$ & $ \sum_{i \in I}|J_i| + (M + 1)|J| $ \\ \hline Number of general integer variables & $|J|$ & 0 \\ \hline \end{tabular}% \end{adjustbox} \caption{Dimensions of Problems $\mathbf{B-IFP}$ and $\mathbf{R-BFP}$} \label{T01} \end{table} \section{Data for QALY and Cost Analysis} \label{DATA-COST} \begin{table}[h] \centering \setlength\extrarowheight{-0.1pt} \begin{tabular}{P{4cm} |P{4cm}|P{3cm}|P{3cm}} \hline $T$ & $\alpha$ & $c$ & T-QALY \\ \hline 11.4 years & 0.85 & 3\% & 8.47 years \\ \hline \end{tabular} \caption{\label{qaly} QALY Analysis for each OHCA Survivor} \end{table} \begin{table}[h] \centering \setlength\extrarowheight{-0.01pt} \begin{tabular}{P{2cm} |P{3.5cm}|P{2.5cm}|P{2cm}|P{3.5cm}} \hline Price per & Annual & Number of & Discount & Total Discounted \\ Drone & Maintenance Cost & Drones & Rate & Cost in 4 Years. \\ \hline \$15,000 & \$3,000 & 11 & 3\% & \$287,664 \\ \hline \end{tabular} \caption{\label{dronecost} Cost for Drone Network} \end{table} \section{Computational Efficiency Study} \newpage \vspace{-0.15in} \begin{table}[H] \centering \resizebox{\columnwidth}{11.5cm}{% \begin{tabular}{P{2cm} | P{2cm} | P{2.5cm} | P{2.5cm} | P{2.5cm} } \hline \multirow{2}{*}{$|I|$} & \multirow{2}{*}{Instance} & \multicolumn{3}{c}{ Solution Time (sec.)} \\ \cline{3-5} & & {\tt REFO} & {\tt OA} & {\tt OA-B\&C} \\ \hline \multirow{6}{*}{50} & 1 & 7 & 3 & 3 \\ & 2 & 25 & 3 & 3 \\ & 3 & 17 & 2 & 2 \\ & 4 & 17 & 3 & 3 \\ & 5 & 6 & 3 & 2 \\ & Average & 14 & 3 & 3 \\ \hline \multirow{6}{*}{100} & 1 & 3600 & 12 & 11 \\ & 2 & 3600 & 13 & 12 \\ & 3 & 3600 & 11 & 10 \\ & 4 & 3600 & 12 & 12 \\ & 5 & 3600 & 12 & 12 \\ & Average & 3600 & 12 & 11 \\ \hline \multirow{6}{*}{150} & 1 & 3600 & 46 & 60 \\ & 2 & 3600 & 47 & 33 \\ & 3 & 3600 & 37 & 29 \\ & 4 & 3600 & 41 & 43 \\ & 5 & 3600 & 39 & 32 \\ & Average & 3600 & 42 & 39 \\ \hline \multirow{6}{*}{200} & 1 & 3600 & 170 & 151 \\ & 2 & 3600 & 109 & 105 \\ & 3 & 3600 & 161 & 122 \\ & 4 & 3600 & 138 & 170 \\ & 5 & 3600 & 221 & 169 \\ & Average & 3600 & 160 & 143 \\ \hline \multirow{6}{*}{250} & 1 & 3600 & 706 & 358 \\ & 2 & 3600 & 517 & 302 \\ & 3 & 3600 & 408 & 227 \\ & 4 & 3600 & 431 & 440 \\ & 5 & 3600 & 233 & 218 \\ & Average & 3600 & 459 & 309 \\ \hline \multirow{6}{*}{300} & 1 & 3600 & 831 & 252 \\ & 2 & 3600 & 608 & 374 \\ & 3 & 3600 & 935 & 706 \\ & 4 & 3600 & 656 & 813 \\ & 5 & 3600 & 288 & 446 \\ & Average & 3600 & 664 & 518 \\ \hline \multirow{6}{*}{350} & 1 & 3600 & 595 & 699 \\ & 2 & 3600 & 1160 & 540 \\ & 3 & 3600 & 3172 & 893 \\ & 4 & 3600 & 710 & 620 \\ & 5 & 3600 & 1872 & 337 \\ & Average & 3600 & 1502 & 618 \\ \hline \multirow{6}{*}{400} 
& 1 & 3600 & 1515 & 778 \\ & 2 & 3600 & 1442 & 560 \\ & 3 & 3600 & 2824 & 1249 \\ & 4 & 3600 & 596 & 1295 \\ & 5 & 3600 & 847 & 471 \\ & Average & 3600 & 1445 & 871 \\ \hline \multirow{6}{*}{450} & 1 & 3600 & 965 & 669 \\ & 2 & 3600 & 2155 & 934 \\ & 3 & 3600 & 3600 & 1815 \\ & 4 & 3600 & 810 & 833 \\ & 5 & 3600 & 1966 & 1033 \\ & Average & 3600 & 1899 & 1057 \\ \hline \multirow{6}{*}{500} & 1 & 3600 & 3600 & 2943 \\ & 2 & 3600 & 3600 & 1814 \\ & 3 & 3600 & 3600 & 3600 \\ & 4 & 3600 & 2865 & 1208 \\ & 5 & 3600 & 1728 & 1872 \\ & Average & 3600 & 3079 & 2287 \\ \hline \end{tabular}% } \vspace{-0.135in} \caption{\label{table_compute} Computational Efficiency for Base Case Scenario: $q = 10$ and $p = 11$} \end{table} \begin{table}[h] \centering \begin{tabular}{P{4cm} |P{2.5cm}|P{2.5cm}|P{2.5cm}} \hline Solution Time (seconds) & {\tt REFO} & {\tt OA} & {\tt OA-B\&C} \\ \hline 600 & 10\% & 54\% & 64\% \\ \hline 1200 & 10\% & 74\% & 84\% \\ \hline 1800 & 10\% & 80\% & 90\% \\ \hline 2400 & 10\% & 86\% & 96\% \\ \hline 3000 & 10\% & 90\% & 98\% \\ \hline 3600 & 10\% & 92\% & 98\% \\ \hline \end{tabular} \caption{\label{Num_solved} Percentage of Problem Instances Solved to Optimality} \end{table} \end{appendices} \end{document}
\section*{Introduction} The structure of real complex networks lies somewhere in-between order and randomness~\cite{Watts1998,Goldenfeld1999,Strogatz2005}, with the consequence that it cannot typically be fully characterized by a concise set of synthesizing observables. This \textit{irreducibility} explains why most theoretical approaches to model complex networks are inspired by statistical physics in that they consider ensembles of networks constrained by the values of observables (e.g., density of links, degree-degree correlations, clustering coefficient, degree/motif distribution) and otherwise organized randomly. These approaches have three notable advantages. First, they usually allow for analytical treatment. Second, they are \textit{intensive} in network size, meaning that their complexity scales with the support of the observables (i.e., sub-linearly with the numbers of nodes and links). Third, they provide null models, many of which have led to the identification of fundamental properties characterizing the structure of real complex networks~\cite{Newman2010,Barabasi2016}. Despite important leaps forward in recent years, these approaches still fail to capture enough information to systematically provide accurate quantitative predictions of most dynamical processes on real complex networks. The reason for this shortcoming is that the properties from which the ensembles are constructed are not constraining enough; the ensembles are ``too large'' such that the original real networks are exceptions, rather than typical instances, in the ensembles. As a result, the current state-of-the-art approach---the so-called \textit{message passing approach} (MPA)~\cite{Karrer2014}---requires the whole structure to be specified as an input (i.e., the adjacency matrix, or a transformation thereof). This method is interesting because it is mathematically principled, meaning that it yields \textit{exact} results on trees, and offers inexact, albeit generally good, predictions on networks containing loops (i.e., most real complex networks)~\cite{Radicchi2015}. However, by considering the whole structure of networks and thereby considering every link on an equal footing, the accuracy of the MPA comes at a significant computational and conceptual cost. First, its time and space complexities are \textit{extensive} in the number of links and therefore in the size of the network. Second, and most importantly, it does not provide any insight on the role played by any given structural property in the outcome of a dynamical process. With the MPA, getting good predictions comes at the expense of understanding what led to that outcome. In this paper, we bridge the gap between intensive and extensive approaches to the mathematical modeling of bond percolation on networks. We introduce a random network ensemble that relies solely on an \textit{intensive} description of the network structure and that, nevertheless, yields predictions that are comparable to the ones from the MPA for most of the 111 real complex networks considered in this study. This ensemble is based on the \textit{onion decomposition} (OD), a refined $k$-core decomposition~\cite{Hebert-Dufresne2016a}. Critically, the OD can be translated into local connection rules allowing an exact mathematical treatment using probability generating functions (pgf) in the limit of large network size.
This approach leads to exact predictions on trees, like the MPA, and highlights the critical contribution of the OD to an accurate effective mathematical description of real complex networks. \begin{figure}[t] \begin{center} \includegraphics[width=\linewidth]{Fig1} \caption{Illustration of the Onion Decomposition (OD) of a simple network. The number of the layer to which each node belongs is indicated, and the different $k$-cores are shown using increasingly darker background shades. The color of each stub according to the LCCM is also shown.} \label{fig:illutrationOD} \end{center} \end{figure} \section*{Results and discussions} Most analytical models of complex networks rely on some variation of the tree-like approximation, which assumes that complex networks have essentially no loops beyond some local structure of interest~\cite{Karrer2010,Allard2015}. While this approximation is inaccurate for the vast majority of real complex networks, it nevertheless allows an elegant mathematical treatment which typically works surprisingly well~\cite{Melnik2011}. In the case of the MPA, the tree-like approximation implies that much of the information given to the model is thrown away, as loops are included in the input information (i.e., the adjacency matrix) only to be mathematically ignored. We here propose to limit the information we give to our model by compressing complex networks following their tree-like decomposition. We therefore rely on a known peeling process, which iteratively removes leaves (i.e., the peripheral nodes of the network) to calculate the depth of every node in the \textit{effective tree}. Taking this information into account, we then focus on predicting the outcome of bond percolation on complex networks: a canonical problem of network science analogous to many applied problems such as disease propagation or network resilience~\cite{Latora2017}. Given a network structure, this simple stochastic process consists in occupying each original link with probability $p$. We aim to predict the size of the largest connected component composed of occupied links, $S$, as well as the percolation threshold, $p_\mathrm{c}$, above which that component corresponds to a macroscopic fraction of the network. The outcome of percolation depends on structural properties at all scales, thus making it a good benchmark for theoretical network models. \begin{figure*}[t] \begin{center} \subfloat[Message Passing Approach]{\centering \includegraphics[width=.27\textwidth, angle=0]{Fig2a}}\hspace{0.75cm} \subfloat[Layered and Correlated Configuration Model]{\centering \includegraphics[width=.27\textwidth, angle=0]{Fig2b}}\hspace{0.75cm} \subfloat[Configuration Model]{\centering \includegraphics[width=.27\textwidth, angle=0]{Fig2c}} \caption{Compression of a perfect tree with different network models. (a) The Message Passing Approach assigns a unique ID to every node and preserves the full structure of the tree. (b) The Layered and Correlated Configuration Model assigns an ID to every node corresponding to its degree and its position in the core-periphery structure of the network. Degrees are not shown to lighten the presentation. Stubs are colored according to the layer to which they point: red if they point to more central layers and black if they point to the previous layer. There are no green stubs in this example.
(c) The Configuration Model assigns an ID to every node according to its degree before randomly connecting them, thereby destroying the mesoscopic and macroscopic structure of the original network. The Correlated Configuration Model fixes the number of links between different degree classes, and would therefore prohibit components formed by two nodes of degree 1, but would otherwise be very similar to the configuration model shown here.} \label{fig:trees} \end{center} \end{figure*} \subsection*{Onion decomposition} \begin{figure*}[t] \centering \includegraphics[width = \textwidth]{Fig3} \caption{Relative size of the extensive components predicted by the LCCM, the CM, the CCM and the MPA for 4 representative real network datasets. (upper left) One-mode projection of a Norwegian boards of directors bipartite network~\cite{Seierstad2011}. (upper right) PGP web of trust~\cite{Boguna2004}. (lower left) A subset of the Internet at the autonomous systems level~\cite{Leskovec2005}. (lower right) Protein-protein interaction network of Homo sapiens~\cite{Song2005}. The insets show the absolute value of the difference between the MPA and the CM, the CCM and the LCCM as a function of $p$, as well as an enlargement of the region around the percolation threshold. The largest connected component was used for all datasets.} \label{fig:bifurcation} \end{figure*} The $k$-core decomposition is a well-known network metric that identifies a set of nested maximal sub-networks---the $k$-cores---in which each node shares at least $k$ links with the other nodes~\cite{Seidman1983,Dorogovtsev2006}. A node belonging to the $k$-core but not to the $(k+1)$-core is said to be of \textit{coreness} $k$ and to be part of the $k$-\textit{shell}. Nodes with a high coreness are generally seen as more central whereas nodes with a low coreness are seen as being part of the periphery of the network. The onion decomposition (OD) refines the $k$-core decomposition by assigning a layer $l$ to each node to further indicate its \textit{position} within its shell (e.g., in the middle of the layer or at its boundary). The OD therefore unveils the internal organization of each centrality shell and, unlike the original $k$-core decomposition, can be used to assess whether the structure of a core is more similar to a tree or to a lattice, among other things~\cite{Hebert-Dufresne2016a}. The OD of a given network structure is obtained via the following pruning process (see Fig.~\ref{fig:illutrationOD}). First, we remove every node with the smallest degree, $k_\mathrm{min}$; the coreness of these nodes is equal to $k_\mathrm{min}$ and they are part of the first layer ($l=1$). Removing these nodes may yield nodes whose \textit{remaining} degree is now equal to or smaller than $k_\mathrm{min}$; these nodes must also be removed and have a coreness of $k_\mathrm{min}$ as well, but they are part of the second layer ($l=2$). If removing the nodes of the second layer yields new nodes with a remaining degree equal to or lower than $k_\mathrm{min}$, they will be part of the third layer ($l=3$), will have a coreness of $k_\mathrm{min}$, and will also be removed. This process is repeated until no new nodes with a remaining degree equal to or lower than $k_\mathrm{min}$ are left. We then update the value of $k_\mathrm{min}$ to reflect the lowest remaining degree and repeat this whole process until every node has been assigned a coreness and a layer (the layer number keeps increasing such that each layer corresponds to a unique coreness).
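For concreteness, a simple (non-optimized) Python sketch of this pruning process follows; the variable and function names are ours, not those of the reference implementation.
\begin{verbatim}
def onion_decomposition(adj):
    """Compute the coreness and OD layer of every node.

    'adj' maps each node to the set of its neighbours.
    Returns two dicts: node -> coreness and node -> layer.
    """
    degree = {v: len(neigh) for v, neigh in adj.items()}
    remaining = set(adj)
    coreness, layer = {}, {}
    current_layer = 0
    while remaining:
        k_min = min(degree[v] for v in remaining)
        to_remove = [v for v in remaining if degree[v] <= k_min]
        while to_remove:  # peel successive layers of the k_min-shell
            current_layer += 1
            for v in to_remove:
                coreness[v] = k_min
                layer[v] = current_layer
                remaining.discard(v)
            for v in to_remove:  # update the remaining degrees
                for u in adj[v]:
                    if u in remaining:
                        degree[u] -= 1
            to_remove = [v for v in remaining if degree[v] <= k_min]
    return coreness, layer

# Example: the path a-b-c has coreness 1 everywhere; 'b' sits one layer deeper
adj = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
coreness, layer = onion_decomposition(adj)
assert coreness == {"a": 1, "b": 1, "c": 1}
assert layer == {"a": 1, "c": 1, "b": 2}
\end{verbatim}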
An efficient implementation of this procedure has a run-time complexity of $\mathcal{O}(L \log N)$, where $L$ and $N$ are respectively the number of links and nodes, which implies that the OD can be quickly obtained for virtually any real complex network~\cite{Hebert-Dufresne2016a}. Most importantly, nodes belonging to the same layer are \textit{topologically similar} with regard to the mesoscale centrality organization of the network. Because the layer of a node is only weakly related to its degree (i.e., the coreness of a node provides a lower bound to its degree), the pair layer-degree can therefore be used to indicate how well a node is connected, but also to indicate its ``topological position'' in the network. It therefore allows us to discriminate central nodes from peripheral ones which, based on their degree alone, would have otherwise been deemed identical. \subsection*{Effective random network ensemble: the LCCM} From the pruning process described above, it can be concluded that a node of coreness $c$ belonging to the $l$-th layer is in one of two scenarios. 1) It must have \textit{exactly} $c$ links to nodes in layers $l^\prime \geq l$ if layer $l$ is the first layer of the $c$-shell (i.e., nodes in layer $l-1$ belong to the $c^\prime$-shell with $c^\prime < c$). 2) Otherwise, if it is not in the first layer of its $c$-shell, it must have \textit{at least} $c+1$ links to nodes of layers $l^\prime \geq l-1$ and \textit{at most} $c$ links to nodes of layers $l^\prime \geq l$. The distinction between the two scenarios is that nodes not in the first layer of their shell require at least one link to the previous layer to \textit{anchor} them to their own layer. Also, the common feature of these scenarios is that a node of coreness $c$ needs at least $c$ links with nodes of equal or greater coreness. By rewiring the links of a given network using a degree-preserving procedure~\cite{Coolen2009,Fosdick2016} while ensuring that the aforementioned rules are respected at all times, it is possible to explore the ensemble of all possible single networks with the same fixed layer-degree sequence (i.e., the sequence of every pair $(l, k)$ in the original network). Exactly preserving the layers---and thus the coreness of every node---is of critical significance since previous rewiring approaches could only approximately preserve the $k$-core decomposition \cite{Hebert-Dufresne2013}. Additionally, the pair layer-degree assigned to each node can be used to enforce two-point correlations (i.e., the (layer-degree)--(layer-degree) correlations), thus reducing the size of the random network ensemble. This correlated ensemble can be explored via a double link swap Markov chain method preserving both the layer-degree sequence and the number of links within and between every node class (i.e., nodes with the same layer-degree). One way to implement this method is by first choosing one link at random (e.g., joining nodes A and B) and then choosing another link at random (e.g., joining nodes C and D) among the links that are attached to at least one node whose layer-degree pair is the same as one of the two nodes connected by the first link (e.g., A and C have the same layer-degree)~\cite{Colomer-de-Simon2013}. The two links are then swapped (e.g., A becomes connected to D and B to C) if no self-link or multi-link would be created. Doing so ensures that both the degree sequence and the two-point correlations are preserved at all times.
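A single step of this double link swap can be sketched as follows (an illustrative sketch under our own naming conventions; \texttt{edges} is a list of node pairs and \texttt{cls[v]} returns the layer-degree pair of node $v$ as a hashable tuple; a complete implementation would additionally verify the LCCM connection rules stated above after each move):
\begin{verbatim}
import random

# One Markov-chain move preserving the layer-degree sequence and the
# (layer-degree)--(layer-degree) link counts (illustrative sketch).
def swap_step(edges, cls):
    i = random.randrange(len(edges))
    a, b = edges[i]
    # the second link must share a node class with the first one
    cand = [j for j, (c, d) in enumerate(edges)
            if j != i and {cls[a], cls[b]} & {cls[c], cls[d]}]
    if not cand:
        return False
    j = random.choice(cand)
    c, d = edges[j]
    # orient the pairs so that a and c carry the matching class
    if cls[a] not in (cls[c], cls[d]):
        a, b = b, a
    if cls[a] != cls[c]:
        c, d = d, c
    links = set(map(frozenset, edges))
    # propose A-D and B-C; reject self-links and multi-links
    if a == d or b == c or frozenset((a, d)) in links \
            or frozenset((b, c)) in links:
        return False
    edges[i], edges[j] = (a, d), (b, c)
    return True
\end{verbatim}
Scanning all links for candidates makes one move $\mathcal{O}(L)$; in practice one would index the links by the classes of their endpoints.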
We call \textit{layered and correlated configuration model} (LCCM) the ensemble of maximally random networks with a given joint layer-degree sequence and (layer-degree)--(layer-degree) correlations. Since it preserves both the degree sequence and the degree-degree correlations, the LCCM is a subset of two commonly used random network ensembles defined by the \textit{configuration model} (CM)~\cite{Newman2002} and the \textit{correlated configuration model} (CCM)~\cite{Vazquez2003}; the latter being known for its fair accuracy in many applications~\cite{Melnik2011}. The LCCM, however, distinguishes itself from these models (and other variants) by enforcing a mesoscopic organization via the layers of the OD. This feature has the critical advantage of making the LCCM a mathematically principled approach in the sense that it exactly preserves the structure of a wide variety of trees (see Fig.~\ref{fig:trees}). As we show below, this mesoscopic information accounts for a significant portion of the gap between the predictions of the intensive configuration models and those of the extensive, current state-of-the-art MPA. \subsection*{Percolation on the LCCM} We adapt the approach of Ref.~\cite{Allard2015} to solve bond/site percolation on the LCCM in the limit of large network size. This approach requires specifying 1) the classes of nodes, which here correspond to the distinct layer-degree pairs denoted $(l,k)$, and 2) the colors of stubs (i.e., half-links), which in the LCCM are identified based on the layer $l^\prime$ of the neighboring node. More precisely, from the connection rules stated in the previous section, the LCCM requires keeping track of the number of links that each node in each layer $l$ shares with nodes i) in layers $l^\prime \geq l$, ii) in layer $l^\prime = l - 1$ and iii) in layers $l^\prime < l - 1$. We identify the corresponding half-links as red, black and green stubs, respectively. For instance, a link between nodes in layers 3 and 5 consists of a red stub stemming from the node in layer 3 paired with a green stub belonging to the node in layer 5. Note that a link between two given layers can only consist of a unique pair of stub colors, and the only allowed combinations are red-red, red-black and red-green. From the link correlation matrix $\mathbf{L}$, whose entries specify the fraction of links within and between every class of nodes, we can derive the function (see Methods) \begin{align} \label{eq:varphi_definition} \varphi_{lk}(\bm{x}) = \sum_{k^\mathrm{r} k^\mathrm{b} k^\mathrm{g}} P_{lk}(k^\mathrm{r},k^\mathrm{b},k^\mathrm{g}) [x_{lk}^\mathrm{r}]^{k^\mathrm{r}} [x_{lk}^\mathrm{b}]^{k^\mathrm{b}} [x_{lk}^\mathrm{g}]^{k^\mathrm{g}} \end{align} generating the probability $P_{lk}(k^\mathrm{r},k^\mathrm{b},k^\mathrm{g})$ that a node in class $(l,k)$ has $k^\mathrm{r}$ red stubs, $k^\mathrm{b}$ black stubs and $k^\mathrm{g}$ green stubs, given the connection rules of the LCCM.
From the same link correlation matrix, we can also derive the functions (see Methods) \begin{align} \label{eq:gamma} \gamma_{lk}^{\alpha}(\bm{x}) & = \sum_{l^\prime k^\prime} \sum_{\alpha^\prime\in\{\mathrm{r},\mathrm{b},\mathrm{g}\}} Q_{lk}^\alpha(l^\prime, k^\prime, \alpha^\prime) x_{l^\prime k^\prime}^{\alpha^\prime} \ , \end{align} for every $\alpha\in\{\mathrm{r},\mathrm{b},\mathrm{g}\}$, generating the probability $Q_{lk}^\alpha(l^\prime, k^\prime, \alpha^\prime)$ that a stub of color $\alpha$ stemming from a node of class $(l,k)$ is attached to a stub of color $\alpha^\prime$ belonging to a node in class $(l^\prime,k^\prime)$. Combining these two functions yields the pgf generating the distribution of the number of nodes of each class that are neighbors of a randomly chosen node of class $(l,k)$ \begin{align} \label{eq:g} g_{lk}(\bm{x}) = \varphi_{lk}(\bm{\gamma}(\bm{x})) \ . \end{align} Note that this pgf also includes the colors of the stubs through which these neighbors are connected to the node of class $(l,k)$. Similarly, the distribution of the number of such nodes that can be reached from a node of class $(l,k)$ that has itself been reached through one of its stubs of color $\alpha$ is generated by \begin{align} \label{eq:f} f_{lk}^\alpha(\bm{x}) = \frac{1}{\langle k^\alpha \rangle_{lk}} \left. \frac{\partial \varphi_{lk}(\bm{x^\prime})}{\partial x_{lk}^{\prime\alpha}} \right|_{\bm{x}^\prime=\bm{\gamma}(\bm{x})} \ , \end{align} where $\langle k^\alpha \rangle_{lk} = \frac{\partial \varphi_{lk}(\bm{1})}{\partial x_{lk}^\alpha}$ is the average number of stubs of color $\alpha$ that nodes of class $(l,k)$ have. \begin{figure*}[t] \centering \includegraphics[width = 0.48\linewidth]{Fig4a} \includegraphics[width = 0.48\linewidth]{Fig4b} \caption{Predictions of the intensive models (CM, CCM and LCCM) compared to the predictions of the extensive MPA for 111 real biological, technological, transportation and social complex networks downloaded from \texttt{icon.colorado.edu}. The whiskers cover the range between the 5th and the 95th percentiles, the black dots indicate the mean and the outlier data points are shown with a circle. Each box indicates the first, second and third quartiles. (left) Relative error of the percolation threshold defined as $|p_\mathrm{c}^\mathrm{model}-p_\mathrm{c}^\mathrm{MPA}|/p_\mathrm{c}^\mathrm{MPA}$. The calculation of $p_\mathrm{c}^\mathrm{LCCM}$ is detailed in Methods. (right) Area of the region bounded by the curves $S^\mathrm{model}$ and $S^\mathrm{MPA}$ computed as $\int_0^1 |S^\mathrm{model}-S^\mathrm{MPA}|dp$. Refs.~\cite{Newman2002,Vazquez2003,Karrer2014} provide the methods to compute $p_\mathrm{c}^\mathrm{model}$ and $S^\mathrm{model}$ for the CM, the CCM and the MPA.} \label{fig:thresholds} \end{figure*} To compute the size of the extensive component, we assume that the networks in the ensemble are locally tree-like, which occurs in the limit of large network size or when the detailed structure of matrix $\mathbf{L}$ only permits exact trees (i.e., when loops are structurally impossible). We define $a_{lk}^\alpha$ as the probability that attempting to reach a node in class $(l,k)$ by one of its stubs of color $\alpha$ does not eventually lead to the extensive component. Denoting by $p$ the probability that links are occupied, the probabilities $\{a_{lk}^\alpha\}$ are the solution of \begin{align} \label{eq:a_lk_self_consistency} a_{lk}^\alpha = 1 - p + p f_{lk}^\alpha(\bm{a}) \ , \end{align} for all $l$, $k$ and $\alpha$.
This last expression encodes the simple self-consistent argument that attempting to reach the node will not lead to the extensive component if 1) the link is unoccupied, which occurs with probability $1-p$, or if 2) the link is occupied, with probability $p$, but the attempts to reach the other neighbors of the node that has just been reached all fail, which occurs with probability $f_{lk}^\alpha(\bm{a})$. Note that this argument relies on the assumption that the states of these neighbors are independent, which is true for a tree-like structure. Having solved Eq.~\eqref{eq:a_lk_self_consistency}, the relative size of the extensive component, $S$, is then given by the probability that a randomly chosen node belongs to it \begin{align} S = 1 - \sum_{lk} P(l,k) g_{lk}(\bm{a}) \ , \end{align} where $P(l,k)$ is the fraction of nodes in class $(l,k)$, which can be extracted from the link correlation matrix $\mathbf{L}$ (see Methods). Notice that since we assume the networks of the ensemble to be tree-like, the relative size of the extensive component if nodes (instead of links) were occupied with probability $p$ is simply $S^\mathrm{site} = pS$, to account for the probability that the initial randomly chosen node is occupied. Note also that the percolation threshold, $p_\mathrm{c}$, is the value of $p$ at which $\bm{a}=\bm{1}$ becomes an unstable solution of Eq.~\eqref{eq:a_lk_self_consistency} (see Methods), which corresponds to the emergence of the extensive component. \subsection*{Effective tree-like structure} Because it is a subset of both the CM and the CCM, the cardinality of the ensemble defined by the LCCM should, in principle, be smaller than that of the ensembles considered by the former models. Consequently, if the mesoscale structural information provided by the layers $l$ is of any significance, we expect the predictions of the LCCM to be the closest to the ones obtained with the MPA. Figures~\ref{fig:bifurcation}~and~\ref{fig:thresholds} confirm this expectation. In fact, our results demonstrate that identifying nodes using the layer in the OD alongside their degree does not merely improve the predictions, it drastically changes their nature, making them qualitatively very similar to the ones of the MPA when not strikingly quantitatively identical. As shown in Fig.~\ref{fig:bifurcation}, the LCCM reproduces the general shape of the curves, has the same number of inflection points, and always predicts a connected network when all links are occupied (i.e., $S$ must be 1 at $p=1$ since we considered the largest connected component of every dataset). Interestingly, only the LCCM and the MPA are able to capture the mesoscopic core-periphery and/or modular structures that were numerically shown to lead to smeared (or double) phase transitions~\cite{Colomer-de-Simon2014} such as the one observed on the protein-protein interaction network. Perhaps most importantly, the LCCM approximates to high accuracy the percolation threshold predicted by the MPA, as seen in Fig.~\ref{fig:thresholds}(left), with a relative error of less than 1.5\% for 75\% of the 111 network datasets considered. Additionally, Fig.~\ref{fig:thresholds}(right) shows the expected error on the size of the extensive component averaged over the entire range of occupation probability $p$. When using the LCCM to compress the network structure, we find the error, relative to the MPA, to be of the order of $10^{-3}$ for 75\% of the datasets considered; an improvement of at least one order of magnitude over existing approaches.
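In practice, the system of Eqs.~\eqref{eq:a_lk_self_consistency} is conveniently solved by fixed-point iteration. A minimal sketch is given below (illustrative only; it assumes a callable \texttt{f} implementing the vector of all $f_{lk}^\alpha$ and a callable \texttt{g\_mean} implementing $\sum_{lk}P(l,k)g_{lk}$, both precomputed from $\mathbf{L}$):
\begin{verbatim}
import numpy as np

# Fixed-point solution of a = 1 - p + p f(a), followed by
# S = 1 - sum_{lk} P(l,k) g_lk(a)  (illustrative sketch; `f` and
# `g_mean` act on the flat vector of all probabilities a_{lk}^alpha).
def extensive_component_size(p, f, g_mean, dim,
                             tol=1e-12, max_iter=100000):
    a = np.zeros(dim)
    for _ in range(max_iter):
        a_new = 1.0 - p + p * f(a)
        if np.max(np.abs(a_new - a)) < tol:
            break
        a = a_new
    return 1.0 - g_mean(a)
\end{verbatim}
Starting from $\bm{a}=\bm{0}$, the iteration increases monotonically and converges to the smallest, physically relevant fixed point.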
Altogether, these results indicate that categorizing nodes with the classes $(l,k)$ captures critical features of the local and mesoscopic tree-like organization of many real complex networks, thus offering an intensive effective description of their structure. \section*{Conclusion} We introduced a random network ensemble that relies solely on an \textit{intensive} description of the network structure and that, nevertheless, yields predictions for percolation that are either essentially quantitatively identical---or at least strikingly qualitatively similar---to the ones obtained with the state-of-the-art MPA. This ensemble assigns two structural features to each node---its degree $k$ (local) and its position $l$ in the Onion Decomposition of the network (mesoscale)---and creates links according to simple connection rules that exactly preserve these two features. This ensemble lends itself to exact analytical calculations using probability generating functions in the limit of large network size, and is mathematically principled, meaning that it leads to exact predictions on trees, like the MPA, but unlike other intensive approaches such as the configuration model and its variants. The accuracy of the predictions of the LCCM shows that the OD easily captures important features of the mesoscale structural organization of many real complex networks, and that this information should be leveraged by future generations of models of complex networks. For instance, Eq.~\eqref{eq:varphi_definition}, which provides the distribution of different link types (e.g., the number of links leading to lower or higher layers) for any node, could be straightforwardly included in equations for other problems such as the Susceptible-Infectious-Susceptible dynamics. It would thus be possible to track the fraction of infected nodes with a given pair $(l,k)$ whose time evolution would be driven by the transmission events along the connections prescribed by Eqs.~\eqref{eq:g}--\eqref{eq:f}. In a purely numerical context, and using a simpler, less accurate version of the LCCM, this approach was already shown to lead to predictions of SIS dynamics that are an order of magnitude more precise than those of other network models \cite{Hebert-Dufresne2016a}. More generally, the pair $(l,k)$ constitutes a straightforward and computationally inexpensive observable to characterize and rank nodes based on their local connectivity (through $k$) and global centrality (through $l$). Finally, the accuracy of the LCCM strongly suggests that the long-range correlations induced by the OD effectively emulate the correlations considered in the MPA, and, consequently, that a large part of the structural properties behind the accuracy of the MPA now lends itself to intensive analytical treatment. This opens the way for future work to focus on bringing the analytical modeling of complex networks beyond the ubiquitous tree-like approximation. Doing so should provide a unified framework for random graphs, regular structures like lattices, and the complex networks that lie in-between.
\section{Introduction} \label{sec:intro} Polar codes are a class of error-correcting codes proposed in \cite{arikan}. With infinite code length, under successive-cancellation decoding (SC), they can provably achieve capacity over binary memoryless channels. However, their error-correction performance degrades at finite code lengths, while SC yields long decoding latency, due to its inherent serial nature. SC-list (SCL) decoding was proposed in \cite{tal_list}; it relies on a number of parallel SC decoders, and can substantially improve the error-correction performance of SC, especially when the polar code is concatenated with a cyclic redundancy check (CRC). This comes at the cost of additional complexity and latency. SC-flip decoding \cite{afisiadis} limits the increase in complexity by running a series of SC attempts, sacrificing decoding speed. Improved SC-based algorithms have led polar codes to be included in the $5^{th}$ generation wireless systems standard (5G) as one of the coding schemes for the enhanced mobile broadband communication scenario. Different works over the years have attempted to reduce the decoding latency of SC-based algorithms at a limited complexity cost. The techniques described in \cite{alamdar,sarkis,hanif} rely on the recursive construction of polar codes to identify particular subcodes in their structure, and propose fast decoders for these subcodes that can be used in SC decoding. In \cite{hashemi_SSCL, hashemi_SSCL_TCASI,hashemi_FSSCL,hashemi_FSSCL_TSP,giardFlip}, the decoding of some of these subcodes has been extended to SCL and applied to SC-flip, reducing their latency and making them more practical to implement. In this work, we introduce a generalized approach to fast decoding of polar codes to further reduce SC-based decoding latency. We propose three multi-node polar code subcodes whose identification patterns also encompass most of the existing subcodes \cite{alamdar,sarkis,hanif}. Moreover, we provide their extended decoding rules for both SC-based and SCL-based fast decoding: along with the new subcodes, the codes identified in \cite{hanif} can thus be applied to SCL decoding. The impact of the proposed approach on SC and SCL decoding latency is evaluated, showing substantial speedup with respect to existing fast decoding algorithms \cite{sarkis, hashemi_SSCL_TCASI}. The error-correction performance loss brought by one of the new subcodes is then studied in terms of code, subcode and decoding algorithm parameters. The remainder of this work is organized as follows. Section \ref{sec:prel} introduces polar codes, their decoding algorithms and existing fast decoding approaches. The proposed generalized fast decoding is detailed in Section \ref{sec:GFD}, while its speed and error-correction performance evaluation is carried out in Section \ref{sec:perf}. Conclusions are finally drawn in Section \ref{sec:conc}, along with future research directions. \section{Preliminaries} \label{sec:prel} \subsection{Polar Codes} \label{sec:prel:polar} Polar codes are linear block codes of length $N=2^n$ and rate $R = K/N$. They can be constructed using the transformation matrix $\mathbf{G}^{\otimes n}$ as \begin{equation} \mathbf{x} = \mathbf{u} \mathbf{G}^{\otimes n} \text{,} \label{eq:polarGen} \end{equation} that encodes vector $\mathbf{u} = \{u_0,u_1,\ldots,u_{N-1}\}$ into vector $\mathbf{x} = \{x_0,x_1,\ldots,x_{N-1}\}$.
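As an aside, the encoding rule of Eq.~(\ref{eq:polarGen}) can be sketched in a few lines of Python (an illustrative sketch only; all arithmetic is over GF(2), the input length must be a power of two, and the polarizing kernel $\mathbf{G}$ is the one defined next in the text):
\begin{verbatim}
import numpy as np

# Illustrative sketch of polar encoding x = u G^{kron n} over GF(2).
def polar_encode(u):
    G = np.array([[1, 0], [1, 1]], dtype=int)  # polarizing kernel
    n = int(np.log2(len(u)))
    G_n = np.array([[1]], dtype=int)
    for _ in range(n):
        G_n = np.kron(G_n, G)  # builds the n-th Kronecker power of G
    return (u @ G_n) % 2
\end{verbatim}
For instance, \texttt{polar\_encode(np.array([0,0,0,1,0,1,1,1]))} returns the corresponding length-8 codeword.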
The matrix $\mathbf{G}^{\otimes n}$ is obtained as the $n$-th Kronecker product of the polarizing kernel $\mathbf{G} = \left[\begin{smallmatrix} 1&0\\1&1 \end{smallmatrix}\right]$. The polar encoding process identifies $K$ reliable bit-channels out of the $N$ available ones, and assigns the $K$ information bits to them. The remaining $N-K$ bit-channels in $\mathbf{u}$ are set to a known value, and represent the frozen set $\mathcal{F}$. To easily distinguish between frozen and information bits, a flag $s_i$ is assigned to each bit-channel, where \begin{equation} s_i = \begin{cases} 0 &\mbox{if } u_i \in \mathcal{F} \text{,} \\ 1 & \mbox{otherwise }\text{.} \end{cases} \end{equation} \subsection{SC-Based Decoding} \label{sec:prel:SCDec} The SC decoding algorithm proposed in \cite{arikan} can be interpreted as a binary tree search, as portrayed in Fig.~\ref{fig:SCDec}. Every node at stage $t$ receives soft information in the form of logarithmic likelihood ratios (LLRs) from its parent node ($2^{t+1}$-element $\bm{\alpha}$ vector), and returns the hard decision vector $\bm{\beta}$. The tree is explored depth-first, with priority being given to the left branches. \begin{figure} \centering \input{figures/sc-dec.tex} \caption{SC-based decoding for a polar code with $N=8$, $R=1/2$.} \label{fig:SCDec} \end{figure} The LLR vector $\bm{\alpha}^\ell$ sent to the left child node is computed through the $f_t$ function as \begin{equation} \label{eq:Ffunc1} \alpha^{\ell}_i =f_t(\alpha)= 2\arctanh \left(\tanh\left(\frac{\alpha_i}{2}\right)\tanh\left(\frac{\alpha_{i+2^{t}}}{2}\right)\right) \text{,} \end{equation} where (\ref{eq:Ffunc1}) identifies the $f_t$ function. The LLR vector $\bm{\alpha}^\text{r}$ directed to the right child node is instead calculated through the $g_t$ function: \begin{equation} \label{eq:Gfunc1} \alpha^{\text{r}}_i = g_t(\alpha)= \alpha_{i+2^{t}} + \left(1-2 \beta^\ell_i \right) \alpha_i \text{.} \end{equation} Partial sums $\bm{\beta}$ are computed as \begin{equation} \beta_i = \begin{cases} \beta^\ell_i\oplus \beta^\text{r}_i \text{,} & \text{if} \quad i < 2^{t} \text{,}\\ \beta^\text{r}_{i-2^{t}} \text{,} & \text{otherwise} \text{,} \end{cases} \label{eq:beta} \end{equation} where $\oplus$ is the bitwise XOR operation. At leaf nodes, $\beta_i$ is set to the estimated bit $\hat{u}_i$: \begin{equation} \hat{u}_i = \begin{cases} 0 \text{,} & \mbox{if } s_i=0 \mbox{ or } \alpha_{i}\geq 0\\ 1 \text{,} & \mbox{if } s_i=1 \mbox{ and } \alpha_{i}< 0 \end{cases} \label{eq:leafSC} \end{equation} The SC decoding process, and in particular the exploration of the tree according to its schedule, can be viewed as a sequence of $f_t$ and $g_t$ operations. For example, the exploration of the tree represented in Fig. \ref{fig:SCDec} can be expressed as $\{f_2,f_1,f_0,g_0,g_1,f_0,g_0,g_2,f_1,f_0,g_0,g_1,f_0,g_0\}$. SCL decoding \cite{tal_list} maintains $L$ concurrent decoding candidates. At leaf nodes, $\hat{u}_i$ is estimated as both 0 and 1 when not a frozen bit, doubling the number of candidates. 
To limit the exponential increase in complexity, a path metric (PM) is assigned to each candidate \cite{balatsoukas_SCL_HW}: \begin{align} \PM_{{i}} = \begin{cases} \PM_{{i-1}} + |\alpha_{i}| \text{,} & \text{if } \hat{u}_{i} \neq \text{HD}(\alpha_{i})\text{,}\\ \PM_{{i-1}} \text{,} & \text{otherwise,} \end{cases} \label{eq7} \end{align} where $\PM$ is initialized as 0 and \begin{align} \text{HD}(\alpha_{i}) = \begin{cases} 0 & \text{if } \alpha_{i}\ge 0\text{,}\\ 1 & \text{otherwise.} \end{cases} \end{align} The $L$ candidates with the lowest PM are allowed to survive. SCL error-correction performance can be further improved by concatenating the polar code with a CRC, which helps in the selection of the final candidate. \subsection{Fast SC-Based Decoding} \label{sec:prel:FPDec} To increase the speed of SC-based decoding, particular sequences of frozen and information bits have been identified in \cite{alamdar,sarkis,hanif}, and efficient fast decoders have been proposed for them. The decoding of the subcodes identified by these patterns, called special nodes, avoids the complete exploration of the SC tree, allowing a substantial speed increase. Fast simplified SC decoding (Fast-SSC, \cite{sarkis}) considers four special nodes, whose structures are summarized as follows: \begin{itemize} \item \emph{Rate-0 Node}: all bits are frozen, $\mathbf{s} = \{0,0,\ldots,0\}$. \item \emph{Rate-1 Node}: all bits are information bits, $\mathbf{s} = \{1,1,\ldots,1\}$. \item \emph{Repetition (Rep) Node}: all bits are frozen except the last one, $\mathbf{s} = \{0,\ldots,0,0,1\}$. \item \emph{Single parity-check (SPC) Node}: all bits are information bits except the first, $\mathbf{s} = \{0,1,1,\ldots,1\}$. \end{itemize} Additional special nodes and their efficient SC decoders have been identified in \cite{hanif}: \begin{itemize} \item \emph{Type-I Node}: all bits are frozen except the last two, $\mathbf{s} = \{0,\ldots,0,1,1\}$. \item \emph{Type-II Node}: all bits are frozen except the last three, $\mathbf{s} = \{0,\ldots,0,1,1,1\}$. \item \emph{Type-III Node}: all bits are information bits except the first two, $\mathbf{s} = \{0,0,1,\ldots,1\}$. \item \emph{Type-IV Node}: all bits are information bits except the first three, $\mathbf{s} = \{0,0,0,1,\ldots,1\}$. \item \emph{Type-V Node}: all bits are frozen except the last three and the fifth-to-last, $\mathbf{s} = \{0,\ldots,0,1,0,1,1,1\}$. \end{itemize} \section{Generalized Fast Decoding} \label{sec:GFD} The nodes described in Section \ref{sec:prel:FPDec} identify patterns in the frozen and information bits. While some node mergers have been identified in \cite{sarkis} along with Rate-0, Rate-1, Rep and SPC nodes, the literature and decoding methods have mostly focused on single node types. However, the identification of multi-node patterns leads to a generalized approach to fast decoding of polar codes. In this section, we describe three multi-node frozen and information bit patterns that can be exploited to increase the decoding speed at low complexity. They encompass Rep and SPC nodes, together with Type-I to V nodes, and extend their properties to a wider set of patterns. Moreover, general identification and decoding rules for both SC and SCL fast decoding are provided. \subsection{Generalized Repetition Node} \label{subsec:GREP} Repetition nodes are so named due to the pattern identified in the calculation of $\bm{\beta}$ (\ref{eq:beta}).
In fact, vector $\bm{\beta}$ of a Rep node can be computed by performing a hard decision on the sum of the LLRs present in vector $\bm{\alpha}$ and replicating this result. However, this repetition pattern can be applied to a more general class of nodes. We call generalized Rep node (G-Rep) any node at stage $t$ for which all its descendants are Rate-0 nodes, except the rightmost one at a certain stage $p<t$, which is a generic node of rate C (Rate-C). The structure of a G-Rep node is depicted in Fig.~\ref{fig:Grep}. A G-Rep node can be decoded under SC using only the partial sum vector $\bm{\beta}^\text{r}$ of its Rate-C descendant and repeating it $2^{t-p}$ times. \begin{lemma} The partial sum vector of a G-Rep node is given by $\bm{\beta} = \{ \bm{\beta}^\text{r},\dots,\bm{\beta}^\text{r} \}$, where $\bm{\beta}^\text{r}$ is calculated on the basis of the LLR vector $\bm{\alpha}^\text{r}$ defined as $\alpha^\text{r}_i = \sum_{j = 0}^{2^{t-p}-1} \alpha_{i + j 2^{p}}$. \begin{proof} If we call $P$ the list of operations needed to decode the Rate-C node, the list of operations needed to decode the G-Rep node is given by $\{ g_t,g_{t-1},\dots,g_{p+1},P \}$. By induction, the LLR vector $\bm{\alpha}^\text{r}$ of the Rate-C node is calculated as $\alpha^\text{r}_i = \sum_{j = 0}^{2^{t-p}-1} \alpha_{i + j 2^{p}}$, since all the left nodes are Rate-0 and hence $\beta^\ell_i=0$ in \eqref{eq:Gfunc1}. This vector is then used to decode the node by performing the operations in $P$, which return the vector $\bm{\beta}^\text{r}$. The partial sum vector $\bm{\beta}$ is then calculated recursively from stage $p$ upwards using \eqref{eq:beta} with $\beta^\ell_i=0$ by construction, obtaining $\bm{\beta} = \{ \bm{\beta}^\text{r},\dots,\bm{\beta}^\text{r} \}$. \end{proof} \end{lemma} \begin{figure} \centering \input{figures/G-rep.tex} \caption{Generalized Repetition Node} \label{fig:Grep} \end{figure} According to the lemma, the output of a G-Rep node can be calculated as follows. First, the LLR vector $\bm{\alpha}^\text{r}$ of its Rate-C node is calculated as \begin{equation} \alpha^\text{r}_i = \sum_{j = 0}^{2^{t-p}-1} \alpha_{i + j 2^{p}}. \end{equation} Then, the Rate-C node is decoded under SC, obtaining the partial sum vector $\bm{\beta}^\text{r}$. In the case that the Rate-C node is a special node, its partial sums can be computed through fast decoding techniques. Finally, the partial sum vector $\bm{\beta}$ of the G-Rep node is given by \begin{equation} \bm{\beta} = \underbrace{ \{ \bm{\beta}^\text{r},\dots,\bm{\beta}^\text{r} \} }_{2^{t-p}}. \end{equation} Several special nodes identified in the past are particular cases of the G-Rep node class: aside from the straightforward Rep node, Type-I, Type-II and Type-V nodes also fit into this category, as long as all information bits are found in the rightmost child node. \subsection{Generalized Parity-Check Node} \label{subsec:GPC} Frozen bits drive the decoding process thanks to their predetermined value: since they are assigned to low-reliability channels, bit estimations that are likely to be incorrect can be avoided and LLRs representing wrong values can be positively influenced. From an algebraic point of view, each frozen bit adds a constraint on the possible values of the codeword. Given the recursive nature of polar codes, the same concept applies to the decoding of constituent codes, or nodes in the SC tree.
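Before detailing these parity-check constraints, we note that the G-Rep decoding rule of Section~\ref{subsec:GREP} admits a compact implementation sketch (illustrative only; \texttt{alpha} is a numpy array holding the $2^t$ LLRs of the node, and \texttt{decode\_rate\_c} stands for any SC-style decoder of the Rate-C child):
\begin{verbatim}
import numpy as np

# Illustrative sketch of G-Rep decoding: sum the LLRs over the 2^(t-p)
# repetitions, decode the Rate-C child, then replicate its partial sums.
def decode_g_rep(alpha, t, p, decode_rate_c):
    reps = 2 ** (t - p)
    alpha_r = alpha.reshape(reps, 2 ** p).sum(axis=0)
    beta_r = decode_rate_c(alpha_r)   # partial sums of the Rate-C node
    return np.tile(beta_r, reps)      # beta = {beta_r, ..., beta_r}
\end{verbatim}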
The constraint imposed by the frozen bit in SPC nodes is that of even parity in the codeword \cite{sarkis}: it can be exploited through Wagner decoding, i.e., computing the parity of all the node bits and, if it is not fulfilled, flipping the bit associated with the least reliable LLR. The two frozen bits in the leftmost positions of a Type-III node impose even parity constraints on the codeword, namely even and odd bit indices are treated as separate SPC nodes. Type-IV nodes rely on the same concept: the three frozen bits impose even parity constraints on bit indices modulo 4. However, since the fourth bit is an information bit, a suboptimal artifice is developed so that a parity constraint is imposed on the remaining bits, and four separate SPC nodes can be identified and decoded with Wagner decoding. \begin{figure} \centering \input{figures/G-PC.tex} \caption{Generalized Parity-Check Node} \label{fig:GPC} \end{figure} This even parity constraint can be generalized to a wider category of frozen bit patterns. We call generalized parity-check node (G-PC) a node at stage $t$ having all Rate-1 descendants, except the leftmost one at a certain stage $p<t$ that is Rate-0. This structure, depicted in Fig.~\ref{fig:GPC}, imposes $N_p$ parallel single parity checks as follows. \begin{lemma} A G-PC node at stage $t$ with the Rate-0 node at stage $p$ contains $N_p$ parallel SPC constraints such that $\bigoplus_{i=0}^{2^{t-p}-1} \beta_{i N_p+j} = 0$ for all $j = 0,\dots,N_p-1$. \begin{proof} A G-PC node identifies a code of rate $R = 1 - N_p/N_t$, for which $N_p$ bits out of $N_t$ are redundancy. Thus, if there exist $N_p$ independent parity-check constraints, they are the only constraints that should be used in the decoding, since all other constraints are linear combinations of these independent constraints. The generator matrix of the code identified by the G-PC node is $M = (G^{\otimes t-p})_{0} \otimes G^{\otimes p}$, where $(G^{\otimes n})_{0}$ represents the matrix obtained by the $n$-th Kronecker power of $G$ excluding the first row. All the rows of $(G^{\otimes t-p})_{0}$ have even weight by construction. This imposes an even parity check on the codewords of the code defined by $M$. More specifically, given $j\in\{0, \dots , N_p-1\}$, the XOR of the bits with index $i\modulo N_p=j$, $0\le i < N_t$, is equal to zero. These parity-check constraints are clearly independent, since they have no bits in common. \end{proof} \end{lemma} The lemma suggests decoding a G-PC node with $N_p$ parallel Wagner decoders. The LLR vector $\bm{\alpha}$ of the G-PC node is thus divided into $N_p$ parts $\bm{\alpha^0},\dots,\bm{\alpha^{N_p-1}}$ such that \begin{equation} \alpha_i^j = \alpha_{iN_p + j}. \end{equation} Every LLR sub-vector $\bm{\alpha^j}$ is treated as an SPC node and decoded independently through a Wagner decoder. For every sub-vector, the index of the least reliable position is identified as $p^j = \argmin_{i}|\alpha_i^j|$, and the partial sum vector $\bm{\beta^j}$ is calculated through hard decisions as \begin{equation} \beta_i^j = \begin{cases} \text{HD}(\alpha_i^j) \oplus \mbox{Parity} & \mbox{if } i = p^j \\ \text{HD}(\alpha_i^j) & \mbox{otherwise}\mbox{,} \end{cases} \end{equation} where $\mbox{Parity}=\bigoplus_i \text{HD}(\alpha_i^j)$. Finally, each element in the partial sum vector $\bm{\beta}$ of the G-PC node is calculated as \begin{equation} \beta_i = \beta^{i \modulo N_p}_{\lfloor i/N_p \rfloor}~.
\end{equation} It is worth noticing that the proposed decoding algorithm for G-PC nodes makes it possible to reduce the decoding latency not only through SC tree pruning, but also by allowing decoder parallelization by a factor $N_p$, since the Wagner decoders are independent. The even parity constraints imposed by the frozen bits in G-PC nodes are the only independent ones present in the constituent code: thus, the proposed fast decoding technique is optimal. However, this technique can also be applied if other frozen bits are present, i.e., if some of the Rate-1 nodes are in fact Rate-C nodes with $\mbox{C} \simeq 1$. In this case, the even parity constraints are still valid, but other constraints brought by the inner frozen bits should be taken into account. We propose to ignore those additional constraints and apply Wagner decoding as if the node were a G-PC. We then call this node a relaxed G-PC (RG-PC), and identify the frozen bits in the Rate-C node as additional frozen (AF) bits. The proposed decoding algorithm is suboptimal, and introduces a tradeoff between error-correction performance and decoding latency. \subsection{Path Metric for List Decoding} \label{subsec:metric} Fast list decoding of polar codes poses the question of how to compute the PM without descending the tree. It has been proven that fast decoding of Rate-0, Rate-1, Rep and SPC nodes can be performed in list decoding as well, with the path metric computed exactly from the LLRs input to the node \cite{sarkis_list,hashemi_SSCL,hashemi_FSSCL,hashemi_SSCL_TCASI,hashemi_FSSCL_TSP}. In the same way, the proposed generalized fast decoding allows the path metrics to be computed exactly at the top of the tree. Path metrics for G-Rep nodes are computed in two stages: the first is relative to the Rate-C node, and is computed according to the decoding criterion of its particular frozen bit pattern. Once the Rate-C node has been decoded, the G-Rep node path metric calculation follows the same criterion as for standard Rep nodes \cite{hashemi_SSCL_TCASI}: \begin{equation*} \PM_{\text{G-Rep}}= \PM_{\text{Rate-C}} + \frac{1}{2} \sum_{i = 0}^{N_t/N_p-2}\Big(\sum_{j = 0}^{N_p-1} \sgn\left(\alpha_i^j\right)\alpha_i^j - \eta_{j}\alpha_i^j\Big) \text{,} \end{equation*} where $\bm{\eta} = 1-2\bm{\beta}$ is output by the Rate-C node, and $\bm{\alpha}$ is received from the parent of the G-Rep node. Since G-PC nodes are compositions of parallel SPC nodes, the path metric at the top of the tree is computed in the same way \cite{hashemi_SSCL_TCASI}, but considering $N_p$ independent SPC nodes: \begin{equation*} \PM_{\text{G-PC}}= \sum_{j = 0}^{N_p-1} \PM_{\text{SPC}_j} \text{.} \end{equation*} The same calculation applies to RG-PC nodes: while it is suboptimal, since it ignores the constraints imposed by the AF bits, it is still exact with respect to the same metric computed by descending the tree and ignoring said additional constraints. \section{Performance Analysis} \label{sec:perf} In this section, we analyze the impact of generalized fast decoding on both SC and SCL in terms of speed and error-correction performance. \subsection{Speed} \begin{table*}[t!]
\vspace{10pt} \begin{center} \caption{Generalized fast decoding time steps.} \label{tab:GFDspeed} \setlength{\extrarowheight}{1.7pt} \begin{tabular}{c|c||c|c|c|ccc||c|c|c|ccc} \multirow{3}{*}{$N$} & \multirow{3}{*}{$R$} & \multicolumn{6}{c||}{SC} & \multicolumn{6}{c}{SCL}\\ \cline{3-14} & & \multirow{2}{*}{Fast-SSC} & \multirow{2}{*}{+ G-Rep} & \multirow{2}{*}{+ G-PC} & \multicolumn{3}{c||}{+ RG-PC} & \multirow{2}{*}{SSCL-SPC} & \multirow{2}{*}{+ G-Rep} & \multirow{2}{*}{+ G-PC} & \multicolumn{3}{c}{+ RG-PC}\\ & & & & & 1 AF & 2 AF & 3 AF & & & & 1 AF & 2 AF & 3 AF \\ \hline \hline \multirow{5}{*}{128} & $1/8$ & 31 & 28 & 28 & 26 & 22 & 17 & 51 & 47 & 47 & 42 & 34 & 33 \\ & $1/4$ & 61 & 60 & 54 & 54 & 42 & 42 & 98 & 96 & 96 & 78 & 66 & 66\\ & $1/2$ & 82 & 80 & 80 & 80 & 49 & 39 & 176 & 172 & 172 & 172 & 113 & 103\\ & $2/3$ & 52 & 51 & 51 & 50 & 40 & 35 & 200 & 198 & 198 & 192 & 170 & 137 \\ & $5/6$ & 55 & 54 & 42 & 34 & 25 & 20 & 247 & 245 & 175 & 142 & 129 & 124 \\ \hline \multirow{5}{*}{256} & $1/8$ & 116 & 114 & 114 & 104 & 96 & 78 & 127 & 125 & 124 & 114 & 106 & 96 \\ & $1/4$ & 142 & 140 & 140 & 140 & 120 & 115 & 187 & 184 & 184 & 184 & 156 & 151\\ & $1/2$ & 113 & 111 & 108 & 107 & 85 & 75 & 323 & 317 & 312 & 307 & 269 & 235 \\ & $2/3$ & 115 & 114 & 105 & 100 & 75 & 57 & 408 & 402 & 370 & 355 & 318 & 285\\ & $5/6$ & 79 & 75 & 72 & 72 & 64 & 45 & 476 & 468 & 455 & 455 & 440 & 358 \\ \hline \multirow{5}{*}{512} & $1/8$ & 116 & 109 & 109 & 107 & 92 & 82 & 194 & 188 & 182 & 176 & 156 & 134\\ & $1/4$ & 232 & 220 & 211 & 211 & 155 & 140 & 394 & 382 & 342 & 342 & 342 & 252 \\ & $1/2$ & 238 & 231 & 231 & 224 & 163 & 131 & 650 & 641 & 641 & 624 & 515 & 477\\ & $2/3$ & 202 & 193 & 190 & 185 & 151 & 121 & 805 & 797 & 785 & 771 & 707 & 617\\ & $5/6$ & 136 & 125 & 116 & 113 & 86 & 78 & 940 & 925 & 891 & 881 & 831 & 793 \\ \hline \multirow{5}{*}{1024} & $1/8$ & 250 & 240 & 240 & 238 & 185 & 160 & 398 & 386 & 386 & 380 & 309 & 276\\ & $1/4$ & 353 & 344 & 344 & 344 & 269 & 224 & 712 & 702 & 697 & 697 & 589 & 496 \\ & $1/2$ & 420 & 405 & 405 & 401 & 311 & 256 & 1274 & 1251 & 1251 & 1241 & 1091 & 936 \\ & $2/3$ & 344 & 335 & 334 & 334 & 254 & 211 & 1444 & 1432 & 1422 & 1397 & 1280 & 1174\\ & $5/6$ & 232 & 224 & 215 & 202 & 173 & 141 & 1477 & 1470 & 1431 & 1350 & 1305 & 1195\\ \end{tabular} \end{center} \end{table*} Table \ref{tab:GFDspeed} shows the number of time steps required to decode a set of polar codes with different lengths and rates, using different decoding algorithms. In particular, four code lengths are considered, namely $N=\{128,\,256,\,512,\,1024\}$, and five code rates $R=\{1/2,\,2/3,\,5/6,\,1/4,\,1/8\}$. All combinations of $N$ and $R$ have been constructed targeting the AWGN channel and a noise standard deviation $\sigma=0.5$. Two baseline decoding algorithms are considered, i.e. Fast-SSC \cite{sarkis} and SSCL-SPC \cite{hashemi_SSCL_TCASI}; to each of these algorithms, the proposed generalized fast decoding is progressively applied, evaluating the impact on decoding speed of G-Rep nodes first, then of G-Rep and G-PC nodes combined, and finally of all three proposed node types, with an increasing number of AF bits for RG-PC nodes. The cost of decoding operations for the considered SC-based algorithms is computed as follows: both $f_t$ and $g_t$ operations have a cost of $1$ time step, regardless of the stage $t$. Rate-0 and Rate-1 nodes cost $1$ time step each, while Rep nodes and SPC nodes cost $2$ and $3$ time steps, respectively.
G-Rep nodes have a cost of $1$ time step plus the cost of their Rate-C node, while both G-PC and RG-PC nodes require $3$ time steps to be completed. These cost figures do not assume any kind of resource limitation. For SCL-based decoding, we instead assume a structure common to SCL decoders, in which the bit-estimate memory implements a hardwired XOR tree so that the partial sums relative to all SC tree stages are updated as soon as a bit is estimated \cite{balatsoukas_SCL_HW,hashemi_SSCL_TCASI}. This implies that information bits need to be estimated one at a time. Moreover, every bit estimation is coupled with the path metric calculation and sorting, and with the selection of the surviving paths. Thus, given that the size of a node at stage $t$ is $2^t$, the cost of Rate-1 nodes rises to $2\times 2^t$, that of Rep nodes to $1+2^t$, and that of SPC nodes to $2\times 2^t-1$. In the same way, G-PC and RG-PC nodes require $1+2\times(\frac{2^t-1}{N_p})$ time steps. The cost of Rate-0 and G-Rep nodes is unchanged. G-Rep nodes are mostly found close to the imaginary border between the majority of unreliable bit-channels and the majority of reliable ones. Regardless of the code rate, their number is small, and the gain in terms of time steps is limited. The size of G-Rep nodes tends to increase as the code length increases. As pointed out in Section \ref{subsec:GPC}, G-PC nodes revert to SPC nodes when $N_p=1$. Additional G-PC nodes with $N_p>1$ are not always found: this can be seen from the fact that the number of time steps in the ``+ G-PC'' columns in Table \ref{tab:GFDspeed} is sometimes unchanged from the ``+ G-Rep'' columns. Nevertheless, the speedup brought by the fast decoding of G-PC nodes can be significant: for SC decoding, G-PC nodes can save up to $21.8\%$ of the time steps with $N=128$, $R=5/6$, for a combined gain of $23.6\%$ with G-Rep nodes. With SCL decoding, the gain brought by G-PC nodes is larger, since $N_p$ SPC nodes of shorter length can be decoded in parallel: G-PC nodes save up to $28.3\%$ of the time steps, and up to $29.2\%$ when combined with G-Rep nodes. Similar behavior can be observed for RG-PC nodes. With a higher number of AF bits, the number of RG-PC nodes found in a code increases, and so does their time step saving. For SC decoding, RG-PC nodes with a single AF bit can save up to $14.5\%$ of the time steps with $N=128$, $R=5/6$, for a combined contribution with G-Rep and G-PC nodes of $38.2\%$. The combined gain can reach $54.5\%$ if the AF bits increase to $2$, and $63.6\%$ with $3$ AF bits. The gain brought by RG-PC nodes in SCL decoding is larger in absolute value, but smaller on average in percentage. With one AF bit, the time step gain can reach $18.4\%$, with $N=128$, $R=1/4$, for a combined contribution with G-Rep and G-PC nodes of $20.4\%$. The combined time step gain can instead reach $47.8\%$ and $49.8\%$ with 2 and 3 AF bits, respectively. As shown in the next section, a higher number of AF bits comes with a more significant error-correction performance loss. The decoding speed can be further increased with respect to the results detailed in Table \ref{tab:GFDspeed} by ad-hoc code construction that maximizes the occurrence of the identified special nodes, as shown in \cite{GiardLowRate}. \subsection{Error-Correction Performance} The proposed G-Rep and G-PC nodes do not impact the error-correction performance of the considered Fast-SSC and SSCL-SPC decoding algorithms. However, the approximation introduced by the RG-PC nodes can cause a performance loss. As an example, Fig.
\ref{fig:ECP-1K} and Fig. \ref{fig:ECP-256} report the block error rate (BLER) curves for $N=1024$, $R=1/2$ and $N=256$, $R=1/8$, respectively. The SSCL-SPC curves have been obtained with a list size $L=4$, and a CRC length of 16 for Fig. \ref{fig:ECP-1K} and of 8 for Fig. \ref{fig:ECP-256}. It can be seen that as the number of AF bits increases, a larger error-correction performance degradation is observed. The extent of this degradation depends on the number of RG-PC nodes encountered, the length and rate of the code, and the effectiveness of the decoding algorithm. List-based decoding shows a higher degree of resilience to the RG-PC degradation in both cases, while the weaker code used in Fig. \ref{fig:ECP-256} suffers larger losses than its stronger counterpart in Fig. \ref{fig:ECP-1K}. \begin{figure} \centering \scalebox{1}{\input{figures/ECP1.tikz}} \ref{ECP-1K} \\ \vspace{2pt} \caption{BLER curves for $N=1024$, $R=1/2$. For SSCL-SPC, $L=4$, CRC length 16.} \label{fig:ECP-1K} \end{figure} \section{Conclusion and Future Work} \label{sec:conc} In this work, we introduced a generalized approach to fast decoding of polar codes. It identifies three multi-node subcode patterns that, along with including most existing subcodes, allow fast decoding of a wide variety of frozen and information bit patterns. Decoding rules are provided for any SC-based decoding algorithm, while fast path metric calculation for SCL is derived as well. The proposed decoding approach is evaluated in terms of speedup and error-correction performance against baseline fast decoding algorithms, over a wide set of code lengths and rates. Without any error-correction performance degradation, our technique shows up to $23.6\%$ and $29.2\%$ decoding latency gain with respect to fast SC and SCL decoding algorithms, respectively. These figures can rise up to $63.6\%$ and $49.8\%$ if a performance loss is accepted: the extent of the degradation depends on the combination of code and decoding algorithm parameters, and on the desired speedup. The framework described in this work is not limited to polar codes; since the three identified subcodes are multi-node patterns, they are valid for multi-kernel codes as well \cite{Gabry_MK}, which are constructed with combinations of kernels of different sizes. Future work foresees the evaluation of the effectiveness of the proposed generalized fast decoding on practical multi-kernel codes. \bibliographystyle{IEEEtran} \begin{figure} \centering \scalebox{1}{\input{figures/ECP2.tikz}} \ref{ECP-256} \\ \vspace{2pt} \caption{BLER curves for $N=256$, $R=1/8$. For SSCL-SPC, $L=4$, CRC length 8.} \label{fig:ECP-256} \end{figure}
\section{Introduction}\label{s:intro} The revolutionary detection of gravitational waves from the coalescence of two black holes showed the formation of a rapidly rotating black hole boosted with a linear velocity~\cite{LIGO16b,LIGO16c,LIGO16d}. The possible observation of the electromagnetic counterpart of a black hole merger could provide more information about the angular and linear momentum of the black hole in such systems~\cite{Morozova14,Lyutikov11}. This fact indicates the importance of including the boost parameter in Kerr spacetimes in order to study the effects of the boost velocity on the geometry (gravitational field) around a black hole. The solution of Einstein's vacuum field equations describing a boosted Kerr black hole relative to an asymptotic Lorentz frame at future null infinity was obtained in~\cite{Soares17}. The electromagnetic structure around a boosted black hole has been studied in~\cite{Morozova14}. The author of Ref.~\cite{Lyutikov11} has considered the solution of Maxwell equations in the background geometry of a boosted black hole. In the present paper, we study weak gravitational lensing around a boosted black hole described by the solution in~\cite{Soares17}. The gravitational lensing effect is a good tool to test Einstein's theory of general relativity. For a review on light propagation in curved spacetime and geometrical optics in general relativity, see e.g.~\cite{Synge60,Schneider92,Perlick00,Perlick04}. The photon motion is also affected by the presence of a plasma, and the effect of plasma around compact objects on lensing has been studied in~\cite{Rogers15,Rogers17,Er18,Rogers17a,Broderick03,Bicak75,Kichenassamy85,Perlick17,Perlick15,Abdujabbarov17,Eiroa12,Kogan10,Tsupko10,Tsupko12,Morozova13,Tsupko14,Kogan17,Hakimov16,Turimov18,Benavides16,Kraniotis14}. In the literature, one can find a large body of work devoted to another optical property of black holes, the so-called black hole shadow~\cite{Vries00,Abdujabbarov17c,Abdujabbarov16a,Abdujabbarov16b,Abdujabbarov15,Abdujabbarov17b,Atamurotov15a,Amarilla12,Bambi09,Bambi12,Bambi13c,Ghasemi-Nodehi15,Cunha15,Wei13,Takahashi05,Ohgami15,Nedkova13,Falcke00,Grenzebach15,Takahashi04,Li14a,Wei15b,Papnoi14,Bambi10,Atamurotov13b,Atamurotov13,Tsukamoto14,Grenzebach2014,Abdujabbarov13c,Amarilla10,Amarilla13,Hioki09,Mizuno18}. Starting from~\cite{Paczynski86a,Alcock93,Aubourg93,Udalski93,Paczynski96}, there is a rich literature on weak gravitational lensing. Strong gravitational lensing around spherically symmetric compact objects is described in~\cite{Tsupko09,Bozza06}. In the present paper, we study weak lensing around a boosted black hole in the presence of plasma. The paper is organized as follows. In Section~\ref{sec:Optics}, we briefly review the optics in curved spacetime and describe the procedure to obtain the deflection angle in the weak field approximation following~\cite{Kogan10,Morozova13}. In Section~\ref{sec:Kerr}, we present the boosted Kerr metric in both diagonal and non-diagonal cases (non-rotating and slowly rotating cases, respectively). In Subsections~\ref{sec:nonrotating_case_uniformplasma} and \ref{sec:Deflection_angle_for_the_slowly_rotating_case}, we find the expression for the deflection angle. Then, in Section~\ref{sec:Models}, we study the deflection angle in the presence of plasma, both for uniform and non-uniform distributions.
For the inhomogeneous case, we consider three distribution models: the singular isothermal sphere (\textbf{SIS}), the non-singular isothermal sphere (\textbf{NSIS}), and the case of a plasma in a galaxy cluster (\textbf{PGC}). Finally, as an application, we devote Section~\ref{sec:magnification} to studying the magnification for the uniform and \textbf{SIS} plasma distributions.\\ Throughout the paper we use the convention in which Greek indices run from 0 to 3, while Latin indices run from 1 to 3. Moreover, with the exception of Section~\ref{sec:Optics}, we use geometrized units, where $c=G=1$. \section{\label{sec:Optics} Optics in a curved space-time} In this section, we review the optics in a curved space-time developed by Synge in 1960~\cite{Synge60}. Let us consider a static spacetime metric describing a weak gravitational field in an asymptotically flat spacetime. The metric coefficients can be written as~\cite{Kogan10,Morozova13,Landau-Lifshitz2} \begin{eqnarray} \label{asymptotically_flat} g_{\alpha\beta}&=&\eta_{\alpha\beta}+h_{\alpha\beta}\ ,\\ g^{\alpha\beta}&=&\eta^{\alpha\beta}-h^{\alpha\beta}\ , \end{eqnarray} where $\eta_{\alpha\beta}$ is the metric of the Minkowski spacetime, $h_{\alpha\beta}\ll 1$, $h_{\alpha\beta}\rightarrow 0$ for $x^\alpha \rightarrow \infty$, and $h^{\alpha\beta}=h_{\alpha\beta}$.\\ Using this approach for the static case, the phase velocity\footnote{The phase velocity is defined as the minimum value of~\cite{Synge60} \begin{equation} u'^2=1+\frac{dx_\alpha dx^\alpha}{(V_\beta dx^\beta)^2}, \end{equation} where $u'$ is the velocity of a fictitious particle riding on the wave front relative to a time-like world-line $C$ (intersecting the wave) of an observer with 4-velocity $V^\mu$ (see \cite{Synge60} for details).} $u$ and the $4-$vector of the photon momentum $p^\alpha$ are related by the following equation~\cite{Synge60} \begin{equation} \label{refraction_index_of_the_medium} \frac{c^2}{u^2}=n^2=1+\frac{p_\alpha p^\alpha}{(p^0\sqrt{-g_{00}})^2}. \end{equation} \\ In order to obtain the photon trajectories in the presence of a gravitational field, one can modify Fermat's least action principle for light propagation by considering a dispersive medium~\cite{Synge60}. Then, using the Hamiltonian formalism, it is easy to show that the variational principle \begin{equation} \label{variational_principle} \delta\left(\int p_\alpha dx^\alpha \right)=0\ , \end{equation} with the condition \begin{eqnarray} \label{W} W(x^\alpha,p_\alpha)=\frac{1}{2}\left[g^{\alpha\beta}p_\alpha p_\beta-(n^2-1)\left(p_0\sqrt{-g^{00}}\right)^2\right]=0,\nonumber \end{eqnarray} leads to the following system of differential equations that describes the trajectories of photons \begin{eqnarray} \label{differential_equations} \frac{dx^\alpha}{d\lambda}&=&\frac{\partial W}{\partial p_\alpha}\ ,\nonumber\\ \frac{dp_\alpha}{d\lambda}&=&-\frac{\partial W}{\partial x^\alpha}\ , \end{eqnarray} with the affine parameter $\lambda$ changing along the light trajectory.
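For reference, the system~(\ref{differential_equations}) can be integrated numerically for any given scalar function $W(x^\alpha,p_\alpha)$. Below is a minimal Python sketch (illustrative only; the partial derivatives of $W$ are approximated by central finite differences, and the function name and tolerances are our own choices):
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_ivp

# Sketch: integrate dx/dlambda = dW/dp, dp/dlambda = -dW/dx
# for a user-supplied W(x, p), with x and p four-component arrays.
def trace_ray(W, x0, p0, lam_span, eps=1e-6):
    def rhs(lam, y):
        x, p = y[:4], y[4:]
        dWdx, dWdp = np.empty(4), np.empty(4)
        for mu in range(4):
            e = np.zeros(4)
            e[mu] = eps
            dWdx[mu] = (W(x + e, p) - W(x - e, p)) / (2 * eps)
            dWdp[mu] = (W(x, p + e) - W(x, p - e)) / (2 * eps)
        return np.concatenate([dWdp, -dWdx])
    return solve_ivp(rhs, lam_span, np.concatenate([x0, p0]),
                     rtol=1e-10, atol=1e-12, dense_output=True)
\end{verbatim}
The condition $W=0$ can be monitored along the integration as a consistency check on the numerical accuracy.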
Note that the scalar function $W(x^\alpha,p_\alpha)$ has been defined by means of Eq.~(\ref{refraction_index_of_the_medium}).\\ In Refs.~\cite{Kogan10,Morozova13}, a static inhomogeneous plasma has been considered, with a refraction index $n$ which depends on the space location $x^i$ \begin{eqnarray} \label{refraction_index_inhomogeneous_plasma} n^2&=&1-\frac{\omega_e^2}{[\omega(x^i)]^2}\ ,\\ \omega^2_e&=&\frac{4\pi e^2 N(x^i)}{m}=K_eN(x^i)\ , \end{eqnarray} where $\omega(x^i)$ is the frequency of the photon which, due to gravitational redshift, depends on the space coordinates $x^1$, $x^2$, $x^3$, $e$ is the electron charge, $m$ is the electron mass, $\omega_e$ is the electron plasma frequency, and $N(x^i)$ is the electron concentration in the inhomogeneous plasma~\cite{Kogan10}.\\ According to Synge~\cite{Synge60}, for the case of a static medium in a static gravitational field, one can express the photon energy as \begin{equation} p_0\sqrt{-g^{00}}=-\frac{1}{c}\hbar\omega(x^i). \end{equation} Using Eq.~(\ref{refraction_index_inhomogeneous_plasma}), one can express the scalar function $W(x^\alpha,p_\alpha)$ in the following form \begin{eqnarray} \label{function_W} W(x^\alpha,p_\alpha)=\frac{1}{2}\left[g^{\alpha\beta}p_{\alpha}p_{\beta}+\frac{\omega^2_e\hbar^2}{c^2}\right], \end{eqnarray} where $\hbar$ is the Planck constant. The scalar function expressed in Eq.~(\ref{function_W}) has been used in Refs.~\cite{Kogan10,Morozova13} to find the equations of light propagation for diagonal and non-diagonal spacetimes.\\ In contrast with the case of a flat spacetime in vacuum, where the photon trajectory is a straight line, the presence of an arbitrary medium in a curved spacetime makes photons move along bent trajectories. However, taking into account only small deviations, it is possible to use the components of the 4-momentum of a photon moving in a straight line along the $z-$axis as an approximation. These components are given by (see, e.g.~\cite{Kogan10,Morozova13}) \begin{eqnarray} \label{null_approximation} p^\alpha&=&\left(\frac{\hbar\omega}{c},0,0,\frac{n\hbar\omega}{c}\right)\ ,\\p_\alpha&=&\left(-\frac{\hbar\omega}{c},0,0,\frac{n\hbar\omega}{c}\right).\label{null_approximation2} \end{eqnarray} Eqs.~(\ref{null_approximation}) and (\ref{null_approximation2}) are known as the null approximation. It is important to point out that both $\omega$ and $n$ are evaluated at $\infty$. In this sense, we have introduced the notation in which \begin{equation} \begin{aligned} \omega&=\omega(\infty)\\ n&=n(\infty).\\ \end{aligned} \end{equation} This notation has also been used in \cite{Kogan10,Morozova13}, and will be used throughout the manuscript. \subsection{\label{sec:level3}Equations of light propagation in a diagonal spacetime} First, we consider a spacetime with a diagonal metric tensor. In this spacetime, the components of the metric tensor $g_{\alpha\beta}$ vanish for $\alpha\ne \beta$. Hence, after using Eq.~(\ref{function_W}), the system in~(\ref{differential_equations}) can be expressed as \cite{Kogan10} \begin{equation} \label{systme_diagonal} \begin{aligned} \frac{dx^i}{d\lambda}&=g^{ij}p_j,\\ \frac{dp_i}{d\lambda}&=-\frac{1}{2}g^{lm}_{\;\;\;\;,i}p_l p_m-\frac{1}{2}g^{00}_{\;\;\;\;,i}p^2_0-\frac{1}{2}\frac{\hbar^2}{c^2}K_eN_{,i}. \end{aligned} \end{equation} Then, with the aid of the null approximation, the first equation in~(\ref{systme_diagonal}) reduces to \begin{equation} \label{relation_differential} \frac{dz}{d\lambda}=\frac{n\hbar\omega}{c}\ .
\end{equation} In the null approximation, the $3-$vector in the direction of the photon momentum is written as $e^i =e_i=(0,0,1)$. Therefore, $p_i$ can be expressed as \begin{equation} p_i=\frac{n\hbar\omega}{c}(0,0,1)=\frac{n\hbar\omega}{c}e_i. \end{equation} Hence, the second equation in (\ref{systme_diagonal}) can be expressed as \begin{eqnarray} \frac{d}{d\lambda}\left(\frac{n\hbar\omega}{c}e_i\right)&=&-\frac{1}{2}g^{lm}_{\;\;\;\;,i}p_l p_m\nonumber\\&&-\frac{1}{2}g^{00}_{\;\;\;\;,i}p^2_0-\frac{1}{2}\frac{\hbar^2}{c^2}K_eN_{,i}. \end{eqnarray} Then, after using Eq.~(\ref{relation_differential}) and differentiating, the last expression takes the form \begin{eqnarray} \label{second_equation_system} \frac{de_i}{dz}&=&-\frac{1}{2}\frac{c^2}{n\hbar^2\omega^2}\left(g^{00}_{\;\;\;\;,i}(p_0)^2+g^{lm}_{\;\;\;\;,i}p_l p_m+\frac{\hbar^2}{c^2}K_eN_{,i}\right)\nonumber\\ &&-e_i\frac{dn}{dz}\ . \end{eqnarray} In Ref.~\cite{Kogan10}, only those components of the $3-$vector that are perpendicular to the initial direction of propagation were taken into account. In this sense, the contribution to the deflection of photons is due only to the change in $e_1$ and $e_2$. Hence, after using the null approximation ($e_i=0$ for $i=1,2$) along with the assumption of a weak gravitational field, Eq.~(\ref{second_equation_system}) reduces to \begin{equation} \label{derivative_e} \frac{de_i}{dz}=\frac{1}{2}\left(h_{33,i}+\frac{1}{n^2}h_{00,i}-\frac{1}{n^2\omega^2}K_eN_{,i}\right)\ , \end{equation} for $i=1,2$.\\ The deflection angle is determined by the change of the $3-$vector $e_i$. This means that \begin{equation} \vec{\hat{\alpha}}=\mathbf{e}(+\infty)-\mathbf{e}(-\infty). \end{equation} Then, using Eq.~(\ref{derivative_e}), the deflection angle becomes \begin{eqnarray} \hat{\alpha}_i=\frac{1}{2}\int^{\infty}_{-\infty}\left(h_{33,i}+\frac{\omega^2}{\omega^2-\omega^2_e}h_{00,i}-\frac{K_e}{\omega^2-\omega^2_e}N_{,i}\right)dz,\nonumber\\\label{deflection_angle_diagonal_weak_field} \end{eqnarray} for $i=1,2$. In the last expression, $\omega_e$ and $n$ are evaluated at infinity, and $\omega(\infty)=\omega$ \cite{Kogan10}. In terms of the impact parameter $b$, Eq.~(\ref{deflection_angle_diagonal_weak_field}) takes the form \cite{Kogan10} \begin{eqnarray} \label{deflection_angle_diagonal_weak_field_b} \hat{\alpha}_b&=&\frac{1}{2}\int^\infty_{-\infty}\frac{b}{r}\nonumber \\ &&\left(\frac{dh_{33}}{dr}+\frac{1}{1-\omega^2_e/\omega^2}\frac{dh_{00}}{dr}-\frac{K_e}{\omega^2-\omega^2_e}\frac{dN}{dr}\right)\ , \end{eqnarray} where $r=\sqrt{b^2+z^2}$. \subsection{Equations of light propagation in a non-diagonal spacetime} Now we consider a spacetime with a non-diagonal metric tensor; that is, the components of the metric tensor $g_{\alpha\beta}$ do not vanish for $\alpha\neq\beta$. Therefore, the scalar function $W(x^\alpha,p_\alpha)$ in Eq.~(\ref{function_W}) can be expressed as \cite{Morozova13} \begin{eqnarray} \label{function_W_for_non_diagonal_space_time} W&(&x^\alpha,p_\alpha)=\nonumber\\ &&\frac{1}{2}\left[g^{00}p^2_0+2g^{0l}p_{0}p_{l}+g^{lm}p_{l}p_{m}+\frac{\omega^2_e\hbar^2}{c^2}\right]. \end{eqnarray} Hence, the system of differential equations in (\ref{differential_equations}) takes the form \begin{eqnarray} \frac{dx^i}{d\lambda}&=&g^{ij}p_j\\ \frac{dp_i}{d\lambda}&=&-\frac{1}{2}g^{lm}_{\;\;\;\;,i}p_l p_m-\frac{1}{2}g^{00}_{\;\;\;\;,i}p^2_0-g^{0l}_{\;\;\;\;,i}p_0p_l\nonumber\\ &&-\frac{1}{2}\frac{\hbar^2}{c^2}K_eN_{,i}.
\end{eqnarray} Then, using Eq.~(\ref{relation_differential}) and assuming that the gravitational field is weak, we obtain \begin{eqnarray} \frac{dp_i}{dz}&=&\frac{1}{2}\frac{n\hbar\omega}{c}\nonumber \\ &&\times\left(h_{33,i}+\frac{1}{n^2}h_{00,i}+\frac{1}{n}h_{03,i}-\frac{K_eN_{,i}}{n^2\omega^2}\right). \end{eqnarray} Therefore, following the procedure of Subsection~\ref{sec:level3}, the deflection angle for a non-diagonal spacetime in the weak-field limit has the form \begin{eqnarray} \label{deflection_angle_non_diagonal} \hat{\alpha}_i&=&\frac{1}{2}\int^{\infty}_{-\infty}\bigg(h_{33,i}+\frac{\omega^2}{\omega^2-\omega^2_e}h_{00,i}+\frac{1}{n}h_{03,i}\nonumber\\ &&-\frac{K_eN_{,i}}{\omega^2-\omega^2_e}\bigg)dz\ . \end{eqnarray} \section{\label{sec:Kerr} Boosted Kerr metric} The boosted Kerr metric, which describes a black hole boosted relative to an asymptotic Lorentz frame, is a solution of Einstein's vacuum field equations obtained in~\cite{Soares17}. This solution has three parameters: mass, rotation, and boost. In Kerr-Schild coordinates, the line element reads \begin{eqnarray} \label{boosted_kerr_metric_in_kerrSchild_coordinates} ds^2&=&-\left(1-\frac{2Mr}{\Sigma}\right)dt'^2+\left(1+\frac{2Mr}{\Sigma}\right)dr^2\nonumber+\frac{\Sigma}{\Lambda}d\theta^2\\ &&+\frac{A\sin^2(\theta)}{\Lambda^2\Sigma}d\phi^2-\frac{4Mra\sin^2\theta}{\Lambda\Sigma}dt'd\phi\nonumber\\ &&-\frac{4Mr}{\Sigma}dt'dr-\frac{2a\sin^2\theta}{\Lambda}\left(1-\frac{2Mr}{\Sigma}\right)drd\phi\ , \end{eqnarray} with \begin{eqnarray} \label{definitions} \Sigma&=&r^2+a^2\left(\frac{\beta+\alpha\cos\theta}{\alpha+\beta\cos\theta}\right)^2\ ,\\ \Lambda&=&(\alpha+\beta\cos\theta)^2\ ,\\ A&=&\Sigma^2+a^2\left(\Sigma+2Mr\right)\sin^2\theta\ , \end{eqnarray} where $a={J}/{M}$ is the specific angular momentum of the compact object with total mass $M$, $\alpha=\cosh\gamma$, $\beta=\sinh\gamma$, and $\gamma$ is the boost parameter (rapidity), which defines the boost velocity $v$ through $v=\tanh\gamma={\beta}/{\alpha}$. Note that the metric in~(\ref{boosted_kerr_metric_in_kerrSchild_coordinates}) exactly reduces to the Kerr metric when $\Lambda=1$ ($v=0$). It is also important to point out that the direction of the boost for the Kerr black hole is along the axis of rotation, while for the Schwarzschild case it is along the $z$-axis.\\ To study the deflection angle for the boosted Kerr metric in the presence of a medium, we consider both the non-rotating and the slowly rotating cases. In this sense, following the ideas in~\cite{Kogan10} and \cite{Morozova13}, we devote this section to finding the form of the line element (\ref{boosted_kerr_metric_in_kerrSchild_coordinates}) in each case. \subsection{\label{sec:kerr_nonrotating}Boosted Kerr metric: non-rotating case} The non-rotating case is obtained by setting $a=0$. Hence, the metric (\ref{boosted_kerr_metric_in_kerrSchild_coordinates}) reduces to \begin{eqnarray} \label{boosted_kerr_metric_in_kerrSchild_coordinates_a0} ds^2&=&-\left(1-\frac{2M}{r}\right)dt'^2+\left(1+\frac{2M}{r}\right)dr^2\nonumber\\ &&+\frac{r^2}{\Lambda}d\theta^2+\frac{\sin^2\theta}{\Lambda^2}r^2d\phi^2-\frac{4M}{r}dt'dr\ . \end{eqnarray} In Ref.~\cite{Kogan10}, Cartesian coordinates were used to find the terms $h_{ik}$. Nevertheless, before changing coordinates, we want to write the form of the metric in Eq.~(\ref{boosted_kerr_metric_in_kerrSchild_coordinates_a0}) for small values of the velocity ($v\ll1$).
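Explicitly, since $\alpha=\cosh\gamma$, $\beta=\sinh\gamma$, and $v=\tanh\gamma$, one has $1/\Lambda=(1-v^2)/(1+v\cos\theta)^2$, so that, to first order in $v$, \begin{equation} \frac{1}{\Lambda}\approx 1-2v\cos\theta\ ,\qquad \frac{1}{\Lambda^2}\approx 1-4v\cos\theta\ . \end{equation}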
Substituting these expansions into Eq.~(\ref{boosted_kerr_metric_in_kerrSchild_coordinates_a0}), the metric takes the form \begin{eqnarray} \label{final_line_element} ds^2&=&-\left(1-\frac{2M}{r}\right)dt'^2+\left(1+\frac{2M}{r}\right)dr^2\nonumber\\ &&+r^2(1-2v\cos\theta)d\theta^2+r^2\sin^2\theta d\phi^2\nonumber\\ &&-4vr^2\sin^2\theta\cos\theta d\phi^2-\frac{4M}{r}dt'dr. \end{eqnarray} Now, to transform the line element (\ref{final_line_element}) into Boyer-Lindquist coordinates, we use the relation (see \cite{Visser07}, page 15) \begin{equation} \label{Boyer_Lindquist} t'=t-2M\ln\left(\frac{r}{2M}-1\right), \end{equation} from which one easily obtains \begin{eqnarray} \label{line_element_in_Boyer_lindquist_coordinate} ds^2&=&-\left(1-\frac{2M}{r}\right)dt^2+\left(1-\frac{2M}{r}\right)^{-1}dr^2\nonumber\\ &&+r^2\left[(1-2v\cos\theta)d\theta^2+(1-4v\cos\theta)\sin^2\theta d\phi^2\right].\nonumber\\ \end{eqnarray} In the weak-field limit, the approximation is done by considering ${2M}/{r}\ll 1$. In this sense, according to \cite{Kogan10}, the main idea is to express the line element in Eq.~(\ref{line_element_in_Boyer_lindquist_coordinate}) as \begin{equation} \label{weak_field_limit} ds^2=ds^2_0+ds'^2, \end{equation} where \begin{equation} ds^2_0=-dt^2+dr^2+r^2(d\theta^2+\sin^2\theta d\phi^2) \end{equation} is the flat spacetime, and $ds'^2$ is the part of the metric containing the perturbation terms $h_{ik}$. Therefore, after considering the weak-field approximation, the line element~(\ref{line_element_in_Boyer_lindquist_coordinate}) takes the form \begin{eqnarray} \label{line_element_in_weak_limit_in_Boyer_lindquist_coordinates} ds^2&=&ds^2_0+\frac{2M}{r}dt^2+\frac{2M}{r}dr^2 \nonumber\\ &&-2vr^2\cos\theta d\theta^2-4vr^2\cos\theta\sin^2\theta d\phi^2. \end{eqnarray} Eq.~(\ref{line_element_in_weak_limit_in_Boyer_lindquist_coordinates}) is the non-rotating boosted Kerr metric in the weak-field approximation expressed in Boyer-Lindquist coordinates. In order to identify the components $h_{ik}$, we need to express the line element in Eq.~(\ref{line_element_in_weak_limit_in_Boyer_lindquist_coordinates}) in Cartesian coordinates. After following the procedure described in Appendix~I, we find that $h_{00}$ and $h_{33}$ are \begin{eqnarray} \label{perturbation_h00_h33} h_{00}&=&\frac{2M}{r}\ ,\\ \label{perturbation_h33} h_{33}&=&\frac{2M}{r}\cos^2\theta-2v\cos\theta\sin^2\theta\ . \end{eqnarray} \subsection{\label{sec:kerr_rotating} Boosted Kerr metric: rotating case} The spacetime describing a slowly rotating massive object was obtained in~\cite{Hartle68}. However, in this work, we use the form of the metric reported in~\cite{Morozova13}. In geometrized units, this metric takes the form \begin{eqnarray} ds^2&=&-\left(1-\frac{2M}{r}\right)dt^2+\left(1-\frac{2M}{r}\right)^{-1}dr^2\nonumber \\ &&+r^2(d\theta^2+\sin^2\theta d\phi^2)-2\omega_{LT}r^2\sin^2\theta dtd\phi\ , \end{eqnarray} where $\omega_{LT}={2Ma}/{r^3}={2J}/{r^3}$ is the Lense-Thirring angular velocity of the dragging of inertial frames. \\ For the case of the boosted Kerr metric, the line element has the same form. Introducing the notation $\overline{\omega}_{LT}={2\overline{J}}/{r^3}$, where $\overline{J}={J}/{\Lambda}$, one may obtain the ``modified'' metric of a slowly rotating boosted source.
Finally, the spacetime around a boosted, slowly rotating object can be expressed by the following metric \begin{eqnarray} \label{boosted_rotating_case} ds^2&=&-\left(1-\frac{2M}{r}\right)dt^2+\left(1-\frac{2M}{r}\right)^{-1}dr^2\nonumber\\ &&+r^2(d\theta^2+\sin^2\theta d\phi^2)-2\overline{\omega}_{LT}r^2\sin^2\theta dtd\phi. \end{eqnarray} When $v=0$, the expression in~(\ref{boosted_rotating_case}) reduces to that in~\cite{Morozova13}. \section{Deflection angle in uniform plasma} \subsection{\label{sec:nonrotating_case_uniformplasma}Deflection of light for the non-rotating case} In Subsection~\ref{sec:level3}, we discussed the procedure of~\cite{Kogan10} to obtain Eq.~(\ref{deflection_angle_diagonal_weak_field_b}). Now, we apply this result to find the deflection angle for the boosted Kerr metric in the presence of a uniform plasma. We first consider the non-rotating case. From Eqs.~(\ref{perturbation_h00_h33}) and (\ref{perturbation_h33}) we have that \begin{eqnarray} \label{h33_derivative_with_respect_to_r} \frac{b}{r}\frac{dh_{00}}{dr}&=&-\frac{2Mb}{r^3}=-\frac{2Mb}{(b^2+z^2)^\frac{3}{2}}\ ,\\ \frac{b}{r}\frac{dh_{33}}{dr}&=&-\frac{6Mb}{r^5}z^2+\frac{2bv}{r^3}z-\frac{6vb}{r^5}z^3\nonumber \\ &=&-\frac{6Mb}{(b^2+z^2)^\frac{5}{2}}z^2+\frac{2bv}{(b^2+z^2)^\frac{3}{2}}z\nonumber\\ &&-\frac{6vb}{(b^2+z^2)^\frac{5}{2}}z^3\ . \end{eqnarray} Then, recalling that $\cos\theta={z}/{r}$ and $r=\sqrt{b^2+z^2}$, and using Eq.~(\ref{deflection_angle_diagonal_weak_field_b}), the deflection angle is \begin{eqnarray} \label{delflection_angle_with_plasma_boosted_kerr_metric} \hat{\alpha}_b&=&-3Mb\int^\infty_{-\infty}\frac{z^2}{(b^2+z^2)^\frac{5}{2}}dz\nonumber\\ &&+bv\int^\infty_{-\infty}\frac{z}{(b^2+z^2)^\frac{3}{2}}dz\nonumber\\ &&-3bv\int^\infty_{-\infty}\frac{z^3}{(b^2+z^2)^\frac{5}{2}}dz\nonumber\\ &&-Mb\int^\infty_{-\infty}\frac{\omega^2}{(\omega^2-\omega^2_e)(b^2+z^2)^\frac{3}{2}}dz\nonumber\\ &&-\frac{bK_e}{2}\int^\infty_{-\infty}\frac{1}{\omega^2-\omega^2_e}\frac{1}{r}\frac{dN}{dr}dz\ . \end{eqnarray} Thus, after integration, we obtain \begin{eqnarray} \label{deflection_angle_nonrotating_case} \hat{\alpha}_b&=&\frac{2M}{b}+\frac{2Mb}{1-\frac{\omega^2_e}{\omega^2}}\int^\infty_{0}\frac{dz}{(b^2+z^2)^\frac{3}{2}}\nonumber\\ &&+\frac{bK_e}{2}\int^\infty_{-\infty}\frac{1}{\omega^2-\omega^2_e}\frac{1}{r}\frac{dN}{dr}dz. \end{eqnarray} In the last expression, we took into account the symmetry of the limits of integration (see Appendix~II for details). We also considered the fact that the deflection angle is defined as the difference between the initial and the final ray directions, that is, $\mathbf{\hat{\alpha}}=\mathbf{e}_{in}-\mathbf{e}_{out}$; therefore, it has the opposite sign (see \cite{Schneider92}).\\ From Eq.~(\ref{deflection_angle_nonrotating_case}) we note that, at first order, $\hat{\alpha}_b$ does not depend on the velocity. This is due to the fact that the second and third integrals in Eq.~(\ref{delflection_angle_with_plasma_boosted_kerr_metric}), which contain the dependence on $v$, vanish by symmetry. If we consider a uniform plasma ($\omega_e$ constant, so that $dN/dr=0$ and the last integral vanishes), Eq.~(\ref{deflection_angle_nonrotating_case}) reduces to \cite{Kogan10}\\ \begin{eqnarray} \label{non_rotating_case_approximation} \hat{\alpha}_b=\frac{2M}{b}\left(1+\frac{1}{1-\frac{\omega^2_e}{\omega^2}}\right).
\end{eqnarray} \begin{figure*}[t] \includegraphics[scale=0.38]{alpha_b_uniform_plasma_vs_omega_nonrotating_case.png} \includegraphics[scale=0.38]{alpha_vs_omega_uniform_plasma_rotating_case_omega.png} \caption{\textbf{Left}: Plot of $\hat{\alpha}_b$ vs. $\omega^2_e/\omega^2$ for $b/2M=10$ (continuous line), $b/2M=50$ (dashed line), and $b/2M=100$ (dot-dashed line) for uniform plasma. \textbf{Right}: Plot of $\hat{\alpha}_b$ vs. $\omega^2_e/\omega^2$ for the rotating case with the same values of the impact parameter. We assumed $\Lambda=0.5$, $J_r/M^2=0.25$, and $\sin\chi=1$. Note that there is a small increment for $b/2M=10$ when compared with the Schwarzschild case (left panel). \label{fig2}} \end{figure*} In the left panel of Fig.~\ref{fig2} we plot $\hat{\alpha}_b$ as a function of $\omega^2_e/\omega^2$ for different values of $b/2M$. The plot shows that $\hat{\alpha}_b$ increases as the ratio $\omega^2_e/\omega^2$ increases. On the other hand, for small values of $b/2M$ the deflection angle is greater. For example, for $b/2M=10$ the figure shows that $\hat{\alpha}_b$ is greater than $0.2$, while for $b/2M=50,100$ the deflection angle is less than $0.1$. It is also possible to see from the figure that $\hat{\alpha}_b$ takes the value $4M/b$ in the absence of plasma ($\omega_e=0$). \subsection{\label{sec:Deflection_angle_for_the_slowly_rotating_case} Deflection angle for the slowly rotating case} Due to the presence of non-diagonal terms in the line element (\ref{boosted_rotating_case}), we use the form of the deflection angle in Eq.~(\ref{deflection_angle_non_diagonal}). According to \cite{Morozova13}, the effect of the dragging of the inertial frame contributes to $\hat{\alpha}$ only by means of the projection $\overline{J}_r$ of the angular momentum. Hence, after the introduction of polar coordinates $(b,\chi)$ at the intersection point between the light ray and the $xy$-plane, where $\chi$ is the angle between $\vec{J}_{r}$ and $\vec{b}$, we find that~\cite{Morozova13}~(see Fig.~\ref{esquema}) \begin{figure}[h!] \begin{centering} \includegraphics[scale=0.38]{schemetic_boosted.pdf} \caption{Schematic representation of the gravitational lensing system. Here, $\chi$ represents the inclination angle between the vectors $\mathbf{J}_r$ and $\mathbf{b}$. In the figure, $D_s$, $D_{l}$, and $D_{ls}$ are the distances from the source to the observer, from the lens to the observer, and from the source to the lens, respectively. \label{esquema}} \end{centering} \end{figure} \begin{equation} \label{h03_rotating_case} h_{03}=-2\frac{\overline{J}_rb\sin\chi}{(b^2+z^2)^{3/2}}. \end{equation} Since Eq.~(\ref{h03_rotating_case}) depends on $\chi$ and $b$, the deflection angle contains two contributions, given by the partial derivatives \begin{eqnarray} \label{derivative_of_h03_b_and_chi} \frac{\partial h_{03}}{\partial b}&=&-2\overline{J}_r\sin\chi\left(\frac{1}{(b^2+z^2)^{3/2}}-\frac{3b^2}{(b^2+z^2)^{5/2}}\right) ,\\ \frac{\partial h_{03}}{\partial\chi}&=&-2\frac{\overline{J}_rb\cos\chi}{(b^2+z^2)^{3/2}}\ .
\end{eqnarray} Thus, Eq.~(\ref{deflection_angle_non_diagonal}), for both contributions, takes the form \begin{eqnarray} \label{deflection_angle_rotating_case} \hat{\alpha}_b&=&\hat{\alpha}_{bS}-2\overline{J}_r\sin\chi\nonumber \\ &&\times\int^\infty_{0}\left(\frac{1}{n(b^2+z^2)^{3/2}}-\frac{3b^2}{n(b^2+z^2)^\frac{5}{2}}\right)dz\\ \hat{\alpha}_\chi&=&-2\overline{J}_r\cos\chi\int^\infty_{0}\frac{1}{n(b^2+z^2)^{3/2}}dz , \end{eqnarray} where $\hat{\alpha}_{bS}$ is the Schwarzschild deflection angle (see Eq.~(\ref{deflection_angle_nonrotating_case})). Therefore, considering a homogeneous plasma (constant value of $\omega_e$), these contributions reduce to \begin{eqnarray} \label{deflection_angle_rotating_case_homgeneous_plasma} \hat{\alpha}_b&=&\underbrace{\frac{2M}{b}\left(1+\frac{1}{1-\frac{\omega^2_e}{\omega^2}}\right)}_{\hat{\alpha}_{bS}}+\underbrace{\frac{1}{\sqrt{1-\frac{\omega^2_e}{\omega^2}}}\frac{2J_r\sin\chi}{b^2\Lambda}}_{\hat{\alpha}_{bD}}\\ \label{chi} \hat{\alpha}_\chi&=&-\frac{2J_r\cos\chi}{b^2\Lambda\sqrt{1-\frac{\omega^2_e}{\omega^2}}}, \end{eqnarray} where $n$ was replaced by $\sqrt{1-\frac{\omega^2_e}{\omega^2}}$. It is important to point out that Eq.~(\ref{deflection_angle_rotating_case_homgeneous_plasma}) is only valid for $\omega>\omega_e$, because waves with $\omega<\omega_e$ do not propagate in the plasma~\cite{Kogan10,Ginzburg70}. \\ \begin{figure}[ht] \includegraphics[scale=0.38]{alpha_vs_b2M_uniform_plasma_rotating_case.png} \caption{Plot of $\hat{\alpha}_b$ vs. $b/2M$ in the presence of uniform plasma for the slowly rotating case (dot-dashed line) and $\hat{\alpha}_{bS}$ (dashed line). The Schwarzschild case in vacuum is also plotted (continuous line). We used $\Lambda=0.5$, $J_r/M^2=0.25$, $\sin\chi=1$, and $\omega^2_e/\omega^2=0.5$. \label{fig3}} \end{figure} \begin{figure}[h!] \includegraphics[scale=0.35]{rotating_case_uniform_plasma_Lambda.png} \caption{Plot of $\hat{\alpha}_b$ vs. $\Lambda$ for $J_r/M^2=0.1$ (continuous line), $J_r/M^2=0.2$ (dot-dashed line), and $J_r/M^2=0.3$ (dashed line). We assumed $b/2M=10$, $\sin\chi=1$, and $\omega^2_e/\omega^2=0.5$. \label{fig5}} \end{figure} In Fig.~\ref{fig3}, we plot $\hat{\alpha}_{bS}$ and $\hat{\alpha}_b$ for the slowly rotating case as functions of the impact parameter $b/2M$. From this figure, we can see that there is a difference between both angles: $\hat{\alpha}_b$ for a boosted Kerr black hole is greater than $\hat{\alpha}_{bS}$. This difference, which is due to the rotation and the boost velocity $v$, is larger for small values of $b/2M$. On the other hand, for larger values of the impact parameter $b/2M$, the difference becomes very small and both angles behave in the same way, since ${2J_r\sin\chi}/(nb^2\Lambda)\rightarrow 0$ when $b/2M\rightarrow\infty$.\\ The right panel of Fig.~\ref{fig2} shows $\hat{\alpha}_b$ as a function of $\omega^2_e/\omega^2$. The behavior is very similar to that of Schwarzschild (left panel of Fig.~\ref{fig2}); however, note that there is a small increment for $b/2M=10$. On the other hand, we see that the deflection angle tends to $4M/b+{2J_r\sin\chi}/(b^2\Lambda)$ in the absence of plasma ($\omega^2_e=0$). In Fig.~\ref{fig5} we plot Eq.~(\ref{deflection_angle_rotating_case_homgeneous_plasma}) as a function of $\Lambda$ for different values of $J_r$, taking into account the constraint $0<\Lambda\leq 1$. A simple numerical evaluation of Eq.~(\ref{deflection_angle_rotating_case_homgeneous_plasma}) is sketched below.
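The following minimal Python sketch is ours, not part of the original analysis; it evaluates Eq.~(\ref{deflection_angle_rotating_case_homgeneous_plasma}) in geometrized units with $M=1$, using the parameter values quoted in the figure captions: \begin{verbatim}
import numpy as np

def alpha_b_uniform(b_bar, Lam, J_r=0.25, sin_chi=1.0, w_ratio=0.5):
    """Deflection angle for the slowly rotating case in uniform plasma.
    b_bar = b/(2M), J_r = J_r/M^2, w_ratio = omega_e^2/omega^2 (< 1)."""
    b = 2.0 * b_bar                       # impact parameter with M = 1
    n = np.sqrt(1.0 - w_ratio)            # refraction index of the uniform plasma
    alpha_S = (2.0 / b) * (1.0 + 1.0 / (1.0 - w_ratio))   # Schwarzschild part
    alpha_D = 2.0 * J_r * sin_chi / (b**2 * Lam * n)      # frame-dragging part
    return alpha_S + alpha_D

# the deflection angle grows as Lambda -> 0 at fixed b/2M
for Lam in (1.0, 0.5, 0.1):
    print(Lam, alpha_b_uniform(10.0, Lam))
\end{verbatim}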
In Fig.~\ref{fig5}, for different values of $\Lambda$, we see that $\hat{\alpha}_b$ increases as $\Lambda\rightarrow 0$. Moreover, for $\Lambda=1$, the deflection angle reduces to the value $\hat{\alpha}_{bS}+{2J_r\sin\chi}/{nb^2}$. \section{\label{sec:Models} Models for the boosted Kerr metric with non-uniform plasma distribution} The deflection angle for a boosted Kerr metric in a non-uniform plasma can be obtained from the results of Subsection~\ref{sec:Deflection_angle_for_the_slowly_rotating_case}. Hence, for ${\omega^2_e}/{\omega^2}\ll 1$, Eq.~(\ref{deflection_angle_rotating_case}) reduces to \begin{eqnarray} \label{non_uniform_plasma} \hat{\alpha}_b&=&\underbrace{\frac{4M}{b}}_{\hat{\alpha}_{S1}}+\underbrace{\frac{2Mb}{\omega^2}\int^\infty_0\frac{\omega^2_e}{r^3}dz}_{\hat{\alpha}_{S2}}\nonumber\\ &&+\underbrace{\frac{bK_e}{\omega^2}\int^\infty_0\frac{1}{r}\frac{dN}{dr}dz}_{\hat{\alpha}_{S3}}+\underbrace{\frac{bK_e}{\omega^4}\int^\infty_0\frac{\omega^2_e}{r}\frac{dN}{dr}dz}_{\hat{\alpha}_{S4}}\nonumber\\ &&+\underbrace{\frac{2J_r}{\Lambda b^2}\sin\chi}_{\hat{\alpha}_{B1}}-\underbrace{\frac{J_r}{\Lambda\omega^2}\sin\chi\int^\infty_0\frac{\omega_e^2}{r^3}dz}_{\hat{\alpha}_{B2}}\nonumber\\ &&+\underbrace{\frac{3b^2J_r}{\Lambda\omega^2}\sin\chi\int^\infty_0\frac{\omega_e^2}{r^5}dz}_{\hat{\alpha}_{B3}}, \end{eqnarray} where $r=\sqrt{b^2+z^2}$, and the subscripts $S$ and $B$ stand for Schwarzschild and boosted, respectively. Using Eq.~(\ref{non_uniform_plasma}), we calculate the deflection angle for different plasma distributions: a singular isothermal sphere (\textbf{SIS}), a non-singular isothermal gas sphere (\textbf{NSIS}), and a plasma in a galaxy cluster (\textbf{PGC}).\\ Eq.~(\ref{non_uniform_plasma}) is quite similar to that obtained in~\cite{Kogan10}. In this equation, we also find the vacuum gravitational deflection $\hat{\alpha}_{S1}$, the correction to the gravitational deflection due to the presence of the plasma $\hat{\alpha}_{S2}$, the refraction deflection due to the inhomogeneity of the plasma $\hat{\alpha}_{S3}$, and its small correction $\hat{\alpha}_{S4}$. Nevertheless, when the boosted Kerr metric is considered, three more terms appear: $\hat{\alpha}_{B1}$, $\hat{\alpha}_{B2}$, and $\hat{\alpha}_{B3}$. These are contributions due to the dragging of the inertial frame. The first is a constant that appears in all the models considered, while the other two depend on the plasma distribution.\\ From now on, let us suppose that the vectors $\vec{J}_r$ and $\vec{b}$ are perpendicular to each other, so that $\cos\chi=0$ and $\sin\chi=1$; therefore, the contribution $\hat{\alpha}_\chi$ vanishes (see Eq.~(\ref{chi})). Furthermore, since $\hat{\alpha}_{S4}$ is small, we neglect its contribution (see \cite{Kogan10}). \subsection{\label{sec:singular} Singular isothermal sphere} In this subsection, we consider the model of a singular isothermal sphere proposed in~\cite{Chandreaskahr39a,Binney87}. In this model, often used in lens modelling of galaxies and clusters, the density distribution has the form \begin{equation} \label{density_distribution} \rho(r)=\frac{\sigma^2_v}{2\pi r^2} , \end{equation} where $\sigma_v$ is the one-dimensional velocity dispersion. The concentration of the plasma has the form \begin{equation} \label{concentration_plasma} N(r)=\frac{\rho(r)}{\kappa m_p}, \end{equation} where $m_p$ is the proton mass and $\kappa$ is a non-dimensional coefficient related to the dark matter contribution~\cite{Kogan10}.
Using Eqs.~(\ref{refraction_index_inhomogeneous_plasma}) and (\ref{density_distribution}), the plasma frequency is \begin{equation} \label{plasma_frequency_SIS} \omega^2_e=K_eN(r)=\frac{K_e\sigma^2_v}{2\pi\kappa m_pr^2}. \end{equation} Then, from Eqs.~(\ref{non_uniform_plasma}) and (\ref{plasma_frequency_SIS}), and the well-known property of the $\Gamma$-function \cite{Gradshteyn07} (see Appendix~II), the contributions to the deflection angle can be found in the form \begin{equation} \label{contributions_SIS} \begin{array}{cc} \hat{\alpha}_{S2}=\frac{1}{12\pi}\frac{\omega^2_c}{\omega^2\overline{b}^3},& \hat{\alpha}_{S3}=-\frac{1}{16}\frac{\omega^2_c}{\omega^2\overline{b}^2}\\\\ \hat{\alpha}_{B2}=-\frac{1}{48\pi}\frac{\widetilde{J}_r\omega^2_c}{\Lambda \omega^2\overline{b}^4},&\hat{\alpha}_{B3}=\frac{1}{20\pi}\frac{\widetilde{J}_r\omega^2_c}{\Lambda\omega^2\overline{b}^4},\\ \end{array} \end{equation} where $\omega^2_c=\frac{K_e\sigma^2_v}{M^2\kappa m_p}$, $\widetilde{J}_r=J_r/M^2$, and $\overline{b}=b/2M$. Hence, the deflection angle takes the form \begin{eqnarray} \label{alpha_SIS} \hat{\alpha}_{SIS}&=&\frac{2}{\overline{b}}+\frac{1}{12\pi}\frac{\omega^2_c}{\omega^2\overline{b}^3}-\frac{1}{16}\frac{\omega^2_c}{\omega^2\overline{b}^2} +\frac{1}{2}\frac{\widetilde{J}_r}{\Lambda \overline{b}^2}\nonumber \\ &&-\frac{1}{48\pi}\frac{\widetilde{J}_r\omega^2_c}{\Lambda \omega^2\overline{b}^4}+\frac{1}{20\pi}\frac{\widetilde{J}_r\omega^2_c}{\Lambda\omega^2\overline{b}^4}\ . \end{eqnarray} \begin{figure}[h!] \includegraphics[scale=0.38]{alpha_vs_b2M_SIS_diff_Lambdas.png} \caption{Plot of $\hat{\alpha}_{SIS}$ vs. $b/2M$ for $\Lambda=1$ (continuous line), $\Lambda=0.2$ (dashed line), and $\Lambda=0.1$ (dot-dashed line). We used $J_r/M^2=0.25$, $\sin\chi=1$, and $\omega^2_c/\omega^2=0.5$. \label{fig6}} \end{figure} In Fig.~\ref{fig6}, we plot $\hat{\alpha}_{SIS}$ as a function of $\overline{b}$ for different values of $\Lambda$. The figure does not show any difference for values of $b/2M$ greater than about 10; however, for values of $b/2M$ close to 10, we see a small difference: $\hat{\alpha}_{SIS}$ is greater when $\Lambda$ is small. For $\Lambda=1$ ($v=0$), we recover the case of a slowly rotating massive object. Therefore, the parameter $\Lambda$ has a small effect on the deflection angle. This tendency can be seen clearly in Fig.~\ref{fig7}, where we plot the behavior of the deflection angle as a function of $\Lambda$ for different values of $\widetilde{J}_r$. Note that the boost parameter is constrained to the interval $0<\Lambda\leq 1$. \\ \begin{figure}[h!] \includegraphics[scale=0.38]{alpha_vs_Lambda_SIS_diff_J.png} \caption{Plot of $\hat{\alpha}_{SIS}$ vs. $\Lambda$ for $\widetilde{J}_r=0.1$ (continuous line), $\widetilde{J}_r=0.2$ (dashed line), and $\widetilde{J}_r=0.3$ (dot-dashed line). We used $\overline{b}=10$, $\sin\chi=1$, and $\omega^2_c/\omega^2=0.5$. \label{fig7}} \end{figure} \begin{figure}[h!] \includegraphics[scale=0.38]{alpha_vs_J_SIS_diff_Lambdas.png} \caption{Plot of $\hat{\alpha}_{SIS}$ vs. $\widetilde{J}_r$ for $\Lambda=1$ (continuous line), $\Lambda=0.25$ (dashed line), and $\Lambda=0.1$ (dot-dashed line). We used $\overline{b}=10$, $\sin\chi=1$, and $\omega^2_c/\omega^2=0.5$.
\label{fig8}} \end{figure} In Fig.~\ref{fig8}, on the other hand, we plot $\hat{\alpha}_{SIS}$ as a function of $\widetilde{J}_r$ for different values of $\Lambda$. From this figure we conclude that not only the dragging of the inertial frame but also the boost parameter $\Lambda$ contributes to the deflection angle: the larger the value of $\widetilde{J}_r$ (together with small values of $\Lambda$), the larger the deflection angle $\hat{\alpha}_{SIS}$. \subsection{\label{sec:nonsingular} Non-singular isothermal gas sphere} Now we consider a gravitational lens model for an isothermal sphere in which the singularity at the origin is replaced by a finite core. The density distribution is given by~\cite{Hinshaw87} \begin{eqnarray} \label{Density_distribution_nosingular} \rho(r)=\frac{\sigma^2_v}{2\pi(r^2+r^2_c)}=\frac{\rho_0}{\left(1+\frac{r^2}{r^2_c}\right)},&&\rho_0=\frac{\sigma^2_v}{2\pi r^2_c}, \end{eqnarray} where $r_c$ is the core radius. Therefore, after substitution of Eq.~(\ref{Density_distribution_nosingular}) in Eqs.~(\ref{concentration_plasma}) and (\ref{plasma_frequency_SIS}), the plasma frequency is expressed as \begin{equation} \label{plasma_frequency_for_non_singular_isthermal_gas_sphere} \omega^2_e=\frac{K_e\sigma^2_v}{2\pi\kappa m_p(r^2+r^2_c)}. \end{equation} Then, from Eqs.~(\ref{non_uniform_plasma}) and (\ref{plasma_frequency_for_non_singular_isthermal_gas_sphere}), the contributions to the deflection angle are (see Appendix~II) \begin{eqnarray} \label{contributions_NSIS} \hat{\alpha}_{S2}&=&\frac{2\overline{b}\omega^2_c}{\pi\omega^2}\bigg[\frac{1}{4\overline{b}^2\overline{r}^2_c}-\frac{{\rm arctanh}\bigg(\frac{\overline{r}_c}{\sqrt{4\overline{b}^2+\overline{r}^2_c}}\bigg)}{\overline{r}^3_c\sqrt{\overline{r}^2_c+4\overline{b}^2}}\bigg]\ ,\\ \hat{\alpha}_{S3}&=&-\frac{1}{2}\frac{\overline{b}\omega^2_c}{(4\overline{b}^2+\overline{r}^2_c)^\frac{3}{2}\omega^2}\ ,\\ \hat{\alpha}_{B2}&=&-\frac{\widetilde{J}_r\omega^2_c}{2\pi\Lambda\omega^2}\left[\frac{1}{4\overline{b}^2\overline{r}^2_c}-\frac{{\rm arctanh}\left(\frac{\overline{r}_c}{\sqrt{4\overline{b}^2+\overline{r}^2_c}}\right)}{\overline{r}^3_c\sqrt{\overline{r}^2_c+4\overline{b}^2}}\right], \\ \hat{\alpha}_{B3}&=&\frac{6}{\pi}\frac{\overline{b}^2\widetilde{J}_r\omega^2_c}{\Lambda\omega^2}\left[\frac{2\overline{r}^2_c-12\overline{b}^2}{48\overline{b}^4\overline{r}^4_c}+\frac{{\rm arctanh}\left(\frac{\overline{r}_c}{\sqrt{4\overline{b}^2+\overline{r}^2_c}}\right)}{\overline{r}^5_c\sqrt{\overline{r}^2_c+4\overline{b}^2}}\right],\nonumber\\ \end{eqnarray} where $\omega^2_c=\frac{K_e\sigma^2_v}{M^2\kappa m_p}$, $\overline{r}_c=r_c/M$, $\widetilde{J}_r=J_r/M^2$, and $\overline{b}=b/2M$.\\ \begin{figure}[h!] \includegraphics[scale=0.38]{alpha_b2M_NSIS_diff_Lambdas.png} \caption{Plot of $\hat{\alpha}_{NSIS}$ vs. $\overline{b}$ for $\Lambda=1$ (continuous line), $\Lambda=0.25$ (dashed line), and $\Lambda=0.1$ (dot-dashed line). We used $\widetilde{J}_r=0.25$, $\overline{r}_c=10$, $\sin\chi=1$, and $\omega^2_c/\omega^2=0.5$.\label{fig9}} \end{figure} In Fig.~\ref{fig9} we plot $\hat{\alpha}_{NSIS}$ as a function of $\overline{b}$ for different values of $\Lambda$. In the plot we take $\overline{b} \gg \overline{r}_c$, because we are in the weak-field limit. The closed forms in Eqs.~(\ref{contributions_SIS}) and (\ref{contributions_NSIS}) can also be checked by direct numerical quadrature, as sketched below.
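The following minimal Python sketch is our own numerical cross-check, not part of the original derivation; for illustration, it verifies the \textbf{SIS} refraction term $\hat{\alpha}_{S3}$ of Eq.~(\ref{contributions_SIS}) with $M=1$ and $\omega^2_c/\omega^2=1$: \begin{verbatim}
import numpy as np
from scipy.integrate import quad

M, b_bar = 1.0, 10.0
b = 2.0 * M * b_bar                   # impact parameter, geometrized units

# SIS plasma frequency: omega_e^2 = omega_c^2 M^2 / (2 pi r^2), here omega_c^2 = 1
# refraction term: alpha_S3 = (b/omega^2) int_0^inf (1/r) d(omega_e^2)/dr dz
integrand = lambda z: -1.0 / (np.pi * (b**2 + z**2)**2)  # (1/r) d(omega_e^2)/dr
numeric = b * quad(integrand, 0.0, np.inf)[0]
closed = -1.0 / (16.0 * b_bar**2)     # Eq. (contributions_SIS), omega_c^2/omega^2 = 1
print(numeric, closed)                # both give -6.25e-4
\end{verbatim} The arctanh expressions of Eq.~(\ref{contributions_NSIS}) can be checked in the same way by inserting the \textbf{NSIS} profile into the integrand.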
According to Fig.~\ref{fig9}, the behavior is quite similar to that of the deflection angle in the case of the singular plasma distribution: there are small differences in $\hat{\alpha}_{NSIS}$ when small values of $\Lambda$ are considered, and there is no appreciable difference in the deflection angle when the impact parameter $\overline{b}$ takes larger values. Fig.~\ref{fig10} helps to see this behavior clearly.\\ In Fig.~\ref{fig11} we plot the deflection angle as a function of $\widetilde{J}_r$ for different values of $\Lambda$. Once again, the dragging of the inertial frame, along with small values of the boost parameter $\Lambda$, plays an important role when compared with the slowly rotating case~\cite{Morozova13}. \begin{figure}[h!] \includegraphics[scale=0.38]{alpha_vs_Lambda_NSIS_diff_J.png} \caption{Plot of $\hat{\alpha}_{NSIS}$ vs. $\Lambda$ for $\widetilde{J}_r=0.1$ (continuous line), $\widetilde{J}_r=0.2$ (dashed line), and $\widetilde{J}_r=0.3$ (dot-dashed line). We used $\overline{b}=100$, $\overline{r}_c=10$, $\sin\chi=1$, and $\omega^2_c/\omega^2=0.5$. Note the scale: the deflection angle is given in units of $10^{-5}$. \label{fig10}} \end{figure} \begin{figure}[h!] \includegraphics[scale=0.38]{alpha_vs_J_NSIS_diff_Lambdas.png} \caption{Plot of $\hat{\alpha}_{NSIS}$ vs. $\widetilde{J}_r$ for $\Lambda=1$ (continuous line), $\Lambda=0.25$ (dashed line), and $\Lambda=0.1$ (dot-dashed line). We used $\overline{b}=100$, $\overline{r}_c=10$, $\sin\chi=1$, and $\omega^2_c/\omega^2=0.5$. Note the scale: the deflection angle is given in units of $10^{-2}$. \label{fig11}} \end{figure} \subsection{ \label{sec:galaxy_cluster}Plasma in a galaxy cluster} In a galaxy cluster, due to the large temperature of the electrons, their distribution may be taken as homogeneous. Therefore, it is appropriate to adopt a singular isothermal sphere as a model for the distribution of the gravitating matter. Using this approximation, and neglecting the mass of the plasma, Bisnovatyi-Kogan and Tsupko solved the equation of hydrostatic equilibrium of a plasma in a gravitational field, finding that the plasma density distribution has the form~\cite{Kogan10} \begin{equation} \label{plasma_distribution_in_cluster} \rho(r)=\rho_0\left(\frac{r}{r_0}\right)^{-s},\qquad s=\frac{2\sigma^2_v}{\mathfrak{R}T}, \end{equation} and the plasma frequency is equal to \begin{equation} \label{plasma_frequency_cluster} \omega^2_e=\frac{\rho_0K_e}{\kappa m_p}\left(\frac{r}{r_0}\right)^{-s}.
\end{equation} Hence, using Eqs.~(\ref{non_uniform_plasma}) and (\ref{plasma_frequency_cluster}) once again, the contributions to the deflection angle are (see Appendix~II) \begin{eqnarray} \label{contributions_PGC} \hat{\alpha}_{S2}&=&\frac{\sqrt{\pi}}{2^{s}(s+1)}\frac{\overline{r}^s_0\omega^2_f}{\overline{b}^{s+1}\omega^2}\frac{\Gamma(\frac{s}{2}+1)}{\Gamma(\frac{s+1}{2})}\ ,\\ \hat{\alpha}_{S3}&=&-\frac{\sqrt{\pi}}{2^s}\frac{\omega^2_f}{\omega^2}\frac{\Gamma(\frac{s+1}{2})}{\Gamma(\frac{s}{2})}\left(\frac{\overline{r}_0}{\overline{b}}\right)^s\ ,\\ \hat{\alpha}_{B2}&=&-\frac{\sqrt{\pi}}{2^{s+2}(s+1)}\frac{\widetilde{J}_r\overline{r}^s_0\omega^2_f}{\overline{b}^{s+2}\Lambda\omega^2}\frac{\Gamma(\frac{s}{2}+1)}{\Gamma(\frac{s+1}{2})}\ ,\\ \hat{\alpha}_{B3}&=&\frac{3\sqrt{\pi}}{2^{s+2}(s+3)}\frac{\widetilde{J}_r \overline{r}^s_0\omega^2_f}{\overline{b}^{s+2}\Lambda\omega^2}\frac{\Gamma(\frac{s+4}{2})}{\Gamma(\frac{s+3}{2})}\ , \end{eqnarray} where $\omega^2_f=\frac{K_e\rho_0}{\kappa m_p}$, $\overline{r}_0=r_0/M$, $\widetilde{J}_r=J_r/M^2$, and $\overline{b}=b/2M$.\\ \begin{figure}[h!] \includegraphics[scale=0.38]{alpha_vs_b2M_PGC_diff_Lambda.png} \caption{Plot of $\hat{\alpha}_{PGC}$ vs. $\overline{b}$ for $\Lambda=1$ (continuous line), $\Lambda=0.25$ (dashed line), and $\Lambda=0.1$ (dot-dashed line). We used $\widetilde{J}_r=0.25$, $\overline{r}_0=10$, $\sin\chi=1$, $s=0.03$, and $\omega^2_f/\omega^2=0.5$.\label{fig12}} \end{figure} \begin{figure}[h!] \includegraphics[scale=0.38]{alpha_vs_Lambda_PGC_diff_J.png} \caption{Plot of $\hat{\alpha}_{PGC}$ vs. $\Lambda$ for $\widetilde{J}_r=0.1$ (continuous line), $\widetilde{J}_r=0.2$ (dashed line), and $\widetilde{J}_r=0.3$ (dot-dashed line). We used $\overline{r}_0=10$, $\sin\chi=1$, $s=0.03$, $\overline{b}=100$, and $\omega^2_f/\omega^2=0.5$. Note the scale: the deflection angle is given in units of $10^{-3}$.\label{fig13}} \end{figure} \begin{figure}[h!] \includegraphics[scale=0.38]{alpha_vs_J_PGC_diff_Lambda.png} \caption{Plot of $\hat{\alpha}_{PGC}$ vs. $\widetilde{J}_r$ for $\Lambda=1$ (continuous line), $\Lambda=0.25$ (dashed line), and $\Lambda=0.1$ (dot-dashed line). We used $\overline{r}_0=1.2$, $\sin\chi=1$, $s=0.03$, $\overline{b}=100$, and $\omega^2_f/\omega^2=0.5$. Note the scale: the deflection angle is given in units of $10^{-3}$.\label{fig14}} \end{figure} In Figs.~\ref{fig12}, \ref{fig13}, and \ref{fig14} we plot $\hat{\alpha}_{PGC}$ as a function of $\overline{b}$, $\Lambda$, and $\widetilde{J}_r$, respectively. In order to obtain these plots we considered the case $s\ll1$ \cite{Kogan10}. According to Figs.~\ref{fig12} and \ref{fig13}, differences in the deflection angle can be seen clearly for the \textbf{PGC} distribution when compared with the previous distributions. Furthermore, Fig.~\ref{fig14} shows that the deflection angle increases due to the dragging and to small values of $\Lambda$. The $\Gamma$-function prefactors in Eq.~(\ref{contributions_PGC}) can be cross-checked numerically, as sketched below.
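As a sketch of ours (with illustrative values $s=0.03$, $\overline{r}_0=\overline{b}=10$, $\omega^2_f/\omega^2=1$, and $M=1$), the $\Gamma$-function reduction of $\hat{\alpha}_{S2}$ can be verified as follows: \begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

M, b_bar, r0_bar, s = 1.0, 10.0, 10.0, 0.03
b, r0 = 2.0 * M * b_bar, M * r0_bar

# alpha_S2 = (2 M b/omega^2) int_0^inf omega_e^2 / r^3 dz,
# with omega_e^2 = omega_f^2 (r/r0)^(-s) and omega_f^2/omega^2 = 1
numeric = 2.0 * M * b * quad(
    lambda z: (np.sqrt(b**2 + z**2) / r0)**(-s) / (b**2 + z**2)**1.5,
    0.0, np.inf)[0]
closed = (np.sqrt(np.pi) / (2.0**s * (s + 1.0)) * r0_bar**s / b_bar**(s + 1.0)
          * gamma(s / 2.0 + 1.0) / gamma((s + 1.0) / 2.0))
print(numeric, closed)   # agreement confirms the Gamma-function reduction
\end{verbatim}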
On the other hand, in Fig.~\ref{fig15} we plot the behavior of the deflection angle for all the distributions as a function of the impact parameter $\overline{b}$. Note that the values of $\hat{\alpha}$ for the \textbf{PGC} distribution are greater than those of the other two distributions, and that there is a small difference between the \textbf{SIS} and \textbf{NSIS} distributions for small values of $b/2M$.\\ Finally, in Fig.~\ref{fig16} we plot $\hat{\alpha}$ as a function of $\omega^2_c/\omega^2$ (for \textbf{SIS} and \textbf{NSIS}) and $\omega^2_f/\omega^2$ (for \textbf{PGC}). This figure clearly shows that, for values of $\omega^2_f/\omega^2$ greater than $0.4$, the deflection angle is more affected by the plasma for the \textbf{PGC} distribution than for the other two. \begin{figure}[h!] \includegraphics[scale=0.38]{alpha_vs_b2M_all_distributions.png} \caption{Plot of $\hat{\alpha}$ vs. $\overline{b}$ for \textbf{SIS} (continuous line), \textbf{NSIS} (dashed line), and \textbf{PGC} (dot-dashed line). We used $\Lambda=0.1$, $\overline{r}_c=10$, $\overline{r}_0=10$, $\sin\chi=1$, $s=0.03$, and $\omega^2_f/\omega^2=\omega^2_c/\omega^2=0.5$. For \textbf{NSIS} we use $\Lambda=1$, since no difference from \textbf{SIS} was found.\label{fig15}} \end{figure} \begin{figure}[h!] \includegraphics[scale=0.38]{alpha_vs_w_wf_all_cases.png} \caption{Plot of $\hat{\alpha}$ vs. $\omega^2_f/\omega^2$ and $\omega^2_c/\omega^2$ for \textbf{SIS} (continuous line), \textbf{NSIS} (dashed line), and \textbf{PGC} (dot-dashed line). We used $\Lambda=0.1$, $\overline{r}_c=10$, $\overline{r}_0=10$, $\sin\chi=1$, $s=0.03$, and $\overline{b}=9$.\label{fig16}} \end{figure} \section{\label{sec:magnification} Lens equation and magnification in the presence of plasma} In this section, we compute the magnification for the boosted Kerr metric in the presence of plasma. We consider the uniform and the \textbf{SIS} plasma distributions discussed previously in Subsection~\ref{sec:Deflection_angle_for_the_slowly_rotating_case} and Section~\ref{sec:Models}, respectively.\\ The magnification of the brightness of the star is defined by the relation~\cite{Morozova13} \begin{equation} \label{Magnification} \begin{array}{cc} \mu_\Sigma=\frac{I_{tot}}{I_{\ast}}=\sum_k \left|\left(\frac{\theta_k}{\beta}\right)\left(\frac{d\theta_k}{d\beta}\right)\right|,&k=1,2,...,m, \end{array} \end{equation} where $m$ is the number of images, $I_{tot}$ is the total brightness of the images, $I_{\ast}$ is the unlensed brightness of the source, $\theta_k$ is the position of the $k$-th image, and $\beta$ is the angular position of the source (see Fig.~\ref{esquema}). In this sense, in order to compute the contribution of the boost parameter $\Lambda$ to the magnification, we have to solve the lens equation, which is given by the relation~\cite{Morozova13} \begin{equation} \label{lens_equation} \theta D_s=\beta D_s+\hat{\alpha}D_{ls}, \end{equation} where $D_s$ is the distance from the observer to the source, $D_{ls}$ is the distance from the lens to the source, $\hat{\alpha}$ is the deflection angle, and $\theta$ and $\beta$ are the positions of the image and the source, respectively (see Fig.~\ref{esquema}). \subsection{Uniform plasma} In the case of small angles, it is well known that the impact parameter can be expressed as \begin{equation} \label{small_angles} b\approx D_l\theta, \end{equation} where $D_l$ is the distance from the observer to the lens.
Therefore, after using Eq.~(\ref{deflection_angle_rotating_case_homgeneous_plasma}), the lens equation for the slowly rotating case in the presence of uniform plasma takes the form \begin{equation} \label{Lens_slowly_rotating} \theta^3-\beta\theta^2-\frac{\theta^2_E}{2}\left(1+\frac{1}{1-\frac{\omega^2_e}{\omega^2}}\right)\theta-\frac{\theta^2_E\tilde{J}_r}{4\overline{D}_l\Lambda}\frac{1}{\sqrt{1-\frac{\omega^2_e}{\omega^2}}}=0. \end{equation} In the last expression, in order to be consistent with the notation, we use $\overline{b}\approx\overline{D}_l\theta$, where $\overline{D}_l=D_l/2M$. Furthermore, we have defined \begin{equation} \label{Einstein_ring} \theta^2_E=\frac{4MD_{ls}}{D_{l}D_{s}}=\frac{2\overline{D}_{ls}}{\overline{ D}_l\overline{D}_s}, \end{equation} with $\overline{D}_{ls}=D_{ls}/2M$ and $\overline{D}_s=D_s/2M$; $\theta_E$ is known as the Einstein angle. Note that Eq.~(\ref{Lens_slowly_rotating}) reduces to that obtained in \cite{Morozova13} for $\Lambda=1$ ($v=0$).\\ In order to solve Eq.~(\ref{Lens_slowly_rotating}), we introduce a new variable $x$ through the relation (see \cite{Morozova13,Abramowitz72} for details) \begin{equation} \label{new_variable} \theta=x+\frac{\beta}{3}, \end{equation} from which Eq.~(\ref{Lens_slowly_rotating}) reduces to \begin{equation} \label{new_lens_equation_rotating_case} x^3+px+q=0, \end{equation} where \begin{equation} \label{p_and_q} \begin{aligned} p&=-\frac{\beta^2}{3}-\frac{\theta^2_E}{2}\left(1+\frac{1}{1-\frac{\omega^2_e}{\omega^2}}\right)\\ q&=-\frac{2\beta^3}{27}-\frac{\beta\theta^2_E}{6}\left(1+\frac{1}{1-\frac{\omega^2_e}{\omega^2}}\right)-\frac{\theta^2_E\tilde{J}_r}{4\overline{D}_l\Lambda}\frac{1}{\sqrt{1-\frac{\omega^2_e}{\omega^2}}}.\\ \end{aligned} \end{equation} Note that the coefficient $q$, in contrast with the result obtained in \cite{Morozova13}, depends on the boost parameter $\Lambda$.\\ Eq.~(\ref{new_lens_equation_rotating_case}) has three different real roots if \begin{equation} \frac{q^2}{4}+\frac{p^3}{27}<0. \end{equation} Therefore, the solution has the form \begin{equation} \begin{array}{cc} x=2\sqrt[3]{r}\cos\frac{\phi+2k\pi}{3},&\hspace{1cm}k=0,1,2 \end{array} \end{equation} with \begin{equation} \begin{array}{cc} r=\sqrt{-\frac{p^3}{27}},\hspace{1cm}&\cos\phi=-\frac{q}{2r}. \end{array} \end{equation} Hence, after using Eqs.~(\ref{Magnification}) and (\ref{new_variable}), we obtain \begin{equation} \label{magnification_uniform_plasma} \begin{aligned} \mu_{\Sigma tot}&=\sum_k \left|\frac{\theta_k}{\beta} \frac{d\theta_k}{d\beta}\right|=\sum_k \left|\frac{x_k+\beta/3}{\beta}\left(\frac{dx_k}{d\beta}+\frac{1}{3}\right)\right|\\ &=\sum_k \left|\frac{1}{3\beta}\left(2\sqrt[3]{r}\cos\frac{\phi+2k\pi}{3}+\frac{\beta}{3}\right)\right|\\ &\times\left|\left[\frac{2r_\beta}{\sqrt[3]{r^2}}\cos\frac{\phi+2k\pi}{3}-2 \sqrt[3]{r}\phi_\beta\sin\frac{\phi+2k\pi}{3}+1\right]\right| \end{aligned} \end{equation} for $k=0,1,2$, where the subscript $\beta$ denotes the derivative of the corresponding variable with respect to $\beta$.\\ In Ref.~\cite{Bisnovatyi2010}, the authors found that the magnification for small values of $\beta$ has the form (see equation (32) in \cite{Morozova13})\\ \begin{equation} \label{Magnification_Bisnov} \mu=\frac{1}{2}\frac{\sqrt{2\theta^2_E\left(1+\frac{1}{1-\frac{\omega^2_e}{\omega^2}}\right)}}{\beta}.
\end{equation} Therefore, in order to study the behaviour of the magnification for small values of $\beta$, and to compare with the case of uniform plasma studied in \cite{Bisnovatyi2010}, it is necessary to express Eq.~(\ref{magnification_uniform_plasma}) in the limit $\beta\rightarrow 0$. Hence, for small values of $\beta$ we have that \begin{equation} \label{beta_zero} \begin{aligned} \sqrt[3]{r}&\rightarrow\sqrt{\frac{1}{6}\theta^2_E\left(1+\frac{1}{1-\frac{\omega^2_e}{\omega^2}}\right)},\\ r_\beta&\rightarrow 0,\\ \phi_\beta&\rightarrow\frac{1}{\sqrt{1-\left(\frac{q}{2r}\right)^2}}\frac{q_\beta}{2r}\\ q_\beta&\rightarrow-\frac{\beta\theta^2_E}{6}\left(1+\frac{1}{1-\frac{\omega^2_e}{\omega^2}}\right)\\ \cos\phi&=-\frac{q}{2r}\rightarrow\frac{\sqrt{27}}{\sqrt{2}}\frac{1}{\theta_E}\frac{\tilde{J}_r}{\overline{D}_l\Lambda}\frac{1}{\sqrt{1-\frac{\omega^2_e}{\omega^2}}}\frac{1}{\sqrt{\left(1+\frac{1}{1-\frac{\omega^2_e}{\omega^2}}\right)^3}},\\ \end{aligned} \end{equation} where we have followed the same analysis as in \cite{Morozova13}. Note that $-q/2r$, in our case, depends on $\Lambda$. Thus, after using Eqs.~(\ref{magnification_uniform_plasma}), (\ref{Magnification_Bisnov}), and (\ref{beta_zero}), we find that $\mu_{\Sigma tot}/\mu$, in the limit $\beta\rightarrow0$, takes the form \begin{equation} {\frac{\mu_{\Sigma tot}}{\mu}}=\frac{1}{3\sqrt{3}}\sum_{k}\left|\frac{\sin\frac{2(\phi+2k\pi)}{3}}{\sqrt{1-\left(\frac{q}{2r}\right)^2}}+2\cos\frac{\phi+2k\pi}{3}\right|. \end{equation} Now, setting $\tilde{J}_r=0$ and $\Lambda=1$, the last expression reduces to \begin{equation} \label{matching_32_84} \begin{aligned} \frac{\mu_{\Sigma tot}}{\mu}&=\frac{1}{3\sqrt{3}}\sum_{k}\left|\sin\frac{(1+4k)\pi}{3}+2\cos\frac{(1+4k)\pi}{6}\right|\\ &=\frac{1}{3\sqrt{3}}\left| 2 \cos \left(\frac{\pi }{6}\right)+\sin \left(\frac{\pi }{3}\right)\right|\\ &+\frac{1}{3\sqrt{3}}\left| 2 \cos \left(\frac{5 \pi }{6}\right)+\sin \left(\frac{5 \pi }{3}\right)\right|\\ &+\frac{1}{3\sqrt{3}}\left| 2 \cos \left(\frac{9 \pi }{6}\right)+\sin (3 \pi )\right|=1. \end{aligned} \end{equation} With this result, we have shown that the ratio $\mu_{\Sigma tot}/\mu$ is equal to unity when $\tilde{J}_r=0$ and $\Lambda=1$; this means that Eq.~(\ref{magnification_uniform_plasma}) reduces to Eq.~(\ref{Magnification_Bisnov}) in the limit $\beta\rightarrow 0$.\\ In Figs.~\ref{fig17}\textcolor{blue}{.a} and \ref{fig17}\textcolor{blue}{.c}, we plot the behaviour of the total magnification as a function of the boost parameter $\Lambda$ for $\beta=0.001$ and $\beta=0.0001$, respectively. According to Fig.~\ref{fig17}\textcolor{blue}{.a}, when $\beta=0.001$, the total magnification decreases as $\Lambda$ increases; that is, $\mu_{\Sigma tot}$ decreases as the boost velocity $v$ of the black hole decreases. A similar behaviour can be seen in Fig.~\ref{fig17}\textcolor{blue}{.c} for $\beta=0.0001$. Note that for smaller values of $\beta$ the magnitude of the total magnification increases: when $\beta=0.001$ the total magnification is about $\mu_{\Sigma tot}\approx 52.2$, while for $\beta=0.0001$ the value increases to $\mu_{\Sigma tot}\approx 522.2$. \subsection{Singular isothermal sphere} In a similar way, in order to compute the magnification for the \textbf{SIS} distribution, we also use the small-angle approximation described in Eq.~(\ref{small_angles}).
Hence, after using Eq.~(\ref{alpha_SIS}), the lens equation for the \textbf{SIS} distribution takes the form \begin{equation} \label{lens_SIS} \begin{aligned} \theta^3-\beta\theta^2-\frac{2\overline{D}_{ls}}{\overline{D}_l\overline{D}_s}\theta-\frac{\overline{D}_{ls}}{\overline{D}^2_l\overline{D}_s}\left(\frac{\tilde{J}_r}{2\Lambda}-\frac{\omega^2_c}{16\omega^2}\right)=0. \end{aligned} \end{equation} In the last equation, as an approximation, we neglected the second and the last two terms of Eq.~(\ref{alpha_SIS}), since they are very small in the weak-field limit. Then, using Eq.~(\ref{Einstein_ring}), Eq.~(\ref{lens_SIS}) can be expressed in terms of the Einstein angle as \begin{equation} \label{lens_SIS_I} \theta^3-\beta\theta^2-\theta^2_E\theta-\frac{\delta\theta^2_E}{\Lambda}=0, \end{equation} where we defined \begin{equation} \label{delta} \begin{aligned} \delta&=\frac{1}{\overline{D}_l}\left(\frac{\tilde{J}_r}{4}-\frac{\omega^2_c\Lambda}{32\omega^2}\right).\\ \end{aligned} \end{equation} Now, introducing the new variable $y$ through $\theta=y+\beta/3$, Eq.~(\ref{lens_SIS_I}) reduces to \begin{equation} \label{lens_SIS_new_varable} y^3+my+n=0, \end{equation} with \begin{equation} \label{PQR} \begin{aligned} m=&-\frac{\beta^2}{3}-\theta^2_E\\ n=&-\frac{2\beta^3}{27}-\frac{\beta\theta^2_E}{3}-\frac{\delta\theta^2_E}{\Lambda}.\\ \end{aligned} \end{equation} Eq.~(\ref{lens_SIS_new_varable}) has three different real roots if \begin{equation} \frac{n^2}{4}+\frac{m^3}{27}<0. \end{equation} This condition is satisfied in our case. Hence, the solutions have the form \begin{equation} \begin{array}{cc} y=2\sqrt[3]{l}\cos\frac{\epsilon+2k\pi}{3},&k=0,1,2 \end{array} \end{equation} with \begin{equation} \begin{array}{cc} l=\sqrt{-\frac{m^3}{27}},\hspace{1cm}&\cos\epsilon=-\frac{n}{2l}. \end{array} \end{equation} Therefore, after using Eq.~(\ref{Magnification}) and the new variable $y$, we obtain \begin{equation} \label{magnification_SIS} \begin{aligned} \mu_\Sigma&=\sum_k \left|\frac{1}{3\beta}\left(2\sqrt[3]{l}\cos\frac{\epsilon+2k\pi}{3}+\frac{\beta}{3}\right)\right|\\ &\times\left|\left[\frac{2l_\beta}{\sqrt[3]{l^2}}\cos\frac{\epsilon+2k\pi}{3}-2 \sqrt[3]{l}\epsilon_\beta\sin\frac{\epsilon+2k\pi}{3}+1\right]\right| \end{aligned} \end{equation} for $k=0,1,2$, where the subscript $\beta$ has the same meaning as in Eq.~(\ref{magnification_uniform_plasma}).\\ \begin{figure*}[t] \begin{center} a.\includegraphics[scale=0.65]{Magnification_uniform_001.pdf} b.\includegraphics[scale=0.65]{Magnification_SIS_001.pdf} c.\includegraphics[scale=0.65]{Magnification_uniform_0001.pdf} d.\includegraphics[scale=0.65]{Magnification_SIS_0001.pdf} \caption{(\textbf{a}) Plot of $\mu_{\Sigma tot}$ vs. $\Lambda$ when $\beta=0.001$ for uniform plasma. (\textbf{b}) Plot of $\mu_{\Sigma tot}$ vs. $\Lambda$ when $\beta=0.001$ for the \textbf{SIS} distribution. (\textbf{c}) Plot of $\mu_{\Sigma tot}$ vs. $\Lambda$ when $\beta=0.0001$ for uniform plasma. (\textbf{d}) Plot of $\mu_{\Sigma tot}$ vs. $\Lambda$ when $\beta=0.0001$ for the \textbf{SIS} distribution. In all the panels we considered $\overline{D}_{ls}=10$, $\overline{D}_l=100$, $\overline{D}_s=110$, $\omega^2_e/\omega^2=\omega^2_c/\omega^2=0.5$, $\theta^2_E=0.001818$, and $\tilde{J}_r=0.3$. \label{fig17}} \end{center} \end{figure*} In Figs.~\ref{fig17}\textcolor{blue}{.b} and \ref{fig17}\textcolor{blue}{.d}, we plot the behaviour of $\mu_{\Sigma tot}$ as a function of the boost parameter $\Lambda$ for $\beta=0.001$ and $\beta=0.0001$, respectively.
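For reference, the trigonometric solution and the magnification sum translate directly into code. The following is a minimal Python sketch of ours for the uniform-plasma cubic (\ref{Lens_slowly_rotating}) (panels a and c of Fig.~\ref{fig17}); the derivative $d\theta_k/d\beta$ is taken by finite differences, and the parameter values are those of the caption. The \textbf{SIS} case is obtained by replacing $p$ and $q$ with $m$ and $n$ of Eq.~(\ref{PQR}): \begin{verbatim}
import numpy as np

# parameters of Fig. 17: theta_E^2 = 2 Dls/(Dl Ds) in barred units
theta_E2, J_r, D_l, Lam, w = 0.001818, 0.3, 100.0, 0.5, 0.5

def images(beta):
    # real roots of the cubic lens equation via the trigonometric method
    X = 1.0 + 1.0 / (1.0 - w)
    p = -beta**2 / 3.0 - 0.5 * theta_E2 * X
    q = (-2.0 * beta**3 / 27.0 - beta * theta_E2 * X / 6.0
         - theta_E2 * J_r / (4.0 * D_l * Lam * np.sqrt(1.0 - w)))
    r = np.sqrt(-p**3 / 27.0)
    phi = np.arccos(-q / (2.0 * r))
    return [2.0 * np.cbrt(r) * np.cos((phi + 2.0 * np.pi * k) / 3.0) + beta / 3.0
            for k in range(3)]

def mu_tot(beta, h=1e-9):
    # total magnification with dtheta/dbeta by finite differences
    th, thp = images(beta), images(beta + h)
    return sum(abs((t / beta) * (tp - t) / h) for t, thp_t in [] or zip(th, thp)
               for tp in [thp_t])

print(mu_tot(0.001))   # close to the value of about 52.2 quoted in the text
\end{verbatim} A cleaner equivalent of the sum is \texttt{sum(abs((t/beta)*(tp-t)/h) for t, tp in zip(th, thp))}.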
In contrast with the previous case (uniform plasma), we see that the total magnification increases as $\Lambda$ increases. On the other hand, note that for smaller values of $\beta$ the magnitude of $\mu_{\Sigma tot}$ increases: it changes, for example, from $42.6$ to $426.3$ when $\beta$ changes from $0.001$ to $0.0001$. \section{Conclusion} \begin{figure*}[t] \begin{center} a.\includegraphics[scale=0.65]{Figure18a.pdf} b.\includegraphics[scale=0.65]{figure18b.pdf} \caption{(\textbf{a}) Plot of $\mu_{\Sigma tot}$ vs. $\Lambda$ when $\beta=0.001$ for uniform plasma. (\textbf{b}) Plot of $\mu_{\Sigma tot}$ vs. $\Lambda$ when $\beta=0.001$ for the \textbf{SIS} distribution. In both panels we considered $\overline{D}_{ls}=10$, $\overline{D}_l=100$, $\overline{D}_s=110$, $\omega^2_e/\omega^2=\omega^2_c/\omega^2=0.5$, $\theta^2_E=0.001818$, and $\tilde{J}_r=0.3$. \label{fig18}} \end{center} \end{figure*} In this work we have studied the deflection angle for the boosted Kerr metric in the presence of both homogeneous and non-homogeneous plasma; in the latter case, three different distributions have been considered.\\ In Subsection~\ref{sec:nonrotating_case_uniformplasma} we investigated the behavior of the deflection angle for the non-rotating case in the presence of uniform plasma ($\omega_e= \text{constant}$) by considering small values of $v$. According to Eq.~(\ref{deflection_angle_nonrotating_case}), we found that $\hat{\alpha}_b$ does not depend, at least at first order, on the velocity $v$. It was also found that, for constant $\omega_e$, the deflection angle in Eq.~(\ref{delflection_angle_with_plasma_boosted_kerr_metric}) reduces to that obtained in~\cite{Kogan10} (see Eq.~(\ref{deflection_angle_nonrotating_case})). As a consequence, the optics for the non-rotating boosted Kerr metric is the same as in the Schwarzschild case. In this sense, the bending of light, due to the presence of a uniform plasma, is greater than in the Schwarzschild vacuum case for values of $\omega^2_e/\omega^2$ smaller than unity.\\ In Subsection~\ref{sec:Deflection_angle_for_the_slowly_rotating_case}, we studied the rotating case by considering a uniform distribution. Following the ideas of \cite{Morozova13}, we found that the expression for the deflection angle $\hat{\alpha}_b$ in Eq.~(\ref{deflection_angle_rotating_case_homgeneous_plasma}) contains two terms: the Schwarzschild angle $\hat{\alpha}_{bS}$ and the contribution due to the dragging of the inertial frame ${\hat{\alpha}_{bD}}$. The result is quite similar to that of Morozova et al.~\cite{Morozova13}; however, in contrast with their result, Eq.~(\ref{deflection_angle_rotating_case_homgeneous_plasma}) also depends on the parameter $\Lambda$. This dependence is shown in Fig.~\ref{fig5}. From this figure we found that the smaller the value of $\Lambda$ (constrained to the interval $0<\Lambda\leq 1$), the greater the deflection angle. In this sense, not only the dragging and the presence of a plasma, but also the motion of the black hole contributes to the lensing. Therefore, since no effect was found in the previous case, we may conclude that $\hat{\alpha}_b$ depends on $v$ only when the dragging of the inertial frame takes place.\\ In Section~\ref{sec:Models}, we considered the deflection angle in terms of $\overline{b}$, $\Lambda$, and $\widetilde{J}_r$ for different distributions.
As shown in our figures, $\hat{\alpha}$ is affected by the presence of plasma and is greater when compared with the vacuum and uniform cases. Furthermore, we found again that $\hat{\alpha}$ increases not only due to the dragging, but also when small values of the boost parameter $\Lambda$ are considered.\\ In this work, we also found some important constraints for two of the models. In the case of the \textbf{NSIS}, for example, the radius of the core $r_c$ must take values greater than $6M$: if the core radius is smaller than this limit, the deflection angle becomes negative at some point and does not show the usual behavior when $b\rightarrow\infty$. On the other hand, regarding the \textbf{PGC}, we found that $s$ must be different from $-1$ and $-3$, as can be seen from Eq.~(\ref{contributions_PGC}). Nevertheless, this condition is fulfilled, since we consider positive values of $s\ll1$. \\ No important difference between the models was found when the deflection angle was considered. In the case of the \textbf{SIS} and \textbf{NSIS} distributions, for example, the behavior is very similar; therefore, under the weak-field approximation, it is not possible to distinguish these two distributions. Nevertheless, the deflection angle is affected considerably when we consider a plasma in a galaxy cluster: the values of the deflection angle are greater than those obtained with the other two models. This behavior is clearly shown in Fig.~\ref{fig15}. Furthermore, according to Fig.~\ref{fig16}, we found that the deflection angle is most affected by the plasma when the \textbf{PGC} distribution is considered.\\ Finally, in Section~\ref{sec:magnification}, as an application, we computed the total magnification for the uniform and \textbf{SIS} plasma distributions. According to Fig.~\ref{fig17}, we conclude that, for small values of $v$ ($0.7\leq\Lambda\leq1$), the total magnification is greater when the uniform plasma distribution is considered. For example, in the case of the uniform distribution (considering $\beta=0.001$), we see that $\mu_{\Sigma tot}\approx52.22$; nevertheless, for the \textbf{SIS} distribution, we found that $\mu_{\Sigma tot}\approx42.64$. A similar behaviour occurs when $\beta=0.0001$. Furthermore, it is important to point out that the total magnification shows only small changes in both distributions: $\mu_{\Sigma tot}$ ranges from $52.2285$ to $52.2305$ for the uniform plasma, and from $42.643938$ to $42.643944$ for the \textbf{SIS}; the change is very small for the latter distribution. \\ On the other hand, when we compare both models (uniform and \textbf{SIS} plasma distributions), we see that the behaviour of the total magnification is different (see Figs.~\ref{fig18}\textcolor{blue}{.a} and \ref{fig18}\textcolor{blue}{.b}). In the case of the uniform plasma distribution, for example, the behaviour is very similar whether the boosted Kerr black hole is moving towards ($\Lambda>0$) or away ($\Lambda<0$) from the observer (there is a small difference when $\Lambda\rightarrow -1$ and $\Lambda\rightarrow 1$). However, when we consider the \textbf{SIS} distribution, the behaviour is not symmetric. This asymmetry is due to kinematic effects. In this sense, when the magnification is considered, it would be possible to distinguish between the two models. \begin{acknowledgements} The authors thank the anonymous Referees for carefully reading the manuscript and for their valuable suggestions. We also want to thank Prof. N. Dadhich for valuable discussions.
This work was supported by the National Natural Science Foundation of China (Grant No. U1531117) and Fudan University (Grant No. IDH1512060). C.A.B.G. also acknowledges support from the China Scholarship Council (CSC), Grant No. 2017GXZ019022. C.B. also acknowledges the support from the Alexander von Humboldt Foundation. The research is supported in part by Grants No. VA-FA-F-2-008 and No. YFA-Ftech-2018-8 of the Uzbekistan Ministry for Innovation Development, by the Abdus Salam International Centre for Theoretical Physics through Grant No. OEA-NT-01, and by an Erasmus+ exchange grant between Silesian University in Opava and the National University of Uzbekistan. \end{acknowledgements} \section*{Appendix I: Transformation to Cartesian coordinates} The transformation relations for the non-rotating case ($a=0$) are (see \cite{Visser07}) \begin{equation} \label{transformation_relation_nonrotating} \begin{aligned} \overline{t}&=t\\ \overline{x}&=r\sin\theta\cos\phi\\ \overline{y}&=r\sin\theta\sin\phi\\ \overline{z}&=r\cos\theta.\\ \end{aligned} \end{equation} Therefore, the Jacobian matrix has the form \begin{equation} \label{Jacobian_Matrix} \mathbf{J}=\left(\begin{array}{cccc} 1&0&0&0\\ 0&\cos\phi\sin\theta&r\cos\phi\cos\theta&-r\sin\phi\sin\theta\\ 0&\sin\phi\sin\theta&r\sin\phi\cos\theta&r\cos\phi\sin\theta\\ 0&\cos\theta&-r\sin\theta&0 \end{array} \right). \end{equation} We seek expressions of the form \begin{equation} \label{transformation} dx^\mu=\frac{\partial x^\mu}{\partial \overline{x}^\nu}d\overline{x}^\nu, \end{equation} where $x^\mu$ denotes the Boyer-Lindquist coordinates ($t$, $r$, $\theta$, $\phi$) and $\overline{x}^\nu$ denotes the Cartesian coordinates ($\overline{t}$, $\overline{x}$, $\overline{y}$, $\overline{z}$). According to Eq.~(\ref{transformation}), the Jacobian of the inverse transformation has the form \begin{equation} \label{inverse_Jacobian} \mathbf{J}^{-1}=\left(\frac{\partial x^\mu}{\partial \overline{x}^\nu}\right). \end{equation} In order to find $\mathbf{J}^{-1}$, we use the well-known relation (see \cite{Levi-Civita61,Lovelock12}) \begin{equation} \mathbf{J}\times \mathbf{J}^{-1}=\mathbf{I}.
\end{equation} Thus, the inverse transformation is \begin{equation} \label{inverse_Jacobian_Matrix} \mathbf{J}^{-1}=\left(\begin{array}{cccc} 1&0&0&0\\\\ 0&\cos\phi\sin\theta&\sin\phi\sin\theta&\cos\theta\\\\ 0&\frac{\cos\phi\cos\theta}{r}&\frac{\sin\phi\cos\theta}{r}&-\frac{\sin\theta}{r}\\\\ 0&-\frac{\sin\phi}{r\sin\theta}&\frac{\cos\phi}{r\sin\theta}&0 \end{array} \right), \end{equation} and \begin{equation} \label{inverse_transformation} \begin{aligned} dt&=d\overline{t}\\ dr&=\cos\phi\sin\theta d\overline{x}+\sin\phi\sin\theta d\overline{y}+\cos\theta d\overline{z}\\ d\theta&=\frac{\cos\phi\cos\theta}{r} d\overline{x}+\frac{\sin\phi\cos\theta}{r} d\overline{y}-\frac{\sin\theta}{r} d\overline{z}\\ d\phi&= -\frac{\sin\phi}{r\sin\theta} d\overline{x}+\frac{\cos\phi}{r\sin\theta} d\overline{y}.\\ \end{aligned} \end{equation} Then, after substitution in Eq.~(\ref{line_element_in_weak_limit_in_Boyer_lindquist_coordinates}) and taking into account that $dt=d\overline{t}$, the line element reduces to \begin{equation} \begin{aligned} \label{line_element_non_rotating_case_weak_fieldI} ds^2&=ds^2_0+h_{11}d\overline{x}^2+h_{12}d\overline{x}d\overline{y} +h_{13}d\overline{x}d\overline{z}\\ &+h_{22}d\overline{y}^2+h_{23}d\overline{y}d\overline{z}+\underbrace{\frac{2M}{r}}_{h_{00}}dt^2\\ &+d\overline{z}^2\underbrace{\left(\frac{2M}{r}\cos^2\theta-2v\cos\theta\sin^2\theta\right)}_{h_{33}}, \end{aligned} \end{equation} where \begin{equation} \begin{aligned} h_{11}&=2\left[\frac{M\cos^2\phi\sin^2\theta}{r}-v\left(\cos^2\phi\cos^3\theta+2\sin^2\phi\cos\theta\right)\right]\\ h_{12}&=4\sin\phi\cos\phi\left[\frac{M\sin^2\theta}{r}+v\left(2\cos\theta-\cos^3\theta\right)\right]\\ h_{13}&=4\left(\frac{M\cos\phi\cos\theta\sin\theta}{r}+v\cos\phi\cos^2\theta\sin\theta\right)\\ h_{22}&=2\left[\frac{M\sin^2\phi\sin^2\theta}{r}-v\left(\sin^2\phi\cos^3\theta+2\cos^2\phi\cos\theta\right)\right]\\ h_{23}&=4\left(\frac{M\sin\phi\cos\theta\sin\theta}{r}+v\sin\phi\cos^2\theta\sin\theta\right).\\ \end{aligned} \end{equation} For $v=0$, the line element in Eq.~(\ref{line_element_non_rotating_case_weak_fieldI}) reduces to the Schwarzschild case obtained in~\cite{Kogan10}.
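The identification of the $h_{ik}$ above can be cross-checked with a computer algebra system. The following minimal sympy sketch (our own check) transforms the spatial perturbation of Eq.~(\ref{line_element_in_weak_limit_in_Boyer_lindquist_coordinates}) with the inverse Jacobian and prints the Cartesian components: \begin{verbatim}
import sympy as sp

r, th, ph, M, v = sp.symbols('r theta phi M v', positive=True)

# rows: dr, dtheta, dphi expressed in (dx, dy, dz) -- the inverse Jacobian above
A = sp.Matrix([
    [sp.cos(ph)*sp.sin(th),      sp.sin(ph)*sp.sin(th),      sp.cos(th)],
    [sp.cos(ph)*sp.cos(th)/r,    sp.sin(ph)*sp.cos(th)/r,   -sp.sin(th)/r],
    [-sp.sin(ph)/(r*sp.sin(th)), sp.cos(ph)/(r*sp.sin(th)),  0],
])

# perturbation: (2M/r) dr^2 - 2v r^2 cos(th) dth^2 - 4v r^2 cos(th) sin(th)^2 dphi^2
D = sp.diag(2*M/r, -2*v*r**2*sp.cos(th), -4*v*r**2*sp.cos(th)*sp.sin(th)**2)

h = A.T * D * A                 # h[i,j] multiplies dx^i dx^j
print(sp.simplify(h[0, 0]))     # h_11
print(sp.simplify(2*h[0, 1]))   # h_12 (cross terms appear twice)
print(sp.simplify(h[2, 2]))     # h_33 = 2M cos^2(th)/r - 2v cos(th) sin^2(th)
\end{verbatim}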
\section*{Appendix II: Plasma distribution integrals} \subsection*{Integrals in uniform plasma (non-rotating case):} The first integral in equation (\ref{deflection_angle_nonrotating_case}) is \begin{equation} \int^\infty_{-\infty}\frac{dz}{(b^2+z^2)^\frac{3}{2}}=2\int^\infty_{0}\frac{dz}{(b^2+z^2)^\frac{3}{2}}=\frac{2}{b^2}. \end{equation} \subsection*{Integrals in SIS:} From Eqs.~(\ref{non_uniform_plasma}) and (\ref{plasma_frequency_SIS}) and the well-known property of the $\Gamma$-function \cite{Gradshteyn07} \begin{equation} \label{propertie_gamma_function} \int^\infty_0\frac{dz}{(z^2+b^2)^{\frac{h}{2}+1}}=\frac{1}{hb^{h+1}}\frac{\sqrt{\pi}\Gamma\left(\frac{h}{2}+\frac{1}{2}\right)}{\Gamma\left(\frac{h}{2}\right)}, \end{equation} the integrals of $\hat{\alpha}_{S2}$, $\hat{\alpha}_{S3}$, $\hat{\alpha}_{B2}$ and $\hat{\alpha}_{B3}$ are respectively \begin{equation} \begin{aligned} I_{S2}&=\int^{\infty}_0\frac{dz}{(b^2+z^2)^{\frac{3}{2}+1}}=\frac{\sqrt{\pi}}{3b^4}\frac{\Gamma(2)}{\Gamma(3/2)}=\frac{2}{3b^4}\\ I_{S3}&=\int^\infty_0\frac{dz}{(b^2+z^2)^2}=\frac{\sqrt{\pi}}{2b^3}\frac{\Gamma(3/2)}{\Gamma(1)}=\frac{\pi}{4b^3}\\ I_{B2}&=\int^{\infty}_0\frac{dz}{(b^2+z^2)^{\frac{3}{2}+1}}=\frac{2}{3b^4}\\ I_{B3}&=\int^{\infty}_0\frac{dz}{(b^2+z^2)^{\frac{5}{2}+1}}=\frac{\sqrt{\pi}}{5b^6}\frac{\Gamma(3)}{\Gamma(5/2)}=\frac{8}{15b^6}.\\ \end{aligned} \end{equation} \subsection*{Integrals in NSIS:} The integrals of $\hat{\alpha}_{S2}$, $\hat{\alpha}_{S3}$, $\hat{\alpha}_{B2}$ and $\hat{\alpha}_{B3}$, after substitution of Eq.~(\ref{plasma_frequency_SIS}) in (\ref{non_uniform_plasma}), are respectively \begin{equation} \label{alpha_6_nonsingular_isothermal_sphere} \begin{aligned} \overline{I}_{S2}&=\int^\infty_0\frac{dz}{(z^2+b^2+r^2_c)(b^2+z^2)^\frac{3}{2}}\\ &=\frac{1}{b^2r^2_c}-\frac{{\rm arctanh}\left(\frac{r_c}{\sqrt{b^2+r^2_c}}\right)}{r^3_c\sqrt{b^2+r^2_c}}\\ \overline{I}_{S3}&=-\int^\infty_0\frac{dz}{(z^2+b^2+r^2_c)^2}\\ &=-\frac{\sqrt{\pi}}{2(b^2+r^2_c)^\frac{3}{2}}\frac{\Gamma(3/2)}{\Gamma(1)}\\ \overline{I}_{B2}&=\overline{I}_{S2}\\ \overline{I}_{B3}&=\int^\infty_0\frac{dz}{(z^2+b^2+r^2_c)(b^2+z^2)^\frac{5}{2}}\\ &=\frac{2r^2_c-3b^2}{3r^4_cb^4}+\frac{{\rm arctanh}\left(\frac{r_c}{\sqrt{b^2+r^2_c}}\right)}{r^5_c\sqrt{r^2_c+b^2}}. \end{aligned} \end{equation} \subsection*{Integrals in PGC:} The integrals of $\hat{\alpha}_{S2}$, $\hat{\alpha}_{S3}$, $\hat{\alpha}_{B2}$ and $\hat{\alpha}_{B3}$, after substitution of Eq.~(\ref{plasma_frequency_SIS}) in (\ref{non_uniform_plasma}), are respectively \begin{equation} \label{alpha_6_PGC} \begin{aligned} \widetilde{I}_{S2}&=\int^\infty_0\frac{dz}{(b^2+z^2)^{\frac{s+1}{2}+1}}=\frac{\sqrt{\pi}}{(s+1)b^{s+2}}\frac{\Gamma(\frac{s}{2}+1)}{\Gamma(\frac{s+1}{2})}\\ \widetilde{I}_{S3}&=-\int^\infty_0\frac{dz}{(b^2+z^2)^{\frac{s}{2}+1}}=-\frac{\sqrt{\pi}}{sb^{s+1}}\frac{\Gamma(\frac{s+1}{2})}{\Gamma(\frac{s}{2})}\\ \widetilde{I}_{B2}&=\widetilde{I}_{S2}\\ \widetilde{I}_{B3}&=\int^\infty_0\frac{dz}{(b^2+z^2)^{\frac{s+3}{2}+1}}=\frac{\sqrt{\pi}}{(s+3)b^{s+4}}\frac{\Gamma(\frac{s+4}{2})}{\Gamma(\frac{s+3}{2})}.\\ \end{aligned} \end{equation} \bibliographystyle{spphys}
\section*{SUPPLEMENTAL MATERIAL} \section*{A: Derivation of the extended $\mathbf{k}\cdot\mathbf{p}$ model} A central part of our study is the derivation of a minimal $\mathbf{k}\cdot\mathbf{p}$ model, based on the 4-band model in Ref.~\cite{HLL12}, for systems in the SnTe material class taking into account all symmetry-allowed terms up to second order in $\mathbf{k}$. For this purpose, we have used an algorithm, provided in the Python package Qsymm~\cite{VRA18,qsymm}, that systematically generates all possible terms of a Hamiltonian up to a given order in $\mathbf{k}$ respecting a given set of symmetries. In the following, we are going to provide all the ingredients required for the application of the symmetry algorithm. Our model has two orbital degrees of freedom, spanned by $p$ orbitals on Sn and Te sites, represented by Pauli matrices $\sigma_i$, and two spin degrees of freedom represented by Pauli matrices $s_i$~\cite{HLL12}. The $\mathbf{k}\cdot\mathbf{p}$ model is derived around an $L$ point of the initially face-centered cubic BZ. The initial symmetry group of the $L$ point is $D_{3d}$, which is generated by inversion $I$, a rotation $R_3$ about the $C_3$ axis along $\Gamma L$, and a reflection $M$ about the mirror plane containing $\Gamma$ and two $L$ points. Furthermore, the model should be invariant under time reversal $\Theta$. The corresponding representations of the symmetry operators are as follows: \begin{eqnarray} M &=& -i\,s_1,\:\: k_1\rightarrow -k_1,\\ R_3 &=& e^{i\frac{\varphi}{2}s_3},\:\mathbf{k}\rightarrow\!\! \begin{pmatrix} \cos\varphi & -\sin\varphi & 0\\ \sin\varphi & \cos\varphi & 0\\ 0 & 0 & 1 \end{pmatrix}\! \mathbf{k},\: \varphi=\frac{2\pi}{3}\\ I &=& \sigma_z,\:\: \mathbf{k}\rightarrow -\mathbf{k},\\ \Theta &=& is_2\,K,\:\: \mathbf{k}\rightarrow -\mathbf{k}, \end{eqnarray} where $k_i$ are momentum components with respect to a local coordinate system at $L$ spanned by $\hat{\mathbf{k}}_1$ perpendicular to the mirror plane, $\hat{\mathbf{k}}_3$ pointing along the $C_3$ axis, and $\hat{\mathbf{k}}_2$ such that $\lbrace \hat{\mathbf{k}}_1,\hat{\mathbf{k}}_2,\hat{\mathbf{k}}_3\rbrace$ form a right-handed coordinate system. The $\sigma_z$ in the inversion operator is a result of expanding around an $L$ point: because the inversion center is located at one of the lattice sites in the unit cell of the rocksalt structure, the other site is translated by a lattice vector under inversion and, thus, acquires a phase factor at nonzero momentum. By providing \emph{all} operators above as input, we apply the $\mathbf{k}\cdot{\mathbf{p}}$ Hamiltonian generator algorithm of the Qsymm package~\cite{VRA18,qsymm} and find 8 symmetry-allowed terms. Ignoring the 3 terms that are proportional to the identity and do not influence the band topology, we obtain the following Hamiltonian \begin{eqnarray} H_0(\mathbf{k}) &=& m\sigma_z + \nu(k_1 s_2 - k_2 s_1) \sigma_x + \nu_3 k_3 \sigma_y \nonumber\\ &&{}+ c k_3^2 \sigma_z + f (k_1^2 + k_2^2)\sigma_z. \label{eq:full_symmetry_Hamiltonian_supp} \end{eqnarray} We proceed with the first step of the symmetry-reduction process, during which the $C_3$ symmetry of the model is broken.
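For concreteness, the following is a minimal sketch of ours (not the authors' code) of how such a symmetry query can be set up; it assumes Qsymm's \texttt{PointGroupElement}/\texttt{continuum\_hamiltonian} interface and the orbital\,$\otimes$\,spin ordering of the $4\times 4$ representations:
\begin{verbatim}
import numpy as np
import qsymm

# Pauli matrices; the 4x4 representations act on (orbital) x (spin)
s0 = np.eye(2)
s1 = np.array([[0, 1], [1, 0]])
s2 = np.array([[0, -1j], [1j, 0]])
s3 = np.diag([1, -1])
phi = 2 * np.pi / 3

# Mirror M = -i s_1 with k_1 -> -k_1
M = qsymm.PointGroupElement(np.diag([-1., 1., 1.]),
                            U=np.kron(s0, -1j * s1))
# Threefold rotation R_3 = exp(i phi/2 s_3) about the k_3 axis
R = np.array([[np.cos(phi), -np.sin(phi), 0],
              [np.sin(phi),  np.cos(phi), 0],
              [0, 0, 1]])
R3 = qsymm.PointGroupElement(
    R, U=np.kron(s0, np.diag([np.exp(1j*phi/2), np.exp(-1j*phi/2)])))
# Inversion I = sigma_z with k -> -k
P = qsymm.PointGroupElement(-np.eye(3), U=np.kron(s3, s0))
# Time reversal Theta = i s_2 K (antiunitary: conjugate=True)
T = qsymm.PointGroupElement(np.eye(3), conjugate=True,
                            U=np.kron(s0, 1j * s2))

# All symmetry-allowed terms up to second order in k around L;
# 8 terms are expected, 3 of them proportional to the identity
family = qsymm.continuum_hamiltonian([M, R3, P, T], dim=3, total_power=2)
qsymm.display_family(family)
\end{verbatim}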
By repeating the Hamiltonian generator algorithm with only the symmetry operators $M$, $I$, and $\Theta$, we find 8 additional terms, 6 of which are not proportional to the identity: \begin{eqnarray} H_1(\mathbf{k}) &=& \delta\nu (k_1 s_2 + k_2 s_1)\sigma_x + \lambda_1 k_1 s_3 \sigma_x + \lambda_2 k_2 \sigma_y \nonumber\\ &&{}+ \lambda_3 k_3 s_1 \sigma_x + \delta f (k_1^2 - k_2^2)\sigma_z + g k_2 k_3 \sigma_z. \label{eq:symmetry_breaking_terms_supp} \end{eqnarray} Finally, breaking inversion symmetry is incorporated by repeating the algorithm with only the symmetry operators $M$ and $\Theta$. This leads to 10 additional symmetry-allowed terms up to leading order in $\mathbf{k}$: \begin{eqnarray} H_2(\mathbf{k}) &=& \alpha \sigma_x + \beta(k_1s_2-k_2s_1) + \delta\beta(k_1s_2 + k_2s_1)\nonumber\\ &&{} + \gamma(k_1s_2 - k_2s_1)\sigma_z + \delta\gamma(k_1s_2 + k_2s_1)\sigma_z \nonumber\\ &&{}+ \eta_1 k_1s_3 + \eta_2 k_1s_3\sigma_z + \eta_3 k_3s_1 \nonumber\\ &&{}+ \eta_4 k_3s_1\sigma_z + \eta_5 s_1\sigma_y. \end{eqnarray} In the main text we have shown that the terms parametrized by $\alpha$ and $\beta$ give rise to Weyl points and nodal lines. Moreover, it can be checked straightforwardly that the other terms also give rise to the same features. \section*{B: Effective Hamiltonians around the Dirac point} In the main text we derive a condition on the existence of Dirac points in the $\mathbf{k}\cdot\mathbf{p}$ Hamiltonian given by \begin{eqnarray} H(\mathbf{k}) &=& m\sigma_z + \nu(k_1 s_2 - k_2 s_1) \sigma_x + \nu_3 k_3 \sigma_y\nonumber\\ &&{}+ c k_3^2 \sigma_z + f (k_1^2 + k_2^2)\sigma_z \nonumber\\ &&{}+ \lambda_2 k_2 \sigma_y + \lambda_3 k_3 s_1 \sigma_x. \label{eq:kp_Hamiltonian} \end{eqnarray} In particular, under the condition $\lambda_2\lambda_3 = -\nu\nu_3$ there exist isolated, four-fold degenerate zero-energy states at \begin{eqnarray} \mathbf{k}_0 &=& (k_{0,1}, k_{0,2}, k_{0,3}) \nonumber\\ &=& (0, \sqrt{-m\lambda_3/(c\nu^2 + f\lambda_3^2)}, \nu/\lambda_3\, k_{0,2}). \end{eqnarray} In the following, we are going to have a closer look at the structure of the Hamiltonian close to $\mathbf{k}_0$. \subsection*{B.1 Dirac Hamiltonian} Let us expand the Hamiltonian of Eq.~\eqref{eq:kp_Hamiltonian} around $\mathbf{k}_0$ up to leading order in the momentum $\kappa=\mathbf{k}-\mathbf{k}_0$. The resulting effective Hamiltonian is \begin{eqnarray} H_\mathrm{eff}(\kappa) &=& (2c \frac{\nu k_{0,2}}{\lambda_3}\sigma_z + \lambda_3 s_1 \sigma_x + \nu_3 \sigma_y)\, \kappa_3\nonumber\\ &&{} + (2f k_{0,2}\sigma_z - \nu s_1\sigma_x - \frac{\nu\nu_3}{\lambda_3}\sigma_y)\, \kappa_2\nonumber\\ &&{} +\nu s_2\sigma_x \kappa_1. \label{eq:effective_Dirac_Hamiltonian} \end{eqnarray} The effective Hamiltonian has no zeroth-order terms and is indeed linear. Moreover, the involved matrices $\gamma_0= \sigma_y\otimes\mathbb{1}$, $\gamma_1= \sigma_x\otimes s_2$, $\gamma_2= \sigma_z\otimes\mathbb{1}$, $\gamma_3= \sigma_x\otimes s_1$ satisfy $\lbrace \gamma_i, \gamma_j\rbrace = 0$ for $i \neq j$ and $\gamma_i^2=\mathbb{1}$. Therefore, they form a Clifford algebra. We can further form the chiral operator $\gamma_5=\gamma_0\gamma_1\gamma_2\gamma_3=\sigma_x\otimes s_3$ which satisfies $\gamma_5^2=\mathbb{1}$ and which anticommutes with the Hamiltonian, i.e., $\gamma_5 H_\mathrm{eff}(\kappa)\gamma_5 = -H_\mathrm{eff}(\kappa)$. Thus, the effective Hamiltonian has a chiral symmetry.
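These algebraic statements are easy to verify numerically; the following is a small self-contained check of ours (random parameters, arbitrary momenta):
\begin{verbatim}
import numpy as np

s0 = np.eye(2)
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1, -1])

g0 = np.kron(sy, s0)    # gamma_0 = sigma_y (x) 1
g1 = np.kron(sx, sy)    # gamma_1 = sigma_x (x) s_2
g2 = np.kron(sz, s0)    # gamma_2 = sigma_z (x) 1
g3 = np.kron(sx, sx)    # gamma_3 = sigma_x (x) s_1
gammas = [g0, g1, g2, g3]

# Clifford algebra: {gamma_i, gamma_j} = 2 delta_ij
for i, gi in enumerate(gammas):
    for j, gj in enumerate(gammas):
        anti = gi @ gj + gj @ gi
        target = 2 * np.eye(4) if i == j else np.zeros((4, 4))
        assert np.allclose(anti, target)

# Chiral symmetry: H_eff is a linear combination of the gamma_i,
# so gamma_5 = gamma_0 gamma_1 gamma_2 gamma_3 anticommutes with it
g5 = g0 @ g1 @ g2 @ g3
rng = np.random.default_rng(1)
c, f, nu, nu3, lam3, k02, kap1, kap2, kap3 = rng.normal(size=9)
H = ((2*c*nu*k02/lam3)*g2 + lam3*g3 + nu3*g0) * kap3 \
    + (2*f*k02*g2 - nu*g3 - (nu*nu3/lam3)*g0) * kap2 \
    + nu*kap1*g1
assert np.allclose(g5 @ H @ g5, -H)
\end{verbatim}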
By defining new $\gamma$ matrices through linear combinations of $\gamma_0$ and $\gamma_3$, namely \begin{eqnarray} \tilde{\gamma}_0 &=& \frac{1}{\sqrt{\lambda_3^2 + \nu_3^2}}(\lambda_3\gamma_0 - \nu_3\gamma_3),\\ \tilde{\gamma}_3 &=& \frac{1}{\sqrt{\lambda_3^2 + \nu_3^2}}(\nu_3\gamma_0 + \lambda_3\gamma_3), \end{eqnarray} we can rewrite $H_\mathrm{eff}$ in a more suggestive form, \begin{eqnarray} H_\mathrm{eff}(\kappa) &=& \frac{\sqrt{\lambda_3^2 + \nu_3^2}}{\lambda_3} (\lambda_3\kappa_3 - \nu\kappa_2)\,\tilde{\gamma}_3\nonumber\\ &&{}+ \frac{2k_{0,2}}{\lambda_3} (c\nu\kappa_3 + f\lambda_3\kappa_2)\,\gamma_2 \nonumber\\ &&{}+ \nu\kappa_1\gamma_1. \end{eqnarray} In addition, we define new coordinates $\tilde{\kappa}_2 = c\nu\kappa_3 + f\lambda_3\kappa_2$ and $\tilde{\kappa}_3 = \lambda_3\kappa_3 - \nu\kappa_2$, and collect the prefactors in new factors $v_i$. With that, the effective Hamiltonian becomes \begin{equation} H_\mathrm{eff}(\kappa) = v_1 \kappa_1 \gamma_1 + v_2 \tilde{\kappa}_2 \gamma_2 + v_3 \tilde{\kappa}_3 \tilde{\gamma}_3, \end{equation} which finally shows that $H_\mathrm{eff}$ has indeed the structure of a massless Dirac Hamiltonian. \subsection*{B.2 Effective Hamiltonians for Weyl points and nodal lines} By breaking inversion symmetry and thereby lowering the symmetry group of the $L'$ point from $C_{2h}$ to $C_s$, additional symmetry-allowed terms can be added to the Hamiltonian in Eq.~\eqref{eq:kp_Hamiltonian}. These terms are \begin{eqnarray} H_\mathrm{break} &=& \alpha\sigma_x + \beta(k_1 s_2 - k_2 s_1) + \delta\beta(k_1 s_2 + k_2 s_1)\nonumber\\ &&{} + \xi(k_1 s_2 - k_2 s_1)\sigma_z + \delta\xi(k_1 s_2 + k_2 s_1)\sigma_z\nonumber\\ &&{} + \chi_1 k_1 s_3 \sigma_z + \chi_2 k_1 s_3 + \chi_3 k_3 s_1\nonumber\\ &&{}+ \chi_4 k_3 s_1\sigma_z + \chi_5 s_1\sigma_y. \end{eqnarray} For simplicity, in the main text we consider only the first two terms. We note, however, that all symmetry-allowed terms give rise to Weyl semimetal and nodal line phases, as can be explicitly checked numerically using the SnTe model. Let us now add the term $\alpha\sigma_x$ to the $\mathbf{k}\cdot\mathbf{p}$ Hamiltonian of Eq.~\eqref{eq:kp_Hamiltonian}. The energies of the resulting Hamiltonian can be written as \begin{eqnarray} E^2 &=& (f_1 k_1^2 + f_2 k_2^2 + g k_2 k_3 + c k_3^2 + m)^2\nonumber\\ &&{}+ \Big[\alpha \pm \sqrt{k_1^2 (\lambda_1^2 + \nu_1^2) + (k_3 \lambda_3 - k_2 \nu_2)^2} \Big]^2\nonumber\\ &&{}+ (k_2 \lambda_2 + k_3 \nu_3)^2. \end{eqnarray} Since Weyl points will be located at $E=0$, their position in momentum space is determined by requiring that the \emph{three} squared terms above vanish identically. Hence, there are three polynomial equations for the three momentum-space coordinates $k_1,k_2,k_3$. We stress that, contrary to the existence condition for the Dirac points, there are no longer any conditions on the relation between the Hamiltonian parameters. This reflects the fact that a Weyl point is a stable topological feature. To simplify the discussion, we are now going to consider an effective Hamiltonian obtained by expanding around a Dirac point at $\mathbf{k}_0$. It reads \begin{equation} H_{\alpha,\mathrm{eff}}(\kappa) = H_\mathrm{eff}(\kappa) + \alpha\sigma_x, \end{equation} with the effective Dirac Hamiltonian $H_\mathrm{eff}$ from Eq.~\eqref{eq:effective_Dirac_Hamiltonian}.
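As a quick numerical illustration (our own sketch, with arbitrarily chosen parameter values, not a result from the paper), diagonalizing $H_{\alpha,\mathrm{eff}}$ along $\kappa_1$ shows the fourfold Dirac point splitting into two isolated gap closings, as expected for a pair of Weyl nodes:
\begin{verbatim}
import numpy as np

s0 = np.eye(2)
sx = np.array([[0, 1], [1, 0]])
sy = np.array([[0, -1j], [1j, 0]])
sz = np.diag([1, -1])
g0, g1 = np.kron(sy, s0), np.kron(sx, sy)
g2, g3 = np.kron(sz, s0), np.kron(sx, sx)

c, f, nu, nu3, lam3, k02, alpha = 0.3, 0.2, 1.0, 0.8, 0.6, 0.5, 0.1

def H_alpha(kap1, kap2, kap3):
    """Effective Dirac Hamiltonian plus the inversion-breaking alpha term."""
    n0 = nu3*kap3 - (nu*nu3/lam3)*kap2
    n1 = nu*kap1
    n2 = (2*c*nu*k02/lam3)*kap3 + 2*f*k02*kap2
    n3 = lam3*kap3 - nu*kap2
    return n0*g0 + n1*g1 + n2*g2 + n3*g3 + alpha*np.kron(sx, s0)

for kap1 in np.linspace(-0.2, 0.2, 9):
    E = np.linalg.eigvalsh(H_alpha(kap1, 0.0, 0.0))
    print(f"kappa_1 = {kap1:+.3f}   gap = {E[2] - E[1]:.4f}")
# the gap closes at kappa_1 = +/- alpha/nu for this parameter choice
\end{verbatim}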
If we instead add $\beta(k_1 s_2 - k_2 s_1)$, the effective Hamiltonian around $\mathbf{k}_0$ takes the form \begin{eqnarray} H_{\beta,\mathrm{eff}}(\kappa) &=& H_\mathrm{eff}(\kappa) + \beta(s_2 \kappa_1 - s_1 \kappa_2 - k_{0,2} s_1). \label{eq:effective_nodal_line_Hamiltonian} \end{eqnarray} Both effective Hamiltonians are used in the main text to derive approximate positions of Weyl nodes and nodal lines, respectively. \section*{C: Additional analysis of the nodal lines} The zero-energy states of Hamiltonian $H_{\beta,\mathrm{eff}}$ from Eq.~\eqref{eq:effective_nodal_line_Hamiltonian} are located at \begin{eqnarray} \mathbf{k}_{N,\eta} &=& \frac{-\lambda_3\beta k_{0,2}}{\lambda_3\beta \pm \sqrt{A_\eta}}\, (0,1,\eta), \label{eq:nodal_lines_position_supp} \end{eqnarray} where \begin{equation} A_\eta = 4\lambda_3^2 (f k_{0,2} + c k_{0,3}\eta)^2 + (\nu - \lambda_3\eta)^2 (\lambda_3^2 + \nu_3^2). \end{equation} In the following, we are going to show that the set of zero-energy states parametrized by $\mathbf{k}_{N,\eta}$ forms a closed line topologically equivalent to a circle. First of all, it is clear from the definition of $A_\eta$ that $A_\eta\geq 0$. Furthermore, $A_\eta$ is even strictly positive for all $\eta$, as we infer from solving $A_\eta=0$: the solutions are \begin{eqnarray} \eta_{1,2} &=& \frac{1}{\lambda_3(4c^2 k_{0,3}^2 + \lambda_3^2 + \nu_3^2)} \bigg[ -4cf k_{0,2}^2 \nu + (\lambda_3^2 + \nu_3^2)\nu \nonumber\\ &&{} \pm 2\sqrt{-(f k_{0,2} \lambda_3 + c k_{0,3}\nu)^2 (\lambda_3^2 + \nu_3^2)} \bigg]. \end{eqnarray} We immediately see that the term under the root is always negative. Hence, there are no real solutions of the equation $A_\eta=0$. Consequently, $A_\eta>0\: \forall\eta$ implies that the two solutions in Eq.~\eqref{eq:nodal_lines_position_supp} are always distinct, i.e., there are no crossings between the two branches of solutions. Let us now look at Eq.~\eqref{eq:nodal_lines_position_supp} in the limits $\eta\rightarrow \pm\infty$. We obtain \begin{eqnarray} \lim_{\eta\rightarrow \pm\infty} \mathbf{k}_{N,\eta}^{(+)} &=& \pm \frac{\beta k_{0,2}}{\sqrt{4c^2 k_{0,3}^2 + \lambda_3^2 + \nu_3^2}}\, (0,0,1),\\ \lim_{\eta\rightarrow \pm\infty} \mathbf{k}_{N,\eta}^{(-)} &=& \mp \frac{\beta k_{0,2}}{\sqrt{4c^2 k_{0,3}^2 + \lambda_3^2 + \nu_3^2}}\, (0,0,1), \end{eqnarray} implying $\lim_{\eta\rightarrow \pm\infty} \mathbf{k}_{N,\eta}^{(+)} = \lim_{\eta\rightarrow \mp\infty} \mathbf{k}_{N,\eta}^{(-)}$. In other words, the two distinct solutions for the location of zero-energy states parametrized by $\eta$ are connected at $\eta=\pm\infty$. Hence, the solutions indeed form a closed line. \end{document}
\section{Introduction} \IEEEPARstart{D}{eep} neural networks trained on large-scale labeled datasets can achieve excellent performance across a variety of tasks, such as sentiment analysis~\cite{D14-1181,dos2014deep}, image classification~\cite{He_2016_CVPR,krizhevsky2012imagenet,ren2018deep} and semantic segmentation~\cite{long2015fully}. Yet they usually fail to generalize well on novel tasks because the transferability of features decreases as the distance between the base and target tasks increases~\cite{yosinski2014transferable}. A convincing explanation is that there exists a domain shift between training and testing data~\cite{ben2010theory,ben2007analysis}. To alleviate the negative effect caused by a domain shift, domain adaptation (DA) is proposed to utilize labeled data from a source domain to make models generalize well on a target domain~\cite{pan2010survey,shao2015transfer}. Domain adaptation, a subfield of transfer learning, makes it possible to exploit the knowledge learned in one specific domain to effectively improve performance in a related but different domain. Earlier methods of DA aim to learn domain-invariant feature representations from data by jointly minimizing a distance metric that measures the adaptability between a pair of source and target domains, such as Transfer Component Analysis~\cite{pan2011domain}, Geodesic Flow Kernel~\cite{gong2012geodesic}, and Transfer Kernel Learning~\cite{long2015domain}. In order to learn transferable features well, researchers apply deep neural networks to DA models~\cite{bengio2012deep,mesnil2011unsupervised,hinton2015distilling}. A feature extractor neural network is trained by reducing the ``distance'' between distributions of two different domains, on the assumption that the classifier trained on source data also works well in the target domain. In this kind of method, the Maximum Mean Discrepancy (MMD) loss is widely used for mapping different distributions~\cite{gretton2012optimal}. For example, Deep Adaptation Networks (DAN)~\cite{long2015learning}, Joint Adaptation Networks~\cite{pmlr-v70-long17a} and Residual Transfer Networks~\cite{Long:2016:UDA:3157096.3157112} apply the MMD loss to several layers, whereas Large Scale Detection through Adaptation~\cite{hoffman2014lsda} adds a domain adaptation layer that is updated based on the MMD loss. Recently, the idea of Generative Adversarial Networks (GANs)~\cite{goodfellow2014generative,wang2017generative} has been widely applied to DA. Methods that use GANs~\cite{liu2016coupled,bousmalis2017unsupervised} to transform source images into target ones have been proposed, and their classifiers are trained with the generated target images. However, when the distributions of source and target domains are totally different, adversarial training performs poorly because of a vanishing gradient phenomenon. Alternative methods train GANs on features of source and target domains. Their generator acts as a feature extractor, and their discriminator as a domain classifier. There are symmetric and asymmetric adaptation architectures in adversarial domain adaptation, which can effectively adapt source and target distributions. The former's features in the source and target domains are generated from the same network~\cite{tzeng2015simultaneous,ganin2016domain}, while the latter's come from different networks~\cite{tzeng2017adversarial}. It is well-recognized that the former is poor at generalization whereas the latter is difficult to train.
\begin{figure*}[htp] \centering \includegraphics[width = \textwidth]{flowchart.pdf} \caption{The architecture of the proposed model. First, feature extractor $G$ distills the feature representations of source and target samples. Then, transform network $T$ projects source features ${\bf f}^s=G({\bf x}^s)$ to the space of target features ${\bf f}^t=G({\bf x}^t)$. Finally, the label classifier $C$ is trained with the fake target features $T(G({\bf x}^s))$ and predicts the labels of target features $G({\bf x}^t)$ during the test period. In addition, domain classifier $D$ is trained to distinguish fake target features $T(G({\bf x}^s))$ from real target features $G({\bf x}^t)$, which minimizes the discrepancy between source and target domains through adversarial optimization. The regularization term $r(G({\bf x}^s),T(G({\bf x}^s)))$ measures the distance between ${\bf f}^s=G({\bf x}^s)$ and $T(G({\bf x}^s))$.} \label{fig:flowchart} \end{figure*} To solve the above problems, in this work, we propose a novel feature-shared model for adversarial domain adaptation, which achieves the flexibility of an asymmetric architecture and can be easily trained. In the proposed framework, as shown in Fig.~\ref{fig:flowchart}, a weight-shared feature extractor distills features from different domains, and a feature-shared transform network maps features from the source domain to the space of target features. Adversarial learning is completed with the losses from the label and domain classifiers. Note that we design residual connections between the extractor and the transform network to ease the learning of the distribution mapping by sharing features. In addition, in order to avoid getting stuck in local minima, we construct a regularization term to ensure that the model at least knows a vague, if not exact, direction to match different distributions and overcome the vanishing gradient problem. The main contributions of this work are as follows: \begin{enumerate} \item A novel adversarial model that learns a non-linear mapping from a source domain to a target one is proposed. By using the features generated from the source domain, this model owns high generalization ability in the target domain. \item During training, a concise regularization term that guides the model to select the shortest path among all transfer paths is constructed, thereby helping stabilize the adversarial optimization process. \item This work extensively evaluates the proposed method on several standard benchmarks. The results demonstrate that our model outperforms the state-of-the-art methods in accuracy. Notably, our model can maintain excellent generalization and anti-noise abilities. \end{enumerate} Section~\ref{section2} reviews some related work on unsupervised domain adaptation. In Section~\ref{section3}, the proposed method is described. Several experiments are reported in Section~\ref{section4}. Section~\ref{section5} concludes this paper. \section{Related Work} \label{section2} DA has been extensively studied in recent years. Several studies give theoretical analyses of the error bound when training and testing data are drawn from different distributions~\cite{ben2010theory,ben2007analysis}. This means that it is feasible to utilize knowledge across domains. Moreover, the most important problem in DA is how to reduce the discrepancy between the source and target domains. Therefore, many methods modify a classifier to match different distributions~\cite{8334809,7999259,7723898,8337102}. However, the performances of these shallow methods are limited.
Recently, because deep neural networks can learn feature representations that capture discriminative information, and these features are also transferable~\cite{yosinski2014transferable}, they are widely used in DA methods. Most of them focus on how to measure the distance between different domains and how to design a network structure, and achieve remarkable performance. \subsection{Traditional Domain Adaptation} A general idea is to reweight instances in the source domain, where instances similar to the target distribution are given more importance. This kind of method has proven effective in adapting to the differences between the source and target distributions. In detail, the calculated weight is taken as the loss coefficient of each source instance. Some methods reweight instances with direct importance estimation algorithms, such as~\cite{kanamori2009least,huang2007correcting,sugiyama2008direct,li2017prediction} and~\cite{7723898}. Other methods reweight instances with noisy labels to reduce the marginal and conditional shifts~\cite{zhang2013domain,liu2016classification}. Another DA idea is to explicitly make source and target distributions similar. Many statistical characteristics are chosen as metrics to align subspaces of different distributions. MMD, which measures the difference between the expectations of source and target distributions in a reproducing kernel Hilbert space (RKHS), is widely used in many methods~\cite{pan2011domain,long2015domain,8337102,8334809}. In order to align more complex statistical characteristics, the studies~\cite{fernando2013unsupervised,sun2015subspace,sun2016return} calculate statistical moments of different orders to match different distributions, which are easy to implement with low computational complexity. Instead of exploiting statistical characteristics, some methods~\cite{gong2012geodesic,cheng2014semi,courty2017optimal,courty2017joint} utilize manifold learning to transform a source distribution to a target one. In these methods, feature spaces are refined into low-dimensional spaces and feature distortion is thus avoided. \subsection{Deep Domain Adaptation} Recently, the development of deep neural networks has promoted deep domain adaptation. In~\cite{donahue2014decaf}, the experimental results demonstrate that deep-network features, rather than hand-crafted ones, alleviate the negative influences of a domain shift even without any adaptation. However, a main limitation of using pre-trained deep features is that they severely restrict the range of application. Later, a number of methods combine statistical characteristics with deep neural networks in a unified framework, which greatly improves performance on different tasks. In~\cite{long2015learning,pmlr-v70-long17a,Long:2016:UDA:3157096.3157112, hoffman2014lsda}, MMD is embedded in deep convolutional neural networks. In~\cite{sun2016return} and~\cite{DBLP:journals/corr/ZellingerGLNS17}, high-order moments are utilized to align feature spaces of source and target domains. Instead of designing fancy regularizers, some methods design special architectures to minimize the discrepancy between source and target domains. In~\cite{chopra2005learning}, a Siamese architecture is introduced to adapt pairs of source and target instances. In~\cite{glorot2011domain,kan2015bi}, auto-encoders are suggested to learn the transformation from a source domain to a target one.
Some methods adopt an adversarial loss, together with a domain discriminator subnetwork, to learn invariant factors underlying different populations. In these models, deep features are learned to confuse a domain discriminator such that they capture the most discriminative information about classification instead of characteristics of domains. Domain-Adversarial Neural Networks (DANN)~\cite{ganin2016domain} consist of a symmetric feature generator, a label predictor, and a domain discriminator. The whole model can be directly optimized via a gradient reversal algorithm. Deep Reconstruction-Classification Networks~\cite{ghifary2016deep} also adopt adversarial learning and add a reconstruction step for target images. Adversarial Discriminative Domain Adaptation (ADDA)~\cite{tzeng2017adversarial} uses an asymmetric feature generator that is trained alternately with a domain classifier. The above-mentioned domain adversarial networks fall into two categories. Some methods, such as domain confusion networks~\cite{tzeng2015simultaneous} and DANN~\cite{ganin2016domain}, share weights between source and target feature extractors. They use the same network to learn representations from different inputs, which learns a symmetric transformation; this exploits the transferability of features generated from deep neural networks and reduces the number of parameters in the model. Other methods construct two networks for the source and target domains, respectively~\cite{tzeng2017adversarial,liu2016coupled,bousmalis2017unsupervised}. They can learn an asymmetric transformation, allowing the networks to learn parameters for each domain individually. In theory, an asymmetric transformation can lead to more effective adaptation~\cite{rozantsev2018beyond}. Adversarial domain adaptation has also been explored in generative adversarial networks (GANs). Coupled Generative Adversarial Networks (CoGANs)~\cite{liu2016coupled} apply GANs to DA. Two GANs are trained to generate source and target images, respectively. Pixel-Level Domain Adaptation~\cite{bousmalis2017unsupervised} uses a conditional GAN model to synthesize target images to facilitate training a label classifier. Methods based on GANs can improve performance on digit datasets, but their downside is a difficult training process caused by vanishing gradients when facing natural image datasets, according to~\cite{arjovsky2017wasserstein}. In this work, we focus on learning the mapping between different feature spaces instead of synthesizing target images, and propose a discriminative model aiming to adapt distinct domains. \section{Adversarial Residual Transform Networks} \label{section3} In this section, we describe the details of our proposed model. We first define unsupervised domain adaptation and preliminary domain adversarial networks, and then demonstrate the key innovations of our model, which can well handle the problems encountered by previous models. Finally, we give a complete algorithm for matching the distributions of target and source domains using our model. \subsection{Definitions} When it comes to a machine learning task, a domain $D$ corresponds to four parts: feature space $\mathcal{X}$, label space $\mathcal{Y}$, marginal probability distribution $P({\bf X})$ and conditional probability distribution $P({\bf X}|{\bf Y})$, where ${\bf X}\in \mathcal{X}$, ${\bf Y}\in\mathcal{Y}$. Subscripts $s$ and $t$ are used to denote the source and target distributions.
In a traditional machine learning task, training data are drawn from source domain $D_s$ and testing data are drawn from target domain $D_t$, where their marginal and conditional probability distributions are the same ($P_s({\bf X}^s)=P_t({\bf X}^t), P_s({\bf X}^s|{\bf Y}^s)=P_t({\bf X}^t|{\bf Y}^t)$). Thus, models trained in the source domain remain applicable to the target one. However, in unsupervised DA, these assumptions are not valid, which leads us to a more difficult problem, as follows. A source domain is given as $D_s=\{{\bf x}^s_i,{\bf y}^s_i\}_{i=1}^{n_s}$, where $n_s$ is the number of source domain samples, ${\bf x}^s_i$ is the $i$th instance in $D_s$ and ${\bf y}^s_i$ is the label of ${\bf x}^s_i$. Similarly, a target domain is denoted as $D_t=\{{\bf x}^t_i\}_{i=1}^{n_t}$, where $n_t$ is the number of target domain samples and ${\bf x}^t_i$ is the $i$th instance in $D_t$. The source and target domains are drawn from distributions $P_s({\bf X}^s)$ and $P_t({\bf X}^t)$, respectively, which are different. In most cases, conditional probability distributions are also different ($P_s({\bf X}^s|{\bf Y}^s)\neq P_t({\bf X}^t|{\bf Y}^t)$). The goal is to learn a feature extractor $G_t$ and a classifier $C_t$ for $D_t$. $G_t$ distills feature representations ${{\bf f}^t=G_t({\bf x}^t)}$ from target samples, and $C_t$ correctly predicts the labels of target samples receiving ${{\bf f}^t=G_t({\bf x}^t)}$. Because annotations are lacking in $D_t$, DA learns $G_s$ and $C_s$ with samples from $D_s$, and tries to adapt them to be useful in $D_t$. \subsection{Adversarial Domain Adaptation} To solve an unsupervised DA problem, a number of methods have been proposed. Among the most effective ones is adversarial domain adaptation. This work aims to modify this kind of framework to improve its generalization and anti-noise ability. In DA problems, it is difficult to train $G_t$ and $C_t$ for a target domain without labels. However, because there exists a correlation between the source and target domains, it is common to utilize $G_s$ and $C_s$ to predict labels of target samples. In order to make $G_s$ and $C_s$ valid in the target domain, adversarial DA models are usually used to train a feature extractor $G$, a label classifier $C$ and a domain classifier $D$ for both domains. In detail, these models set $G=G_t=G_s$ and $C=C_t=C_s$, which means the feature extractor and label classifier are used for both source and target domains. Specifically, $D$ also receives feature representations from $G$, and the adversarial training between them minimizes the discrepancy between the source and target feature distributions $G({\bf x}^s)$ and $G({\bf x}^t)$. An adversarial training procedure is a minimax two-player game~\cite{goodfellow2014generative}. One player, $D$, learns to distinguish whether features are from the source or target domain, whereas the other, $G$, tries to generate domain-invariant features. They have contradictory optimization objectives, and their objectives are optimized alternately in this minimax game. To train the whole network in an end-to-end way, DANN~\cite{ganin2016domain} adopts the following loss function: \setlength{\arraycolsep}{0.0em} \begin{align} {\cal L}(\theta_d,\theta_g,\theta_c)=&\frac{1}{n_s}\sum_{{\bf x}_i\in D_s}{\cal L}_c(C(G({\bf x}_i)),y_i)- \nonumber \\ &\frac{\lambda}{n}\sum_{{\bf x}_i\in D_s\cup D_t}{\cal L}_d(D(G({\bf x}_i)),d_i)\label{eq1} \end{align} where $n=n_s+n_t$, and ${\cal L}_c$ and ${\cal L}_d$ denote the losses of $C$ and $D$, respectively.
$\lambda$ is a trade-off parameter between ${\cal L}_c$ and ${\cal L}_d$. $\theta_d$, $\theta_g$ and $\theta_c$ are the parameters of $D$, $G$ and $C$, respectively. $y_i$ and $d_i$ denote the class and domain labels of images. After convergence, the optimal parameters $\hat{\theta}_d$, $\hat{\theta}_c$ and $\hat{\theta}_g$ deliver a saddle point given as: \setlength{\arraycolsep}{0.0em} \begin{align} \hat{\theta}_d=& \argmin_{\theta_d}{\cal L}_d(\theta_d,\theta_g)\label{eq2}\\ \hat{\theta}_c=& \argmin_{\theta_c}{\cal L}_c(\theta_g,\theta_c)\label{eq3}\\ \hat{\theta}_g=& \argmin_{\theta_g}{\cal L}(\theta_d, \theta_g,\theta_c)\label{eq4} \end{align} \setlength{\arraycolsep}{5pt} In such a framework, a DA model can be trained in an end-to-end way. The intuitive idea behind this model is that, as the minimax two-player game proceeds, $D$ and $G$ strengthen each other. When the training procedure converges, the features of different domains generated from $G$ are very hard for $D$ to distinguish. Under this condition, the features are domain-invariant and the feature distributions of different domains are adapted. Theoretically, adversarial domain adaptation is based on the $\mathcal{H}$-divergence of~\cite{ben2007analysis,ben2010theory}. However, it is almost impossible to apply the $\mathcal{H}$-divergence in real-world algorithms, because it is defined for a binary classification problem and requires a global search over all hypotheses. In~\cite{ben2007analysis}, an approximate algorithm is given. Given a generalization error $\epsilon$ of discriminating between source and target instances, the $\mathcal{H}$-divergence is computed as: \begin{equation} \hat{d}_{\mathcal{A}}=2(1-2\epsilon) \end{equation} The value of $\hat{d}_{\mathcal{A}}$ is called the {\em Proxy $\mathcal{A}$-distance} (PAD). In adversarial domain adaptation, the domain classifier composed of neural networks is trained to directly decrease the PAD. Cooperating with the domain classifier, the feature extractor learns domain-invariant features from different domains, implying that the discrepancy between source and target distributions is decreased. Several models based on this kind of architecture have achieved top performances in different visual tasks~\cite{ganin2016domain,ghifary2016deep}. \subsection{Residual Connections} The proposed method does not rely only on feature extractor $G$ to map different distributions. Instead, we construct an adversarial residual transform network (ARTN) $T$ to project source features ${\bf f}^s=G({\bf x}^s)$ to the space of target features. The network is trained to generate fake target features $T(G({\bf x}^s))$, which follow the same distribution as real target features ${\bf f}^t=G({\bf x}^t)$. Then, we use the fake target features $T(G({\bf x}^s))$ and the corresponding labels ${\bf y}^s$ to train a classifier $C$ for the target domain. After training, the labels of target samples are predicted by $C$. In previous unsupervised DA methods, the weights of feature extractor $G$ for source and target domains are shared~\cite{long2015learning,pmlr-v70-long17a,Long:2016:UDA:3157096.3157112, hoffman2014lsda}. However, regarding matching different distributions, the generalization ability of an asymmetric transformation is better than that of a symmetric one~\cite{tzeng2017adversarial}. If the networks are trained to capture domain-invariant information from source features and utilize it to classify target samples, their generalization ability would be boosted.
However, the asymmetric architecture proposed in~\cite{tzeng2017adversarial} can hardly obtain such enhancement, and its feature extractor for the target domain easily collapses, because there exists no relationship between the feature extractors of the source and target domains. In order to make our model learn domain-invariant information and avoid diverging during training, we propose a transform network that builds connections between source and target domain features. The detailed architecture of the residual connections between the feature extractor and the transform network is shown in Fig.~\ref{fig:residual}. The weight-tied feature extractor $G$ is trained to capture representations from source samples ${\bf x}^s$ and target samples ${\bf x}^t$. The transform network stacks a few layers using the same architecture as the feature extractor. Unlike a symmetric transformation, the proposed network shares features with the feature extractor instead of parameters. Our network is also different from an asymmetric transformation, where the two networks have no relationship. We add residual connections between the feature extractor and the transform network to share features. Therefore, with a carefully designed architecture, our model is able to alleviate the drawbacks of both symmetric and asymmetric models, which, to the best of our knowledge, has not been seen in the literature. \begin{figure} \centering \subfigure[Source samples]{ \label{fig:subfig:residual1} \includegraphics[width=0.85\columnwidth]{residual1.pdf}} \hspace{1in} \subfigure[Target samples]{ \label{fig:subfig:residual2} \includegraphics[width=0.85\columnwidth]{residual2.pdf}} \caption{Residual connections between the feature extractor and transform network. When the inputs are source samples, the feature extractor and transform network are both activated (solid lines denote activated networks, dashed lines inactive ones) and the features distilled by the feature extractor are conveyed to the transform network. When the inputs are target samples, only the feature extractor is activated, and no features are shared between the feature extractor and transform network.} \label{fig:residual} \end{figure} Theoretically, by denoting a desired underlying mapping between source and target distributions as $M$ and letting $G({\bf x}^t)=M(G({\bf x}^s))$, we intend to train transform network $T$ to fit the mapping $T(G({\bf x}^s))=M(G({\bf x}^s))-G({\bf x}^s)$. In detail, for an $N$-layer transform network, the $i$th layer ($i\leq N$) is defined as: \begin{equation} T_i(G_i({\bf x}^s))=T_{i}(T_{i-1}(G_{i-1}({\bf x}^s)))+G_{i}({\bf x}^s) \end{equation} where $T_i(\cdot)$ and $G_i(\cdot)$ denote the $i$th layer of the transform network and feature extractor, respectively. The inputs of $T$ are the original data, and the output is $T_N(G_N({\bf x}^s))$. Please note that residual connections in different methods tend to realize different purposes. Firstly, the effect of the residual connections in our paper is different from others. For example, residual connections in ResNet~\cite{He_2016_CVPR} are used to shorten the training process by making gradients flow well. However, residual connections in U-Net~\cite{ronneberger2015u} are used to enable precise localization for segmentation. Although these papers all utilize skip connections, they still make novel contributions. In our paper, residual connections are used to capture semantic information from the source domain.
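The layer-wise feature sharing can be made concrete with a short sketch. The following is a minimal PyTorch illustration of ours (fully connected layers with placeholder sizes, not the exact architecture used in the experiments):
\begin{verbatim}
import torch
import torch.nn as nn

class ARTNFeatures(nn.Module):
    """Weight-shared extractor G plus transform network T; T receives
    residual connections from G layer by layer (source samples only)."""
    def __init__(self, dims=(5000, 500, 100)):
        super().__init__()
        pairs = list(zip(dims[:-1], dims[1:]))
        self.G = nn.ModuleList(
            nn.Sequential(nn.Linear(i, o), nn.ReLU()) for i, o in pairs)
        self.T = nn.ModuleList(
            nn.Sequential(nn.Linear(i, o), nn.ReLU()) for i, o in pairs)

    def forward(self, x, source=True):
        g, t = x, x
        for Gi, Ti in zip(self.G, self.T):
            g = Gi(g)              # G_i(x)
            if source:
                t = Ti(t) + g      # T_i(T_{i-1}(...)) + G_i(x^s)
        return (g, t) if source else g

# A source pass returns (G(x^s), T(G(x^s))); a target pass returns G(x^t).
\end{verbatim}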
This modification has not been seen in other domain adaptation methods, and its effect is also different from papers in other research areas. Secondly, the detailed modification is different from skip connections in other papers. Skip connections in ResNet and U-Net are constructed within a single network across different layers, whereas ours are constructed in the same layer across different networks. \subsection{Vanishing Gradient Problem in Adversarial Training} The detailed theoretical derivation and training process of adversarial DA has been described in~\cite{ganin2016domain}. Yet there exists a vanishing gradient problem in adversarial training. In this section, its theoretical analysis is presented. Once we adopt a transform network in adversarial DA and utilize the cross-entropy loss function for $D$, the adversarial nets-based DA framework of $D$ and $G$ needs the following minimax optimization: \begin{align} \min_{G,T}\max_{D}{\cal L}(D,G,T)=&{\mathbb{E}}_{{\bf x}\thicksim P_s}[logD(T(G({\bf x})))]+\nonumber\\ &{\mathbb{E}}_{{\bf x}\thicksim P_t}[log(1-D(G({\bf x})))]\label{eqd-1} \end{align} where maximizing the loss with respect to $D$ yields a tighter lower bound on the true domain distribution divergence, whereas minimizing the loss with respect to $G$ and $T$ minimizes the distribution divergence in the feature space. For any given $G$ and $T$, the optimal $D^{*}$ is obtained at: \begin{equation} D^{*}({\bf z})=\frac{P_s({\bf z})}{P_s({\bf z})+P_t({\bf z})} \end{equation} where ${\bf z}$ is the sample in the feature space. For ${\bf z}\thicksim P_s$, ${\bf z}=T(G({\bf x}))$, while for ${\bf z}\thicksim P_t$, ${\bf z}=G({\bf x})$. Similar to~\cite{goodfellow2014generative}, we give the proof as follows. \emph{Proof.} For any given $G$ and $T$, the training criterion for $D$ is to maximize ${\cal L}(D,G,T)$: \begin{align} \max_{D}{\cal L}(D,G,T)&=\int_{\bf x}P_s({\bf x})logD(T(G({\bf x})))+\nonumber\\ &P_t({\bf x})log(1-D(G({\bf x})))d{\bf x}\nonumber\\ &=\int_{\bf z}P_s({\bf z})logD({\bf z})+\nonumber\\ &P_t({\bf z})log(1-D({\bf z}))d{\bf z} \end{align} We take the partial derivative of ${\cal L}(D,G,T)$ with respect to $D$, and achieve its maximum in $[0,1]$ at $D^{*}({\bf z})=\frac{P_s({\bf z})}{P_s({\bf z})+P_t({\bf z})}$. Given $D^{*}$, the minimax optimization can now be reformulated as: \begin{align} \min_{G,T}{\cal L}(D^*,G,T)&={\mathbb{E}}_{{\bf z}\thicksim P_s}[logD^*({\bf z})]+\nonumber\\ &{\mathbb{E}}_{{\bf z}\thicksim P_t}[log(1-D^*({\bf z}))]\nonumber\\ &={\mathbb{E}}_{{\bf z}\thicksim P_s}[log\frac{P_s({\bf z})}{P_s({\bf z})+P_t({\bf z})}]+\nonumber\\ &{\mathbb{E}}_{{\bf z}\thicksim P_t}[log\frac{P_t({\bf z})}{P_s({\bf z})+P_t({\bf z})}]\nonumber\\ &={\mathbb{E}}_{{\bf z}\thicksim P_s}[log\frac{2P_s({\bf z})}{P_s({\bf z})+P_t({\bf z})}]+\nonumber\\ &{\mathbb{E}}_{{\bf z}\thicksim P_t}[log\frac{2P_t({\bf z})}{P_s({\bf z})+P_t({\bf z})}]-2log2\nonumber\\ &=2\cdot JSD(P_s||P_t)-2log2\label{eq12} \end{align} where $JSD(\cdot)$ is the Jensen-Shannon divergence. Since the Jensen-Shannon divergence between two distributions is always non-negative, and zero if and only if they are equal, ${\cal L^*}=-2log2$ is the global minimum of ${\cal L}(D,G,T)$, where the only solution is $P_s=P_t$. In this case, the distributions of source and target domains are the same and the goal of DA is well achieved. However, in practice, adversarial DA remains remarkably difficult to train.
It is sensitive to the initialization of parameters, and its training process tends to be unstable, i.e., ${\cal L}(D,G,T)$ does not converge. These problems are caused by a vanishing gradient phenomenon. In theory, the Jensen-Shannon divergence measures the difference between source and target distributions. By minimizing it, source and target distributions in the feature space tend to become the same. However, if we utilize a gradient descent algorithm to optimize ${\cal L}(D,G,T)$, which is the most common optimizer for neural networks, the Jensen-Shannon divergence is difficult to minimize because its gradient easily gets stuck at zero, as proved next. According to~\cite{arjovsky2017wasserstein}, $P_s$ and $P_t$ can be regarded as two distributions that have support contained in two closed manifolds ${\mathcal M}$ and ${\mathcal N}$ that do not have full dimension, respectively. $P_s$ and $P_t$ are continuous in their respective manifolds, which means that if a set $A$ has measure 0 in ${\mathcal M}$, then $P_s(A)=0$. In this case, $JSD(P_s||P_t)=log2$ for almost any $P_s$ and $P_t$. We need to use Lemma 3.1~\cite{arjovsky2017wasserstein} to prove it: \newtheorem{lemma}{Lemma}[section] \begin{lemma} \label{lemma1} \emph {Let ${\mathcal M}$ and ${\mathcal P}$ be two regular submanifolds that do not perfectly align and do not have full dimension. Let ${\mathcal L}={\mathcal M}\cap{\mathcal P}$. If ${\mathcal M}$ and ${\mathcal P}$ do not have boundary, then ${\mathcal L}$ is also a manifold, and has strictly lower dimension than both ${\mathcal M}$ and ${\mathcal P}$. If they have boundary, ${\mathcal L}$ is a union of at most 4 strictly lower dimensional manifolds. In both cases, ${\mathcal L}$ has measure 0 in both ${\mathcal M}$ and ${\mathcal P}$.} \end{lemma} \emph{Proof.} By Lemma 3.1, we know that ${\mathcal L}={\mathcal M}\cap{\mathcal N}$ has strictly lower dimension than both ${\mathcal M}$ and ${\mathcal N}$, such that $P_s({\mathcal L})=0$ and $P_t({\mathcal L})=0$. \begin{align} 2\cdot JSD(P_s||P_t)&=\int_{\bf z}P_s({\bf z})log\frac{2P_s({\bf z})}{P_s({\bf z})+P_t({\bf z})}+\nonumber\\&P_t({\bf z})log\frac{2P_t({\bf z})}{P_s({\bf z})+P_t({\bf z})}d{\bf z}\nonumber\\ &=\int_{\bf z\in {\mathcal M}\setminus{\mathcal L}}P_s({\bf z})log\frac{2P_s({\bf z})}{P_s({\bf z})+P_t({\bf z})}+\nonumber\\&P_t({\bf z})log\frac{2P_t({\bf z})}{P_s({\bf z})+P_t({\bf z})}d{\bf z}\nonumber\\ &+\int_{\bf z\in {\mathcal N}\setminus{\mathcal L}}P_s({\bf z})log\frac{2P_s({\bf z})}{P_s({\bf z})+P_t({\bf z})}+\nonumber\\&P_t({\bf z})log\frac{2P_t({\bf z})}{P_s({\bf z})+P_t({\bf z})}d{\bf z}\nonumber\\ &+\int_{\bf z\in {\mathcal L}}P_s({\bf z})log\frac{2P_s({\bf z})}{P_s({\bf z})+P_t({\bf z})}+\nonumber\\&P_t({\bf z})log\frac{2P_t({\bf z})}{P_s({\bf z})+P_t({\bf z})}d{\bf z}\nonumber\\ &+\int_{\bf z\in \left({\mathcal M}\cup{\mathcal N}\right)^c}P_s({\bf z})log\frac{2P_s({\bf z})}{P_s({\bf z})+P_t({\bf z})}+\nonumber\\&P_t({\bf z})log\frac{2P_t({\bf z})}{P_s({\bf z})+P_t({\bf z})}d{\bf z} \end{align} where $\left({\mathcal M}\cup{\mathcal N}\right)^c$ is the complement of $\left({\mathcal M}\cup{\mathcal N}\right)$. For ${\bf z\in {\mathcal M}\setminus{\mathcal L}}$, $P_s({\bf z})=1$ and $P_t({\bf z})=0$. Similarly, for ${\bf z\in {\mathcal N}\setminus{\mathcal L}}$, $P_t({\bf z})=1$ and $P_s({\bf z})=0$. When ${\bf z\in ({\mathcal M}\cup{\mathcal N})^c}$ and ${\bf z\in {\mathcal L}}$, $P_s({\bf z})$ and $P_t({\bf z})$ are equal to zero. Therefore, $JSD(P_s||P_t)=log2$.
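The effect is easy to reproduce numerically; a small sketch of ours with two discrete distributions of disjoint support shows the JSD pinned at $log2$ no matter how far apart the supports are, so the "distance" carries no gradient information:
\begin{verbatim}
import numpy as np

def jsd(p, q):
    """Jensen-Shannon divergence of two discrete distributions."""
    m = 0.5 * (p + q)
    def kl(a, b):
        mask = a > 0
        return np.sum(a[mask] * np.log(a[mask] / b[mask]))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# p is fixed; q has disjoint support and is shifted around
p = np.zeros(100); p[10:20] = 0.1
for shift in (40, 50, 60):
    q = np.zeros(100); q[shift:shift + 10] = 0.1
    print(shift, jsd(p, q), np.log(2))   # JSD stays at log 2
\end{verbatim}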
Note that when $JSD(P_s||P_t)$ is a constant, the gradients for all the parameters in an adversarial DA network are zero. Therefore, if a gradient descent algorithm is adopted, the vanishing gradient problem appears, and the divergence between source and target domains is difficult, and sometimes impossible, to minimize. \subsection{Regularizer Based on Transport Theory} Once we have parametrized $G$ and $T$, we employ an adversarial loss to adapt different distributions. The architecture modification requires us to revise our loss function. Instead of measuring the distance between source features ${\bf f}^s=G({\bf x}^s)$ and target features ${\bf f}^t=G({\bf x}^t)$ generated from one feature extractor, the proposed model lets domain classifier $D$ discriminate transformed source features $T(G({\bf x}^s))$ from the transform network and target features ${\bf f}^t=G({\bf x}^t)$ from the feature extractor. Thus, the loss function is modified from (\ref{eq1}) into: \begin{align} {\cal L}(\theta_d,\theta_g,\theta_c, \theta_t)=&\frac{1}{n_s}\sum_{{\bf x}_i\in D_s}{\cal L}_c(C(T(G({\bf x}_i))),y_i)- \nonumber \\ &\frac{\lambda}{n}(\sum_{{\bf x}_i\in D_s}{\cal L}_s(D(T(G({\bf x}_i))),d_i^s)+ \nonumber \\ &\sum_{{\bf x}_i\in D_t}{\cal L}_t(D(G({\bf x}_i))),d_i^t))\label{eq6} \end{align} where $d_i^s$ and $d_i^t$ denote the domain labels of the $i$th source and target samples, respectively. ${\cal L}_s$ and ${\cal L}_t$ denote the domain losses of source and target samples, respectively. $\theta_t$ denotes the parameters of $T$. This objective function replaces $G({\bf x}_i)$ in (\ref{eq1}) with $T(G({\bf x}_i))$, indicating that our model uses the features generated from transform network $T$ as the input of label classifier $C$ and domain classifier $D$. As shown by our proof, if we optimize ${\cal L}(\theta_d,\theta_g,\theta_c, \theta_t)$ as in the general adversarial DA framework, the vanishing gradient problem would arise. To address this issue, we add a regularization term to the loss function based on the optimal transport problem as defined by Monge~\cite{courty2017optimal}. DA's goal is to find a mapping from a source domain to a target one, while the optimal transport problem gives a solution that transfers one distribution to another. Therefore, the problem can be represented in the form of Monge's formulation of the optimal transport problem~\cite{courty2017optimal,courty2017joint}. If we denote the probability measures over $P_s$ and $P_t$ as $\mu_s$ and $\mu_t$, respectively, Monge's formulation of the optimal transport problem is: \begin{equation} \label{eq7} T_0=\argmin_T\int_{{\bf x}\in P_s}r({\bf x},T({\bf x}))d\mu({\bf x}), s.t. T\#(\mu_s)=\mu_t \end{equation} where $r(\cdot)$ denotes some kind of distance metric, $T$ denotes a transport mapping from $P_s$ to $P_t$, and $T_0$ is the optimal solution of $T$. $T\#(\mu_s)$ denotes the push forward of $\mu_s$ by a measurable function $T$. ${\bf x}$ denotes the samples drawn from $P_s$. DA's goal is to find a transport mapping $T_0$ satisfying $T\#(\mu_s)=\mu_t$, which means that a transformation from source distribution $P_s$ to target distribution $P_t$ should be found. Specifically, in our model, we use transform network $T$ to fit the transport mapping to meet $T\#(\mu_s)=\mu_t$ via adversarial training. By choosing $r(\cdot)$ appropriately, we can construct a regularization term that measures the distance between $G({\bf x}^s)$ and $T(G({\bf x}^s))$.
In our model, according to our empirical evaluation results, $r(\cdot)$ is the cosine distance between them: \begin{equation} \label{eq8} r(G({\bf x}^s),T(G({\bf x}^s)))=-\frac{\langle G({\bf x}^s)\cdot T(G({\bf x}^s))\rangle}{\lvert G({\bf x}^s)\rvert \cdot \lvert T(G({\bf x}^s))\rvert} \end{equation} where $\langle \cdot \rangle$ denotes an inner product, and $\lvert \cdot \rvert$ denotes the $L_2$ norm. \begin{figure}[htp] \centering \includegraphics[width = \columnwidth]{distributions.pdf} \caption{When transferring features from the source to the target domain, the proposed regularization term forces our model to choose the shortest path (red line).} \label{fig:reg} \end{figure} For a transport problem, there are usually a few practical paths, as shown in Fig.~\ref{fig:reg}. Optimal transport theory seeks the most efficient way of transforming one distribution of mass to another, just like the red line in Fig.~\ref{fig:reg}. In detail, $\int r({\bf x},T({\bf x}))d\mu({\bf x})$ in (\ref{eq7}), which indicates the expected cost of transportation, is to be minimized. If it reaches the minimum, the most efficient path is found. Once we carry the optimal transport theory over to unsupervised DA, the regularization term $r(G({\bf x}^s),T(G({\bf x}^s)))$ leads our model to select the most efficient way of transforming the source to the target distribution. Specifically, the term attempts to constrain the distance between the features before and after the transformation. If we regard this distance as the cost of transformation, similar to the cost of transportation, the term attempts to select the shortest path from a number of transfer paths that map the source to the target distribution. On one hand, when the distributions of source and target domains are totally different, domain classifier $D$ can so easily distinguish samples from different domains that ${\cal L}_s$ and ${\cal L}_t$ backpropagate very small gradients. In this situation, the regularization term $r(G({\bf x}^s),T(G({\bf x}^s)))$ can still provide gradients to the target mapping. On the other hand, when $D$ directs parameter updates, the term constrains the update range to prevent features from changing too rapidly, because it constrains the differences between the features before and after the transformation. Thus, the stability of the training procedure of the proposed model is guaranteed via the added regularization term. Consequently, our objective function becomes \begin{align} \displaystyle{\cal L}(\theta_d,\theta_g,\theta_c, \theta_t)=&\frac{1}{n_s}\sum_{{\bf x}_i\in D_s}{\cal L}_c(C(T(G({\bf x}_i))),y_i)-\nonumber\\ \displaystyle&\frac{\lambda}{n}(\sum_{{\bf x}_i\in D_s}{\cal L}_s(D(T(G({\bf x}_i))),d_i^s)+\nonumber\\ \displaystyle&\sum_{{\bf x}_i\in D_t}{\cal L}_t(D(G({\bf x}_i)),d_i^t))+\nonumber\\ \displaystyle&\beta \cdot r(G({\bf x}^s),T(G({\bf x}^s))) \end{align} where $\beta$ denotes the coefficient parameter of the regularization term.
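In code, this regularizer is a one-liner; a minimal PyTorch sketch of ours (batched features, negative cosine similarity as in Eq.~(\ref{eq8})):
\begin{verbatim}
import torch
import torch.nn.functional as F

def transport_regularizer(f_s, t_f_s):
    """r(G(x^s), T(G(x^s))): negative cosine similarity between source
    features and their transformed versions, averaged over the batch."""
    return -F.cosine_similarity(f_s, t_f_s, dim=1).mean()
\end{verbatim}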
In addition, the optimization problem is to find parameters $\hat{\theta}_d$, $\hat{\theta}_c$, $\hat{\theta}_g$ and $\hat{\theta}_t$ satisfying: \begin{align} \hat{\theta}_d=& \argmin_{\theta_d}({\cal L}_s(\theta_d,\theta_g,\theta_t)+{\cal L}_t(\theta_d, \theta_g))\\ \hat{\theta}_c=& \argmin_{\theta_c}{\cal L}_c(\theta_g,\theta_c,\theta_t)\\ \hat{\theta}_g=& \argmin_{\theta_g}{\cal L}(\theta_d, \theta_g,\theta_c,\theta_t)\\ \hat{\theta}_t=& \argmin_{\theta_t}({\cal L}_c(\theta_g,\theta_c,\theta_t)-\lambda{\cal L}_s(\theta_d,\theta_g, \theta_t)+\beta\, r(\theta_g, \theta_t)) \end{align} In this case,~(\ref{eq12}) is reformulated into: \begin{align} \min_{G,T}{\cal L}(D^*,G,T)=2\cdot JSD(P_s||P_t)-2log2+\beta\, r(\theta_g, \theta_t) \end{align} where $\frac{\partial r(\theta_g, \theta_t)}{\partial\theta_g}$ and $\frac{\partial r(\theta_g, \theta_t)}{\partial\theta_t}$ would not be zeros. Since the situation in which $JSD(P_s||P_t)$ is a constant (and thus provides zero gradient) appears easily, this regularization term provides gradients for the parameters in $G$ and $T$, thereby alleviating the adverse effects of the vanishing gradient problem. \subsection{Framework of Proposed Method} In a training period for our model, we have two stages. In the first one, feature extractor $G$ and transform network $T$ receive labeled source samples from $D_s=\{{\bf x}^s_i,y^s_i,d^s_i\}_{i=1}^{n_s}$, and output ${\bf f}^s$ and $T({\bf f}^s)$. With class labels $y^s$ and domain labels $d^s$, ${\cal L}_c$ is computed by label classifier $C$, and ${\cal L}_s$ is computed by domain classifier $D$. At the same time, the regularization term $r(G({\bf x}^s),T(G({\bf x}^s)))$ is also obtained according to ${\bf f}^s$ and $T({\bf f}^s)$. In the second stage, $G$ receives unlabeled samples from $D_t=\{{\bf x}^t_i,d^t_i\}_{i=1}^{n_t}$, and outputs ${\bf f}^t$. Similarly, ${\cal L}_t$ is computed by domain classifier $D$. At last, all the above losses are multiplied by their corresponding coefficients, and then the model is optimized using these losses. As for optimizing adversarial networks, previous studies have carried out a number of explorations~\cite{ganin2016domain,tzeng2017adversarial}. In~\cite{tzeng2017adversarial}, an iterative optimization strategy is proposed, where a feature extractor and domain classifier update their parameters iteratively. Specifically, it alternates between $k$ steps of optimizing the domain classifier and one step of optimizing the feature extractor. This is the most common training strategy, which is also widely used in GANs~\cite{goodfellow2014generative,wang2017generative}. One obstacle is tuning the hyperparameter $k$. An unsuitable $k$ may cause a failure of model training. As a result, we would have to tune this hyperparameter carefully for each model. Instead, in~\cite{ganin2016domain}, the proposed gradient reversal layer (GRL) replaces iterative optimization. During forward propagation, a GRL is identical to a normal layer, whereas during backpropagation, the GRL reverses the gradient from the subsequent layer, multiplies it by a coefficient $\gamma$ and passes it to the previous layer. Based on a large number of experiments,~\cite{ganin2016domain} adjusts $\gamma$ using the following formula: $\gamma=\frac{2}{1+e^{-10p}}-1$, where $p$ is the training progress linearly changing from 0 to 1. In the implementation of ARTN, we choose GRL to optimize our model.
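A GRL takes only a few lines in modern autograd frameworks; the following is a minimal PyTorch sketch of ours (not the original implementation), using the $\gamma$ schedule quoted above:
\begin{verbatim}
import math
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, gamma):
        ctx.gamma = gamma
        return x.view_as(x)            # identity in the forward pass

    @staticmethod
    def backward(ctx, grad_output):
        # reverse and scale the gradient on the way back
        return -ctx.gamma * grad_output, None

def grl(x, p):
    """p is the training progress, linearly increasing from 0 to 1."""
    gamma = 2.0 / (1.0 + math.exp(-10.0 * p)) - 1.0
    return GradReverse.apply(x, gamma)
\end{verbatim}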
With this strategy, there is no need to tune the hyperparameter $k$, and the parameters of the feature extractor and domain classifier are updated in a single backpropagation pass. Algorithm~\ref{alg:alg1} provides the pseudo-code of the proposed learning procedure. With stochastic gradient descent (SGD), the parameters $\theta_d$, $\theta_c$, $\theta_g$ and $\theta_t$ are updated, and training stops when the loss converges. \begin{algorithm}[!h] \caption{Learning Procedure of ARTN} \label{alg:alg1} \begin{algorithmic}[1] \REQUIRE~~\\ Labeled source samples $D_s\{{\bf x}^s_i,y^s_i,d^s_i\}_{i=1}^{n_s}$\\ Unlabeled target samples $D_t\{{\bf x}^t_i,d^t_i\}_{i=1}^{n_t}$\\ Learning rate $\alpha$, Coefficient parameters $\lambda, \beta$ \ENSURE~~\\ Model parameters \{$\theta_d$,$\theta_g$,$\theta_c$,$\theta_t$\} \WHILE {not converge} \FOR {$i$ from 1 to $n_s$} \STATE ${\bf f}_i^s=G({\bf x}^s_i)$ \STATE ${\cal L}_c=crossentropy(C(T({\bf f}_i^s)),y^s_i)$ \STATE ${\cal L}_s=crossentropy(D(T({\bf f}_i^s)), d^s_i)$ \STATE $reg=r({\bf f}_i^s, T({\bf f}_i^s))$ \ENDFOR \FOR {$i$ from 1 to $n_t$} \STATE ${\bf f}_i^t=G({\bf x}^t_i)$ \STATE ${\cal L}_t=crossentropy(D({\bf f}_i^t), d^t_i)$ \ENDFOR \STATE ${\cal L}_d\leftarrow {\cal L}_s+{\cal L}_t$ \STATE $\theta_d\leftarrow \theta_d - \alpha\cdot\frac{\partial {\cal L}_d}{\partial \theta_d}$ \STATE $\theta_c\leftarrow \theta_c - \alpha\cdot\frac{\partial {\cal L}_c}{\partial \theta_c}$ \STATE $\theta_g\leftarrow \theta_g - \alpha\cdot\frac{\partial ({\cal L}_c-\lambda {\cal L}_d + \beta reg)}{\partial \theta_g}$ \STATE $\theta_t\leftarrow \theta_t-\alpha\cdot\frac{\partial ({\cal L}_c-\lambda {\cal L}_s + \beta reg)}{\partial \theta_t}$ \ENDWHILE \end{algorithmic} \end{algorithm} If $G$, $T$ and $D$ have enough capacity, if at each loop of Algorithm~\ref{alg:alg1} $D$ is allowed to reach its optimum ${D^*}$ given $G$ and $T$, and if $P_t$ is updated so as to improve the criterion \begin{align} {\mathbb{E}}_{{\bf x}\thicksim P_s}[\log D^*(T(G({\bf x})))]+{\mathbb{E}}_{{\bf x}\thicksim P_t}[\log(1-D^*(G({\bf x})))] \end{align} then $P_t$ converges to $P_s$. Similar to~\cite{goodfellow2014generative}, we give a brief proof as follows. \emph{Proof.} Consider $U(P_t, D)=\int_{\bf z}P_s({\bf z})\log D({\bf z})+P_t({\bf z})\log(1-D({\bf z}))d{\bf z}$ as a function of $P_t$, and note that $U(P_t, D)$ is convex in $P_t$. The subderivatives of a supremum of convex functions include the derivative of the function at the point where the maximum is attained; in other words, if $f(x) = \sup_{\alpha\in\mathcal A} f_{\alpha}(x)$ and $f_{\alpha}(x)$ is convex in $x$ for every $\alpha$, then $\partial f_{\beta}(x)\in \partial f$ if $\beta = \arg\sup_{\alpha\in\mathcal A} f_{\alpha}(x)$. This is equivalent to computing a gradient descent update for $P_t$ at $D^*$ given the corresponding $G$ and $T$. As proven in~(\ref{eq12}), $\sup_{D} U(P_t, D)$ is convex in $P_t$ with a unique global optimum. Hence, with sufficiently small updates, $P_t$ converges to $P_s$. \section{Experiments} \label{section4} To evaluate the effectiveness of the proposed method, we test ARTN for unsupervised DA in several experiments that are recognized to be difficult. First, we test our model on a sentiment analysis task. Second, to test its performance when the source and target domains are relatively similar, the model is evaluated on several digits datasets.
Third, to test it in the presence of a large discrepancy between source and target domains, the model is evaluated on a natural image dataset. Fourth, to test its anti-noise and generalization abilities, we test it when the target images are corrupted with varying levels of noise. Fifth, to test the effectiveness of the regularization in the proposed method, we compare the performance of ARTN with and without regularization on a natural image dataset. Finally, we investigate the effect of the parameter $\lambda$ on the performance of the proposed method. In all experiments, we implement the models with PyTorch and employ the GRL learning strategy mentioned in Section~\ref{section3}, which reverses the gradients before propagating them to the feature extractor. \subsection{Sentiment Analysis} We use the {\bf Amazon reviews} dataset with the same pre-processing used in mSDA~\cite{Chen:2012:MDA:3042573.3042781} and DANN~\cite{ganin2016domain}. It contains reviews of four different categories of products: {\tt Books}, {\tt DVDs}, {\tt Kitchen Appliances} and {\tt Electronics}; the dataset therefore comprises four domains, across which twelve domain adaptation tasks can be set up. Reviews are encoded as 5,000-dimensional feature vectors of unigrams and bigrams, and labels are binary: 0 if a product is rated up to 3 stars, and 1 if it is rated 4 or 5 stars. In all twelve tasks, we use 2000 labeled source samples and 2000 unlabeled target samples to train our model. At test time, we evaluate our model on separate target test sets (between 3000 and 6000 examples). To evaluate the effectiveness of our model, we compare it with DANN~\cite{ganin2016domain}, DAN~\cite{long2015learning}, Central Moment Discrepancy (CMD) for domain-invariant representation learning~\cite{DBLP:journals/corr/ZellingerGLNS17}, the Variational Fair Autoencoder (VFAE)~\cite{louizos2015variational} and the model with no adaptation. These results are cited directly from the original publication~\cite{ganin2016domain}. In this experiment, we use the same neural network as DANN~\cite{ganin2016domain}. Both the domain and label classifiers consist of one layer with 100 hidden units followed by the final output layer. Because there is only one hidden layer in the network, we build just one residual connection. The ReLU activation function and batch normalization are employed. We choose SGD as the optimizer, with learning rate 0.001 and momentum 0.9. The parameter $\lambda$ is set to 0.5, and $\beta$ is set to 0.1. The batch size is set to 128. All results are recorded after convergence. The results are shown in Table~\ref{tab:table3}. The accuracy of ARTN is the highest in three of the twelve domain adaptation tasks; the CMD-based model is the highest in six tasks, and VFAE achieves the highest accuracy in three tasks. Therefore, in the sentiment analysis experiment, ARTN is comparable with VFAE and slightly worse than CMD. \begin{table*}[htbp] \centering \caption{Classification accuracy percentage of the sentiment analysis experiment on all twelve tasks. The {\sc source only} column corresponds to the performance when no adaptation is implemented. The proposed method outperforms the others in three of the twelve tasks.} \label{tab:table3} \begin{center} \begin{small} \begin{sc} \begin{tabular}{lcccccc} \hline\hline Source$\rightarrow$Target & SOURCE ONLY& DAN~\cite{long2015learning} & DANN~\cite{ganin2016domain}&ARTN&CMD~\cite{DBLP:journals/corr/ZellingerGLNS17}&VFAE~\cite{louizos2015variational}\\ \hline books$\rightarrow$dvd&78.7&79.6 &78.4&{\bf 81.4}&80.5&79.9\\ books$\rightarrow$electronics&71.4&75.8 &73.3& 77.5&78.7&{\bf 79.2}\\ books$\rightarrow$kitchen&74.5&78.7 &77.9&78.8&81.3&{\bf 81.6}\\ dvd$\rightarrow$books&74.6&78.0 &72.3&78.8&{\bf 79.5}&75.5\\ dvd$\rightarrow$electronics&72.4&76.6 &75.4&77.0&{\bf 79.7}&78.6\\ dvd$\rightarrow$kitchen&76.5&79.6 &78.3&79.3&{\bf 83.0}&82.2\\ electronics$\rightarrow$books&71.1&73.3 &71.3&72.4&{\bf 74.4}&72.7\\ electronics$\rightarrow$dvd&71.9&74.8 &73.8&73.9& 76.3&{\bf 76.5} \\ electronics$\rightarrow$kitchen&84.4&85.7 &85.4 &{\bf 86.4}&86.0&85.0 \\ kitchen$\rightarrow$books&69.9&74.0 &70.9 &73.8&{\bf 75.6}&72.0 \\ kitchen$\rightarrow$dvd&73.4&76.3 &74.0 & 75.7&{\bf 77.5}&73.3\\ kitchen$\rightarrow$electronics&83.3&84.4 &84.3 &{\bf 86.1}&85.4&83.8 \\ \hline\hline \end{tabular} \end{sc} \end{small} \end{center} \end{table*} \subsection{Digits} \begin{figure}[htp] \centering \includegraphics[width = \columnwidth]{digits.jpg} \caption{Samples of the digits datasets. The first to last rows correspond to MNIST, MNIST-M, SVHN and SYN NUMS.} \label{fig:digits} \end{figure} In order to evaluate the performance when the discrepancy between source and target domains is relatively small, we experimentally test ARTN on several pairs of unsupervised domain adaptation tasks whose images come from the {\bf MNIST}, {\bf MNIST-M}, {\bf SVHN} and {\bf SYN NUMS} digits datasets. All these datasets consist of 10 classes, and we use the full training set of each. Example images from each dataset are shown in Fig.~\ref{fig:digits}. In this experiment, we set up three transfer tasks: MNIST$\rightarrow$MNIST-M, SVHN$\rightarrow$MNIST, and SYN NUMS$\rightarrow$SVHN. As shown in Fig.~\ref{fig:digits}, images in SYN NUMS and SVHN are similar, whereas images in MNIST are quite different from those of the other digits datasets. We choose several unsupervised DA approaches for comparison with the proposed one: CORAL~\cite{sun2016return}, CMD~\cite{DBLP:journals/corr/ZellingerGLNS17} and DAN~\cite{long2015learning} rely on a distance metric between the source and target distributions, while DANN~\cite{ganin2016domain}, CoGAN~\cite{liu2016coupled}, the Domain Transfer Network (DTN)~\cite{taigman2016unsupervised}, CyCADA~\cite{pmlr-v80-hoffman18a} and ADDA~\cite{tzeng2017adversarial} are based on adversarial learning. For MNIST$\rightarrow$MNIST-M, we use a simple modified LeNet~\cite{lecun1998gradient}. As for the domain classifier, we stack two fully connected layers: one layer with 100 hidden units followed by the final output layer; each hidden unit uses the ReLU activation function. For SVHN$\rightarrow$MNIST and SYN NUMS$\rightarrow$SVHN, we use a three-layer convolutional network as the feature extractor and a three-layer fully connected network as the domain classifier. In all tasks, batch normalization is employed. We employ SGD with learning rate 0.01 and momentum 0.9. $\lambda$ is set to 1, and $\beta$ is set to 0.2. The batch size is set to 128. Prediction accuracy in the target domain is reported after convergence. The results of our digits experiment are shown in Table~\ref{tab:table1}.
Note that the basic networks for DTN and CyCADA differ from the others', and the accuracy with no adaptation is included in brackets. In MNIST$\rightarrow$MNIST-M, the proposed model's accuracy is 85.6\%, which outperforms the best of the other methods by 0.6\%. In SYN NUMS$\rightarrow$SVHN, its accuracy reaches 89.1\%, which is comparable to DANN's. In SVHN$\rightarrow$MNIST, the accuracy of ARTN is 85.8\%, second only to CyCADA's; note that CyCADA achieves its higher accuracy with a better basic network. For a fair comparison, the improvements over the corresponding adaptation-free models for ARTN, DTN, CyCADA pixel only and CyCADA pixel+feat are 30.9\%, 8.3\%, 3.2\% and 23.3\%, respectively, so ARTN clearly brings the biggest boost over its adaptation-free baseline. Overall, our approach outperforms the other methods in two of the three tasks, and in the task whose source and target datasets are similar, it achieves results as competitive as the others. \begin{table*}[htbp] \centering \caption{Classification accuracy percentage of digits classification among MNIST, MNIST-M, SVHN and SYN NUMS. The first row corresponds to the performance when no adaptation is implemented. The proposed method outperforms the others in two of the three tasks in terms of improvement over the basic network. In addition, the results are cited from the literature. } \label{tab:table1} \begin{center} \begin{small} \begin{sc} \begin{tabular}{lccc} \hline\hline Method& MNIST$\rightarrow$MNIST-M& SYN NUMS$\rightarrow$SVHN& SVHN$\rightarrow$MNIST\\ \hline Source only&51.4&86.7&54.9\\ CORAL~\cite{sun2016return}&57.7&85.2&63.1\\ DAN~\cite{long2015learning}&76.9&88.0&71.1\\ DANN~\cite{ganin2016domain}&76.7&{\bf 91.1}&73.9\\ CMD~\cite{DBLP:journals/corr/ZellingerGLNS17}&85.0&85.5&84.5\\ CoGAN~\cite{liu2016coupled}&-&-&diverge\\ ADDA~\cite{tzeng2017adversarial}&-&-&76.0\\ DTN~\cite{taigman2016unsupervised}&-&-&84.4(76.1)\\ CyCADA pixel only~\cite{pmlr-v80-hoffman18a}&-&-&70.3(67.1)\\ CyCADA pixel+feat~\cite{pmlr-v80-hoffman18a}&-&-&{\bf 90.4}(67.1)\\ ARTN&{\bf 85.6}&89.1&85.8\\ \hline\hline \end{tabular} \end{sc} \end{small} \end{center} \end{table*} \begin{table*}[!t] \renewcommand{\arraystretch}{1.3} \centering \caption{Classification accuracy percentage of the experiment on the Office-31 dataset. The rows AlexNet, InceptionBN, VGG16 and ResNet34 correspond to the performance of the respective base networks when no adaptation is implemented; the other rows correspond to different DA methods and the proposed method.} \label{tab:table2} \begin{tabular}{lcccc} \hline\hline Method & DSLR$\rightarrow$AMAZON & WEBCAM$\rightarrow$AMAZON & AMAZON$\rightarrow$WEBCAM & AMAZON$\rightarrow$DSLR\\ \hline AlexNet&51.1&49.8&61.6&63.8\\ DDC~\cite{tzeng2015simultaneous}&52.1&52.2&61.8&64.4\\ Deep CORAL~\cite{sun2016return}&52.8&51.5&66.4&66.8\\ DAN~\cite{long2015learning}&54.0&53.1&68.5&67.0\\ \hline InceptionBN&60.1&57.9&70.3&70.5\\ LSSA~\cite{aljundi2015landmarks}&57.8&57.8&67.7&71.3\\ CORAL~\cite{sun2016return}&59.0&60.2&70.9&71.9\\ AdaBN~\cite{li2016revisiting}&59.8&57.4&74.2&73.1\\ \hline VGG16&58.2&57.8&67.6&73.9\\ CMD~\cite{DBLP:journals/corr/ZellingerGLNS17}&{\bf 63.8}&{\bf 63.3}&{\bf 77.0}&{\bf 79.6}\\ \hline ResNet34 & 57.5 & 55.5 & 68.4 & 68.9 \\ DANN~\cite{ganin2016domain} & 58.1 & 56.3 & 73.7 & 75.3\\ ARTN & 60.9 & 61.0 & 76.2 & 76.1 \\ \hline\hline \end{tabular} \end{table*} \subsection{Image Classification} \begin{figure}[htp] \centering \includegraphics[width = \columnwidth]{office-31.pdf} \caption{Samples of the Office-31 dataset. The first to last rows correspond to AMAZON, DSLR and WEBCAM.} \label{fig:office} \end{figure} We further evaluate our model in a more complex setting. The proposed model is tested on a natural image dataset, {\bf Office-31}, which is a standard benchmark for visual domain adaptation comprising 4,110 images in 31 categories collected from three domains: {\tt AMAZON} ({\bf A}, images downloaded from amazon.com) with 2,817 images, {\tt DSLR} ({\bf D}, high-resolution images captured by a digital SLR camera) with 498 images and {\tt WEBCAM} ({\bf W}, low-resolution images captured by a Web camera) with 795 images. Samples of the {\bf Office-31} dataset are shown in Fig.~\ref{fig:office}. In order to test the generalization ability of different methods, we focus on the four most difficult tasks~\cite{long2015learning}: {\bf A}$\rightarrow${\bf D}, {\bf A}$\rightarrow${\bf W}, {\bf D}$\rightarrow${\bf A} and {\bf W}$\rightarrow${\bf A}. In {\bf A}$\rightarrow${\bf W} and {\bf A}$\rightarrow${\bf D}, models are easier to train because the images in source domain {\bf A} are plentiful. In {\bf W}$\rightarrow${\bf A} and {\bf D}$\rightarrow${\bf A}, there are only hundreds of images in the source domain but about 2,900 images in the target one, so models are very difficult to train. In addition, we test our model without regularization to analyze how the regularization term affects its performance. In this experiment, we evaluate the effectiveness of our approach by comparing it with different models trained on the {\bf Office-31} dataset. Note that some of the methods, such as DDC~\cite{tzeng2015simultaneous}, Deep CORAL~\cite{sun2016return} and DAN~\cite{long2015learning}, are based on AlexNet; others, such as LSSA~\cite{aljundi2015landmarks}, CORAL~\cite{sun2016return} and AdaBN~\cite{li2016revisiting}, are based on InceptionBN; and CMD~\cite{DBLP:journals/corr/ZellingerGLNS17} is based on VGG16. The results of these methods are cited from the original papers. Moreover, we implement DANN~\cite{ganin2016domain} and the model with no adaptation as baselines. Because sufficient images are lacking, we implement our model based on ResNet34~\cite{He_2016_CVPR} pre-trained on the ImageNet dataset, and fine-tune the model on Office-31. Different from the digits experiment, we build a residual connection for every three layers in ResNet34 instead of every layer. As for the domain classifier, we use a network with three fully connected layers. In addition, we replace the last layer of ResNet34 with a three-layer fully connected network and use it to predict the labels of the inputs. In all tasks, we employ the same SGD and parameter settings as before, except that $\lambda$ is set to 0.6 and the batch size is 40. All prediction accuracy results are recorded after training for 30 epochs. The results of the experiment on Office-31 are shown in Table~\ref{tab:table2}. In {\bf D}$\rightarrow${\bf A}, {\bf W}$\rightarrow${\bf A}, {\bf A}$\rightarrow${\bf W} and {\bf A}$\rightarrow${\bf D}, the proposed model achieves accuracies of 60.9\%, 61.0\%, 76.2\% and 76.1\%, respectively. Thus, in all four tasks, the proposed model achieves the second highest accuracy. Note that these methods are based on different basic networks; besides the accuracy, the improvement over the corresponding basic network is therefore a fairer metric. The improvements of ARTN in the four tasks are 3.4\%, 5.5\%, 7.8\% and 7.2\%, respectively.
CMD, which outperforms all the related state-of-the-art methods on all four tasks, achieves improvements of 5.6\%, 5.5\%, 9.4\% and 5.7\%. ARTN achieves a higher improvement in {\bf A}$\rightarrow${\bf D} and the same improvement in {\bf W}$\rightarrow${\bf A} compared with the state-of-the-art method, CMD. The intuitive interpretation of CMD's excellent performance is that it minimizes the sum of differences of higher-order central moments of the corresponding activation distributions. Higher-order statistics describe the differences between distributions more comprehensively, but they also incur significantly more computational overhead than methods such as ours. In practice, the number of moments is pre-set to be no more than five. In summary, ARTN outperforms all the other methods except CMD. \subsection{Generalization Analysis} A generalization test is performed by adding Gaussian noise to the images in the target domain. In this way, the discrepancy between the source and target domains is larger, and the discriminative information in the target domain is more difficult to capture. In this experiment, we test the anti-noise and generalization abilities of our model based on the digits experiment. For images in the source domain, we follow the settings of MNIST$\rightarrow$MNIST-M, SYN NUMS$\rightarrow$SVHN and SVHN$\rightarrow$MNIST, respectively; however, for images in the target domain, we add varying Gaussian noise. For MNIST$\rightarrow$MNIST-M and SYN NUMS$\rightarrow$SVHN, the standard deviation of the Gaussian noise is selected from $\{0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0\}$; that in SVHN$\rightarrow$MNIST is from $\{1.0, 1.5, 2.0, 2.5, 3.0\}$. The mean of the Gaussian noise in all tasks is 0. The results are plotted in Fig.~\ref{fig:noise}. The baseline method is the model without adaptation; we also compare the proposed method with DANN~\cite{ganin2016domain}. Comparing the proposed model with the adaptation-free model, we can see that although noise is added to the test images, the proposed model exhibits a great advantage over the adaptation-free model. In MNIST$\rightarrow$MNIST-M, when the standard deviation is 0.4, the accuracy of the adaptation-free model is 45.83\%, whereas ours improves it by 36.6\%; when the standard deviation is 1.0, the accuracy of the adaptation-free model is 24.55\%, whereas ours improves it by 76.4\%. Similar results appear in SYN NUMS$\rightarrow$SVHN and SVHN$\rightarrow$MNIST, where the rate of improvement generally shows an upward trend as the noise gradually increases. Therefore, as the discrepancy between the source and target domains increases, the performance advantage of the proposed model over an adaptation-free model becomes more and more obvious. At the same time, the improvement percentage of our model is higher than DANN's in almost all tasks, which means that the proposed method has better anti-noise abilities than DANN. This result demonstrates that even if there is noise in the target domain, the proposed model maintains excellent generalization and anti-noise abilities.
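The corruption itself is elementary; a minimal sketch of how such a noisy target set might be produced (the helper name is ours, purely illustrative):
\begin{verbatim}
import torch

def add_gaussian_noise(images, std):
    # Zero-mean Gaussian noise with the given standard deviation,
    # applied to target-domain images only, as in the test above.
    return images + std * torch.randn_like(images)

# e.g. the sweep used for MNIST->MNIST-M and SYN NUMS->SVHN:
# for std in (0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0):
#     noisy_target = add_gaussian_noise(target_images, std)
\end{verbatim}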
\begin{figure} \centering \subfigure[MNIST$\rightarrow$MNIST-M]{ \label{fig:subfig:a} \includegraphics[width=\columnwidth]{reg.jpg}} \hspace{1in} \subfigure[SYN NUMS$\rightarrow$SVHN]{ \label{fig:subfig:b} \includegraphics[width=\columnwidth]{reg2.jpg}} \hspace{1in} \subfigure[SVHN$\rightarrow$MNIST]{ \label{fig:subfig:c} \includegraphics[width=\columnwidth]{reg1.jpg}} \caption{Accuracy of the adaptation-free method and improvement of DANN and our model in MNIST$\rightarrow$MNIST-M, SYN NUMS$\rightarrow$SVHN and SVHN$\rightarrow$MNIST, where we add Gaussian noise to the images in the target domain. The x-axis represents the standard deviation of the noise, and the y-axis represents the accuracy of the adaptation-free method and the improvement percentage of DANN and our model in the target domain.} \label{fig:noise} \end{figure} \begin{figure}[htp] \centering \includegraphics[width = \columnwidth]{reg3.jpg} \caption{Classification accuracy percentage of the experiment on the Office-31 dataset. The red line corresponds to the proposed method with regularization and the blue line to the one without regularization. The regularization term shows a positive effect on the performance.} \label{fig:reg3} \end{figure} \subsection{Regularization Analysis} We next analyze how the regularization term affects the performance of our model. We test our model without regularization on Office-31 by setting $\beta =0$; in this way, ${\cal L}$ consists of ${\cal L}_c$, ${\cal L}_s$ and ${\cal L}_t$ only. Except for the regularization term, this experiment has the same settings as the image classification experiment. The results are shown in Fig.~\ref{fig:reg3}. In {\bf D}$\rightarrow${\bf A}, {\bf W}$\rightarrow${\bf A}, {\bf A}$\rightarrow${\bf W} and {\bf A}$\rightarrow${\bf D}, the proposed model without regularization achieves accuracies of 59.5\%, 59.8\%, 76.0\% and 75.9\%, which are lower by 1.4\%, 1.2\%, 0.2\% and 0.2\%, respectively, than those of the proposed model with the term. The model with regularization outperforms DANN and the model without regularization in all tasks, which demonstrates the effectiveness of the regularization. In other words, the regularization term strengthens the generalization ability of the proposed model. It should be noted that in {\bf D}$\rightarrow${\bf A}, {\bf W}$\rightarrow${\bf A} and {\bf A}$\rightarrow${\bf W}, the proposed model without regularization still outperforms DANN; this means that the improvement comes not only from the regularization but also from the modification of the architecture. Besides the performance improvement, we analyze how the regularization term affects the gradients during training. Because displaying the gradient of every parameter is impossible, we calculate $||\nabla_{\theta}{\cal L}(\theta)||$ to capture an overall statistic, a metric adopted in~\cite{arjovsky2017wasserstein}. We record $||\nabla_{\theta}{\cal L}(\theta)||$ of the model with and without the regularization term on Office-31. Moreover, we record the minimum, maximum and standard deviation of $||\nabla_{\theta}{\cal L}(\theta)||$ during the training period. $||\nabla_{\theta}{\cal L}(\theta)||$ in {\bf D}$\rightarrow${\bf A}, {\bf W}$\rightarrow${\bf A}, {\bf A}$\rightarrow${\bf W} and {\bf A}$\rightarrow${\bf D} is drawn in Fig.~\ref{fig:grad}, and the related statistics are shown in Table~\ref{tab:table4}.
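This statistic is straightforward to compute; a minimal sketch (function name ours, purely illustrative):
\begin{verbatim}
import torch

def global_grad_norm(model):
    # l2 norm of the gradient stacked over all model parameters,
    # i.e. the statistic ||grad_theta L(theta)|| recorded above.
    total = 0.0
    for p in model.parameters():
        if p.grad is not None:
            total += p.grad.detach().pow(2).sum().item()
    return total ** 0.5
\end{verbatim}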
Please note that even if the vanishing gradient issue occurs during training, $||\nabla_{\theta}{\cal L}(\theta)||$ will not be very close to zero, because it involves the gradients of all parameters. According to Fig.~\ref{fig:grad}, especially Fig.~\ref{fig:subfig:gAW}, (b) and (d), we can see that the gradients of ARTN without a regularization term are more prone to instability: there are more extremely large gradients in ARTN without the regularization term. The statistics are shown in Table~\ref{tab:table4}. In all four tasks, ARTN with the regularization term has a smaller standard deviation than ARTN without it, which directly indicates that the regularization term promotes the stability of adversarial training in our model. In detail, the maximum and minimum gradients are also recorded. We find that the maximum gradients of ARTN with the regularization term are smaller than those of ARTN without it in all four tasks. Meanwhile, except in {\bf W}$\rightarrow${\bf A}, the gap between the maximum and minimum gradients suggests that ARTN with the regularization term is more stable than ARTN without it by a large margin. This can also be observed in Fig.~\ref{fig:grad}. Thus, we consider the stabilizing effect of the proposed regularization term on the adversarial training of our model to be verified by this experiment. Why the {\bf W}$\rightarrow${\bf A} case is an exception remains to be explored in future research. \begin{table}[!t] \renewcommand{\arraystretch}{1.3} \centering \caption{Statistics of $||\nabla_{\theta}{\cal L}(\theta)||$, recorded for the models with and without the regularization term on Office-31. In each row, the upper line corresponds to ARTN with regularization and the lower line to ARTN without regularization.} \label{tab:table4} \begin{tabular}{lcccc} \hline\hline Method & A$\rightarrow$W & A$\rightarrow$D & W$\rightarrow$A & D$\rightarrow$A\\ \hline \multirow{2}{*}{max}& 14.51& 15.88&4.64 & 6.16\\ & 18.50&20.35 &4.81 &7.02 \\ \hline \multirow{2}{*}{min}&3.24 &2.96 &2.07 &2.32 \\ &3.17 &3.04 &2.20 &2.32 \\ \hline \multirow{2}{*}{max-min}&11.27 &12.92 &2.57 &3.84 \\ &15.33 &17.31 &2.61 &4.70 \\ \hline \multirow{2}{*}{std}&1.25 &1.36 &0.38 &0.59\\ &1.30 &1.37 &0.40 &0.61\\ \hline\hline \end{tabular} \end{table} \begin{figure} \centering \subfigure[A$\rightarrow$W]{ \label{fig:subfig:gAW} \includegraphics[width=0.48\columnwidth]{grad_A_W_30.png}} \subfigure[A$\rightarrow$D]{ \label{fig:subfig:gAD} \includegraphics[width=0.48\columnwidth]{grad_A_D_30.png}} \subfigure[W$\rightarrow$A]{ \label{fig:subfig:gWA} \includegraphics[width=0.48\columnwidth]{grad_W_A_30.png}} \subfigure[D$\rightarrow$A]{ \label{fig:subfig:gDA} \includegraphics[width=0.48\columnwidth]{grad_D_A_30.png}} \caption{$||\nabla_{\theta}{\cal L}(\theta)||$ in tasks {\bf D}$\rightarrow${\bf A}, {\bf W}$\rightarrow${\bf A}, {\bf A}$\rightarrow${\bf W} and {\bf A}$\rightarrow${\bf D}. The red line corresponds to our model without the regularization term, while the blue line corresponds to our model with the regularization term.} \label{fig:grad} \end{figure} \begin{figure} \centering \subfigure[A$\rightarrow$W]{ \label{fig:subfig:AW} \includegraphics[width=0.48\columnwidth]{A_W.jpg}} \subfigure[A$\rightarrow$D]{ \label{fig:subfig:AD} \includegraphics[width=0.48\columnwidth]{A_D.jpg}} \subfigure[W$\rightarrow$A]{ \label{fig:subfig:WA} \includegraphics[width=0.48\columnwidth]{W_A.jpg}} \subfigure[D$\rightarrow$A]{ \label{fig:subfig:DA} \includegraphics[width=0.48\columnwidth]{D_A.jpg}} \caption{Sensitivity to $\lambda$ in tasks A$\rightarrow$W, A$\rightarrow$D, W$\rightarrow$A, and D$\rightarrow$A. Dashed lines show the results of the adaptation-free method.} \label{fig:sens} \end{figure} \subsection{Parameter Sensitivity} In this experiment, we investigate how the parameter $\lambda$ affects the performance of our model. To make the results convincing, we test our model on tasks {\bf A}$\rightarrow${\bf W}, {\bf A}$\rightarrow${\bf D}, {\bf W}$\rightarrow${\bf A}, and {\bf D}$\rightarrow${\bf A} and record the variation of the transfer classification performance as $\lambda\in\{0.4, 0.5, 0.6, 0.7, 0.8, 0.9\}$. Note that the other settings are the same as those of the image classification experiment. Fig.~\ref{fig:sens} gives a detailed illustration. The results in three of the four tasks, {\bf A}$\rightarrow${\bf W}, {\bf A}$\rightarrow${\bf D} and {\bf W}$\rightarrow${\bf A}, exhibit the same trend: the accuracy of ARTN is almost stable as $\lambda$ varies. Only in {\bf D}$\rightarrow${\bf A} does the accuracy fluctuate slightly with $\lambda$. Moreover, within the range of our settings, ARTN is always better than the model without adaptation and also better than most methods in Table~\ref{tab:table2}. This confirms that ARTN is robust as $\lambda$ changes, which means the proposed method does not require subtle hyperparameter tuning. \section{Conclusion} \label{section5} We propose a novel unsupervised domain adaptation model based on adversarial learning. Different from previous adversarial adaptation models, which rely on extracting domain-invariant representations, our model adds a feature-shared transform network that directly maps features from the source domain to the space of target features. Furthermore, we add a regularization term to strengthen its performance. Experimental results clearly demonstrate that the proposed model can match different domains effectively and is comparable with state-of-the-art methods. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} The duality between the $W_N$ minimal model conformal field theories and the higher spin theory of Vasiliev on $AdS_3$ was proposed by Gaberdiel and Gopakumar in \cite{GG}. Very recently, in \cite{GG1}, this proposal was clarified further; they claim that the $W_N$ minimal model conformal field theory is dual, in the 't Hooft $\frac{1}{N}$ expansion, to the higher spin theory coupled to one complex scalar. The duality can hold at finite $N$ because of the nontrivial truncation of the quantum algebra of the higher spin theory. In \cite{CHR}, the ${\cal N}=2$ supersymmetric extension of \cite{GG}, the higher spin $AdS_3$ supergravity, was studied, where the dual conformal field theory is given by the ${\cal N}=2$ ${\bf CP}^N$ Kazama-Suzuki (KS) model in two dimensions. The supergravity partition function was computed and agrees with the partition function from the superconformal field theory side. Moreover, this superconformal partition function of the KS model in the 't Hooft limit is described in detail in \cite{CG}. Recently, in \cite{HP}, the asymptotic symmetry of the higher spin $AdS_3$ supergravity was obtained, following the work of \cite{GH}, and one of the nontrivial checks of the duality \cite{CHR} is to identify the operator product expansions between the lower higher spin currents in the KS model, in the 't Hooft limit, with the corresponding algebra in the classical ${\cal N}=2$ ${\cal W}_{\infty}^{\rm{cl}}[\lambda]$ algebra, where $\lambda$ is a free parameter, of the higher spin $AdS_3$ supergravity. Some time ago, Kazama and Suzuki \cite{KSNPB,KSPLB} found a new class of unitary ${\cal N}=2$ superconformal field theories via the coset space method. They classified the Hermitian symmetric spaces and the Virasoro central charges of the associated ${\cal N}=2$ superconformal field theories. Moreover, Hull and Spence \cite{HS} studied the ${\cal N}=2$ supersymmetric extension of the Kac-Moody algebra in ${\cal N}=2$ superspace. It turns out that the operator product expansions between the ${\cal N}=2$ currents are nonlinear, and this fact reproduces exactly the conditions of \cite{KSNPB,KSPLB}. Romans \cite{Romans} found the ${\cal N}=2$ ${\cal W}_3$ algebra, where the higher spin multiplet has spins $(2, \frac{5}{2}, \frac{5}{2}, 3)$ \footnote{We will use this notation for the spins of an ${\cal N}=2$ multiplet. The first and last entries are bosonic currents, while the middle ones are fermionic. The spin contents we deal with for the multiplets in this paper take the form $(s, s+\frac{1}{2}, s+ \frac{1}{2}, s+1)$, where the spin $s$ is an integer, $s= 1, 2, \cdots$. The lowest case, $(1, \frac{3}{2}, \frac{3}{2}, 2)$, corresponds to the usual ${\cal N}=2$ stress energy tensor. For $s \geq 2$, one has higher spin currents. For the ${\cal N}=2$ ${\cal W}_{N+1}$ algebra, the highest spin is $s_{max}=N$ and there are $N$ multiplets whose first-component spins are $s =1, 2, \cdots, N$. }. One of the discrete series for the central charge matches the central charge of the KS model on the ${\bf CP}^2$ coset. See also the work of \cite{Odake}. By applying the ${\cal N}=2$ current algebra of \cite{HS,RASS} to the supersymmetric WZW conformal field theory, the explicit ${\cal N}=2$ ${\cal W}_3$ current with spins $(2, \frac{5}{2}, \frac{5}{2}, 3)$ in the above ${\bf CP}^2$ KS model was found in \cite{Ahn94}. The free field realization was discussed in \cite{Ahn93}.
Moreover, the ${\cal N}=2$ ${\cal W}_4$ algebra was constructed in \cite{BW} by adding one more higher spin current with spins $(3, \frac{7}{2}, \frac{7}{2},4)$, and they predicted the self-coupling constant of the lowest higher spin current above, which is valid for any ${\cal N}=2$ ${\cal W}_{N+1}$ algebra. In this paper, we would like to examine the $AdS_3/CFT_2$ correspondence initiated by \cite{CHR} in more detail in the context of the supersymmetric WZW model. In contrast to the purely bosonic case, where the operator product expansion between the spin $3$ current and itself does not contain the spin $3$ current on the right hand side, the ${\cal N}=2$ supersymmetric model has an operator product expansion between the multiplet with spins $(2, \frac{5}{2}, \frac{5}{2}, 3)$ and itself whose right hand side contains this multiplet itself. For the bosonic case, the spin $3$ current could, a priori, occur in the $\frac{1}{(z-w)^3}$ term. However, due to the symmetry of the operator product expansion of this current with itself, one can also obtain the same operator product expansion by interchanging the arguments $z$ and $w$, and it turns out that the same spin $3$ current then appears in the above singular term but with a minus sign; it therefore vanishes automatically. This implies that a nontrivial self-coupling in the bosonic case first occurs for spin $4$, where the right hand side contains the spin $4$ current in the $\frac{1}{(z-w)^4}$ singular term, and no such vanishing condition arises because of the even power of this singular term \footnote{ \label{foot} Some time ago, the self-coupling constant for the spin $4$ current was obtained in \cite{Hornfeck} for the $W_N$ minimal model using the free field realization. It depends on the central charge $c$ and on $N$ explicitly. See also the recent paper by Gaberdiel and Gopakumar \cite{GG1}, where one can find other relevant papers. As far as I know, so far there is no direct construction of this self-coupling constant from the operator product expansion between the $SU(N)$ Casimir spin $4$ operator \cite{Ahn2011} and itself. It would be interesting to study this feature, although the computations involved are substantial.}. What happens if there are ${\cal N}=2$ supersymmetries in two dimensions? One sees the presence of a self-coupling even in the operator product expansion of the lowest higher spin current. One simple example is the operator product expansion of the spin $2$ current with itself, the first component of the above multiplet $(2, \frac{5}{2}, \frac{5}{2}, 3)$. One can analyze this situation as above: the spin $2$ current occurs in the $\frac{1}{(z-w)^2}$ term on the right hand side and, due to the even power of this singular term, survives there nontrivially. It is easy to see that there are no other self-coupling terms besides this spin $2$-spin $2$ operator product expansion. Note that, besides the usual spin $2$ stress energy tensor, the higher spin multiplet of the ${\cal N}=2$ KS ${\bf CP}^N$ model contains another spin $2$ current. Of course, the spin $3$-spin $3$ operator product expansion could in principle generate a self-coupling constant term, but as explained in the previous paragraph, it does not give a self-interacting term.
Fortunately, it is known \cite{BW} that the self-coupling constant of the spin $2$ current in the ${\cal N}=2$ ${\cal W}_{N+1}$ algebra depends on $N$ and $k$ explicitly, as in the bosonic case (footnote \ref{foot}). As $N$ increases, one expects new primary fields to appear on the right hand side of the operator product expansion. For example, the spin $3$-spin $3$ operator product expansion in the $SU(N)$ Casimir algebra leads to another spin $4$ current and its descendants on the right hand side \cite{BBSS1,BBSS2}. The relative coefficient functions appearing in the descendant fields of a given primary field are also fixed by conformal invariance. In section 2, we rewrite the ${\cal N}=2$ current algebra in terms of the currents living in the subgroup $H$ and the currents living in the coset $\frac{G}{H}$ separately. The constraints for the currents are rewritten similarly. The Sugawara stress energy tensor is given in terms of the currents, linearly or quadratically. For $N=2$, we describe the lowest higher spin current with explicit group index contractions, and the corresponding self-coupling constant is given in terms of the central charge or the level. For $N=4$, most of the material is new. We also present the lowest higher spin current in terms of composite Kac-Moody currents and explain the overall normalization constant, which depends on either the level $k$ or the central charge. We also observe the presence of a new primary current with spins $(3, \frac{7}{2}, \frac{7}{2}, 4)$ whose structure is fixed by conformal invariance. For general $N$, we note that the self-coupling constant for arbitrary $N$ was determined from unitarity arguments in \cite{BW}. In section 3, we take the large $(N,k)$ limit of the operator product expansion between the lowest higher spin current and itself in the context of the ${\cal N}=2$ ${\cal W}_{N+1}$ algebra. In section 4, based on section 3, we compare the result of section 3 with the classical ${\cal N}=2$ ${\cal W}_{\infty}^{\rm{cl}}[\lambda]$ algebra developed in \cite{HP}. We present the three bosonic operator product expansions. At linear order, one sees agreement between the boundary and bulk theories. In section 5, we summarize what we have found in this paper and comment on future directions. In the Appendix, we describe some of the details discussed in sections $2$, $3$, and $4$. There exist related works in \cite{Vasiliev}-\cite{GGHR}, along the lines of \cite{GG}. \section{The ${\cal N}=2$ current algebra, Kazama-Suzuki coset model and ${\cal W}_{N+1}$ algebra} Let us consider the Hermitian symmetric space, where the complex structure is preserved \footnote{Following the procedure in \cite{CG}, the ${\cal N}=1$ supersymmetric coset can be written in terms of the bosonic coset \cite{CHR,CG,HP} by introducing an $SO(2N)$ factor in the numerator, related to the free fermions. } \begin{eqnarray} {\bf CP}^N = \frac{SU(N+1)}{SU(N) \times U(1)}. \label{cosetcpn} \end{eqnarray} Let $G=SU(N+1)$ be an even-dimensional Lie group with complex structure, and let $H=SU(N) \times U(1)$ be an even-dimensional subgroup. This implies that $N$ should be even.
We introduce a complex basis for the Lie algebra in which the complex structure is diagonal, and we label the group generators by indices $A$ and $\bar{A}$, where $A=1, 2, \cdots, \frac{1}{2} \mbox{dim} \, G = \frac{1}{2} [(N+1)^2-1]$ (and similarly $ \bar{A} = \bar{1}, \bar{2}, \cdots, \overline{\frac{1}{2} \mbox{dim} \, G} = \overline{\frac{1}{2} [(N+1)^2-1]}$). For Hermitian generators, one has $T_{\bar{A}} = T_A^{\dagger}$, and the structure constants appear in the standard commutation relations $[T_A, T_B] = f_{AB}^{\;\;\;\;C} T_C, [T_A, T_{\bar{B}}] = f_{A\bar{B}}^{\;\;\;\;C} T_C+f_{A\bar{B}}^{\;\;\;\;\bar{C}} T_{\bar{C}}$ and $[T_{\bar{A}}, T_{\bar{B}}] = f_{\bar{A}\bar{B}}^{\;\;\;\;\bar{C}} T_{\bar{C}}$. In other words, the structure constants $f_{AB}^{\;\;\;\;\bar{C}}$ and $f_{\bar{A}\bar{B}}^{\;\;\;\;C}$ vanish. Furthermore, one has $\mbox{Tr} (T_A T_B)=0$, $\mbox{Tr} (T_A T_{\bar{B}})=\delta_{A\bar{B}}$, and $\mbox{Tr} (T_{\bar{A}} T_{\bar{B}})=0$. Then the ${\cal N}=2$ current algebra can be described by the ${\cal N}=2$ currents $Q^A(Z)$ and $Q^{\bar{A}}(Z)$ with nonlinear constraints, where $Z$ stands for the ${\cal N}=2$ superspace coordinates: one real bosonic coordinate $z$ and a pair of conjugate Grassmann coordinates $\theta, \bar{\theta}$, i.e. $Z=(z, \theta, \bar{\theta})$. We consider the chiral currents, which are annihilated by $D_{-}$ and $\overline{D}_{-}$, and for simplicity we write $D$ for $D_{+}$ and $\overline{D}$ for $\overline{D}_{+}$. We present the ${\cal N}=2$ current algebra in Appendix $A$. In order to obtain the generalization of the Sugawara construction, it is convenient to decompose the group $G$ indices into the subgroup $H$ indices and the coset $\frac{G}{H}$ indices explicitly. Let the lower case indices $m, n, p, \cdots $, running from $1$ to $\frac{N^2}{2}$, refer to the Lie algebra of $H$, and the lower case indices $a, b, c, \cdots$, running from $\frac{N^2}{2}+1$ to $\frac{1}{2} \left[ (N+1)^2-1 \right]$, refer to the remaining Lie algebra generators corresponding to the coset $\frac{G}{H}$. The complex conjugated indices $\bar{m}, \bar{n}, \bar{p}, \cdots $ and $\bar{a}, \bar{b}, \bar{c}, \cdots$ behave similarly. That is, \begin{eqnarray} m, n, p, \cdots & = & 1, 2, 3, \cdots, \frac{N^2}{2}, \qquad a, b, c, \cdots = \frac{N^2}{2}+1, \cdots, \frac{1}{2} \left[ (N+1)^2-1 \right], \nonumber \\ \bar{m}, \bar{n}, \bar{p}, \cdots & = & \bar{1}, \bar{2}, \bar{3}, \cdots, \overline{\frac{N^2}{2}}, \qquad \bar{a}, \bar{b}, \bar{c}, \cdots = \overline{\frac{N^2}{2}+1}, \cdots, \overline{\frac{1}{2} \left[ (N+1)^2-1 \right]}. \label{indices} \end{eqnarray} The indices $A, B, C, \cdots$ of the group $G$ are thus split into $m, n, p, \cdots$ of the subgroup $H$ and $a, b, c, \cdots$ of the coset $\frac{G}{H}$. For the currents $Q^A(Z)$ and $Q^{\bar{A}}(Z)$, one uses $J^a(Z), J^{\bar{a}}(Z)$, which live in the coset $\frac{G}{H}$, and $K^{m}(Z), K^{\bar{m}}(Z)$, which live in the subgroup $H$: \begin{eqnarray} Q^A(Z), Q^{\bar{A}}(Z) \rightarrow K^{m}(Z), \,\,\, K^{\bar{m}}(Z), \,\,\, J^a(Z), \,\,\, J^{\bar{a}}(Z). \label{newfields} \end{eqnarray} Then the original operator product expansions (\ref{OPEQQ}) can be re-expressed in terms of the currents (\ref{newfields}), in which the subgroup and coset index structures are manifest.
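As a quick consistency check of this bookkeeping, the number of holomorphic coset indices $a$ is $\frac{1}{2}[(N+1)^2-1]-\frac{N^2}{2}=N$, matching $\mbox{dim}(\frac{G}{H})=2N$ real dimensions once the conjugate indices $\bar{a}$ are included. A Python/sympy sketch (purely illustrative) verifies this:
\begin{verbatim}
from sympy import symbols, simplify

N = symbols('N', positive=True)
n_G = ((N + 1)**2 - 1) / 2   # holomorphic indices A of G = SU(N+1)
n_H = N**2 / 2               # holomorphic indices m of H = SU(N) x U(1)
assert simplify(n_G - n_H - N) == 0   # N holomorphic coset indices a
\end{verbatim}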
The ten operator product expansions between these currents (all possibilities among the four currents) are \begin{eqnarray} K^m (Z_{1}) K^n (Z_{2}) & = & -\frac{\bar{\theta}_{12}}{z_{12}} f_{\bar{m}\bar{n}}^{\;\;\;\;\bar{p}} K^p(Z_2) -\frac{\theta_{12} \bar{\theta}_{12}}{z_{12}} \frac{1}{(k+N+1)} f_{\bar{m} r}^{\;\;\;\;\bar{p}} f_{\bar{n} \bar{r}}^ {\;\;\;\;\bar{q}} K^p K^q(Z_2) +\cdots, \nonumber \\ J^a (Z_{1}) J^b (Z_{2}) & = & -\frac{\theta_{12} \bar{\theta}_{12}}{z_{12}} \frac{1}{(k+N+1)} f_{\bar{a} m}^{\;\;\;\;\bar{c}} f_{\bar{b} \bar{m}}^ {\;\;\;\;\bar{d}} J^c J^d(Z_2)+\cdots, \nonumber \\ K^m (Z_{1}) J^a (Z_{2}) & = & -\frac{\bar{\theta}_{12}}{z_{12}} f_{\bar{m} \bar{a}}^{\;\;\;\;\bar{b}} J^b(Z_2) -\frac{\theta_{12} \bar{\theta}_{12}}{z_{12}} \frac{1}{(k+N+1)} f_{\bar{m} n}^{\;\;\;\;\bar{p}} f_{\bar{a} \bar{n}}^ {\;\;\;\;\bar{b}} K^p J^b(Z_2)+\cdots, \nonumber \\ K^{\bar{m}} (Z_{1}) K^{\bar{n}} (Z_{2}) & = & -\frac{\theta_{12}}{z_{12}} f_{m n}^{\;\;\;\;p} K^{\bar{p}}(Z_2) +\frac{\theta_{12} \bar{\theta}_{12}}{z_{12}} \frac{1}{(k+N+1)} f_{m \bar{p}}^{\;\;\;\;q} f_{n p}^ {\;\;\;\;r} K^{\bar{q}} K^{\bar{r}}(Z_2)+\cdots, \nonumber \\ J^{\bar{a}} (Z_{1}) J^{\bar{b}} (Z_{2}) & = & \frac{\theta_{12} \bar{\theta}_{12}}{z_{12}} \frac{1}{(k+N+1)} f_{a \bar{m}}^{\;\;\;\;c} f_{b m}^ {\;\;\;\;d} J^{\bar{c}} J^{\bar{d}}(Z_2)+\cdots, \nonumber \\ K^{\bar{m}} (Z_{1}) J^{\bar{a}} (Z_{2}) & = & -\frac{\theta_{12}}{z_{12}} f_{m a}^{\;\;\;\;b} J^{\bar{b}}(Z_2) +\frac{\theta_{12} \bar{\theta}_{12}}{z_{12}} \frac{1}{(k+N+1)} f_{m \bar{p}}^{\;\;\;\;n} f_{a p}^ {\;\;\;\;b} K^{\bar{n}} J^{\bar{b}}(Z_2)+\cdots, \nonumber \\ K^m (Z_{1}) K^{\bar{n}} (Z_{2}) & = & \frac{\theta_{12} \bar{\theta}_{12}}{z_{12}^2} \frac{1}{2} \left[(k+N+1) \delta^ {m \bar{n}} + f_{\bar{m} p}^{\;\;\;\;\bar{q}} f_{n \bar{p}}^{\;\;\;\;q} \right] -\frac{1}{z_{12}} (k+N+1) \delta^{m \bar{n}} \nonumber \\ &- & \frac{\bar{\theta}_{12}}{z_{12}} f_{\bar{m} n}^{\;\;\;\; p} K^{\bar{p}}(Z_2) - \frac{\theta_{12}}{z_{12}} f_{\bar{m} n}^{\;\;\;\; \bar{p}} K^{p}(Z_2) \nonumber \\ &- & \frac{\theta_{12} \bar{\theta}_{12}}{z_{12}} \left[ f_{\bar{m} n}^{\;\;\;\;\bar{p}} \overline{D} K^p + \frac{1}{(k+N+1)} f_{\bar{m} p}^{\;\;\;\;\bar{q}} f_{n \bar{p}}^{\;\;\;\;r} K^q K^{\bar{r}} \right](Z_2) + \cdots, \nonumber \\ K^m (Z_{1}) J^{\bar{a}} (Z_{2}) & = & \frac{\theta_{12} \bar{\theta}_{12}}{z_{12}^2} \frac{1}{2} f_{\bar{m} n}^{\;\;\;\;\bar{p}} f_{a \bar{n}}^{\;\;\;\;p} - \frac{\bar{\theta}_{12}}{z_{12}} f_{\bar{m} a}^{\;\;\;\;b} J^{\bar{b}}(Z_2) \nonumber \\ & - & \frac{\theta_{12} \bar{\theta}_{12}}{z_{12}} \frac{1}{(k+N+1)} f_{\bar{m} p}^{\;\;\;\;\bar{q}} f_{a \bar{p}}^{\;\;\;\;b} K^q J^{\bar{b}}(Z_2) + \cdots, \nonumber \\ J^a (Z_{1}) K^{\bar{m}} (Z_{2}) & = & - \frac{\theta_{12}}{z_{12}} f_{\bar{a} m}^{\;\;\;\;\bar{b}} J^{b}(Z_2) \nonumber \\ &- & \frac{\theta_{12} \bar{\theta}_{12}}{z_{12}} \left[ f_{\bar{a} m}^{\;\;\;\;\bar{b}} \overline{D} J^b + \frac{1}{(k+N+1)} f_{\bar{a} p}^{\;\;\;\;\bar{b}} f_{m \bar{p}}^{\;\;\;\;n} J^b K^{\bar{n}} \right](Z_2) + \cdots, \nonumber \\ J^a (Z_{1}) J^{\bar{b}} (Z_{2}) & = & \frac{\theta_{12} \bar{\theta}_{12}}{z_{12}^2} \frac{1}{2} \left[(k+N+1) \delta^ {a \bar{b}} + f_{\bar{a} m}^{\;\;\;\;\bar{c}} f_{b \bar{m}}^{\;\;\;\;c} +f_{\bar{a} c}^{\;\;\;\;\bar{m}} f_{b \bar{c}}^{\;\;\;\;m} \right] \nonumber \\ &- & \frac{1}{z_{12}} (k+N+1) \delta^{a \bar{b}} - \frac{\theta_{12}}{z_{12}} f_{\bar{a} b}^{\;\;\;\;\bar{m}} K^{m}(Z_2)- \frac{\bar{\theta}_{12}}{z_{12}} f_{\bar{a} b}^{\;\;\;\;m} K^{\bar{m}}(Z_2) \nonumber \\ &- & \frac{\theta_{12} \bar{\theta}_{12}}{z_{12}} \left[ f_{\bar{a} b}^{\;\;\;\;\bar{m}} \overline{D} K^m + \frac{1}{(k+N+1)} \left( f_{\bar{a} m}^{\;\;\;\;\bar{c}} f_{b \bar{m}}^{\;\;\;\;d} J^c J^{\bar{d}} + f_{\bar{a} c}^{\;\;\;\;\bar{m}} f_{b \bar{c}}^{\;\;\;\;n} K^m K^{\bar{n}} \right) \right](Z_2) \nonumber \\ &+ & \cdots, \label{basicOPE} \end{eqnarray} where \footnote{There is a Mathematica package \cite{KT} for ${\cal N}=2$ superspace, but one cannot use it here because in our case the right hand side of the operator product expansion (\ref{basicOPE}) has a nonlinear structure; this is a limitation of the package. We thank S. Krivonos for pointing this out. However, from time to time we use this package in order to extract the component approach, and we work mainly with \cite{Thielemans} for the $N=4$ case. } the complex spinor covariant derivatives are given by \begin{eqnarray} D =\frac{\partial}{\partial \theta}-\frac{1}{2} \overline {\theta} \frac{\partial}{\partial z}, \qquad \overline{D} =\frac{\partial}{\partial \overline{\theta}}-\frac{1}{2} \theta \frac{\partial}{\partial z}, \label{DDbar} \end{eqnarray} and they satisfy the algebra \begin{eqnarray} D \overline{D} + \overline{D} D \equiv \{ D, \overline{D} \}=-\frac{\partial}{\partial z}. \label{anticomm} \end{eqnarray} We also use the simplified notation \begin{eqnarray} {\theta}_{12}={\theta}_{1}-{\theta}_{2}, \qquad {\overline{\theta}}_{12}={\overline{\theta}}_{1}-{\overline{\theta}}_{2}, \qquad z_{12}=z_{1}-z_{2}+\frac{1}{2}({\theta}_{1} {\overline{\theta}}_{2} + {\overline{\theta}}_{1} {\theta}_{2}). \label{thetathetabar} \end{eqnarray} In the first equation of (\ref{basicOPE}), the property $f_{\bar{m} \bar{n}}^{\;\;\;\;\bar{a}}=0=f_{ab}^{\;\;\;\;m}$ is used. In the second equation, one also uses $f_{\bar{a} \bar{b}}^{\;\;\;\;\bar{m}}=0= f_{\bar{a}\bar{b}}^{\;\;\;\;\bar{c}}=f_{m\bar{n}}^{\;\;\;\; a}$. One obtains the third equation after using $f_{m\bar{n}}^{\;\;\;\;a}=0=f_{ab}^{\;\;\;\;m}$. For the fourth through sixth equations, one uses similar properties of the structure constants, $ f_{m \bar{n}}^{\;\;\;\;\bar{a}}=0= f_{m n}^{\;\;\;\;a}=f_{ab}^{\;\;\;\;c}$, together with the above vanishing structure constants. The identity $f_{a\bar{b}}^{\;\;\;\;c}=0$ is used in the remaining equations. In Appendix $B$, we present the component operator product expansions for (\ref{basicOPE}). One can rewrite the constraints (\ref{constraint}), by splitting the $G$-indices into $H$-indices and $\frac{G}{H}$-indices as above, as \begin{eqnarray} D K^m(Z) & = & -\frac{1}{2(k+N+1)} f_{\bar{m} n}^{\;\;\;\;\bar{p}} K^n K^p(Z), \nonumber \\ D J^a(Z) & = & -\frac{1}{(k+N+1)} f_{\bar{a} b}^{\;\;\;\;\bar{m}} J^b K^m(Z), \nonumber \\ \overline{D} K^{\bar{m}} (Z)& =& -\frac{1}{2(k+N+1)} f_{m \bar{n}}^{\;\;\;\;p} K^{\bar{n}} K^{\bar{p}}(Z), \nonumber \\ \overline{D} J^{\bar{a}}(Z) & = & -\frac{1}{(k+N+1)} f_{a \bar{b}}^{\;\;\;\;m} J^{\bar{b}} K^{\bar{m}}(Z), \label{Constraints} \end{eqnarray} where one uses $f_{a\bar{b}}^{\;\;\;\;\bar{c}}=0=f_{mn}^{\;\;\;\;a}=f_{ab}^{\;\;\;\;m}= f_{\bar{a}\bar{b}}^{\;\;\;\;\bar{m}}$. For example, the $\theta$- and $\bar{\theta}$-independent terms on the left hand side can be obtained from the corresponding quantities on the right hand side of (\ref{Constraints}). Note that the unconstrained ${\cal N}=2$ currents have too many components, and we have to impose constraints in order to preserve the number of independent ${\cal N}=1$ currents \cite{HS}.
As we will see from the explicit component currents, the unconstrained ${\cal N}=1$ affine Kac-Moody currents (or their component currents) are relocated into the component currents in an extended ${\cal N}=2$ superspace. One also obtains, from (\ref{anticomm}), \begin{eqnarray} \left[ D, \overline{D} \right] K^m(Z) &= & -\partial K^m(Z) +\frac{1}{(k+N+1)} f_{\bar{m} n}^{\;\;\;\;\bar{p}} \left( \overline{D} K^n K^p- K^n \overline{D} K^p \right)(Z), \nonumber \\ \left[D, \overline{D} \right] K^{\bar{m}}(Z) &= & \partial K^{\bar{m}}(Z) - \frac{1}{(k+N+1)} f_{m \bar{n}}^{\;\;\;\;p} \left( D K^{\bar{n}} K^{\bar{p}}- K^{\bar{n}} D K^{\bar{p}} \right)(Z), \nonumber \\ \left[ D, \overline{D} \right] J^a(Z) &= & -\partial J^a(Z) +\frac{2}{(k+N+1)} f_{\bar{a} b}^{\;\;\;\;\bar{m}} \left( \overline{D} J^b K^m- J^b \overline{D} K^m \right)(Z), \nonumber \\ \left[D, \overline{D} \right] J^{\bar{a}}(Z) &= & \partial J^{\bar{a}}(Z) - \frac{2}{(k+N+1)} f_{a \bar{b}}^{\;\;\;\;m} \left( D J^{\bar{b}} K^{\bar{m}}- J^{\bar{b}} D K^{\bar{m}} \right)(Z). \label{ConstraintsDDB} \end{eqnarray} In this case as well, the quantities on the left hand side are not independent, because they can be obtained from the quantities on the right hand side (cubic or linear terms), where all the derivative terms can be written in terms of quadratic currents via (\ref{Constraints}). The Sugawara stress energy tensor for the group $G=SU(N+1)$ can be written as \begin{eqnarray} T_G=-\frac{1}{(k+N+1)} \left[J^a J^{\bar{a}} + K^m K^{\bar{m}} -\left( f_{\bar{m} \bar{a}}^{\;\;\;\;\bar{a}} + f_{\bar{m} \bar{n}}^{\;\;\;\;\bar{n}} \right) D K^{\bar{m}}-\left( f_{m \bar{a}}^{\;\;\;\; \bar{a}} +f_{m \bar{n}}^{\;\;\;\;\bar{n}} \right) \overline{D} K^{m} \right]. \label{Gstress} \end{eqnarray} Note that there are terms linear in the currents as well as quadratic ones. As before, using the $H$-indices and $\frac{G}{H}$-indices in (\ref{Sugawara}) explicitly, the vanishing of the structure constants $f_{\bar{a}\bar{b}}^{\;\;\;\;\bar{c}}=0= f_{m\bar{n}}^{\;\;\;\;a}$ is used. For the metric, $\delta_{a\bar{m}}=0=\delta_{m\bar{a}}$. Similarly, the stress energy tensor for the subgroup $H=SU(N) \times U(1)$ can be written as \begin{eqnarray} T_H(Z)=-\frac{1}{(k+N+1)} \left[K^m K^{\bar{m}}(Z)-f_{\bar{m} \bar{n}}^{\;\;\;\; \bar{n}} D K^{\bar{m}}(Z)-f_{m \bar{n}}^{\;\;\;\;\bar{n}} \overline{D} K^{m}(Z) \right]. \label{Hstress} \end{eqnarray} Then the stress energy tensor $T(Z)$ for the supersymmetric coset model based on the ${\cal N}=2$ ${\bf CP}^N$ model is obtained by taking the difference between (\ref{Gstress}) and (\ref{Hstress}): \begin{eqnarray} T(Z)=T_G(Z)-T_H(Z)=-\frac{1}{(k+N+1)} \left[J^a J^{\bar{a}}-f_{\bar{m} \bar{a}}^{\,\,\,\,\,\,\,\,\bar{a}} D K^{\bar{m}}-f_{m \bar{a}}^{\,\,\,\,\,\,\,\, \bar{a}} \overline{D} K^{m} \right](Z). \label{stress} \end{eqnarray} From the defining operator product expansions (\ref{basicOPE}) between the currents, one obtains the standard operator product expansion of the ${\cal N}=2$ superconformal algebra, together with (\ref{thetathetabar}), \begin{eqnarray} T (Z_{1}) T (Z_{2})= \frac{1}{z^{2}_{12}} \frac{c}{3} + \left[ \frac{\theta_{12} \bar{\theta}_{12}}{z_{12}^2} -\frac{\theta_{12}}{z_{12}} D +\frac{\bar{\theta}_{12}}{z_{12}} \overline{D} +\frac{\theta_{12} \bar{\theta}_{12}}{z_{12}} \partial \right] T (Z_2). \label{ttope} \end{eqnarray} One can easily check that there are no singular terms in the operator product expansions between the currents $K^m(Z_1), K^{\bar{m}}(Z_1)$ and the stress energy tensor $T(Z_2)$.
The corresponding central charge depends on $N$ and $k$ as follows \footnote{This can be described as $\frac{3 k_G}{2(k_G + \tilde{h}_G)} \mbox{dim} \left( \frac{G}{H} \right)$ \cite{KSNPB}, where $k_G=k$, $\tilde{h}_G=N+1$ is the dual Coxeter number of the group $G$, and $\mbox{dim} \left( \frac{G}{H} \right)=2N$.}: \begin{eqnarray} c(N,k) & = & c_G -c_H \nonumber \\ & = & \frac{3}{2} ((N+1)^2-1) \left[ 1-\frac{2(N+1)}{3(k+N+1)} \right]- \frac{3}{2} (N^2-1) \left[ 1-\frac{2 N}{3(k+1+N)} \right]-\frac{3}{2} \nonumber \\ & = & \frac{3 N k}{N+k+1}. \label{centralcharge} \end{eqnarray} Note that the coefficients of the stress energy tensors (\ref{Gstress}) and (\ref{Hstress}) are the same. This is a feature different from the coset construction of the bosonic theory, where the diagonal subalgebra exists and the coefficients of the various stress energy tensors differ. Also, the level of the group $SU(N+1)$ and the level of $SU(N)$ are the same: each $\frac{1}{(z-w)^2}$ term of the spin $1$-spin $1$ operator product expansion carries the same factor $(k+N+1)$. For the explicit form, see Appendix $B$. This shift to $(k+N+1)$, rather than $k$, arises from the ${\cal N}=1$ supersymmetrization. There are two requirements on the ${\cal N}=2$ current $W(Z)$ of generators with spins $(2, \frac{5}{2}, \frac{5}{2}, 3)$. 1) The operator product expansions between the $H$-currents $K^m(Z), K^{\bar{m}}(Z)$ and the $\frac{G}{H}$-current $W(Z)$ should not contain any singular terms: \begin{eqnarray} K^m(Z_1) W(Z_2) =0, \qquad K^{\bar{m}}(Z_1) W(Z_2)=0. \label{KWvanishing} \end{eqnarray} In practice, one works in the component approach and, due to the constraints (\ref{Constraints}), only some of the operator product expansions (among the $16$ for each case in (\ref{KWvanishing})) need to be checked before the coefficient functions appearing in the unknown higher spin current $W(Z)$ are determined completely, up to the overall constant. 2) The current $W(Z)$, with vanishing $U(1)$ charge, is an ${\cal N}=2$ primary field under the stress energy tensor (\ref{stress}): \begin{eqnarray} T (Z_{1}) W (Z_{2})= \left[ \frac{\theta_{12} \bar{\theta}_{12}}{z_{12}^2} 2 -\frac{\theta_{12}}{z_{12}} D +\frac{\bar{\theta}_{12}}{z_{12}} \overline{D} +\frac{\theta_{12} \bar{\theta}_{12}}{z_{12}} \partial \right] W (Z_2). \label{TW} \end{eqnarray} Here there is no $\frac{1}{z_{12}}$ term, due to the vanishing $U(1)$ charge. The coefficient $2$ in $\frac{\theta_{12} \bar{\theta}_{12}}{z_{12}^2}$ gives the lowest spin of $W(Z)$. We present the component results for (\ref{TW}) in Appendix $C$. In general, there exist $\frac{1}{z_{12}^3}$-, $\frac{\theta_{12} \bar{\theta}_{12}}{z_{12}^3}$-, $\frac{\theta_{12}}{z_{12}^3}$-, and $\frac{\bar{\theta}_{12}}{z_{12}^3}$-terms with appropriate composite currents. Requirement 2) implies that these extra terms should vanish for the correct choice of coefficient functions.
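Before proceeding, we note that the simplification leading to the last line of (\ref{centralcharge}) is elementary but easy to get wrong; a quick symbolic check (a Python/sympy sketch, purely illustrative) confirms it:
\begin{verbatim}
from sympy import symbols, simplify, Rational

N, k = symbols('N k', positive=True)
# c_G and c_H as in the first two lines of (centralcharge);
# c_H includes the U(1) contribution 3/2.
c_G = Rational(3, 2)*((N + 1)**2 - 1)*(1 - 2*(N + 1)/(3*(k + N + 1)))
c_H = Rational(3, 2)*(N**2 - 1)*(1 - 2*N/(3*(k + N + 1))) + Rational(3, 2)
assert simplify(c_G - c_H - 3*N*k/(N + k + 1)) == 0
\end{verbatim}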
For the ${\cal N}=2$ currents, the component currents are given by \begin{eqnarray} K^m(Z) & = & K^m(z)+ \theta \,\, D K^m(z) + \bar{\theta} \,\, \overline{D} K^m(z)+ \theta \bar{\theta}\,\, (-1) \frac{1}{2} [ D, \overline{D} ] K^m(z), \nonumber \\ K^{\bar{m}}(Z) & = & K^{\bar{m}}(z)+ \theta \,\, D K^{\bar{m}}(z) +\bar{\theta} \,\, \overline{D} K^{\bar{m}}(z) +\theta \bar{\theta} \,\, (-1) \frac{1}{2} [ D, \overline{D} ] K^{\bar{m}}(z), \nonumber \\ J^a(Z) & = & J^a(z) +\theta \,\, D J^a(z) +\bar{\theta} \,\, \overline{D} J^a(z)+ \theta \bar{\theta} \,\, (-1) \frac{1}{2} [ D, \overline{D} ] J^a(z), \nonumber \\ J^{\bar{a}}(Z) & = & J^{\bar{a}}(z) +\theta \,\, D J^{\bar{a}}(z) +\bar{\theta} \,\, \overline{D} J^{\bar{a}}(z) +\theta \bar{\theta} \,\, (-1) \frac{1}{2} [ D, \overline{D} ] J^{\bar{a}}(z). \label{components} \end{eqnarray} Due to the constraints (\ref{Constraints}) and (\ref{ConstraintsDDB}), the $\theta$- and $\theta \bar{\theta}$-components of $K^m(Z)$ and $J^a(Z)$ are not independent but can be written in terms of the other, independent terms. Similarly, the $\bar{\theta}$- and $\theta \bar{\theta}$-components of $K^{\bar{m}}(Z)$ and $J^{\bar{a}}(Z)$ can be written in terms of the other independent terms. Let us emphasize how one applies the above two conditions 1) and 2). For the general ${\cal N}=2$ ${\cal W}_{N+1}$ algebra, we use them in ${\cal N}=2$ superspace, but for the fixed $N=4$ case we use the package \cite{Thielemans}, for which the component results are needed to obtain the operator product expansions. Therefore, one should apply the two conditions in the component approach. The component result for (\ref{TW}) is summarized in Appendix $C$. For the regularity condition given in 1), among the $16$ operator product expansions for each case, only half are independent, by the arguments around (\ref{components}). Once the regularity condition is checked for the independent components, condition 1) is satisfied automatically, by construction; one does not need to check the remaining half of the equations. For the stress energy tensor \footnote{Our notation corresponds to the one in \cite{Romans} as follows: $T(z) \leftrightarrow J_{ro}(z), D T(z) \leftrightarrow G_{ro}^{+}, \overline{D} T(z) \leftrightarrow G_{ro}^{-}$, and $-\frac{1}{2} [D, \overline{D}] T(z) \leftrightarrow T_{ro}(z)$.}, one has \begin{eqnarray} T(Z) & = & T(z)+ \theta \,\, D T(z) + \bar{\theta} \,\, \overline{D} T(z)+ \theta \bar{\theta}\,\, (-1) \frac{1}{2} [ D, \overline{D} ] T(z), \label{compstress} \end{eqnarray} where the component fields can be obtained from (\ref{stress}) by using the covariant derivatives (\ref{DDbar}) with (\ref{anticomm}) and setting $\theta, \bar{\theta}$ to zero. $T(z)$ is a $U(1)$ current of spin $1$, $D T(z)$ and $\overline{D} T(z)$ are fermionic currents of spin $\frac{3}{2}$, and $- \frac{1}{2} [ D, \overline{D} ] T(z)$ is the stress energy tensor of spin $2$. In the next subsections, we construct the higher spin currents explicitly: starting with the $N=2$ case, we then consider the $N=4$ case and generalize to arbitrary $N$, which corresponds to the ${\cal N}=2$ ${\cal W}_{N+1}$ algebra.
\subsection{The $N=2$ case: ${\bf CP}^2 (= \frac{SU(3)}{SU(2) \times U(1)})$ coset model} The ${\cal N}=2$ ${\cal W}_3$ algebra has one extra higher spin current with spins $(2, \frac{5}{2}, \frac{5}{2}, 3)$, as well as the ${\cal N}=2$ superconformal algebra (\ref{ttope}), where one can write down the following component currents explicitly \footnote{ Similarly, our fields correspond to the ones in \cite{Romans} as follows: $W(z) \leftrightarrow V_{ro}(z), D W(z) \leftrightarrow U_{ro}^{+}, \overline{D} W(z) \leftrightarrow U_{ro}^{-}$, and $-\frac{1}{2} [D, \overline{D}] W(z) \leftrightarrow W_{ro}(z)$.} \begin{eqnarray} W(Z) & = & W(z)+ \theta \,\, D W(z) + \bar{\theta} \,\, \overline{D} W(z)+ \theta \bar{\theta}\,\, (-1) \frac{1}{2} [ D, \overline{D} ] W(z). \label{n2W} \end{eqnarray} In this case, the number of independent WZW currents is $8$ and it is not so complicated to write down all the possible terms for the current (\ref{n2W}). However, the explicit form for this current in \cite{Ahn94} is not convenient for generalization to the arbitrary ${\cal N}=2$ ${\cal W}_{N+1}$ algebra because there are no contractions between the $SU(3)$ group indices. Given the result for the expression of (\ref{n2W}) in \cite{Ahn94}, one can write the equivalent expression as follows \footnote{There are $48$ nonzero structure constants.}: \begin{eqnarray} W(Z) & = & a_1 \, f_{\bar{p} a}^{\;\;\;\;b} f_{p m}^{\;\;\;\;n} J^a J^{\bar{b}} K^m K^{\bar{n}}(Z) + a_2 \, f_{p a}^{\;\;\;\;b} f_{\bar{p}m}^{\;\;\;\;n} J^a J^{\bar{b}} K^m K^{\bar{n}}(Z)+ a_3 \, J^a J^b J^{\bar{a}} J^{\bar{b}}(Z) \nonumber \\ & + & a_4 \, f_{\bar{m} a}^{\;\;\;\;b} J^{a} J^{\bar{b}} D K^{\bar{m}}(Z) + a_5 \, f_{m a}^{\;\;\;\;b} J^{a} J^{\bar{b}} \overline{D} K^{m}(Z) +a_6 \, f_{\bar{m} \bar{n}}^{\;\;\;\;\bar{n}} D K^{\bar{m}} J^a J^{\bar{a}}(Z) \nonumber \\ & + & a_7 \, f_{m \bar{n}}^{\;\;\;\;\bar{n}} \overline{D} K^{m} J^a J^{\bar{a}}(Z) +a_{8} \, \overline{D} J^a D J^{\bar{a}}(Z) + a_{9} \, \overline{D} K^m D K^{\bar{m}}(Z) \nonumber \\ & + & a_{10} \, J^a \partial J^{\bar{a}}(Z) + a_{11} \, \partial J^a J^{\bar{a}}(Z) + a_{12} \, K^m \partial K^{\bar{m}}(Z) + a_{13} \, \partial K^m K^{\bar{m}}(Z) \nonumber \\ & + & a_{14} \, J^a [D, \overline{D}] J^{\bar{a}}(Z) + a_{15} \, [D, \overline{D}] J^a J^{\bar{a}}(Z) +a_{16} \, K^m [D, \overline{D}] K^{\bar{m}}(Z) \nonumber \\ & + & a_{17} \, [D, \overline{D}] K^m K^{\bar{m}}(Z) +a_{18} \, D J^a \overline{D} J^{\bar{a}}(Z)+ a_{19} \, D K^m \overline{D} K^{\bar{m}}(Z) \nonumber \\ & + & a_{20} \, f_{m\bar{n}}^{\;\;\;\;\bar{n}} \partial \overline{D} K^{m}(Z) + a_{21} \, f_{\bar{m} \bar{n}}^{\;\;\;\;\bar{n}} \partial D K^{\bar{m}}(Z) + a_{22} \, f_{m \bar{p}}^{\;\;\;\;\bar{p}} f_{n\bar{q}}^{\;\;\;\;\bar{q}} \overline{D} K^m \overline{D} K^{n}(Z) \nonumber \\ & + & a_{23} \, f_{\bar{m} \bar{p}}^{\;\;\;\;\bar{p}} f_{\bar{n} \bar{q}}^{\;\;\;\;\bar{q}} D K^{\bar{m}} D K^{\bar{n}}(Z) + a_{24} \, f_{m \bar{p}}^{\;\;\;\;\bar{p}} f_{\bar{n} \bar{q}}^{\;\;\;\;\bar{q}} \overline{D} K^{m} D K^{\bar{n}}(Z), \label{n2terms} \end{eqnarray} where all the coefficient functions are presented in Appendix (\ref{coeffn2one}). This explicit structure (\ref{n2terms}) was obtained from the two conditions (\ref{KWvanishing}) and (\ref{TW}).
We also present the operator product expansion at the linearized level in Appendix $D$, where the right hand side contains the central charge \begin{eqnarray} c_{N=2} =\frac{6k}{k+3}, \label{n2central} \end{eqnarray} and the self-coupling constant \begin{eqnarray} \alpha_{N=2}^2= \frac{27(2-k)^2(1+k)^2}{(-1+k)(5+k)(3+2k)(-3+5k)} = -\frac{(3+c)^2 (-12+5 c)^2}{2 (-15+c) (-1+c) (6+c) (-3+2 c)}, \label{const1} \end{eqnarray} where we replace the level $k$ with the central charge $c$ (\ref{n2central}). Compared to the pure bosonic case (for example, the operator product expansion between the spin $3$ current and itself in terms of WZW currents), as noted in the introduction, the operator product expansion of $W(Z)$ with itself in ${\cal N}=2$ superspace or in the component approach has a self-coupling constant term on the right hand side. For the bosonic spin $3$ case, there is no spin $3$ current appearing in the $\frac{1}{(z-w)^3}$ term on the right hand side of the operator product expansion. One can easily see this by reversing the arguments of the operator product expansion and realizing that there would otherwise be an inconsistency. However, this is not true for the spin $4$ case. In general, the operator product expansion between the spin $4$ current and itself (in terms of WZW currents) in the $W_N$ algebra generates the spin $4$ current on the right hand side. It would be interesting to find the self-coupling constant for the spin $4$ current in the bosonic case. We present the operator product expansion in (\ref{open2linear}) at the linearized level. Note that the coefficient functions on the right hand side are characterized by the central charge $c$ and the self-coupling constant $\alpha$. One sees that the $\alpha$ dependence appears in the current $W(Z_2)$ and its descendant fields, while functions of the central charge appear in the remaining fields. \subsection{The $N=4$ case: ${\bf CP}^4 (=\frac{SU(5)}{SU(4) \times U(1)})$ coset model} \label{n4} Let us recall that the field contents of the ${\cal N}=2$ ${\cal W}_5$ algebra are given by the stress energy tensor with spins $(1, \frac{3}{2}, \frac{3}{2}, 2)$ and higher spin currents with spins $(2, \frac{5}{2}, \frac{5}{2}, 3), (3, \frac{7}{2}, \frac{7}{2}, 4)$, and $(4, \frac{9}{2}, \frac{9}{2}, 5)$ \footnote{If one considers the ${\cal N}=2$ ${\cal W}_4$ algebra, then the coset can be described as ${\bf CP}^3 =\frac{SU(4)}{SU(3) \times U(1)} =\frac{SU(4) \times U(1)}{SU(3) \times U(1) \times U(1)}=\frac{U(4)}{U(3) \times U(1)}$ by introducing extra $U(1)$'s in order to have even-dimensional groups $G$ and $H$ from (\ref{cosetcpn}). In principle, one can find the corresponding ${\cal N}=2$ current algebra with the $U(4)$ group in the complex basis. This should correspond to the work of \cite{BW}.}. How can one determine these currents in terms of the fundamental currents which live in the supersymmetric WZW model? As before, the stress energy tensor is given by (\ref{stress}). It is nontrivial to find the extra symmetry currents in the generalization of the Sugawara construction (the so-called Casimir construction) that includes the higher spin generators. Compared to the previous case, where there exist only $8$ independent fields, there exist $24$ independent fundamental WZW currents.
One way to write down the lowest higher spin current with spins $(2, \frac{5}{2}, \frac{5}{2}, 3)$ is to take into account all the possible terms (quartic, cubic, quadratic, and linear terms in the WZW currents (\ref{newfields})). The other way is to take the $N=2$ case (\ref{n2terms}) with arbitrary coefficient functions and apply the two conditions (\ref{KWvanishing}) and (\ref{TW}); however, this did not come out properly. By brute force, one should add other possible terms coming from \begin{eqnarray} TT(Z),\,\,\, \partial T(Z),\,\,\, [D, \overline{D}]T(Z),\,\,\, T_H T_H(Z), \,\,\,\partial T_H(Z), \,\,\,[D, \overline{D}] T_H(Z), \,\,\, T T_H(Z), \label{spin2contents} \end{eqnarray} where $T(Z)$ is given by (\ref{stress}) and $T_H(Z)$ is given by (\ref{Hstress}). In other words, one looks at the explicit expressions (\ref{spin2contents}), collects the independent terms, and adds these to the expression (\ref{n2terms}). Finally, it turns out that the correct higher spin current with spins $(2, \frac{5}{2}, \frac{5}{2}, 3)$, satisfying the two conditions (\ref{KWvanishing}) and (\ref{TW}), takes the form \begin{eqnarray} W(Z) &=& b_1 \, f_{\bar{c} a}^{\;\;\;\;\bar{m}} f_{c\bar{b}}^{\;\;\;\;n} J^a J^{\bar{b}} K^m K^{\bar{n}}(Z) + b_2 \, f_{\bar{c} a}^{\;\;\;\;n} f_{c\bar{b}}^{\;\;\;\;\bar{m}} J^a J^{\bar{b}} K^m K^{\bar{n}}(Z)+ b_3 \, J^a J^b J^{\bar{a}} J^{\bar{b}}(Z) \nonumber \\ & + & b_4 \, f_{a\bar{m}}^{\;\;\;\;b} J^a K^{\bar{m}} D J^{\bar{b}}(Z) + b_5 \, f_{m a}^{\;\;\;\;b} K^m \overline{D} J^{a} J^{\bar{b}}(Z) + b_6 \, f_{\bar{m} a}^{\;\;\;\;b} J^{a} J^{\bar{b}} D K^{\bar{m}}(Z) \nonumber \\ & + & b_7 \, f_{m a}^{\;\;\;\;b} J^{a} J^{\bar{b}} \overline{D} K^{m}(Z) +b_8 \, f_{m n}^{\;\;\;\;p} \overline{D} K^{m} K^n K^{\bar{p}}(Z) + b_9 \, \overline{D} J^a D J^{\bar{a}}(Z) \nonumber \\ & + & b_{10} \, \overline{D} K^m D K^{\bar{m}}(Z) +b_{11} \, J^a \partial J^{\bar{a}}(Z) + b_{12} \, \partial J^a J^{\bar{a}}(Z) + b_{13} \, K^m \partial K^{\bar{m}}(Z) \nonumber \\ & + & b_{14} \, \partial K^m K^{\bar{m}}(Z) + b_{15} \, J^a [D, \overline{D}] J^{\bar{a}}(Z) + b_{16} \, [D, \overline{D}] J^a J^{\bar{a}}(Z) +b_{17} \, K^m [D, \overline{D}] K^{\bar{m}}(Z) \nonumber \\ & + & b_{18} \, [D, \overline{D}] K^m K^{\bar{m}}(Z) +b_{19} \, D K^m \overline{D} K^{\bar{m}}(Z) + b_{20} \, f_{m\bar{n}}^{\;\;\;\;\bar{n}} \partial \overline{D} K^{m}(Z) + b_{21} \, f_{\bar{m} \bar{n}}^{\;\;\;\;\bar{n}} \partial D K^{\bar{m}}(Z) \nonumber \\ & + & b_{22} \, f_{\bar{m} \bar{b}}^{\;\;\;\;\bar{b}} J^a J^{\bar{a}} D K^{\bar{m}}(Z) + b_{23} \, f_{m \bar{b}}^{\;\;\;\;\bar{b}} J^a J^{\bar{a}} \overline{D} K^{m}(Z) + b_{24} \, f_{\bar{m}\bar{a}}^{\;\;\;\;\bar{a}} f_{\bar{n}\bar{b}}^{\;\;\;\;\bar{b}} D K^{\bar{m}} D K^{\bar{n}}(Z) \nonumber \\ & + & b_{25} \, f_{\bar{m}\bar{a}}^{\;\;\;\;\bar{a}} f_{n \bar{b}}^{\;\;\;\;\bar{b}} D K^{\bar{m}} \overline{D} K^{n}(Z) +b_{26} \, f_{m\bar{a}}^{\;\;\;\;\bar{a}} f_{n \bar{b}}^{\;\;\;\;\bar{b}} \overline{D} K^{m} \overline{D} K^{n}(Z) +b_{27} \, f_{\bar{m} a}^{\;\;\;\;a } \partial D K^{\bar{m}}(Z), \label{superspin2} \end{eqnarray} where all the coefficient functions are given explicitly in Appendix $F$ \footnote{In total, there are $249$ independent terms if we expand out the structure constants (the number of nonzero structure constants is $492$ from the discussion of Appendix $E$) and the metric.}. This is an ${\cal N}=2$ current and one can read off the corresponding component currents with spins $(2, \frac{5}{2}, \frac{5}{2}, 3)$.
The spin $2$ current $W(z)$ in (\ref{n2W}) can be obtained by putting all the $\theta$- and $\bar{\theta}$-dependence on the right hand side of (\ref{superspin2}) to zero. The spin $\frac{5}{2}$ currents $D W(z)$ and $\overline{D} W(z)$ can also be obtained by applying the supercovariant derivatives $D$ and $\overline{D}$ to the right hand side of (\ref{superspin2}) and then putting $\theta$ and $\bar{\theta}$ to zero in the final expression. For the spin $3$ current $-\frac{1}{2} [D, \overline{D}] W (z)$, one can do a similar analysis. Due to the constraints (\ref{Constraints}), there are several ways to write down these component currents \footnote{ Also note that the previous spin $2$ current (\ref{n2terms}) can be written in terms of (\ref{superspin2}) with the coefficients in Appendix (\ref{coeffn2two}).}. On the other hand, one can compute the explicit operator product expansion between $T(Z_1)$ (\ref{stress}) and $W(Z_2)$ (\ref{superspin2}) by using the defining equations (\ref{basicOPE}). Since it should satisfy the primary condition (\ref{TW}), one can read off the above component currents straightforwardly by looking at the singular terms in the operator product expansion. We list them in Appendix $C$. How does one determine the spin $3$ current, for example? First, we determine the spin $\frac{5}{2}$ current $\overline{D} W(z)$ by using the seventh equation of (\ref{TWcomp}) and reading off the $\frac{1}{(z-w)}$ terms, where one uses $\overline{D} T(z)$ from (\ref{compstress}). Next, by using the fifth equation of (\ref{TWcomp}) with the spin $\frac{3}{2}$ current $D T(z)$ (\ref{compstress}) and collecting the $\frac{1}{(z-w)}$ terms, one sees the spin $3$ current $-\frac{1}{2}[D, \overline{D}] W(w)$ and the descendant field $\partial W(w)$. During this computation, one uses the constraint equations (\ref{Constraints}) all the time. In this way, one obtains all the component fields. For example, the field $D W(w)$ can be obtained from the fourth equation of (\ref{TWcomp}). Let us focus on the $\frac{1}{(z-w)^4}$ terms in the operator product expansion of $W(z) W(w)$, where the spin $2$ current $W(z)$ is the first component of $W(Z)$ in (\ref{n2W}), which has the form (\ref{superspin2}) together with (\ref{coeffn5}). One determines the overall constant $A(k)$ \begin{eqnarray} A(k)^2 & = & -\frac{\left(25 \sqrt{3}+135 i \sqrt{5}-23 \sqrt{3} k+15 i \sqrt{5} k-8 \sqrt{3} k^2\right)^2}{8 (-1+k) (5+k)^4 (9+k) (5+2 k) (-5+11 k)} \nonumber \\ & = & \frac{(-12+c)^4 \left(-60 \sqrt{3}-324 i \sqrt{5}+33 \sqrt{3} c+39 i \sqrt{5} c+\sqrt{3} c^2-i \sqrt{5} c^2\right)^2}{207360000 (-27+c) (-2+c) (-1+c) (12+c)}, \label{Ak} \end{eqnarray} by requiring that the $\frac{1}{(z-w)^4}$ term should be equal to $\frac{c}{2}$, where the central charge is \begin{eqnarray} c_{N=4} = \frac{12 k}{k+5}. \label{centraln4} \end{eqnarray} Let us consider the $\frac{1}{(z-w)^2}$ terms in the operator product expansion of $W(z) W(w)$. In general, there are three different field contents, $W(w), [D, \overline{D}]T(w)$ and $T T(w)$. The easiest way to determine the self-coupling constant appearing in the coefficient function in front of $W(w)$ (on the right hand side of this operator product expansion $W(z) W(w)$) is to focus on any quartic term which does not appear in the fields $[D, \overline{D}]T(w)$ and $T T(w)$. For example, let us consider the term $K^1 K^5 K^{\bar{3}} K^{\bar{7}}(w)$ in the $\frac{1}{(z-w)^2}$ term. Definitely, this quartic term does not appear in $[D, \overline{D}]T(w)$ and $T T(w)$.
It turns out that the self-coupling constant is given in terms of either the level $k$ or the central charge $c$ (\ref{centraln4}): \begin{eqnarray} \alpha_{N=4}^2 = \frac{25 (-4+k)^2 (1+k)^2}{(-1+k) (9+k) (5+2 k) (-5+11 k)} = \frac{(3+c)^2 (-16+3 c)^2}{2 (27-c) (-2+c) (-1+c) (12+c)}. \label{const2} \end{eqnarray} This differs from the previous result (\ref{const1}) for $N=2$. It seems that the factors $(3+c)^2$ and $(-1+c)$ are common, appearing as $N$-independent factors, while the other factors behave as $N$-dependent factors. Therefore, one should consider the most general self-coupling constant, which will depend on $N$, when one describes the Kazama-Suzuki model for general $N$. Let us consider the $\frac{1}{(z-w)^2}$ term in the operator product expansion of the spin $2$ field and the spin $3$ field, $W(z) (-1)\frac{1}{2} [D, \overline{D}] W(w)$. We do not present the spin $3$ current here because its full expression is rather complicated. In general, there exist seven different spin $3$ fields in this singular term: \begin{eqnarray} [D, \overline{D}]W(w), \,\, T W(w), \,\, \partial [D, \overline{D}] T(w), \,\, T [D, \overline{D}]T(w), \,\, T T T(w), \,\, \overline{D} T D T(w), \,\, \partial^2 T(w). \label{spin3comp} \end{eqnarray} As in the bosonic case \footnote{Recall that for the bosonic case, the spin $4$ current appearing in the operator product expansion between the spin $3$ current and itself vanishes for $N=3$ and one of the levels being $1$ in the $W_N$ coset minimal model. However, for general $N$, the spin $4$ current occurs naturally.} \cite{Ahn2011}, one expects that there should be extra higher spin fields because the field contents of the ${\cal N}=2$ ${\cal W}_{5}$ algebra are given by the multiplet $W(Z)$ with spins $(2,\frac{5}{2}, \frac{5}{2},3)$, the multiplet $V(Z)$ with spins $(3, \frac{7}{2}, \frac{7}{2},4)$, and the multiplet $X(Z)$ with spins $(4, \frac{9}{2},\frac{9}{2},5)$. As before, one has the following component currents for this new primary current with spins $(3, \frac{7}{2}, \frac{7}{2}, 4)$, for example, \begin{eqnarray} V(Z) & = & V(z)+ \theta \,\, D V(z) + \bar{\theta} \,\, \overline{D} V(z)+ \theta \bar{\theta}\,\, (-1) \frac{1}{2} [ D, \overline{D} ] V(z). \label{Vexpress} \end{eqnarray} Although it is a rather involved procedure to extract the exact form of the new primary field explicitly, one can check the existence of this field by looking at the particular term in the corresponding $\frac{1}{(z-w)^2}$ term. It turns out that, in ${\cal N}=2$ superspace, one should add the following extra singular terms to the operator product expansion $W(Z_1) W(Z_2)$, at the linearized level, compared to the ${\cal N}=2$ ${\cal W}_3$ algebra described in the previous subsection, \begin{eqnarray} \frac{\theta_{12} \bar{\theta}_{12}}{z_{12}^2} \,\, 3 V(Z_2) + \frac{\bar{\theta}_{12}}{z_{12}} \,\, \overline{D} V(Z_2) - \frac{\theta_{12} }{z_{12}} \,\, D V(Z_2) + \frac{\theta_{12} \bar{\theta}_{12}}{z_{12}} \,\, 2 \partial V(Z_2). \label{extra} \end{eqnarray} It is easy to see that all the relative coefficients can be fixed by using conformal invariance, for the given normalization factor $3$ in front of $V(Z_2)$ in (\ref{extra}). In the operator product expansion $\phi_m(z) \phi_n(w) \sim \phi_p + \mbox{descendant}$, the relative coefficient function, for a given primary field, can be determined by conformal invariance \cite{BPZ,Dotsenko}.
For example, let us put arbitrary coefficients $c_1$, $c_2$, and $c_3$ in the second, third, and fourth terms, respectively. Then the operator product expansion $W(z) [D, \overline{D}]W(w)$ (see also (\ref{ope346})) can be written in terms of \begin{eqnarray} \frac{1}{(z-w)^2} (-1) 6 V(w) +\frac{1}{(z-w)} \left[ (c_1-c_2 -2 c_3) \partial V + \cdots \right](w) +\cdots. \label{ope233} \end{eqnarray} The descendant field of $V(w)$, in the $\frac{1}{(z-w)}$ term, is given by $\partial V(w)$. We do not write down other terms which are not relevant to our consideration. The formula for the relative coefficient is given by \begin{eqnarray} \frac{h_p +h_m -h_n}{2 h_p}= \frac{3 + 2 -3}{2 \times 3} =\frac{1}{3}, \label{formula} \end{eqnarray} where the field $\phi_m$ plays the role of $W(z)$, which has conformal dimension $2$, the field $\phi_n$ corresponds to $[D, \overline{D}]W(w)$ with spin $3$, and the field $\phi_p$ corresponds to $V(w)$ with spin $3$. Therefore, the coefficient of $\partial V(w)$ should be equal to $-2$ for the given coefficient $-6$ in front of $V(w)$ in (\ref{ope233}). That is, $\frac{-2}{-6}= \frac{1}{3}$. Then one has \begin{eqnarray} c_1-c_2 -2 c_3 =-2. \label{1} \end{eqnarray} Similarly, the operator product expansion $D W(z) \overline{D} W(w)$ (see also (\ref{ope351}) and (\ref{ope352})) contains \begin{eqnarray} \frac{1}{(z-w)^2} 3 V(w) +\frac{1}{(z-w)} \left[ \frac{1}{2}(c_2+ 2 c_3) \partial V + \cdots \right](w) +\cdots. \label{opeother} \end{eqnarray} By counting the conformal dimensions and using the above formula, one gets \begin{eqnarray} \frac{h_p +h_m -h_n}{2 h_p} = \frac{3+\frac{5}{2}-\frac{5}{2}}{2 \times 3}=\frac{1}{2}. \nonumber \end{eqnarray} The coefficient of $\partial V(w)$ in (\ref{opeother}) should therefore be equal to $\frac{3}{2}$, and the following relation holds \begin{eqnarray} \frac{1}{2}(c_2+ 2 c_3) = \frac{3}{2}. \label{2} \end{eqnarray} Due to the structure of the formula, as long as $h_m=h_n$, the relative coefficient becomes $\frac{1}{2}$ \cite{BPZ}. Finally, the operator product expansion $D W(z) [D, \overline{D}] W(w)$ has the singular term \begin{eqnarray} \frac{1}{(z-w)^2} (-6 +c_2) D V(w) +\frac{1}{(z-w)} \left[ (-c_2- 2 c_3) \partial D V + \cdots \right](w) +\cdots. \label{opeopeother} \end{eqnarray} Again, in this case, one has \begin{eqnarray} \frac{h_p +h_m -h_n}{2 h_p} = \frac{\frac{7}{2}+ \frac{5}{2}-3}{2 \times \frac{7}{2}} = \frac{3}{7}. \nonumber \end{eqnarray} The coefficient of $\partial D V(w)$ in (\ref{opeopeother}) should be equal to $\frac{3}{7}(-6+c_2)$. The final equation reads \begin{eqnarray} (-c_2- 2 c_3) = \frac{3}{7}(-6+c_2). \label{3} \end{eqnarray} By combining these three equations (\ref{1}), (\ref{2}) and (\ref{3}) and solving them, one finds the unique solution $c_1=1, c_2 =-1$, and $c_3=2$. Due to the field contents of the ${\cal N}=2$ ${\cal W}_5$ algebra, there are five other operator product expansions in ${\cal N}=2$ superspace. In principle, one obtains them by taking the operator product expansions once all the primary fields are determined and expressed in terms of WZW currents, although the computations will be rather complicated.
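The three linear constraints (\ref{1}), (\ref{2}) and (\ref{3}) can also be solved mechanically; a short sketch (again assuming SymPy, as an illustration) reproduces the unique solution quoted above:
\begin{verbatim}
import sympy as sp

c1, c2, c3 = sp.symbols('c1 c2 c3')

eqs = [
    sp.Eq(c1 - c2 - 2*c3, -2),                                # from W(z) [D,Db]W(w)
    sp.Eq(sp.Rational(1, 2)*(c2 + 2*c3), sp.Rational(3, 2)),  # from D W(z) Db W(w)
    sp.Eq(-c2 - 2*c3, sp.Rational(3, 7)*(-6 + c2)),           # from D W(z) [D,Db]W(w)
]
print(sp.solve(eqs, [c1, c2, c3]))  # -> {c1: 1, c2: -1, c3: 2}
\end{verbatim}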
\subsection{The general $N$ case: ${\bf CP}^N (=\frac{SU(N+1)}{SU(N) \times U(1)})$ coset model} The self-coupling constant of the current with spins $(2, \frac{5}{2}, \frac{5}{2},3)$ for any ${\cal N}=2$ ${\cal W}_{N+1}$ algebra is determined from the unitarity arguments in \cite{BW} \begin{eqnarray} \alpha(N,k)^2 & = & \frac{3 (1+k)^2 (k-N)^2 (1+N)^2}{(-1+k) (-1+N) (1+2 k+N) (1+k+2 N) (-1-N+k (-1+3 N))} \nonumber \\ & = & \frac{(3+c)^2 \left(c+2 c N-3 N^2\right)^2}{(-1+c) (3-c+6 N) (-1+N) (c+3 N) (-3 N+c (2+N))}, \label{selfcoupling} \end{eqnarray} where the central charge is given by (\ref{centralcharge}). Note that the previous constants (\ref{const1}) and (\ref{const2}) can be read off from this general expression (\ref{selfcoupling}) by substituting $N=2$ and $N=4$ respectively. This behavior is also observed in the work of \cite{HP}, by considering the singular terms $\frac{1}{(z-w)^4}$ and $\frac{1}{(z-w)^2}$ with the spin $2$ current in the KS model simultaneously, because the normalizations of the highest singular terms differ from each other. They computed the operator product expansions between the spin $2$ field and itself, following \cite{Ito} in the context of the free field realization, for the $N=2,3,4,5$ cases, and obtained the general expression by extrapolating these results. Now one can write down the operator product expansion between $W(Z_1)$ and $W(Z_2)$ as follows: \begin{eqnarray} && W(Z_1) W(Z_2) = \nonumber \\ && \frac{1}{z_{12}^4} \,\, \frac{c}{2} +\frac{\theta_{12} \bar{\theta}_{12}}{z_{12}^4} \,\, 3 T(Z_2) +\frac{\bar{\theta}_{12}}{z_{12}^3} \,\, 3 \overline{D} T(Z_2) -\frac{\theta_{12}}{z_{12}^3} \,\, 3 D T(Z_2) +\frac{\theta_{12} \bar{\theta}_{12}}{z_{12}^3} \,\, 3 \partial T(Z_2) \nonumber \\ && + \frac{1}{z_{12}^2} \left[ 2 \alpha W +\frac{c}{(-1+c)} [D, \overline{D}] T \right](Z_2) +\frac{\bar{\theta}_{12}}{z_{12}^2} \left[ \alpha \overline{D} W +\frac{(-3+2c)}{(-1+c)} \partial \overline{D} T \right](Z_2) \nonumber \\ && + \frac{\theta_{12}}{z_{12}^2} \left[ \alpha D W -\frac{(-3+2c)}{(-1+c)} \partial D T \right](Z_2) + \frac{\theta_{12} \bar{\theta}_{12}}{z_{12}^2} \left[ \frac{3(-8+c)}{2(-12+5c)} \alpha [D, \overline{D} ] W \right. \nonumber \\ && + \left. \frac{9c(-12+5c)}{4(-1+c)(6+c)(-3+2c)} \partial [D, \overline{D}] T + \frac{3(18-15c+2c^2+2c^3)}{2(-1+c)(6+c)(-3+2c)} \partial^2 T + 3 V \right](Z_2) \nonumber \\ && + \frac{1}{z_{12}} \left[ \alpha \partial W -\frac{c}{2(-1+c)} \partial [D, \overline{D}] T \right](Z_2) \nonumber \\ && + \frac{\bar{\theta}_{12}}{z_{12}} \left[ \frac{3(-6+c)(-1+c)}{(3+c)(-12+5c)} \alpha \overline{D} W + \frac{3c(9+3c+2c^2)}{4(-1+c)(6+c)(-3+2c)} \partial^2 \overline{D} T + \overline{D} V \right](Z_2) \nonumber \\ && + \frac{\theta_{12}}{z_{12}} \left[ \frac{3(-6+c)(-1+c)}{(3+c)(-12+5c)} \alpha D W - \frac{3c(9-3c+c^2)}{2(-1+c)(6+c)(-3+2c)} \partial^2 D T - D V \right](Z_2) \nonumber \\ && + \frac{\theta_{12}\bar{\theta}_{12}}{z_{12}} \left[- \frac{(-15+c)c}{(3+c)(-12+5c)} \alpha [D, \overline{D}] W + \frac{(-18-3c-2c^2+2c^3)}{2(-1+c)(6+c)(-3+2c)} \partial^3 T + 2 \partial V \right](Z_2) \nonumber \\ && + (\mbox{Non-linear singular terms}) +\cdots, \label{openolimit} \end{eqnarray} where the central charge is given by (\ref{centralcharge}) and the self-coupling constant is given by (\ref{selfcoupling}) \begin{eqnarray} c & = & c(N,k) = \frac{3 N k}{ N +k +1}, \label{twoconsts} \\ \alpha^2 & = & \alpha(N, k)^2 = \frac{3 (1+k)^2 (k-N)^2 (1+N)^2}{(-1+k) (-1+N) (1+2 k+N) (1+k+2 N) (-1-N+k (-1+3 N))}.
\nonumber \end{eqnarray} One should note the linear structure in (\ref{openolimit}) for the operator product expansion of the current with spins $(2, \frac{5}{2}, \frac{5}{2}, 3)$ with itself in the ${\cal N}=2$ ${\cal W}_{N+1}$ algebra. For the nonlinear terms, one has the $TT(Z_2)$ term in the $\frac{1}{z_{12}^2}$ term, and the descendant fields arise in the appropriate singular terms. One also has the nonlinear terms $T W(Z_2), T [D, \overline{D}]T(Z_2), T T T(Z_2), \overline{D} T D T(Z_2)$, whose component fields appear in (\ref{spin3comp}). In principle, with the two values (\ref{twoconsts}), one can find other higher spin currents. Given the higher spin current $W(Z)$ of spins $(2, \frac{5}{2}, \frac{5}{2}, 3)$ in the ${\cal N}=2$ ${\cal W}_{N+1}$ algebra, one can construct the operator product expansion of this current with itself. By looking at the singular terms, one can read off the next higher spin current, for example, $V(Z)$ of spins $(3, \frac{7}{2}, \frac{7}{2}, 4)$ given by (\ref{Vexpress}). Then one can continue to obtain the operator product expansion between $W(Z_1)$ and $V(Z_2)$ in order to find other higher spin currents, and so on \footnote{ Let us recall that for the bosonic case, the spin $3$-spin $3$ operator product expansion determines the spin $4$ current on the right hand side \cite{Ahn2011}, up to the overall normalization constant that can be fixed by the highest singular term of the spin $4$-spin $4$ operator product expansion. Then one can compute the spin $3$-spin $4$ operator product expansion and determine another higher spin current, for example, the spin $5$ current. The ${\cal N}=2$ ${\cal W}_5$ algebra should be related to this bosonic $W_5$ algebra. It would be interesting to find the structure constant for this particular coefficient in front of the spin $5$ current on the right hand side and see whether it coincides with the previous result obtained by a different method.}. According to the observation of \cite{GG1}, the original proposal in \cite{GG} should hold at finite $(N,k)$, and one expects that the quantum deformation algebra of the ${\cal N}=2$ ${\cal W}_{\infty}^{\rm{cl}}[\lambda]$ in \cite{HP} should satisfy the algebraic structure in (\ref{openolimit}) at finite $(N,k)$. \section{The large $(N,k)$ 't Hooft limit of the ${\cal N}=2$ ${\cal W}_{N+1}$ algebra} We would like to describe the large $(N, k)$ limit for the operator product expansion between the lowest higher spin current $W(Z_1)$ with spins $(2, \frac{5}{2}, \frac{5}{2}, 3)$ and itself, $W(Z_2)$. The large $(N,k)$ limit for fixed 't Hooft coupling constant $\lambda$ is given by \begin{eqnarray} c(N,k) = \frac{3N k}{N+k+1} \longrightarrow 3(1-\lambda) N, \qquad \lambda \equiv \frac{N}{N+k}. \label{limit} \end{eqnarray} Similarly, one also has the following limit for the self-coupling constant (\ref{selfcoupling}) \begin{eqnarray} \alpha(N,k)^2 \longrightarrow -\frac{(-1+2 \lambda )^2}{(-2+\lambda ) (1+\lambda )}.
\label{alphalimit1} \end{eqnarray} From the observations for the $N=2$ and $N=4$ cases in the previous subsections, one expects that the operator product expansion in the large $(N,k)$ limit, together with (\ref{limit}) and (\ref{alphalimit1}), takes the form \begin{eqnarray} && W(Z_1) W(Z_2) = \nonumber \\ && \frac{1}{z_{12}^4} \,\, \frac{c(N,k)}{2} +\frac{\theta_{12} \bar{\theta}_{12}}{z_{12}^4} \,\, 3 T(Z_2) +\frac{\bar{\theta}_{12}}{z_{12}^3} \,\, 3 \overline{D} T(Z_2) -\frac{\theta_{12}}{z_{12}^3} \,\, 3 D T(Z_2) +\frac{\theta_{12} \bar{\theta}_{12}}{z_{12}^3} \,\, 3 \partial T(Z_2) \nonumber \\ && + \frac{1}{z_{12}^2} \left[ 2 \alpha(N,k) W + [D, \overline{D}] T \right](Z_2) +\frac{\bar{\theta}_{12}}{z_{12}^2} \left[ \alpha(N,k) \overline{D} W +2 \partial \overline{D} T \right](Z_2) \nonumber \\ && + \frac{\theta_{12}}{z_{12}^2} \left[ \alpha(N,k) D W -2 \partial D T \right](Z_2) + \frac{\theta_{12} \bar{\theta}_{12}}{z_{12}^2} \left[ \frac{3}{10} \alpha(N,k) [D, \overline{D} ] W + \frac{3}{2} \partial^2 T + 3 V \right](Z_2) \nonumber \\ && + \frac{1}{z_{12}} \left[ \alpha(N,k) \partial W -\frac{1}{2} \partial [D, \overline{D}] T \right](Z_2) + \frac{\bar{\theta}_{12}}{z_{12}} \left[ \frac{3}{5} \alpha(N,k) \overline{D} W + \frac{3}{4} \partial^2 \overline{D} T + \overline{D} V \right](Z_2) \nonumber \\ && + \frac{\theta_{12}}{z_{12}} \left[ \frac{3}{5} \alpha(N,k) D W - \frac{3}{4} \partial^2 D T - D V \right](Z_2) \nonumber \\ && + \frac{\theta_{12} \bar{\theta}_{12}}{z_{12}} \left[ -\frac{1}{5} \alpha(N,k) [D, \overline{D}] W + \frac{1}{2} \partial^3 T + 2 \partial V \right](Z_2) \nonumber \\ && + \frac{1}{N} \mbox{(quadratic singular terms)} + \frac{1}{N^2} \mbox{(cubic singular terms)} \nonumber \\ && + \frac{1}{N^3} \mbox{(quartic singular terms)} + \cdots, \label{final} \end{eqnarray} where $c(N,k)$ and $\alpha(N,k)$ are the values after taking the large $(N,k)$ limit, given in (\ref{limit}) and (\ref{alphalimit1}) respectively. At the linear order on the right hand side, we replace the fixed coupling constants (\ref{const1}) and (\ref{const2}) with the general coupling constant (\ref{selfcoupling}) and include the new ${\cal N}=2$ primary field $V(Z_2)$ (and its descendant fields) (\ref{extra}) on the right hand side of the operator product expansion (\ref{final}). We list the component results of (\ref{final}) in Appendix $G$. Note that the $\partial [D, \overline{D}] T$ term in $\frac{\theta_{12} \bar{\theta}_{12}}{z_{12}^2}$ in (\ref{open2linear}) vanishes in this limit and therefore does not appear in (\ref{final}). One expects that the extra new composite fields $T^4(Z_2), T^2 W(Z_2), W^2(Z_2)$, and $TV(Z_2)$ with spins $(4, \frac{9}{2}, \frac{9}{2},5)$ should appear in the lowest singular term $\frac{\theta_{12} \bar{\theta}_{12}}{z_{12}}$ in (\ref{final}). We have seen this feature in the ${\cal N}=2$ ${\cal W}_4$ algebra in \cite{BW}, although the full structure of the algebra is not given there. One also sees the appearance of these new fields on the $AdS_3$ side. In \cite{HP}, the nonlinear terms in $(3.46)$ to $(3.53)$ contain these fields. For example, the $\frac{1}{k_{CS}^3}$ term corresponds to the $T^4(Z_2)$ term, and some of the $\frac{1}{k_{CS}^2}$ terms contain the $T^2 W(Z_2)$ term, and so on. Note that the Chern-Simons level $k_{CS}$ behaves as $N$ in the large $N$ 't Hooft limit.
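Both limits, as well as the statement that (\ref{const1}) and (\ref{const2}) follow from (\ref{selfcoupling}) at $N=2$ and $N=4$, can be checked symbolically by substituting $k = N(1-\lambda)/\lambda$ and sending $N \to \infty$. A minimal SymPy sketch (illustrative only):
\begin{verbatim}
import sympy as sp

N, k, lam = sp.symbols('N k lambda', positive=True)

alpha2 = (3*(1 + k)**2*(k - N)**2*(1 + N)**2
          / ((-1 + k)*(-1 + N)*(1 + 2*k + N)*(1 + k + 2*N)
             *(-1 - N + k*(-1 + 3*N))))

# N=2 and N=4 reproduce the fixed-N self-coupling constants
print(sp.factor(alpha2.subs(N, 2)))
print(sp.factor(alpha2.subs(N, 4)))

# 't Hooft limit: k = N(1-lambda)/lambda, N -> infinity
a2_lim = sp.limit(alpha2.subs(k, N*(1 - lam)/lam), N, sp.oo)
print(sp.simplify(a2_lim + (2*lam - 1)**2/((lam - 2)*(lam + 1))))  # -> 0
c_lim = sp.limit((3*N*k/(N + k + 1)).subs(k, N*(1 - lam)/lam)/N, N, sp.oo)
print(c_lim)  # -> 3 - 3*lambda, i.e. c -> 3(1 - lambda)N
\end{verbatim}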
\section{Comparison with the ${\cal N}=2$ classical ${\cal W}_{\infty}^{\rm{cl}}[\lambda]$ algebra of the bulk theory} One identifies the currents in the Kazama-Suzuki model with the higher spin fields in ${\cal W}_{\infty}^{\rm{cl}}[\lambda]$ introduced in \cite{HP} as follows: \begin{eqnarray} T(z) & \longleftrightarrow & a_{\frac{3}{2}} \sim W_{1,HP}^{-}(z), \nonumber \\ ( D T +\overline{D} T) (z) & \longleftrightarrow & \psi_{\frac{3}{2}} \sim G_{2,HP}^{-}(z), \nonumber \\ ( D T -\overline{D} T) (z) & \longleftrightarrow & \psi_{2} \sim G_{2,HP}^{+}(z), \nonumber \\ -\frac{1}{2} [D, \overline{D}] T(z) & \longleftrightarrow & a_2 \sim W_{2,HP}^{+}(z), \nonumber \\ W(z) & \longleftrightarrow & a_{\frac{5}{2}} \sim W_{2,HP}^{-}(z), \nonumber \\ ( D W +\overline{D} W) (z) & \longleftrightarrow & \psi_{\frac{5}{2}} \sim G_{3,HP}^{-}(z), \nonumber \\ ( D W -\overline{D} W) (z) & \longleftrightarrow & \psi_{3} \sim G_{3,HP}^{+}(z), \nonumber \\ -\frac{1}{2} [D, \overline{D}] W(z) & \longleftrightarrow & a_3 \sim W_{3,HP}^{+}(z), \nonumber \\ V(z) & \longleftrightarrow & a_{\frac{7}{2}} \sim W_{3,HP}^{-}(z), \nonumber \\ ( D V +\overline{D} V) (z) & \longleftrightarrow & \psi_{\frac{7}{2}} \sim G_{4,HP}^{-}(z), \nonumber \\ ( D V -\overline{D} V) (z) & \longleftrightarrow & \psi_{4} \sim G_{4,HP}^{+}(z), \nonumber \\ -\frac{1}{2} [D, \overline{D}] V(z) & \longleftrightarrow & a_4 \sim W_{4,HP}^{+}(z). \label{HPrelations} \end{eqnarray} We also present the CFT fields with the $HP$ index in the last entry, in order to identify them with those of \cite{HP}. In order to obtain the $AdS_3$ result, the following normalization factor should occur in the operator product expansion of the spin $2$ current with itself \begin{eqnarray} \beta(N,k)^2 & = & \frac{(-1+k) (-1+N) (1+2 k+N) (1+k+2 N)}{3 (1+k+N)^2 (-1-k-N+3 k N)} \nonumber \\ & \longrightarrow & -\frac{2}{9} (-1+\lambda_{HP} ) (1+2 \lambda_{HP} )= -\frac{1}{9} (-2+\lambda ) (1+\lambda ), \,\,\,\, 2\lambda_{HP} = \lambda, \label{betalimit} \end{eqnarray} where we also present the large $(N,k)$ limit (\ref{limit}). This expression is a generalization of \cite{Ito}, where $\beta(N=2,k)$ was found for the fixed $N=2$ case. Similarly, one also has the following limit for the self-coupling constant (\ref{selfcoupling}), as before \begin{eqnarray} \alpha(N,k)^2 \longrightarrow -\frac{(-1+4 \lambda_{HP} )^2}{2 (-1+\lambda_{HP} ) (1+2 \lambda_{HP} )}= -\frac{(-1+2 \lambda )^2}{(-2+\lambda ) (1+\lambda )}.
\label{alphalimit} \end{eqnarray} From the operator product expansion in Appendix $G$ and the following relations between our currents and the field contents in \cite{HP} \begin{eqnarray} W(z) \equiv \frac{1}{\beta(N,k)} W_2^{-}(z), \qquad -\frac{1}{2} [D, \overline{D}] T(z) \equiv W_{2,HP}^{+}(z), \label{relations} \end{eqnarray} one rewrites equation (\ref{ope345}), together with (\ref{HPrelations}), as \begin{eqnarray} W_{2,HP}^{-}(z) W_{2,HP}^{-}(w) & = & \frac{1}{(z-w)^4} \,\, \frac{1}{2} c(N,k) \beta(N,k)^2 \nonumber \\ &+ & \frac{1}{(z-w)^2} \beta(N,k)^2 \left[ \frac{2 \alpha(N,k)}{\beta(N,k)} W_{2,HP}^{-} + 2 W_{2,HP}^{+} \right](w) \nonumber \\ &+ & \frac{1}{(z-w)} \beta(N,k)^2 \left[ \frac{\alpha(N,k)}{\beta(N,k)} \partial W_{2,HP}^{-} + \partial W_{2,HP}^{+} \right](w) \nonumber \\ & + & \frac{1}{N} \mbox{(Non-linear singular terms)} + \cdots \nonumber \\ & \longrightarrow & \frac{1}{(z-w)^4} \,\, (-1)\frac{1}{3}(1-\lambda_{HP})(2\lambda_{HP}-1)(2\lambda_{HP}+1)N \nonumber \\ & + & \frac{1}{(z-w)^2} \left[ \frac{2}{3}(1-4\lambda_{HP}) W_{2,HP}^{-} - \frac{4}{9} (2\lambda_{HP}+1)(\lambda_{HP}-1) W_{2,HP}^{+} \right](w) \nonumber \\ & + & \frac{1}{(z-w)} \left[ \frac{1}{3}(1-4\lambda_{HP}) \partial W_{2,HP}^{-} - \frac{2}{9} (2\lambda_{HP}+1)(\lambda_{HP}-1) \partial W_{2,HP}^{+} \right](w) \nonumber \\ & + & \frac{1}{N} \mbox{(Non-linear singular terms)} +\cdots, \label{spin2spin2limit} \end{eqnarray} where we use the large $(N,k)$ limits for $\alpha(N,k), \beta(N,k)$ and $c(N,k)$, (\ref{alphalimit}), (\ref{betalimit}) and (\ref{limit}) respectively \footnote{A commutator relation for the modes $ (W_{2,HP}^{\mp})_m$ is as follows: $ [(W_{2,HP}^{-})_m, (W_{2,HP}^{-})_n]= \beta(N,k)^2 (m-n) \left[ \frac{\alpha(N,k)}{\beta(N,k)} (W_{2,HP}^{-})_{m+n} + (W_{2,HP}^{+})_{m+n} \right] + \beta(N,k)^2 \frac{c(N,k)}{12} m (m^2-1) \delta_{m+n,0}+ \mbox{Nonlinear terms}$, where $ W_{2,HP}^{\mp} = \sum_{m \in {\bf Z}} \frac{ (W_{2,HP}^{\mp})_m}{z^{m+2}}$. One sees a similar structure in \cite{Romans}.}. This is exactly the same as equation $(4.8)$ of \cite{HP}. Then it is straightforward to convert the above into commutators and find agreement with the $AdS_3$ result, where one can use the identities \begin{eqnarray} \beta(N,k)^2 \longrightarrow -N^B_{\frac{5}{2}} = - \frac{1}{3} N_3^B, \qquad N^B_{\frac{5}{2}} = \frac{2}{9}(-1+\lambda_{HP})(1+2\lambda_{HP})=\frac{1}{3} N_3^B, \label{limitrelations} \end{eqnarray} where $N^B_{\frac{5}{2}}$ and $N_3^B$ in \cite{HP} are normalization functions that depend on the 't Hooft coupling constant and appear in the commutator relations on the $AdS_3$ side. One might ask whether a new primary field of spin $3$ could exist in the $\frac{1}{(z-w)}$ term of (\ref{spin2spin2limit}). If there were a new primary field in that singular term, one could exchange the arguments $z$ and $w$ and use the series expansion around $w$; one then finds a minus sign for this primary field. This implies that there is no extra new primary field of spin $3$.
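The normalization of the leading $\frac{1}{(z-w)^4}$ term in (\ref{spin2spin2limit}) is fixed by the limits of $c(N,k)$ and $\beta(N,k)^2$ alone; a short symbolic check with $\lambda = 2\lambda_{HP}$ (again a SymPy sketch, for illustration):
\begin{verbatim}
import sympy as sp

N, lam_hp = sp.symbols('N lambda_HP', positive=True)
lam = 2*lam_hp

c_lim = 3*(1 - lam)*N                                  # limit of c(N,k)
beta2_lim = -sp.Rational(1, 9)*(-2 + lam)*(1 + lam)    # limit of beta(N,k)^2

leading = c_lim*beta2_lim/2
target = -sp.Rational(1, 3)*(1 - lam_hp)*(2*lam_hp - 1)*(2*lam_hp + 1)*N
print(sp.simplify(leading - target))  # -> 0
\end{verbatim}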
By using the identification \begin{eqnarray} -\frac{1}{2} [D, \overline{D}] W(z) \equiv \frac{1}{\beta(N,k)} W_{3,HP}^{+}(z), \qquad V(z) \equiv \frac{1}{\beta(N,k)^2} W_{3,HP}^{-}(z), \label{defV} \end{eqnarray} and (\ref{relations}), one also computes the large $(N,k)$ limit for the operator product expansion between the spin $2$ current and the spin $3$ current, from (\ref{ope346}), as follows: \begin{eqnarray} W_{2,HP}^{-}(z) W_{3,HP}^{+}(w) & = & \frac{1}{(z-w)^4} \,\, 3 \beta(N,k)^2 W_{1,HP}^{-}(w) \nonumber \\ & + & \frac{1}{(z-w)^2} \,\, \beta(N,k)^2 \left[\frac{3}{5} \frac{\alpha(N,k)}{\beta(N,k)} W_{3,HP}^{+} + \frac{3}{\beta(N,k)^2} W_{3,HP}^{-} \right](w) \nonumber \\ & + & \frac{1}{(z-w)} \,\, \beta(N,k)^2 \left[\frac{1}{5} \frac{\alpha(N,k)}{\beta(N,k)} \partial W_{3,HP}^{+} + \frac{1}{\beta(N,k)^2} \partial W_{3,HP}^{-} \right](w) \nonumber \\ &+ & \frac{1}{N} \mbox{(Non-linear singular terms)} + \cdots \nonumber \\ & \longrightarrow & \frac{1}{(z-w)^4} (-1) \frac{2}{3} (2\lambda_{HP}+1)(\lambda_{HP}-1) W_{1,HP}^{-}(w) \nonumber \\ &+ & \frac{1}{(z-w)^2} \left[ \frac{1}{5}(1-4\lambda_{HP}) W_{3,HP}^{+} + 3 W_{3,HP}^{-} \right](w) \nonumber \\ & + & \frac{1}{(z-w)} \left[ \frac{1}{15}(1-4\lambda_{HP}) \partial W_{3,HP}^{+} + \partial W_{3,HP}^{-} \right](w) \nonumber \\ & + & \frac{1}{N} \mbox{(Non-linear singular terms)} + \cdots. \label{ope23} \end{eqnarray} One easily sees that this (\ref{ope23}) agrees with equation $(3.46)$ in \cite{HP} at the linear order \footnote{This can be written in terms of modes as follows: $[(W_{2,HP}^{-})_m, (W_{3,HP}^{+})_n]=\frac{\beta(N,k)^2}{2} m(m+1) (W_{1,HP}^{-})_{m+n}- \frac{1}{5} \beta(N,k)^2 (2m-n) \frac{\alpha(N,k)} {\beta(N,k)} (W_{3,HP}^{+})_{m+n} + (2m-n) (W_{3,HP}^{-})_{m+n}+ \mbox{Nonlinear terms}$, which can be compared to \cite{Romans}.}. For example, the relative coefficient $\frac{1}{3}$ of the descendant field $\partial W_{3,HP}^{-}$ can be obtained from \cite{Dotsenko,Ahn92} \begin{eqnarray} \frac{h_p + h_m -h_n }{2 h_p} = \frac{3 + 2 -3}{2 \times 3} =\frac{1}{3}. \nonumber \end{eqnarray} It is not strange that there is no descendant field for $W_{1,HP}^{-}(w)$ because, according to the counting of (\ref{formula}), the numerator becomes zero ($h_m=2, h_n=3$ and $h_p=1$). This implies that the coefficient of the descendant field $\partial W_{1,HP}^{-}(w)$ vanishes and there is no such term in the $\frac{1}{(z-w)^3}$ term in (\ref{ope23}).
Let us present the final bosonic operator product expansion between the spin $3$ current and itself, from (\ref{ope347}), where we use (\ref{limitrelations}) \begin{eqnarray} && W_{3,HP}^{+}(z) W_{3,HP}^{+}(w) = \frac{1}{(z-w)^6} \,\, \frac{5}{2} c(N,k) \beta(N,k)^2 \nonumber \\ && + \frac{1}{(z-w)^4} \,\, \beta(N,k)^2 \left[ 3 \frac{\alpha(N,k)}{\beta(N,k)} W_{2,HP}^{-} +15 W_{2,HP}^{+} \right](w) \nonumber \\ && + \frac{1}{(z-w)^3} \,\, \beta(N,k)^2 \left[ \frac{3}{2} \frac{\alpha(N,k) }{\beta(N,k)} \partial W_{2,HP}^{-} +\frac{15}{2} \partial W_{2,HP}^{+} \right](w) \nonumber \\ && + \frac{1}{(z-w)^2} \,\, \beta(N,k)^2 \left[ \frac{9}{20} \frac{\alpha(N,k) }{\beta(N,k)} \partial^2 W_{2,HP}^{-} +\frac{9}{4} \partial^2 W_{2,HP}^{+} + \frac{4}{\beta(N,k)^2} W_{4,HP}^{+} \right](w) \nonumber \\ && + \frac{1}{(z-w)} \,\, \beta(N,k)^2 \left[ \frac{1}{10} \frac{\alpha(N,k)}{\beta(N,k)} \partial^3 W_{2,HP}^{-} +\frac{1}{2} \partial^3 W_{2,HP}^{+} + \frac{2}{\beta(N,k)^2} \partial W_{4,HP}^{+} \right](w) \nonumber \\ && + \frac{1}{N} \mbox{(Non-linear singular terms)} +\cdots \nonumber \\ && \longrightarrow \frac{1}{(z-w)^6} (-1)\frac{5}{3}(1-\lambda_{HP})(2\lambda_{HP}-1)(2\lambda_{HP}+1)N \nonumber \\ && + \frac{1}{(z-w)^4} \left[ (1-4\lambda_{HP}) W_{2,HP}^{-} - \frac{10}{3} (2\lambda_{HP}+1)(\lambda_{HP}-1) W_{2,HP}^{+} \right] \nonumber \\ && + \frac{1}{(z-w)^3} \left[ \frac{1}{2} (1-4\lambda_{HP}) \partial W_{2,HP}^{-} - \frac{5}{3} (2\lambda_{HP}+1)(\lambda_{HP}-1) \partial W_{2,HP}^{+} \right] \nonumber \\ && + \frac{1}{(z-w)^2} \left[ \frac{3}{20} (1-4\lambda_{HP}) \partial^2 W_{2,HP}^{-} -\frac{1}{2} (2\lambda_{HP}+1)(\lambda_{HP}-1) \partial^2 W_{2,HP}^{+} +4 W_{4,HP}^{+} \right] \nonumber \\ && + \frac{1}{(z-w)} \left[ \frac{1}{30} (1-4\lambda_{HP}) \partial^3 W_{2,HP}^{-} - \frac{1}{9} (2\lambda_{HP}+1)(\lambda_{HP}-1) \partial^3 W_{2,HP}^{+} +2 \partial W_{4,HP}^{+} \right] \nonumber \\ && + \frac{1}{N} \mbox{(Non-linear singular terms)} +\cdots. \label{ope33} \end{eqnarray} It is obvious that this equation (\ref{ope33}) should correspond to equation $(3.47)$ of \cite{HP} \footnote{ One can express this as follows: $[(W_{3,HP}^{+})_m, (W_{3,HP}^{+})_n] = \beta(N,k)^2 \frac{c}{48} m(m^2-1)(m^2-4) \delta_{m+n,0} + \beta(N,k)^2 (m-n) \left[\frac{1}{15}(m+n+3)(m+n+2) - \frac{1}{6}(m+2)(n+2) \right] \left[ \frac{3}{2} \frac{\alpha(N,k)} {\beta(N,k)} (W_{2,HP}^{-})_{m+n} -\frac{15}{4} (W_{2,HP}^{+})_{m+n} \right] + 2 (m-n) (W_{4,HP}^{+})_{m+n}+\mbox{Nonlinear terms}$. Similarly, one can compare this with the corresponding equation in \cite{Romans}.}. Also note that the relative coefficient function $\frac{1}{2}$ of $\partial W_{4,HP}^{+}$ can be obtained from the formula (\ref{formula}) by substituting $h_m = 3 = h_n$ and $h_p=4$. The relative coefficients $1, \frac{1}{2}, \frac{3}{20}$, and $\frac{1}{30}$ for the spin $2$ current on the right hand side are the standard values in the well-known $W_3$ algebra. See, for example, the review paper \cite{BS}. The coefficient $\frac{3}{20}$ is nothing but $\frac{1}{4}\frac{h_p+1}{2h_p+1}$, which becomes $\frac{3}{20}$ at $h_p=2$ \cite{BPZ}. We also present the remaining $6$ operator product expansions, in the large $(N,k)$ limit, in (\ref{remain1}), (\ref{remain2}), (\ref{remain3}), (\ref{remain4}), (\ref{remain5}), and (\ref{remain6}).
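The descendant coefficients quoted here are fixed by simple arithmetic from (\ref{formula}); a small sketch (the helper name is chosen for illustration):
\begin{verbatim}
from fractions import Fraction

def first_descendant(hp, hm, hn):
    """(h_p + h_m - h_n)/(2 h_p): coefficient of the first descendant
    in phi_m(z) phi_n(w) ~ phi_p(w) + descendants."""
    return Fraction(hp + hm - hn, 2*hp)

print(first_descendant(3, 2, 3))  # 1/3: partial W_{3,HP}^- in (ope23)
print(first_descendant(4, 3, 3))  # 1/2: partial W_{4,HP}^+ in (ope33)

# quasi-primary coefficient (1/4)(h_p+1)/(2 h_p+1) at h_p = 2 -> 3/20
print(Fraction(2 + 1, 4*(2*2 + 1)))
\end{verbatim}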
Due to the ${\cal N}=2$ supersymmetry (the current multiplets $T(Z)$ and $W(Z)$ and their operator product expansions can be organized in manifest ${\cal N}=2$ superspace), compared to the bosonic case, one can obtain much information on the various operator product expansions. In other words, for a given operator product expansion of ${\cal N}=2$ currents (only after this is determined by some other method, for example, the Jacobi identity), there exist $16$ component operator product expansions. Without any input from the ${\cal N}=2$ supersymmetry, one would have to analyze all these operator product expansions separately \cite{Romans}. For example, for the ${\cal N}=2$ ${\cal W}_3$ algebra, by exploiting the package of \cite{KT} with the Jacobi identity, one can easily obtain the operator product expansion for the higher spin current in ${\cal N}=2$ superspace. \section{Conclusions and outlook } We have constructed the ${\cal N}=2$ current with spins $(2, \frac{5}{2}, \frac{5}{2}, 3)$ in (\ref{superspin2}) and the self-coupling constant in (\ref{const2}) in the ${\cal N}=2$ ${\cal W}_5$ algebra. We have also found the extra singular terms in the operator product expansion in (\ref{extra}), which were not present in the ${\cal N}=2$ ${\cal W}_3$ algebra. By observing the self-coupling constant (\ref{selfcoupling}), which depends on $(N,k)$ explicitly, the large $(N,k)$ limit of the ${\cal N}=2$ ${\cal W}_{N+1}$ algebra contains the particular operator product expansion given in (\ref{final}). We have identified this with the corresponding ${\cal N}=2$ classical ${\cal W}_{\infty}^{\rm{cl}}[\lambda]$ algebra in the bulk. $\bullet$ It is an immediate question to ask how one obtains the higher spin currents, including (\ref{n2terms}) and (\ref{superspin2}), for the ${\cal N}=2$ ${\cal W}_{N+1}$ algebra. From the structure of (\ref{superspin2}), one can try to write down the correct ansatz for the possible terms (one might add a few extra terms which are not present for the $N=2$ or $N=4$ case) and then apply the two conditions 1) and 2) in (\ref{KWvanishing}) and (\ref{TW}). It is nontrivial to find the identities for the multiple products between the structure constants in the complex basis and to collect the independent fields in each singular term. These are necessary to check the right singular structures. $\bullet$ It would be interesting to obtain the quantum ${\cal N}=2$ ${\cal W}_{\infty}^{\rm{qu}}[\lambda]$ algebra, which is a deformation of the classical ${\cal N}=2$ ${\cal W}_{\infty}^{\rm{cl}}[\lambda]$ algebra. On the KS model side, once we complete all the operator product expansions at least for the ${\cal N}=2$ ${\cal W}_5$ algebra, this algebra should provide all the information on the quantum ${\cal N}=2$ ${\cal W}_{\infty}^{\rm{qu}}[\lambda]$ algebra, along the line of \cite{GG1}. The ${\cal W}_{\infty}^{\rm{cl}}[\lambda]$ for the bosonic case was found in \cite{GH, GHJ} and the corresponding quantum algebra has been studied in \cite{GG1}. See also \cite{BBSS1,Ahn2011}. We expect that there are extra linear terms for the nonlinear composite currents in (\ref{final}). From the observation of \cite{BW1}, as we take $c \rightarrow \infty$ in the quantum operator product expansion, any composite field (a product of $n$ fields) where the power of $c$ in the denominator is greater than $(n-1)$ will disappear in the classical limit.
For example, the standard spin $3$-spin $3$ operator product expansion has the nonlinear term $\Lambda(w) \equiv TT(w) -\frac{3}{10} \partial^2 T(w)$ with coefficient function $\frac{32}{22+5c}$ in the $\frac{1}{(z-w)^2}$ term, as well as a $\frac{3}{10} \partial^2 T(w)$ term \cite{BS}. In the $c \rightarrow \infty$ limit, the $\partial^2 T(w)$ term in $\Lambda(w)$ vanishes while the $T T(w)$ term survives. In the quantum theory, the extra term like $\partial^2 T(w)$ in $\Lambda(w)$ exists. On the other hand, it is an open problem to obtain the bosonic subalgebra (how one gets the bosonic $W_5$ algebra) or the ${\cal N}=1$ subalgebra of the algebra we have described, along the line of \cite{Romans}. $\bullet$ According to the classification of the KS models \cite{KSNPB}, there also exists the following coset model \begin{eqnarray} \frac{SO(N+2)}{SO(N) \times SO(2)}, \qquad c(N,k) =\frac{3Nk}{N+k}. \nonumber \end{eqnarray} It would be interesting to find the higher spin currents for this model and see how they arise as an ${\cal N}=2$ nonlinear algebra. Once we construct the complex basis for the group $SO(N+2)$, a current algebra similar to (\ref{basicOPE}) should exist. Only the structure constants and the dual Coxeter number change. Then the standard Sugawara construction can follow similarly, along the line of \cite{Ahn2012}. See also the relevant works in \cite{Ahn1106,GV}. $\bullet$ As pointed out in \cite{HGPR}, it would be interesting to construct the more supersymmetric higher spin $AdS_3$ supergravity dual to the ${\cal N}=4$ superconformal coset model that can be realized by the ${\cal N}=4$ current algebra for the supersymmetric WZW model. As a first step, one can use the previous work of \cite{RASS}, where the ${\cal N}=4$ superconformal algebra (the spins of all the currents are less than or equal to $2$: the spin $2$ current, four spin $\frac{3}{2}$ currents, seven spin $1$ currents and four spin $\frac{1}{2}$ currents) can be written in terms of the ${\cal N}=2$ affine Kac-Moody currents. It is an open problem to construct the higher spin currents with spin greater than $2$. In ${\cal N}=2$ superspace, one should have $T(Z)$ with spins $(1, \frac{3}{2}, \frac{3}{2}, 2)$ and $W(Z)$ with spins $(2, \frac{5}{2}, \frac{5}{2}, 3)$, as well as the extra primary currents. For the minimal extension of the ${\cal N}=2$ ${\cal W}_3$ algebra, the number of these extra currents is equal to $2$. One of them corresponds to the ${\cal N}=4$ partner of $T(Z)$ and the other corresponds to the ${\cal N}=4$ partner of $W(Z)$. The spins of the ${\cal N}=2$ multiplets can be either $(\frac{3}{2}, 2, 2, \frac{5}{2})$ or $({\frac{5}{2}, 3, 3, \frac{7}{2}})$. The former is preferable because the extension of the $W_3$ current (the last component of $W(Z)$) has its partner of spin $\frac{5}{2}$ in the context of \cite{ASS}. Note that the full ${\cal N}=4$ superconformal algebra is generated by the stress energy tensor $T(Z)$ with spins $(1, \frac{3}{2}, \frac{3}{2}, 2)$, two ${\cal N}=2$ currents with spins $(\frac{1}{2}, 1, 1, \frac{3}{2})$ and an ${\cal N}=2$ current with spins $(0, \frac{1}{2}, \frac{1}{2}, 1)$. $\bullet$ From the result of \cite{GH}, one expects that the linear structure in (\ref{final}) should realize the higher spin algebra in \cite{BVW1,BVW2}, although the explicit relations are not given in this paper. One cannot use their expressions directly because the currents or generators are not primary fields.
Thus, in order to compare with our results here, one should obtain the correct primary fields with respect to the stress energy tensor. Of course, the higher spin algebra is not a subalgebra of the ultimate quantum algebra, but it is a subalgebra in the $c \rightarrow \infty$ limit. In general, the ultimate quantum algebra does not contain the higher spin algebra as a subalgebra. $\bullet$ It is an open problem to reconsider the previous analysis in \cite{CHR}, in the large $(N,k)$ limit, along the line of \cite{GG1}. This can be done only after the ${\cal N}=2$ quantum ${\cal W}_{\infty}^{\rm{qu}}[\lambda]$ algebra is found. \vspace{.7cm} \centerline{\bf Acknowledgments} We would like to thank the following people for correspondence on the following topics: R. Gopakumar on the current status of the triality \cite{GG1}, Y. Hikida on the supersymmetric version of the higher spin algebra \cite{CHR}, S. Krivonos on his Mathematica package for ${\cal N}=2$ operator product expansions \cite{KT}, S. Odake on his paper \cite{Odake}, C. Peng on the asymptotic symmetry \cite{HP} and M. Vasiliev on the super $W_{\infty}(\lambda)$ algebra \cite{BVW1,BVW2}. This work was supported by the Mid-career Researcher Program through the National Research Foundation of Korea (NRF) grant funded by the Korean government (MEST) (No. 2009-0084601). \newpage
\section{Introduction} Blazars comprise an extreme subgroup of active galactic nuclei (AGNs), which consists of flat-spectrum radio quasars and BL Lac objects. They are variable on time scales ranging from less than a day to many years. This violent behavior of blazars is generally explained in terms of a relativistic jet oriented very close to our line of sight (Urry \& Padovani 1995). Intra-day variability (IDV) of AGN at centimeter wavelengths was discovered in the 1980s (Witzel et al. 1986; Heeschen et al. 1987). Today we know that IDV occurs in 25\% -- 50\% of the flat-spectrum radio sources (Quirrenbach et al. 1992; Lovell et al. 2008), and in 60\% of the bright Fermi blazars (Liu et al. 2012). Since its discovery, two main explanations for the very short time scale variability have been proposed. One is that the IDV is intrinsic to the sources, but this frequently leads to a very high brightness temperature of the emitting components (Qian et al. 1991) that far exceeds the inverse-Compton limit ($10^{12}$\,K, see Kellermann \& Pauliny-Toth 1969). Another explanation is that the IDV is caused by propagation effects, namely by interstellar scintillation (ISS) in our Galaxy (see Kedziora-Chudczer et al. 1997; Dennett-Thorpe \& de Bruyn 2000; Bignall et al. 2003). For the ``classical'' type-II IDV sources (variability time scales $<$~0.5--2 days), however, the origin of the variability is not completely understood (e.g., Fuhrmann et al. 2008). We have carried out a monitoring program for a sample of IDV sources from August 2005 to January 2010 with the Urumqi 25m radio telescope at 4.8~GHz. From the analyzed data, at least two IDV sources in the monitoring program have exhibited systematic changes of their variability time scales over the course of the year (Gab\'anyi et al. 2007; Marchili et al. 2012). This effect is known as the annual modulation of the time scales (e.g., Rickett et al. 2001); it is explained by assuming that the origin of the variability is interstellar scintillation. The scattering material is thought to be located in a thin plasma screen at a distance on the order of tens or hundreds of parsecs from the Earth. The orbital motion of the Earth around the Sun leads to changes in the relative velocity between the observer and the scattering screen --- the faster the Earth's movement with respect to the scattering screen, the shorter the variability time scale. Because the Earth's velocity follows a one-year periodic cycle, the relative velocity between the scattering screen and the observer should change accordingly, resulting in a seasonal cycle of the variability time scale. The principal aim of our monitoring program is to search for evidence of annual modulation in the time scales of type-II IDV sources. Among the main targets of our monitoring campaign is S5~0716+714. It is a BL Lac object; from optical imaging of the host galaxy, Nilsson et al. (2008) suggested a possible redshift of 0.31. S5~0716+714 is one of the most variable and compact blazars, showing multi-wavelength variability from radio to gamma rays (Raiteri et al. 2003; Abdo et al. 2010). VLA data show a halo-like jet (Antonucci et al. 1986; Wagner et al. 1996); VLBI images show a core-dominated jet pointing to the north that is misaligned with the VLA jets by $\sim90^\circ$ (e.g., Bach et al. 2005).
From multi-band long-term monitoring data, radio and optical light-curve behaviors appear to be quite different; only minor radio flux enhancements are found simultaneously with the major optical outbursts (Raiteri et al. 2003). On short time scales, strong intra-day variability is found in both the radio and optical bands. In 1990, the detection of simultaneous transitions from fast to slow variability modes among radio and optical wavelengths during a four-week monitoring campaign suggested a common, source-intrinsic origin of the variability (see Quirrenbach et al. 1991). Since then, several multi-frequency observing campaigns have been carried out for S5~0716+714; no further significant evidence in favor of a correlation between optical and radio variability has been found, as discussed, e.g., in Fuhrmann et al. (2008). These authors hypothesized that both source-extrinsic and source-intrinsic mechanisms contribute to the IDV of the source, and that the importance of the two contributions may depend on the source opacity. In the source-extrinsic explanation, intra-day variations at frequencies of 3--8 GHz are attributed to ISS at the border between weak and strong ISS (Rickett 2007). In the regime of weak scattering, relevant for 0716+714 at $\ga$~5 GHz, the emitting components that are compact enough to show intrinsic variability on time scales of a day or less might also show ISS on similar time scales. However, to separate the source-intrinsic from the source-extrinsic contribution, it is necessary to carry out IDV monitoring programs that are sufficiently long to investigate the existence of possible annual modulation effects in the time scales of the variability. \section{Observation and data reduction} The IDV observations were carried out with the Urumqi 25m radio telescope, 3--5 days per month when possible, from Aug. 2005 to Jan. 2010, with a central frequency of 4800~MHz and a bandwidth of 600~MHz; see Sun et al. (2007) for a description of the observing system. All observations were performed in `cross-scan' mode; each scan consists of eight sub-scans in azimuth and elevation over the source position. This enabled us to check the pointing offsets in both coordinates. After applying a correction for pointing offsets, we corrected the measurements for the elevation-dependent antenna gain and the remaining systematic time-dependent effects by using several steep-spectrum and non-variable secondary calibrators. Finally, we converted our measurements to absolute flux density using the assumed flux densities of the frequently observed primary calibrators (3C286, 3C48 and NGC7027). The complete data calibration procedure guarantees a high level of accuracy, on the order of 0.5\%, in normal weather conditions. Following the scheme of Kraus et al. (2003), several quantities were used to evaluate the significance and amplitude of the variability, namely the reduced chi-square test, the rms flux density over the mean flux density (the so-called modulation index, $m$), and the 3$\sigma$ relative variability amplitude $Y$, corrected for noise bias and defined as $Y=3\sqrt{m^{2}-m_{0}^{2}}$, where $m_{0}$ is the mean modulation index of all calibrators, describing the statistical measurement accuracy during the observation. In Table~\ref{tab1}, we list the observational information and the results of the observations, in which the time scales are obtained from a structure function (SF) analysis as introduced in the next section.
The columns give: (1) the epoch; (2) the day of year (DoY); (3) and (4) the duration of the observation and the effective number of data points; (5) and (6) the SF time scale and its relative error; (7), (8) and (9) the modulation index of the calibrators, the modulation index of 0716+714, and the relative variability amplitude of the source; (10) the source's average flux density and the rms variation of the flux density; (11) the reduced $\chi_r^2$. \begin{table*} \caption[]{The observational information and the results derived from the 4.8~GHz observations.} $$ \begin{tabular}{cccccccccccc} \hline \hline \noalign{\smallskip} 1&2 &3 &4 & 5&6 &7 & 8&9 &10 &11 \\ Start Day & DoY & dur& NP& $t_{SF}$ & error & $m_{0}$ & $m$ & $Y$ & $\overline{S}_{4.8GHz}\pm rms$ & $\chi_r^2$ \\ & & (d)& &(d)& & [\%] & [\%] & [\%] & (Jy) & \\ \hline \noalign{\smallskip} 14.08.2005 & 228 & 2.9 & 24 & 0.7 & 0.3 & 0.8 & 3.3 & 9.6 & 0.880$\pm$0.029 & 7.75 \\ 27.12.2005 & 363 & 3.7 & 55 & 0.6 & 0.2 & 1.2 & 5.3 & 15.6 & 0.823$\pm$0.044 & 23.21 \\ 15.03.2006 & 76 & 3.0 & 50 & 0.8 & 0.2 & 0.5 & 1.9 & 5.4 & 0.638$\pm$0.012 & 2.72 \\ 28.04.2006 & 119 & 3.9 & 70 & 1.1 & 0.2 & 0.5 & 2.0 & 5.9 & 0.644$\pm$0.013 & 4.20 \\ 10.06.2006 & 162 & 3.2 & 89 & 1.3 & 0.3 & 0.5 & 4.9 & 14.6 & 0.735$\pm$0.036 & 24.32 \\ 14.07.2006 & 198 & 4.0 & 87 & 0.5 & 0.2 & 0.6 & 2.7 & 7.8 & 0.748$\pm$0.020 & 6.05 \\ 19.08.2006 & 235 & 2.3 & 67 & 0.9 & 0.3 & 0.5 & 3.0 & 8.8 & 0.845$\pm$0.025 & 13.40 \\ 23.09.2006 & 269 & 5.0 & 141 & 2.1 & 0.3 & 0.5 & 1.8 & 5.3 & 0.814$\pm$0.015 & 4.08 \\ 17.11.2006 & 324 & 4.9 & 133 & 1.1 & 0.3 & 0.5 & 3.2 & 9.6 & 0.745$\pm$0.024 & 9.32 \\ 18.12.2006 & 354 & 2.5 & 77 & 0.6 & 0.2 & 0.5 & 2.0 & 5.8 & 0.702$\pm$0.014 & 3.65 \\ 25.01.2007 & 26 & 2.3 & 66 & 1.0 & 0.2 & 0.4 & 2.2 & 6.4 & 0.786$\pm$0.017 & 6.30 \\ 12.02.2007 & 45 & 4.0 & 109 & 0.6 & 0.3 & 0.4 & 2.3 & 6.6 & 0.755$\pm$0.017 & 5.38 \\ 24.03.2007 & 85 & 2.8 & 72 & 1.3 & 0.3 & 0.5 & 2.2 & 6.4 & 0.735$\pm$0.016 & 5.45 \\ 20.04.2007 & 113 & 3.6 & 78 & 0.8 & 0.3 & 0.6 & 4.3 & 12.8 & 0.743$\pm$0.032 & 15.20 \\ 15.06.2007 & 168 & 2.4 & 58 & 0.6 & 0.3 & 0.6 & 2.3 & 6.6 & 0.834$\pm$0.019 & 4.85 \\ 19.07.2007 & 202 & 2.9 & 69 & 1.1 & 0.2 & 0.6 & 4.7 & 14.0 & 0.772$\pm$0.036 & 19.76 \\ 18.08.2007 & 232 & 3.1 & 72 & 1.2 & 0.3 & 0.6 & 4.1 & 12.2 & 0.779$\pm$0.032 & 14.30 \\ 13.10.2007 & 288 & 3.0 & 65 & 1.3 & 0.3 & 0.4 & 2.6 & 7.7 & 0.806$\pm$0.021 & 8.32 \\ 21.12.2007 & 357 & 3.2 & 80 & 1.2 & 0.3 & 0.4 & 2.9 & 8.6 & 0.690$\pm$0.020 & 10.24 \\ 25.02.2008 & 57 & 2.9 & 59 & 0.5 & 0.2 & 0.6 & 2.1 & 6.0 & 0.818$\pm$0.017 & 4.68 \\ 21.03.2008 & 82 & 3.0 & 76 & 0.8 & 0.3 & 0.4 & 2.9 & 8.7 & 0.790$\pm$0.023 & 10.52 \\ 21.04.2008 & 113 & 3.1 & 70 & 0.8 & 0.3 & 0.5 & 3.5 & 10.4 & 0.858$\pm$0.030 & 13.06 \\ 21.06.2008 & 174 & 3.5 & 55 & 0.9 & 0.3 & 0.5 & 2.9 & 8.7 & 0.985$\pm$0.029 & 12.57 \\ 18.07.2008 & 202 & 4.8 & 55 & 0.8 & 0.2 & 0.6 & 2.5 & 7.3 & 1.272$\pm$0.032 & 4.75 \\ 20.08.2008 & 235 & 5.0 & 72 & 0.4 & 0.2 & 0.6 & 2.0 & 5.7 & 1.308$\pm$0.026 & 4.56 \\ 12.09.2008 & 258 & 3.6 & 85 & 1.3 & 0.3 & 0.4 & 2.4 & 7.2 & 1.148$\pm$0.028 & 7.04 \\ 06.11.2008 & 313 & 3.6 & 55 & 0.6 & 0.3 & 0.6 & 3.0 & 8.8 & 1.368$\pm$0.041 & 6.10 \\ 22.12.2008 & 358 & 2.3 & 57 & 1.1 & 0.3 & 0.4 & 1.8 & 5.1 & 1.369$\pm$0.024 & 3.04 \\ 11.01.2009 & 12 & 2.6 & 69 & 0.8 & 0.3 & 0.4 & 1.6 & 4.7 & 1.607$\pm$0.026 & 2.51 \\ 23.02.2009 & 55 & 3.0 & 153 & 0.5 & 0.2 & 0.4 & 1.2 & 3.2 & 1.473$\pm$0.017 & 2.23 \\ 21.03.2009 & 82 & 4.9 & 124 & 0.9 & 0.3 & 0.5 & 1.8 & 5.1 & 1.243$\pm$0.022 & 3.38 \\ 19.04.2009 & 112 & 5.4 & 94
& 2.0 & 0.4 & 0.6 & 3.7 & 10.9 & 1.362$\pm$0.050 & 14.02 \\ 06.05.2009 & 128 & 3.9 & 90 & 1.4 & 0.3 & 0.7 & 3.4 & 10.1 & 1.137$\pm$0.039 & 8.37 \\ 25.06.2009 & 177 & 2.6 & 52 & 1.0 & 0.3 & 0.6 & 3.5 & 10.4 & 0.938$\pm$0.033 & 12.27 \\ 21.08.2009 & 235 & 4.1 & 94 & 1.7 & 0.4 & 0.5 & 2.3 & 6.6 & 0.932$\pm$0.021 & 4.96 \\ 22.09.2009 & 268 & 5.5 & 131 & 3.5 & 0.8 & 0.6 & 2.0 & 5.6 & 0.816$\pm$0.016 & 2.86 \\ 09.10.2009 & 283 & 2.3 & 58 & 0.6 & 0.4 & 0.4 & 1.1 & 3.0 & 1.009$\pm$0.011 & 2.02 \\ 22.11.2009 & 328 & 3.8 & 76 & 1.3 & 0.3 & 0.7 & 3.0 & 8.6 & 1.288$\pm$0.038 & 5.30 \\ 11.12.2009 & 347 & 4.4 & 118 & 0.7 & 0.3 & 0.5 & 2.0 & 5.9 & 1.471$\pm$0.030 & 3.57 \\ 19.01.2010 & 21 & 3.5 & 67 & 1.4 & 0.3 & 0.6 & 1.8 & 5.1 & 1.328$\pm$0.024 & 2.60 \\ \noalign{\smallskip} \hline \end{tabular}{} $$ \label{tab1} \end{table*} \begin{figure} \includegraphics[width=8.5cm]{Fig.1.eps} \caption{Annual modulation plot of 0716+714. The green line shows the annual modulation pattern that best fits the time scales.} \label{fig1} \end{figure} \begin{figure} \includegraphics[width=8.5cm]{Fig.2.eps} \caption{Peak flux density (per beam) and the position angle of the VLBA `core' at 15~GHz versus epoch of observation.} \label{fig2} \end{figure} \begin{figure} \includegraphics[width=7.5cm]{Fig.3.eps} \caption{Total flux density versus epoch of observation at 4.8~GHz .} \label{fig3} \end{figure} \begin{figure} \includegraphics[width=7.5cm]{Fig.4.eps} \caption{IDV time scale versus total flux density S.} \label{fig4} \end{figure} \begin{figure} \includegraphics[width=7.5cm]{Fig.5.eps} \caption{Total flux density and the IDV time scale versus epoch of observation.} \label{fig5} \end{figure} \begin{figure} \includegraphics[width=7.5cm]{Fig.6.eps} \caption{rms flux density variation of S versus average S.} \label{fig6} \end{figure} \begin{figure} \includegraphics[width=7.5cm]{Fig.7.eps} \caption{25-day-bin-averaged rms flux density variation of S versus day of year.} \label{fig7} \end{figure} \section{Variability analysis and discussion} From the results of the IDV observations in Table~\ref{tab1}, and according to a $\chi^{2}$ test, 0716+714 exhibits prominent IDV in all observing sessions at a confidence level of $ \geq 99.9$\,\%. Here, as a criterion for the source variability, the hypothesis of a constant function is examined; the datasets with a probability to be constant $\leq$\,0.1\% are considered to be variable. To analyze the variability time scales, we used the standard structure function method (SF), i.e. a first-order structure function analysis (Simonetti et al. 1985). Above the noise level, ideally, the SF rises monotonically with a power law shape and reaches its maximum at a `saturation' level. The intersection of the power law fit with the plateau corresponding to this saturation level defines the characteristic variability time scale. In fact, the plateau is often not well pronounced, but it can be estimated by the mean of the SF around the first maximum. The errors of the power law fit to the SF have also to be taken into account. Depending on the uncertainties in the evaluation of both the SF saturation level and the power law fit, the estimated characteristic time scale changes. The error on the estimation of the time scale can therefore be obtained by taking into account the formal errors of the power-law fit and the fit to the SF plateau. Sometimes, the structure function may show more than one plateau, indicating the existence of multiple variability time scales. 
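As an aside, the first-order SF estimate described above is straightforward to compute. The following is a minimal sketch (our illustration, not the actual pipeline used for Table~\ref{tab1}); it assumes arrays \texttt{t} (observation times in days) and \texttt{s} (flux densities in Jy) for one observing session, and evenly gridded trial lags:
\begin{verbatim}
import numpy as np

def structure_function(t, s, lags, half_width):
    """First-order structure function D(tau) = <[S(t+tau) - S(t)]^2>,
    estimated by averaging over all sample pairs whose separation
    falls within +/- half_width of each trial lag tau (days)."""
    dt = np.abs(t[:, None] - t[None, :])   # all pairwise time separations
    ds2 = (s[:, None] - s[None, :]) ** 2   # squared flux differences
    d = np.full(len(lags), np.nan)
    for i, tau in enumerate(lags):
        sel = (np.abs(dt - tau) < half_width) & (dt > 0)
        if sel.any():
            d[i] = ds2[sel].mean()
    return d

# Usage on one session (hypothetical arrays t, s):
# sf = structure_function(t, s, np.arange(0.1, 3.0, 0.1), half_width=0.05)
# The time scale is read off where the power-law rise meets the plateau.
\end{verbatim}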
In the cases with multiple plateaus, we identified the characteristic time scale with the shortest one (Marchili et al. 2012). In Fig.~\ref{fig1} we plot the variability time scales versus the day of year (the so-called annual modulation plot) for all observing sessions. To investigate the possible existence of an annual modulation in the time scales of 0716+714, we fitted the time scales according to the model described in Qian \& Zhang (2001), updated to take into account the case of anisotropic scattering (see, e.g., Bignall et al. 2006; Gab\'anyi et al. 2007). Anisotropic scattering can be caused either by an elongation of the scintles (i.e., patches of focused or defocused light across which the Earth is moving) in a given direction, or by an anisotropy of the emitting component (e.g., Gab\'anyi et al. 2009). According to our model (for more details of the ISS model-fit code, see Marchili et al. 2012), the variability time scale is expressed as a function of the day of year, and depends on the orientation of the elliptical scintillation pattern (a unit vector ${\bf s}=(\cos\theta, \sin\theta)$), the relative velocity between the scattering screen and the Earth, ${\bf v}(\mathrm{DoY})={\bf v}_{\mathrm{ISS}}-{\bf v}_{\oplus}(\mathrm{DoY})$, the distance to the screen, D, and the anisotropy factor, r:
\begin{equation}
t(\mathrm{DoY}) \propto \frac{\mathrm{D}\cdot \sqrt{\mathrm{r}}}{\sqrt{{\mathrm{v}^2(\mathrm{DoY})+(\mathrm{r}^2-1)\,({\bf v(\mathrm{DoY}) \times s})^2}}}.
\end{equation}
The algorithm for the least-squares fitting of the time scales uses five free parameters: the relative velocity ${\bf v}$, projected onto the right ascension and declination coordinates (which allows one to fit the screen velocity ($\bf v_{\mathrm{ISS},\alpha}$ and $\bf v_{\mathrm{ISS},\delta}$) relative to the Local Standard of Rest, since the Earth's velocity with respect to the LSR is known), the distance to the screen, the anisotropy degree, and the anisotropy angle $\theta$ (measured from east through north), which enters through the vector product ${\bf v(\mathrm{DoY}) \times s}$. The parameters that best fit the time scales of 0716+714 are reported in Table~\ref{tab2}. The result of the best-fit anisotropic screen model is shown in Fig.~\ref{fig1}; there is significant evidence in favor of an annual modulation of the time scales, which exhibit a remarkable slow-down peaking around DoY 270 and a secondary peak around DoY 100. The screen appears to be slightly anisotropic, with an anisotropy ratio of about 1.7 and an anisotropy angle of about 80 degrees. Following the usual convention of radio image analysis, the position angle of the anisotropy is $90\degr-\theta=10\degr$ from north to east; this is roughly consistent with the inner-jet position angle, which ranges from a few to about 35 degrees in the VLBI images of 0716+714 (see Britzen et al. 2009). Therefore the anisotropy might also be caused by an anisotropy of the emitting component in 0716+714. However, the data do not allow us to unambiguously determine whether the anisotropy originates from the intrinsic source structure or from the scattering screen. According to our model, the variability is associated with an interstellar cloud between the Earth and 0716+714 at a distance of 230 pc, whose characteristics could be investigated in the future, as was done for the screens of some other sources showing ISS-induced variability (Linsky, Rickett \& Redfield 2008).
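To make Eq.~(1) concrete, a minimal least-squares sketch of the anisotropic model is given below. This is our illustration, not the Marchili et al. (2012) code: the helper \texttt{earth\_velocity} is hypothetical, and a real analysis would compute the Earth's LSR velocity, projected at the source position, from an ephemeris; the parameter \texttt{scale} absorbs the screen distance D and the proportionality constant of Eq.~(1).
\begin{verbatim}
import numpy as np
from scipy.optimize import least_squares

def earth_velocity(doy):
    """Hypothetical helper: Earth's velocity w.r.t. the LSR projected
    onto (RA, Dec) at the source position, in km/s.  Here a crude
    circular approximation, for illustration only."""
    phase = 2.0 * np.pi * (doy - 80.0) / 365.25
    return 29.8 * np.cos(phase), 29.8 * np.sin(phase)

def model_timescale(doy, v_a, v_d, scale, r, theta_deg):
    """Eq. (1): t ~ D sqrt(r) / sqrt(v^2 + (r^2 - 1) (v x s)^2)."""
    ve_a, ve_d = earth_velocity(np.asarray(doy, float))
    va, vd = v_a - ve_a, v_d - ve_d           # screen minus Earth velocity
    th = np.radians(theta_deg)
    cross = va * np.sin(th) - vd * np.cos(th) # 2-D cross product v x s
    return scale * np.sqrt(r) / np.sqrt(va**2 + vd**2
                                        + (r**2 - 1.0) * cross**2)

def residuals(p, doy, t_obs, t_err):
    return (model_timescale(doy, *p) - t_obs) / t_err

# With doy, t_obs, t_err taken e.g. from columns (2), (5), (6):
# fit = least_squares(residuals, x0=[1.0, 10.0, 50.0, 1.5, 80.0],
#                     args=(doy, t_obs, t_err))
\end{verbatim}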
Anisotropic ISS models have been applied to several other IDV sources showing an annual modulation pattern, e.g., J1819+3845 (Dennett-Thorpe \& de Bruyn 2003), PKS 1257$-$326 (Bignall et al. 2006), J1128+592 (Gab\'anyi et al. 2009), PKS 1519$-$273 and PKS 1622$-$253 (Carter et al. 2009), and S4 0954+65 (Marchili et al. 2012). Detecting an annual modulation of the variability time scales in IDV sources (in particular in the slower type-II sources) is challenging: a large number of long (several-day) IDV observations spanning years has to be performed. For our project, the observations were often not evenly and densely distributed in time; the data from different years have to be folded onto day of year (DoY), and only after several years of observations were we able to detect and fit an anisotropic seasonal cycle, as shown in Fig.~\ref{fig1}. The modeling assumes, however, that the ISS scattering screen is stable over several years. This assumption is not necessarily true, as shown, e.g., in the case of J1819+3845, where the scattering medium responsible for the strong and rapid IDV of the source has moved away, leading to a significant decrease of the variability (Macquart \& de Bruyn 2007; Koay et al. 2011). Changes in the scattering screen over the years, for instance changes of the turbulent patches in the ISM (e.g. changes in the scattering measure, distance and/or anisotropy), will mostly result in changes of the scintillation strength, but may also affect and reduce the significance of the time-scale fitting with an ISS model. This would explain the relatively high $\chi_{r}^{2}$ values frequently found for the best model fits of several IDV sources, such as J1819+3845 ($\chi_{r}^{2}$=1.5; Dennett-Thorpe \& de Bruyn 2003), PKS 1257$-$326 ($\chi_{r}^{2}$=1.97; Bignall et al. 2006), PKS 1519$-$273 and PKS 1622$-$253 ($\chi_{r}^{2}$=0.8 and 2.1, respectively; Carter et al. 2009), and J1128+592 ($\chi_{r}^{2}$=3.0; Gab\'anyi et al. 2007). For 0716+714, we found a $\chi_{r}^{2}$ of 2.5, which is comparable with the results reported for the sources mentioned above. On the other hand, considering that the position angle of the 0716+714 jet is close to the anisotropy angle derived from our ISS model fit, it is plausible that the anisotropic scattering is caused by a source-intrinsic anisotropy. Because the inner-jet position angle oscillates from $\sim10{\degr}$ to $\sim35{\degr}$ following the 5.7$\pm$0.5-year long-term variability cycle in the total flux density of 0716+714 (Raiteri et al. 2003; Fan et al. 2007; Britzen et al. 2009), the anisotropy angle may vary accordingly. This would also contribute to an increase of the $\chi_{r}^{2}$ of the annual modulation fit. We have model-fitted the core (inner jet) in the 15 GHz MOJAVE images (Lister et al. 2009) of 0716+714 and obtained the evolution of the peak flux density (per beam) and of the position angle over 12 years; as shown in Fig.~\ref{fig2}, the position angle (PA) correlates positively with the peak flux density, resulting in a linear Pearson correlation coefficient of 0.44 (significance $4.6\times10^{-4}$). During our IDV observations in 2006--2009, the PA first decreased, then increased following the same trend as the flux density, and then decreased again, with a total PA variation of about 10$\degr$.
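A correlation test of this kind is a one-liner; the following sketch (our illustration, run here on synthetic stand-in arrays rather than the actual MOJAVE model-fit values) shows how the coefficient and its two-sided significance are obtained:
\begin{verbatim}
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
peak = rng.uniform(0.5, 2.0, 60)                # stand-in peak flux (Jy/beam)
pa = 20.0 + 5.0 * peak + rng.normal(0, 3, 60)   # stand-in position angle (deg)

r, p = pearsonr(peak, pa)   # linear Pearson coefficient and significance
print(f"Pearson r = {r:.2f}, two-sided significance p = {p:.1e}")
\end{verbatim}
In the real analysis, \texttt{peak} and \texttt{pa} would be the epoch-by-epoch values plotted in Fig.~\ref{fig2}.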
If the scintillating component in the source is anisotropic and contributes to the anisotropy of the ISS scattering, the ISS model fit to the 4.5 years of collected IDV time scales should be considerably affected, with a significant increase of the $\chi_{r}^{2}$. The inner-jet PA evolution would influence the IDV time scales year by year; to investigate this effect, more densely sampled IDV observations and careful year-by-year anisotropic modeling would be necessary. In our case, the IDV time-scale data in each single year are still too sparse to model a change of the anisotropic scattering pattern induced by the source's PA evolution. In Table~\ref{tab2}, the screen velocity from our model is much lower than the Earth's orbital velocity with respect to the LSR, indicating that the Sun's motion plays the main role in the variation of the time scales. For the position of 0716+714 on the sky, the time-scale peak falls around DoY 270 (Fig.~\ref{fig1}), as expected, supporting the hypothesis that ISS is the dominant contribution to the IDV of 0716+714. During the 4.5 years of monitoring, the flux density of 0716+714 also showed strong variations on time scales of months (Fig.~\ref{fig3}); a flare appears from mid-2008 to mid-2009 with peak-to-trough variations on the order of 100\%, and a second flare occurs late in 2009. This long-term flux density variation should have a source-intrinsic origin. The IDV time scales during the flaring state could be prolonged due to, e.g., an enlargement of the scintillating component. However, in Table~\ref{tab1} and Fig.~\ref{fig4} we see that the IDV time scales in the flaring state (total flux density $>$ 1 Jy) are in general not very different from those estimated during the relatively `quiescent' state ($<$ 1 Jy), and no correlation is found between the IDV time scale and the total flux density, implying that the source flares do not seriously affect the variability time scales of 0716+714. The total flux density and the IDV time scale are plotted versus the observing epoch in Fig.~\ref{fig5}. There is no correlation between the two; however, we cannot completely rule out that the slower time scales observed in 2009 are somehow related to the 2008 flare, taking into account a time delay of about one year. For the inner-jet kinematics of 0716+714, Britzen et al. (2009) proposed a model in which the VLBI components of 0716+714 are stationary with respect to the core, while the inner components oscillate in PA. In this model, the flare in flux density is caused purely by a geometric beaming effect: the inner components of 0716+714 do not change physically from a quiescent state (less beaming) to a flaring state (strong beaming), whereas the PA changes considerably; as a result, the projected size and/or anisotropy of the scintillating component along our line of sight must change over the years. It is hypothesized that the inner-jet PA evolution of 0716+714 follows a $\sim$5.7-year modulation, which would affect the annual modulation of IDV time scales from year to year; future densely sampled IDV observations, e.g. weekly, might be able to detect such an effect. In ISS-induced variability, the root mean square (rms) of the flux density is expected to increase linearly with the average flux density (Narayan 1992). Plotting the two parameters against each other (Fig.~\ref{fig6}) indicates a weak positive correlation between them, with a linear Pearson correlation coefficient of 0.36 (significance of 0.02).
This weak correlation may be explained either by an increase of the contribution of instrumental noise to the overall variability at the lowest flux densities, or by a dependence of the rms on the variability time scale: when the time scale becomes longer than the observation duration, part of the variability may fall outside the observing window, leading to a decrease of the rms. If this is the case, we should see a decrease of the rms around the time of the year when the variability is slowest. Plotting the rms flux density (in bins of 25 days) versus the day of year (Fig.~\ref{fig7}), we find a trough around DoY 275 and a secondary trough around DoY 70. The result suggests that the IDV amplitude is low at the slowest time scales observed in the annual modulation plot (Fig.~\ref{fig1}); the trough of the rms flux density around DoY 70 coincides only poorly with the secondary time-scale peak around DoY 100, but the difference is moderate.
\begin{table}
\caption[]{The best-fit screen parameters from the IDV time scales of 0716+714 at 4.8~GHz.}
$$
\begin{tabular}{ccccc}
\hline \noalign{\smallskip}
$v_{ISS,\alpha}$ & $v_{ISS,\delta}$ & Screen & Anisotropy & Anisotropy \\
to LSR & to LSR & distance & degree & angle \\
(km/s) & (km/s) & D (kpc) & r (ratio) & $\theta$ (E $\rightarrow$ N) \\
\noalign{\smallskip} \hline \noalign{\smallskip}
$1\pm4$ & $10\pm5$ & $0.23\pm0.05$ & $1.7\pm0.4$ & $80\degr\pm15\degr$ \\
\noalign{\smallskip} \hline
\end{tabular}
$$
\label{tab2}
\end{table}
\section{Summary}
We have carried out monthly IDV observations of the blazar 0716+714 over 4.5 years with the Urumqi 25\,m radio telescope at 4.8~GHz; the source has shown prominent IDV as well as long-term flux density variations. With the structure function analysis we found that the IDV time scale shows evidence in favor of a seasonal cycle, a result which suggests that the IDV of 0716+714 is caused by interstellar scintillation. The source underwent a strong outburst between mid-2008 and mid-2009, and a second intense flare was observed in late 2009, but no correlation between the total flux density and the IDV time scale is found, implying that the flaring state of the source does not have serious consequences for the general characteristics of its intra-day variability. However, we know that the inner-jet position angle changes over the years, which could result in a significant variation of the annual modulation pattern with time, and therefore in a decrease of the significance of the anisotropic ISS model fit to the IDV time scales. We also found indications that the lowest IDV amplitudes (rms flux density) correspond to the slowest variability time scales, which we were able to explain reasonably well within the ISS model.
\begin{acknowledgements}
We thank the anonymous referee for valuable comments, which have improved the paper. This research has made use of data from the MOJAVE database that is maintained by the MOJAVE team (Lister et al., 2009, AJ, 137, 3718). This work is supported by the National Natural Science Foundation of China (Grant No.~11073036) and the 973 Program of China (2009CB824800). N.M. is funded by an ASI fellowship under contract number I/005/11/0.
\end{acknowledgements}
\section{Introduction}
$\mathrm{Ge_{2}Sb_{2}Te_{5}}$ (GST) chalcogenides are of technological importance due to their applications in rewritable data-storage devices such as compact disks (CDs) and digital versatile disks (DVDs). The data-storage process is based on a fast and reversible phase transformation between a crystalline and an amorphous phase, leading to changes in electrical conductivity and optical reflectivity~\cite{Ovshinsky,Libera,Yamada_MRS}. In particular, the phase change takes place in a relatively low temperature range \cite{Yamada_1991,Friedrich_2000,Lee}, making GST feasible for phase-change random access memory (PCRAM). A deeper physical understanding of both amorphous and crystalline GST is therefore needed, since it could guide further improvements. The crystalline phase of GST occurs in two states, namely a metastable phase and a stable phase. Based on high-resolution electron microscopy analysis, the metastable phase has been proposed to crystallize in the rocksalt-type structure (Fm$\overline{3}$m), in which Te atoms fully occupy the 4(a) sites while Ge and Sb atoms and intrinsic vacancies randomly occupy the 4(b) sites \cite{Park_JAP_2005}. The stable phase, on the other hand, crystallizes in a hexagonal structure, for which three different atomic arrangements have been proposed. I. I. Petrov \textit{et al.} \cite{Petrov} first studied the structure by transmission electron microscopy and reported that GST has a hexagonal structure with space group P$\overline{3}$m1 and lattice constants a = 4.20 \AA\ and c = 16.96 \AA; Te atoms occupy the 1(a), 2(d), and 2(d) sites, while Sb and Ge atoms occupy the 2(d) and 2(c) sites, respectively. Later, B. J. Kooi \textit{et al.} \cite{Kooi} argued instead that Ge atoms occupy the 2(d) sites and Sb atoms the 2(c) sites. T. Matsunaga \textit{et al.} \cite{Matsunaga} further investigated the structure by x-ray diffraction and found different results: according to them, GST crystallizes in the hexagonal structure with space group P$\overline{3}$m1 and lattice parameters a = 4.2247 \AA\ and c = 17.2391 \AA, with Ge and Sb atoms randomly occupying the 2(d) and 2(c) sites. A complete description of the atomic arrangement of the stable GST phase has thus remained elusive. Experimentally, B. S. Lee and co-workers \cite{Lee} and J. W. Park \textit{et al.} \cite{Park} studied the electronic and optical properties of the stable GST phase. There are also theoretical investigations of the stable phase: Z. Sun \textit{et al.} \cite{Sun_2006} carried out first-principles electronic structure calculations based on density functional theory (DFT) to compare the three proposed models, and concluded that the configuration proposed by Kooi \textit{et al.} \cite{Kooi} is the most stable one. The hybrid functional of Heyd, Scuseria and Ernzerhof, commonly referred to as HSE06 \cite{Heyd}, has been shown to give improved structural parameters for a number of systems as compared to the local density approximation (LDA) and the generalized gradient approximation (GGA) \cite{Paier,Marsman}. In addition, it provides band gaps close to experimental values, if slightly lower in most cases \cite{Paier,Marsman}. Up to now, however, no theoretical studies have investigated the structural and optical properties of GST by using hybrid density functionals.
In this work, we therefore perform calculations to address the structural and electronic properties of stable GST using the GGA and the hybrid density functional HSE06. The optimized structural parameters and electronic structures are presented, and the optical properties of this compound are also investigated.
\section{Methods/Computational details}
\textit{Ab-initio} total energy calculations based on density functional theory (DFT) \cite{Kohn} and the all-electron projector-augmented wave method \cite{Blochl} have been performed using the VIENNA AB INITIO SIMULATION PACKAGE (VASP) \cite{Kresse94,Kresse99}. The atomic structures were constructed according to the experimental data provided in refs. \cite{Petrov,Kooi,Matsunaga}. The generalized gradient approximation of Perdew, Burke and Ernzerhof (PBE) \cite{PBE} was employed as the exchange-correlation functional. 14 electrons ($3d^{10}4s^{2}4p^{2}$) of Ge, 5 electrons ($5s^{2}5p^{3}$) of Sb, and 6 electrons ($5s^{2}5p^{4}$) of Te were treated as valence electrons in the pseudopotentials. A plane-wave cutoff energy of 800 eV and an 8x8x2 k-point mesh for the Brillouin-zone integration were used, since they provide sufficient convergence of the total energy in the structural optimization. A denser k-point mesh of 16x16x8 was adopted for calculating the density of states (DOS) and the dielectric functions. Calculations using the hybrid density functional HSE06 \cite{Heyd} were also carried out for comparison. In this case, the exchange-correlation functional is a rational mixing of the Fock exchange, the PBE exchange and the PBE correlation:
\begin{equation}
E^{\mathrm{HSE}}_{xc} = \frac{1}{4}\, E^{\mathrm{HF,SR}}_{x}(\mu) + \frac{3}{4}\, E^{\mathrm{PBE,SR}}_{x}(\mu) + E^{\mathrm{PBE,LR}}_{x}(\mu) + E^{\mathrm{PBE}}_{c}.
\end{equation}
The PBE exchange term is decomposed into two parts, namely a short-range (SR) and a long-range (LR) part, while the correlation part is entirely from PBE. The parameter $\mu$ sets the range separation beyond which the short-range term becomes negligible; it is $0.207$~\AA$^{-1}$ for HSE06. The detailed mathematical derivations and tests of the HSE06 functional are given in ref. \cite{Heyd}. In order to obtain satisfactory results within reasonable computing time, a lower cutoff energy and k-point mesh, 600 eV and 4x4x2, respectively, were used for the HSE06 structural optimization, with a denser 8x8x2 k-point mesh for calculating the DOS and the dielectric functions. The conjugate-gradient scheme was used as the electronic relaxation algorithm in all structural optimizations. The volume, cell shape, and atomic positions were fully optimized, and relaxations were continued until the Hellmann-Feynman forces on the atoms were less than $0.005$ eV/\AA. In addition, the electronic charge density calculated from the optimized atomic structure was used to compute the electronic charge partitioned on each atom by means of a grid-based Bader charge analysis \cite{Tang,Sanville}; the zero-flux surface of the electronic charge density determines the amount of charge assigned to each atom.
\section{Results and discussion}
We start our calculations by optimizing the atomic structures of the stable phase of GST proposed in references \cite{Petrov,Kooi,Matsunaga}; they are labeled A (I. I. Petrov \textit{et al.} \cite{Petrov}), B (B. J. Kooi \textit{et al.} \cite{Kooi}), and C (T. Matsunaga \textit{et al.} \cite{Matsunaga}), respectively. The equilibrium geometries obtained with the PBE and HSE06 functionals are shown in Fig.~\ref{structure}.
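As a concrete illustration of the relaxation protocol described above, a minimal script of the kind one might use is sketched below. This is our illustration, not the authors' actual input: it assumes the ASE package with its VASP interface, a working VASP installation, and a hypothetical structure file \texttt{GST\_stacking\_A.cif}.
\begin{verbatim}
from ase.io import read
from ase.calculators.vasp import Vasp

# Hypothetical 9-atom hexagonal cell built from the experimental
# stacking sequence of model A (Petrov et al.).
atoms = read("GST_stacking_A.cif")

atoms.calc = Vasp(
    xc="pbe",         # PBE exchange-correlation functional
    encut=800,        # plane-wave cutoff in eV
    kpts=(8, 8, 2),   # Brillouin-zone sampling
    ibrion=2,         # conjugate-gradient ionic relaxation
    isif=3,           # relax positions, cell shape and volume
    ediffg=-0.005,    # stop when forces drop below 5 meV/Angstrom
)

energy = atoms.get_potential_energy()  # triggers the relaxation run
print("Relaxed total energy:", energy, "eV")
\end{verbatim}
The HSE06 runs would analogously use the reduced 600 eV cutoff and the 4x4x2 mesh, with the hybrid functional selected where the interface supports it.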
From Fig.~\ref{structure} it can be seen that the unit cell (1 formula unit) consists of 9 atoms (2 Ge, 2 Sb and 5 Te). Stable GST is a layered structure in which Ge, Sb, and Te atoms are stacked along the $c$-axis (the [0001] direction). The optimized lattice parameters are listed in Table~\ref{tab1}. The PBE functional clearly overestimates the lattice parameters, with a maximum difference of 2\% with respect to the reported experimental values. This comes from the well-known tendency of GGA functionals to overestimate lattice constants. Our calculations are, however, in good agreement with previously reported calculations, as also indicated in Table~\ref{tab1}. Using the HSE06 functional, the lattice constant $a$ is improved and closer to the experimental values, but the lattice constant $c$ is overestimated. This could be related to the fact that stable GST is layered along the $c$-axis: the interactions between adjacent layers are presumably weak, and the hybrid functional may fail to describe such weak interactions. Having established the structural parameters, we proceed to the corresponding electronic structures of these stable phases of $\mathrm{Ge_{2}Sb_{2}Te_{5}}$. The densities of states (DOS) calculated with the PBE and HSE06 functionals are shown in Fig.~\ref{DOS}. PBE and HSE06 give very similar DOS: Te-derived states dominate the top of the valence band, while Ge, Sb, and Te states share the bottom of the conduction band. However, there is a strong hybridization of Sb and Te states in the conduction band around 0.5-1.0 eV for A and C, whereas for B the Ge, Sb, and Te states hybridize almost equally. This can be explained by the different atomic arrangements: in A, the Sb atoms on the 2(d) sites are surrounded by Te atoms from the 1(a), 2(d), and 2(d) sites, whereas in B the Sb atoms on the 2(c) sites are surrounded by Te atoms from the 1(a) sites. In the HSE06 calculations the conduction-band states have very similar shapes but are pushed upward, resulting in larger band gaps, as indicated in Table~\ref{table:2}. The PBE functional predicts band gaps of 0.00 eV, 0.24 eV, and 0.22 eV for A, B, and C, respectively, while stable GST has been reported to have a band gap in the range 0.50-0.57 eV \cite{Park,Lee}. The band gaps are quantitatively underestimated because of the well-known deficiency of (semi)local DFT in describing excitations. The HSE06 hybrid functional, by contrast, gives band gaps closer to the experimental values: phase C has a band gap of 0.48 eV, in good agreement with experiment, phase B a slightly lower gap of 0.37 eV, and phase A the smallest band gap of 0.26 eV. In Fig.~\ref{Dielectric} we show the real and imaginary parts of the dielectric functions of $\mathrm{Ge_{2}Sb_{2}Te_{5}}$: the left panels show the dielectric functions calculated with the PBE functional, the right panels those calculated with the hybrid density functional. For the PBE functional, A and C have nearly identical dielectric functions: they have the same zero-energy value of the real part, and the main peaks of the imaginary part are located between 1.5-2.0 eV, in good agreement with the results reported in ref. \cite{Park}. The main peak of the imaginary part for B is slightly lower than for A and C. Experimentally, the main peak of the imaginary part of the dielectric function has been reported at 1.5 eV \cite{Park}.
For the HSE06 functional, A, B, and C have slightly different dielectric functions in terms of the locations of the main peaks of the imaginary part, the zero-energy value of the real part, and the amplitudes. We have also calculated the electronic charge distribution using the Bader charge analysis \cite{Tang,Sanville} and list the results in Table~\ref{table:3}. As mentioned earlier, Ge, Sb, and Te have 14, 5, and 6 valence electrons, respectively. In a purely ionic model, Ge and Sb would lose 4 and 3 electrons, respectively, and each Te would gain $\approx$ 3 electrons. In our case, however, the situation is more complex: in phase A, for instance, Ge and Sb lose only 0.31 and 0.6 electrons, respectively, which are gained by Te. These results reflect the complex bonding in this material and reveal the importance of a quantitative analysis over a simple ionic model. In the three proposed stable phases, the charge distribution is almost the same; although the values obtained for phase B slightly favor charge transfer from Ge and Sb to Te, the numbers are not fundamentally different from those calculated for the A and C structures.
\section{Conclusions}
In summary, we have performed a comparative study of the structural, electronic and optical properties of the stable structures of $\mathrm{Ge_{2}Sb_{2}Te_{5}}$ with the GGA and the hybrid density functional HSE06. We have shown that the structural parameters and the electronic band gap calculated with the hybrid functional are in better agreement with the available experimental results than those calculated with the PBE functional, although HSE06 slightly overestimates the $c$ lattice parameter and the optical response of this compound. We have also analyzed the charge distribution between the constituent elements using Bader's theory of atoms in molecules, and we find that, due to the complex bonding of this compound, a simple ionic model fails to describe it. Overall, the hybrid density functional HSE06 is important for the correct description of GST, and especially for reproducing the electronic structure of this compound. Finally, all the calculated parameters of the stable phase B of $\mathrm{Ge_{2}Sb_{2}Te_{5}}$ are closer to the available experimental data than those of the stable phases A and C.

We would like to acknowledge VR and FORMAS for financial support. T.K.\ would also like to acknowledge the Royal Thai Government for financial support. M. Ramzan acknowledges the Higher Education Commission of Pakistan. SNIC and SNAC have provided computing time for this project.
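As a small numerical aside (our illustration, using only the phase-A Bader values quoted in the text), the charge-transfer bookkeeping behind the comparison with the ionic model can be made explicit:
\begin{verbatim}
# Charge-transfer tally for phase A from the Bader populations quoted
# in the text: each Ge loses 0.31 e and each Sb loses 0.60 e; the mean
# gain per Te atom follows from charge conservation in the 2-2-5 cell.
counts = {"Ge": 2, "Sb": 2, "Te": 5}
loss_per_atom = {"Ge": 0.31, "Sb": 0.60}

transferred = sum(counts[el] * q for el, q in loss_per_atom.items())
gain_per_te = transferred / counts["Te"]

# A purely ionic picture (Ge -> 4 e, Sb -> 3 e) would instead give
# (2*4 + 2*3) / 5 = 2.8 e per Te atom.
print(f"Bader: each Te gains {gain_per_te:.2f} e (ionic model: 2.80 e)")
\end{verbatim}
The order-of-magnitude gap between the resulting 0.36 e and the ionic 2.8 e per Te atom is the quantitative content of the statement that the simple ionic model fails for this compound.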
\section{Introduction and statement of the results}
\setcounter{equation}{0}\setcounter{theorem}{0}
A major mathematical problem in ${\mathcal P}{\mathcal T}$-symmetric quantum mechanics (see e.g. \cite{Be3}, \cite{Sp}, \cite{JPhysA}-\cite{Pramana} for recent reviews) is to determine whether or not the spectrum of a ${\mathcal P}{\mathcal T}$-symmetric Schr\"odinger operator is real ({\it proper} ${\mathcal P}{\mathcal T}$ symmetry \cite{BeBo}). This is the case, of course, if the given ${\mathcal P}{\mathcal T}$-symmetric operator can be conjugated to a self-adjoint one through a similarity transformation. The possibility of such a similarity has been extensively studied (in addition to the relevant references in \cite{Be3}, \cite{Sp}, \cite{JPhysA}-\cite{Pramana}, see also \cite{Mo1}, \cite{Mo2}, \cite{Mo3} for its examination in an abstract setting). Quite recently, a complete characterization has been obtained of the ${\mathcal P}{\mathcal T}$-symmetric quadratic Schr\"odinger operators similar to a self-adjoint one \cite{CGHS}. We address in this paper the problem of constructing such a similarity transformation with the techniques of the Quantum Normal Form (QNF) (see e.g. \cite{Sj}, \cite{BGP}), and provide a class of ${\mathcal P}{\mathcal T}$-symmetric operators for which the procedure works. Namely: the QNF of the given ${\mathcal P}{\mathcal T}$-symmetric Schr\"odinger operator is real and convergent, uniformly with respect to $\hbar\in [0,1]$. The convergence of the QNF not only provides the similarity with a self-adjoint operator, but has the following straightforward consequences:
\begin{itemize}
\item[1)] It yields an {\it exact} quantization formula for the eigenvalues;
\item[2)] Since the QNF reduces to the classical normal form (CNF) for $\hbar=0$, the CNF is convergent as well, and the corresponding classical system is therefore integrable.
\end{itemize}
Not surprisingly, we are able to prove a result so much stronger than simple similarity with a self-adjoint operator only for a very restricted class of operators, namely a class of holomorphic, ${\mathcal P}{\mathcal T}$-symmetric perturbations of the quantization of the linear diophantine flow over the torus $\T^l$. Consider indeed a classical Hamiltonian family, defined in the phase space ${\mathcal R}^l\times \T^l$, $l=1,2,\dots$, expressed in the action-angle variables $(\xi,x)$, $\xi\in{\mathcal R}^l$, $x\in\T^l$:
\begin{equation}
\label{Ham}
{\mathcal H}_\varepsilon(\xi,x)={\mathcal L}_\omega(\xi)+\varepsilon {\mathcal V}(\xi,x), \quad \varepsilon\in{\mathcal R},
\end{equation}
where ${\mathcal L}_\omega(\xi):=\langle\omega,\xi\rangle$, $\omega:=(\omega_1,\ldots,\omega_l)\in{\mathcal R}^l$, is the Hamiltonian generating the linear quasi-periodic flow $x_i\mapsto x_i+\omega_i t,\;\forall i=1,\dots,l,$ with frequencies $\omega_i$ over $\T^l$, and ${\mathcal V}$ is an a priori complex-valued holomorphic function of $(\xi,x)$, assumed to be ${\mathcal P}{\mathcal T}$-symmetric. Namely, if ${\mathcal P}:\; x\to -x$ denotes the parity operation, i.e. $({\mathcal P} f)(\xi,x)=f(\xi,-x),\;\forall f\in L^2({\mathcal R}^l\times \T^l)$, and ${\mathcal T}: f\to \overline{f}$ the complex conjugation in $L^2({\mathcal R}^l\times \T^l)$, then
$$
(({\mathcal P}{\mathcal T}){\mathcal V})(\xi,x):=\overline{{\mathcal V}}(\xi,-x)={{\mathcal V}}(\xi,x),\quad\forall (\xi,x)\in {\mathcal R}^l\times \T^l.
$$
Writing ${\mathcal V}$ through its uniformly convergent Fourier expansion:
\begin{equation}
\label{FE}
{\mathcal V}(\xi,x)=\sum_{q\in\Z^l}\,{\mathcal V}_q(\xi)e^{i\langle q,x\rangle};\qquad {\mathcal V}_q(\xi)=(2\pi)^{-l/2}\int_{\T^l}\,{\mathcal V}(\xi,x)e^{-i\langle q,x\rangle}\,dx
\end{equation}
the equivalent formulation of the ${\mathcal P}{\mathcal T}$ symmetry in terms of the Fourier coefficients is immediately seen:
\begin{equation}
\label{FCPT}
{{\mathcal V}}_{q}(\xi)=\overline{{\mathcal V}_q(\xi)}, \qquad \forall\,(\xi,q)\in{\mathcal R}^l\times\Z^l.
\end{equation}
Moreover we assume that
\begin{equation}\label{EvenOdd}
{{\mathcal V}}_{-q}(\xi)=-{\mathcal V}_q(\xi);\qquad {\mathcal V}_q(-\xi)={\mathcal V}_q(\xi), \qquad \forall\,(\xi,q)\in{\mathcal R}^l\times\Z^l,
\end{equation}
which ensures that the potential ${\mathcal V}(\xi,x)$ is even in the variable $\xi$ and odd in the variable $x$:
$$
{\mathcal V}(-\xi,x) = {\mathcal V}(\xi,x),\qquad {\mathcal V}(\xi,-x) = -{\mathcal V}(\xi,x),\qquad \forall\,(\xi,x)\in{\mathcal R}^l\times\T^l.
$$
We denote by $V$ the operator in $L^2(\T^l)$ generated by the Weyl quantization of the symbol ${\mathcal V}$ (see Appendix A.2), namely the operator acting on $L^2(\T^l)$ in the following way:
\begin{equation}
\label{V}
(V f)(x):= \int_{{\mathcal R}^l}\sum_{q\in\Z^l}\widehat{{\mathcal V}}_q(p)e^{i(\langle q,x\rangle+\langle p,q\rangle\hbar/2 )}f(x+p\hbar)\,dp, \quad \forall f\in L^2(\T^l),
\end{equation}
where
$$
\widehat{{\mathcal V}}_q(p):=(2\pi)^{-l/2}\int_{{\mathcal R}^l}\,{{\mathcal V}}_q(\xi)e^{-i\langle p,\xi\rangle }\,d\xi
$$
\vskip 4pt\noindent
is the Fourier transform of the Fourier coefficient ${{\mathcal V}}_q(\xi)$. Then the quantization of ${\mathcal H}_\varepsilon$ is the ${\mathcal P}{\mathcal T}$-symmetric (verification below), non self-adjoint operator in $L^2(\T^l)$ acting as
\begin{equation}
\label{op}
H(\omega,\varepsilon)= i\hbar \langle\omega,\nabla\rangle +\varepsilon V = L(\omega,\hbar)+\varepsilon V,\quad L(\omega,\hbar):=i\hbar \langle\omega,\nabla\rangle.
\end{equation}
The Schr\"odinger operator $H(\omega,\varepsilon)$ thus represents a perturbation of the self-adjoint operator $L(\omega,\hbar)$ in $L^2(\T^l)$, whose spectrum obviously consists of the eigenvalues $\lambda_{n,\omega}=\hbar\langle\omega,n\rangle$, $n=(n_1,\ldots,n_l)\in\Z^l$, with corresponding normalized eigenfunctions $\phi_n(x)=(2\pi)^{-l/2}e^{i\langle n,x\rangle}$.
\begin{remark} \label{R1}
{\rm By the assumptions to be specified below, $V$ will represent a regular perturbation of $L(\omega,\hbar)$. However, the spectrum of $L(\omega,\hbar)$, although pure point, is {\it dense} in ${\mathcal R}$. Therefore the standard (Rayleigh-Schr\"odinger) perturbation theory of quantum mechanics {\it cannot} be applied here, because no eigenvalue is isolated; the approach through the Normal Form is therefore {necessary}, insofar as it represents an alternative method which serves the purpose.}
\end{remark}
The statement of the result will gain in clarity by first sketching the construction of the quantum normal form (QNF) (see e.g. \cite{Sj}, \cite{BGP}, and in this particular context \cite{GP}).
Its purpose in this connection is to construct a similarity transformation $U(\varepsilon)$ in $L^2(\T^l)$, generated by a continuous operator $W(\varepsilon)$, $\displaystyle U(\varepsilon)=e^{iW(\varepsilon)/\hbar}$, such that
\begin{equation}
\label{sa}
U(\varepsilon)H(\omega,\varepsilon)U(\varepsilon)^{-1}=e^{iW(\varepsilon)/\hbar}(L(\omega,\hbar)+\varepsilon V)e^{-iW(\varepsilon)/\hbar}=S(\varepsilon)
\end{equation}
where the similar operator $S(\varepsilon)$ is self-adjoint. The procedure goes as follows:
\begin{enumerate}
\item Look for that particular similarity transformation $\displaystyle U(\varepsilon)=e^{iW(\varepsilon)/\hbar}$ such that the transformed operator $S(\varepsilon)$ assumes the form
\begin{equation}
\label{serie1}
S(\varepsilon)= L(\omega,\hbar) + \sum_{k=1}^{\infty}\varepsilon^k B_k(\hbar)
\end{equation}
under the additional conditions
\begin{equation}
\label{diagsa}
[B_k,L] = 0,\;\qquad B_k=B_k^\ast, \qquad \forall k=1,2,\ldots,
\end{equation}
where $B_k:=B_k(\hbar),\;\forall k$, and $L:=L(\hbar,\omega)$. If it can be proved that the series \eqref{serie1} (under the additional conditions \eqref{diagsa}) has a positive convergence radius $\varepsilon^\ast$, then obviously $S(\varepsilon)$ is self-adjoint for $|\varepsilon|<\varepsilon^\ast$, so that its spectrum is real; moreover, $S(\varepsilon)$ is diagonal on the eigenvector basis of $L(\hbar,\omega)$. The series \eqref{serie1}, assuming the validity of conditions \eqref{diagsa}, is called the {\it operator quantum normal form (O-QNF)}.
\item To determine the O-QNF we first construct the QNF {\it for the symbols} (S-QNF). That is, we first construct, for any $k=1,2,\ldots$, the symbol ${\mathcal B}_k(\xi;x;\hbar)$ of the self-adjoint operator $B_k$. The symbol ${\mathcal B}_k$ turns out to be a function only of $\xi$ (depending parametrically on $\hbar$), so that the application of the Weyl quantization formula (see Appendix A.2) specifies the action of $B_k$:
$$
B_k f={\mathcal B}_k (i\hbar\langle \omega,\nabla\rangle )f={\mathcal B}_k(L_\omega)f, \qquad \forall f\in L^2(\T^l),\quad L_\omega:=L=L(\hbar,\omega).
$$
Hence $[B_k,L_\omega]=0,\,\forall k$, and the eigenvalues of $B_k$ are simply ${\mathcal B}_k(n\hbar,\hbar)$, $n\in\Z^l$. Then the symbol of $S(\varepsilon)$ is
$$
\Sigma(\xi,\varepsilon,\hbar)={\mathcal L}_\omega(\xi)+\sum_{k=1}^\infty{\mathcal B}_k(\xi,\hbar)\varepsilon^k
$$
provided the series has a non-zero convergence radius. In that case the eigenvalues of $S(\varepsilon)$, and hence of $H(\omega,\varepsilon)$, are clearly given by the following {\it exact} quantization formula:
\begin{equation}
\label{EQF}
\lambda_n(\varepsilon,\hbar)=\langle\omega,n\rangle\hbar+\sum_{k=1}^\infty{\mathcal B}_k(n\hbar,\hbar)\varepsilon^k,
\end{equation}
that is, by the symbol $\Sigma(\xi,\varepsilon,\hbar)$ evaluated at the {\it quantized} values $n\hbar$ of the classical actions $\xi\in{\mathcal R}^l$. Moreover, the spectrum of $S(\varepsilon)$, i.e. of $H(\omega,\varepsilon)$, is real if $S(\varepsilon)$ is self-adjoint, namely if $B_k$ is self-adjoint $\,\forall\,k=1,\ldots$; again by the Weyl quantization formula (Appendix A.2), this is true if ${\mathcal B}_k(\xi;\hbar)$ is real and bounded $\,\forall\,k=1, 2,\ldots$.
\item By construction, each coefficient ${\mathcal B}_k(\xi,\hbar)$, $k=1,\ldots$, of the S-QNF turns out to be a smooth function of $\hbar$ near $\hbar=0$, and ${\mathcal B}_k(\xi,0):={\mathcal B}_k(\xi)$ is just the $k$-th term of the classical normal form generated by canonical perturbation theory applied to the classical Hamiltonian ${\mathcal H}_\varepsilon(\xi,x)$. More precisely:
\begin{equation}
\label{can}
{\mathcal H}_\varepsilon(\xi,x)\sim {\mathcal L}_\omega(\xi)+\sum_{k=1}^\infty\,{\mathcal B}_k( \xi)\varepsilon^k
\end{equation}
where $\sim$ denotes canonical equivalence. Therefore, if the convergence of the S-QNF is {\it uniform} with respect to $\hbar\in [0,1]$, the CNF \eqref{can} is also convergent, and the classical Hamiltonian ${\mathcal H}_\varepsilon(\xi,x)$ is therefore integrable, because the equivalent Hamiltonian depends only on the actions.
\end{enumerate}
We can now proceed to the precise statement of the results. First we describe the assumptions. Consider again the operator
\begin{eqnarray*}
&& L(\omega,\hbar)\psi = i\hbar \langle\omega,\nabla\rangle\psi =i\hbar\left[\omega_1\frac{\partial}{\partial x_1}+\ldots+\omega_l\frac{\partial}{\partial x_l}\right]\psi , \quad \forall\psi\in D(L_\omega)=H^1(\T^l);
\\
&& H^1(\T^l):=\{\psi=\sum_{n\in\Z^l}\,\psi_ne^{i\langle n,x\rangle}\in L^2(\T^l)\,:\,\sum_{n\in\Z^l}\,|n|^2\,|\psi_n|^2 <+\infty\}
\end{eqnarray*}
The first assumption is:
\par\noindent
(A1) {\it The frequencies $\omega=(\omega_1,\ldots,\omega_l)$ are} diophantine, {\it i.e. $\exists\gamma>0,\; \tau>l$ such that:}
\begin{equation}
\label{DC}
|\langle\omega,q\rangle|^{-1}\leq \gamma |q|^{\tau}, \quad q \in\Z^l, \; q\neq 0.
\end{equation}
Remark that \eqref{DC} entails that all the eigenvalues $\lambda_{n,\omega}=\langle n,\omega\rangle\hbar$ of $L(\omega,\hbar)$ are simple. Let now $(t,x)\mapsto {\mathcal V}(t,x)$ be a complex-valued smooth function defined on ${\mathcal R}\times\T^l$, i.e. ${\mathcal V}\in C^{\infty}({\mathcal R}\times\T^l;\Bbb C)$. Write its Fourier expansion:
\begin{equation}
\label{FV}
{\mathcal V}(t,x)=\sum_{q\in\Z^l}\,{\mathcal V}_{q}(t)e^{i\langle q,x\rangle}, \quad {\mathcal V}_{q}(t):=(2\pi)^{-l/2}\int_{\T^l}{\mathcal V}(t,x)e^{-i\langle q,x\rangle}\,dx
\end{equation}
and define the functions ${\mathcal V}_\omega(\xi,x):{\mathcal R}^l\times\T^l\to\Bbb C$ in the following way:
\begin{equation}
\label{Vom}
{\mathcal V}_\omega(\xi,x):={\mathcal V}(\langle\omega,\xi\rangle,x)=\sum_{q\in\Z^l}\,{\mathcal V}_{\omega,q}(\xi)e^{i\langle q,x\rangle}, \qquad {\mathcal V}_{\omega,q}(\xi):={\mathcal V}_q(\langle\omega,\xi\rangle).
\end{equation}
Now consider the space Fourier transform of ${\mathcal V}_{q}(t)$, $q\in\Z^l$:
$$
\widehat{{\mathcal V}}_q(p):=\frac{1}{\sqrt{2\pi}}\displaystyle\int_{{\mathcal R}}\,{\mathcal V}_q(t)e^{-ipt}\,dt,\quad p\in{\mathcal R}.
$$
Then (see formula (\ref{(A1)})) the Weyl quantization of ${\mathcal V}_\omega(\xi,x)$ is the operator in $L^2(\T^l)$ acting as follows:
$$
(V_\omega f)(x)=\int_{{\mathcal R}}\sum_{q\in\Z^l}\widehat{{\mathcal V}}_q(p) e^{i(\langle q,x\rangle+\hbar p\langle \omega,q\rangle/2)}f(x+\hbar p\omega)\,dp, \quad f\in L^2(\T^l).
$$
$V_\omega$ is actually a continuous operator in $L^2(\T^l)$ (see Appendix, Remark \ref{Ra1}(d)) by virtue of our second assumption, namely:
\vskip 5pt\noindent
(A2) {\it Let the diophantine constants $\gamma$ and $\tau$ be such that}
$$
\gamma\tau^\tau(\tau+2)^{4(\tau+2)}<\frac12
$$
{\it and let there exist $\rho>2$ such that}
\begin{equation}
\label{normarho}
\|{\mathcal V}_\omega\|_{\rho}:=\sum_{q\in\Z^l}\,e^{\rho |q|} \int_{{\mathcal R}}e^{\rho |p|}|\widehat{{\mathcal V}}_q(p)|\,dp<+\infty.
\end{equation}
\vskip 4pt\noindent
\begin{remark}\label{R2} {}
\par\noindent
\begin{itemize}
{\rm
\item[(i)] Actually, by formula (A.6), $\|V_\omega\|_{L^2\to L^2}\leq \|{\mathcal V}_\omega\|_{\rho}$. Moreover, assumption (A2) makes ${\mathcal V}_\omega$ a holomorphic function of $(\xi,x)$ in $\Bbb C_{\rho}^{2l}:=\{(\xi,x)\in\Bbb C^{2l}\,:\,|{\rm Im}\,\xi_i |<\rho; \;|{\rm Im}\, x_i |<\rho,\;\forall i=1,\dots,l\}$.
\item[(ii)] As discussed in \cite{GP}, ${\mathcal V}(t,x)$ must depend explicitly on $t$ if $l>1$ to make the problem a nontrivial one. Once more by (A2), formula (\ref{normarho}), ${\mathcal V}(t,x)$ vanishes exponentially fast as $|t|\to\infty$, uniformly w.r.t. $x\in\T^l$.}
\end{itemize}
\end{remark}
Our third assumption concerns the ${\mathcal P}{\mathcal T}$-symmetry, and is formulated as follows (see (\ref{FCPT}) and (\ref{EvenOdd})):
\vskip 5pt\noindent
(A3) {\it The Fourier coefficients ${\mathcal V}_{\omega,q}(\xi)$ enjoy the following symmetry properties:}
\begin{equation}
\label{PTSP}
{\mathcal V}_{\omega,q}(\xi)=\overline{{\mathcal V}_{\omega,q}(\xi)};\quad{\mathcal V}_{\omega,-q}(\xi)=-{\mathcal V}_{\omega,q}(\xi);\quad {\mathcal V}_{\omega,q}(-\xi)={\mathcal V}_{\omega,q}(\xi),\quad \forall (\xi,q)\in{\mathcal R}^l\times\Z^l.
\end{equation}
\begin{remark}\label{R3}
{\rm Clearly (A3) entails ${\mathcal V}_\omega(\xi,-x)=-{\mathcal V}_\omega(\xi,x)$ and
$$
(({\mathcal P}{\mathcal T}){\mathcal V}_\omega)(\xi,x)=({\mathcal P}{\mathcal T})\displaystyle(\sum_{q\in\Z^l}{\mathcal V}_{\omega,q}(\xi)e^{i\langle q,x\rangle}\displaystyle)={\mathcal V}_\omega(\xi,x),\qquad \forall(\xi,x)\in{\mathcal R}^l\times\T^l,
$$
that is, ${\mathcal V}_\omega(\xi,x)$ is a ${\mathcal P}{\mathcal T}$-invariant function, odd with respect to $x$. Moreover, from (\ref{PTSP}) one easily obtains $\widehat{{\mathcal V}}_{\omega,q}(-p)=\widehat{{\mathcal V}}_{\omega,q}(p)\in{\mathcal R},\;\forall p\in{\mathcal R},\,\forall q\in\Z^l$. This entails that $V:=V_\omega$ is a ${\mathcal P}{\mathcal T}$-symmetric operator in $L^2(\T^l)$, i.e. $[V,{\mathcal P}{\mathcal T}]=0$. We have indeed
\begin{eqnarray*}
({\mathcal P}{\mathcal T})(Vf)(x) &=& \int_{{\mathcal R}}\sum_{q\in\Z^l}\widehat{{\mathcal V}}_{\omega,q}(p) e^{i\langle q,x\rangle-i\hbar p\langle \omega,q\rangle/2}\overline{f}(-x+\hbar p\omega)\,dp
\\
& =& \int_{{\mathcal R}}\sum_{q\in\Z^l}\widehat{{\mathcal V}}_{\omega,q}(p) e^{i(\langle q,x\rangle+\hbar p\langle \omega,q\rangle/2)}\overline{f}(-x-\hbar p\omega)\,dp
\\
&=&\displaystyle \int_{{\mathcal R}}\sum_{q\in\Z^l}\widehat{{\mathcal V}}_{\omega,q}(p) e^{i(\langle q,x\rangle+\hbar p\langle \omega,q\rangle/2)}({\mathcal P}{\mathcal T} f)(x+\hbar p\omega)\,dp
\\
&=&V({\mathcal P}{\mathcal T} f)(x)\,,\qquad \forall f\in L^2(\T^l),\;\forall x\in\T^l.
\end{eqnarray*}
\end{remark}
To sum up, the operator family acting as
$$
H(\varepsilon) = i\hbar \langle\omega,\nabla\rangle + \varepsilon V
$$
and defined on $D(H(\varepsilon))=H^1(\T^l)$ has pure-point spectrum, denoted $\sigma(H(\varepsilon))$, and we will prove that it consists of a sequence of non-isolated eigenvalues denoted $\{{\lambda}_n(\hbar,\varepsilon):\; n\in\Z^l\}$. The symbol of $H(\varepsilon)$ is the Hamiltonian family defined on ${\mathcal R}^l\times\T^l$:
$$
{\mathcal H}_\varepsilon(\xi,x)=\langle\omega,\xi\rangle + \varepsilon {\mathcal V}_\omega(\xi,x) = {\mathcal L}_\omega(\xi)+\varepsilon {\mathcal V}_\omega(\xi,x).
$$
\vskip 5pt
We can now state the main result of the paper.
\begin{theorem} \label{mainth}
Under Assumptions (A1-A3), there exists $\varepsilon_0>0$, independent of $\hbar\in [0,1]$, such that for $|\varepsilon|<\varepsilon_0$ the spectrum of $H(\varepsilon)$ is given by the exact quantization formula:
\begin{eqnarray}
\label{EQF1}
&& \lambda_n(\hbar,\varepsilon)=\langle\omega,n\rangle\hbar+{\mathcal B}(n\hbar,\hbar;\varepsilon), \quad n\in\Z^l
\\
&& \label{serieB}
{\mathcal B}(n\hbar,\hbar;\varepsilon):=\displaystyle\sum_{k=1}^\infty\,{\mathcal B}_k(n\hbar,\hbar)\varepsilon^k
\end{eqnarray}
where
\begin{enumerate}
\item ${\mathcal B}_k(\xi,\hbar)\in C^\infty({\mathcal R}^l\times [0,1])$ is real-valued, $k=1,2,\ldots\;$;
\item ${\mathcal B}_{2s+1}=0$, $s=0,1,\ldots\;$;
\item The series \eqref{serieB} converges uniformly with respect to $(\xi,\hbar)\in{\mathcal R}^l\times [0,1]$;
\item ${\mathcal B}_k(n\hbar,\hbar)$ is obtained from the Weyl quantization formula applied to ${\mathcal B}_k(\xi,\hbar)$, which is the symbol of the operator $B_k$, the term of order $k$ of the QNF.
\end{enumerate}
\end{theorem}
\begin{corollary}\label{C1}
Let $|\varepsilon|<\varepsilon_0$. Then the operator $H(\omega,\varepsilon)$ is similar to the self-adjoint operator
$$
S(\varepsilon):=L(\omega,\hbar)+\sum_{k=1}^\infty\,B_k(\hbar).
$$
\end{corollary}
\begin{remark}\label{R4}
{\rm The explicit construction of the bounded operator $W(\varepsilon)$ realizing the similarity $U=U(\omega, \varepsilon, \hbar)= e^{iW(\varepsilon)/\hbar}$ is described in the proof of Theorem \ref{mainth}.}
\end{remark}
A straightforward consequence of the uniformity (with respect to $\hbar\in [0,1]$) of the convergence of the QNF is a convergence result for the corresponding CNF, valid for a class of ${\mathcal P}{\mathcal T}$-symmetric, non-holomorphic perturbations of non-resonant harmonic oscillators. Consider indeed the inverse transformation into action-angle variables
$$
{\mathcal C}(\xi,x)=(\eta,y):= \left\{\begin{array}{c} \eta_i=-\sqrt{\xi_i}\sin x_i \\ \\ y_i=\sqrt{\xi_i}\cos x_i \end{array}\right.\quad i=1,\ldots,l
$$
It is defined only on ${\mathcal R}_+^l\times \T^l$ and does not preserve the regularity at the origin. On the other hand, ${\mathcal C}$ is an analytic, canonical map between ${\mathcal R}_+^l\times\T^l$ and ${\mathcal R}^{2l}\setminus\{0,0\}$.
\newline
Then
$$
({\mathcal H}_\varepsilon \circ {\mathcal C}^{-1})(\eta,y)= \sum_{s=1}^l\omega_s(\eta^2_s+y_s^2)+\varepsilon ({\mathcal V}\circ {\mathcal C}^{-1})(\eta,y)
:={\mathcal P}_0(\eta,y)+\varepsilon {\mathcal P}_1(\eta,y)
$$
where for $(\eta,y)\in{\mathcal R}^{2l}\setminus\{0,0\}$
$$
{\mathcal P}_1(\eta,y)=({\mathcal V}\circ {\mathcal C}^{-1})(\eta,y)={\mathcal P}_{1,R}(\eta,y)+{\mathcal P}_{1,I}(\eta,y),
$$
$$
{\mathcal P}_{1,R}(\eta,y)=\frac12\sum_{k\in\Z^l}({\rm Re}\,{{\mathcal V}}_k\circ {\mathcal C}^{-1})(\eta,y)\prod_{s=1}^l \left(\frac{\eta_s-iy_s}{\sqrt{\eta^2_s+y_s^2}}\right)^{k_s}
$$
$$
{\mathcal P}_{1,I}(\eta,y)=\frac12\sum_{k\in\Z^l} ({\rm Im}\,{{\mathcal V}}_k\circ {\mathcal C}^{-1})(\eta,y)\prod_{s=1}^l \left(\frac{\eta_s-iy_s}{\sqrt{\eta^2_s+y_s^2}}\right)^{k_s}
$$
\begin{corollary}\label{C2}
The Birkhoff normal form of ${\mathcal H}_\varepsilon$ is {real} and uniformly convergent on any compact subset of ${\mathcal R}^{2l}\setminus\{0,0\}$ if $|\varepsilon|<\varepsilon_0$. Hence the system is integrable.
\end{corollary}
\section{Proof of the results}
\setcounter{equation}{0}\setcounter{theorem}{0}
{\it Proof of Theorem \ref{mainth}.}
Under the present conditions, statements (3) and (4) are proved in \cite{GP}, as well as the smoothness of ${\mathcal B}_k(\xi,\hbar)$ asserted in (1). The assertions left to prove are therefore the reality statement (1), ${\mathcal B}_k(\xi,\hbar)=\overline{{\mathcal B}}_k(\xi,\hbar)$, $\forall\,(\xi,\hbar)\in{\mathcal R}^l\times [0,1]$, and the even nature of the QNF (2), ${\mathcal B}_{2s+1}=B_{2s+1}=0,\,\forall s=0, 1, \dots$. This requires a detailed examination of the structure of the QNF, whose construction we recall in Subsection 2.1. In Subsection 2.2 we describe the inductive argument proving the reality assertion, and the symmetry argument proving the vanishing of the odd terms.
\subsection{The Quantum Normal Form: the formal construction}
(We follow Sj\"ostrand \cite{Sj} and Bambusi-Graffi-Paul \cite{BGP}). Given $H(\varepsilon) = L(\omega,\hbar) + \varepsilon V$ in $L^2(\T^l)$, look for a similarity transformation $U=U(\omega,\varepsilon,\hbar)$, in general {non unitary} ($W(\varepsilon)\neq W(\varepsilon)^\ast$):
$$
U(\omega,\varepsilon,\hbar)=e^{i W(\varepsilon)/\hbar}: L^2(\T^l)\leftrightarrow L^2(\T^l)
$$
such that
\begin{equation}
\label{2}
S(\varepsilon):=UH(\varepsilon) U^{-1}=L(\omega,\hbar)+\varepsilon B_1+\varepsilon^2 B_2+\ldots = L(\omega,\hbar) + \sum_{k=1}^{\infty}B_k\varepsilon^k
\end{equation}
under the requirement:
$$
[B_k,L]=0, \qquad \forall k.
$$
Recall the formal commutator expansion
\begin{equation}\label{SH}
S(\varepsilon)=e^{i W(\varepsilon)/\hbar}H(\varepsilon) e^{-i W(\varepsilon)/\hbar}=\sum_{k=0}^\infty H_k
\end{equation}
$$
H_0:=H(\varepsilon),\quad H_k:=\frac{[W(\varepsilon),H_{k-1}]}{i\hbar k}, \qquad k\geq 1
$$
and look for $W(\varepsilon)$ in the form of a power series expansion in $\varepsilon$: \quad $W(\varepsilon)=\varepsilon W_1+\varepsilon^2W_2+\ldots.$
\newline
Then \eqref{2} becomes:
\begin{equation}\label{SB}
S(\varepsilon)=\sum_{k=0}^{\infty}\varepsilon^k B_k
\end{equation}
where
\begin{equation}\label{BW}
B_0=L(\omega,\hbar);\quad {B}_k:=\frac{[W_k,L]}{i\hbar}+V_k\,,\qquad k\geq 1,
\end{equation}
$V_1\equiv V$ and
\begin{eqnarray}\label{VW}
V_k &=&\sum_{r=2}^k\frac{1}{r!}\sum_{{j_1+\ldots+j_r=k}\atop {j_s\geq 1}}\frac{[W_{j_1},[W_{j_2},\ldots,[W_{j_r},L]\ldots]}{(i\hbar)^r}\nonumber\\
\\
&+& \sum_{r=1}^{k-1}\frac{1}{r!}\sum_{{j_1+\ldots+j_r=k-1}\atop {j_s\geq 1}}\frac{[W_{j_1},[W_{j_2},\ldots,[W_{j_r},V]\ldots]}{(i\hbar)^r}. \nonumber
\end{eqnarray}
$V_k$ depends on $W_1,\dots, W_{k-1}$, but not on $W_k$. Thus we get the recursive homological equations:
\begin{equation}
\label{3}
\frac{[W_k,L]}{i\hbar} +V_k=B_k, \qquad [L,B_k]=0.
\end{equation}
To solve \eqref{3} for the two unknowns $B_k, W_k$, we look for their symbols and then apply the Weyl quantization formula. First recall (see e.g. \cite{Fo} or \cite{Ro}) that the symbol of the commutator $[F,G]/i\hbar$ of two operators $F$ and $G$ is the {\it Moyal bracket} $\{{\mathcal F},{\mathcal G}\}_M$ of the symbols ${\mathcal F}={\mathcal F}(\xi,x,\hbar)$ of $F$ and ${\mathcal G}={\mathcal G}(\xi,x,\hbar)$ of $G$, where $\{{\mathcal F},{\mathcal G}\}_M$ is defined through its Fourier representation
\vskip 6pt\noindent
\begin{equation}\label{M1}
\{{\mathcal F},{\mathcal G}\}_M(\xi,x;\hbar) = \int_{{\mathcal R}^l}\sum_{q\in\Z^l}\widehat{(\{{\mathcal F},{\mathcal G}\}_M)}_q(p,\hbar) e^{i(\langle p,\xi\rangle + \langle q,x\rangle)}\,dp
\end{equation}
\vskip 4pt\noindent
and
\begin{equation}\label{M2}
\widehat{(\{{\mathcal F},{\mathcal G}\}_M)}_q(p,\hbar) = \frac{2}{\hbar} \int_{{\mathcal R}^l}\sum_{q'\in\Z^l}\widehat{{\mathcal F}}_{q-q'}(p-p',\hbar)\,\widehat{{\mathcal G}}_{q'}(p',\hbar)\,\sin\left[\frac{\hbar}{2}(\langle p',q\rangle-\langle p,q'\rangle)\right]\,dp'\,.
\end{equation}
Notice that $\{{\mathcal F},{\mathcal G}\}_M = -\{{\mathcal G},{\mathcal F}\}_M$.
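(As a side remark, added here for orientation and easily checked from \eqref{M2}: expanding the sine to first order in $\hbar$ gives
$$
\{{\mathcal F},{\mathcal G}\}_M=\{{\mathcal F},{\mathcal G}\}+O(\hbar^2), \qquad \{{\mathcal F},{\mathcal G}\}:=\langle\nabla_\xi{\mathcal F},\nabla_x{\mathcal G}\rangle-\langle\nabla_x{\mathcal F},\nabla_\xi{\mathcal G}\rangle,
$$
so that the Moyal bracket reduces to the Poisson bracket as $\hbar\to 0$. This is the mechanism by which the QNF constructed below reduces to the CNF at $\hbar=0$.)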
The above equations (\ref{SH})--(\ref{VW}) become, once written for the symbols:
\begin{equation}\label{SH1}
\Sigma(\varepsilon)=\sum_{k=0}^\infty {{\mathcal H}}_k
\end{equation}
$$
{{\mathcal H}}_0:={\mathcal L}_\omega+\varepsilon{\mathcal V},\quad {{\mathcal H}}_k:=\frac{\{{\mathcal W}(\varepsilon),{{\mathcal H}}_{k-1}\}_M}{ k}, \;k\geq 1,
$$
where ${\mathcal W}(\varepsilon)=\varepsilon {\mathcal W}_1+\varepsilon^2{\mathcal W}_2+\ldots$,
\begin{equation}\label{SB1}
\Sigma(\varepsilon)=\displaystyle\sum_{k=0}^{\infty}\varepsilon^k {\mathcal B}_k
\end{equation}
and
\begin{equation}\label{BW1}
{\mathcal B}_0={\mathcal L}_\omega=\langle\omega,\xi\rangle;\quad {\mathcal B}_k =\{{\mathcal W}_k,{{\mathcal L}_\omega} \}_M+{\mathcal V}_k,\; k\geq1,\quad {\mathcal V}_1\equiv {\mathcal V}
\end{equation}
\begin{eqnarray}\label{VW1}
{\mathcal V}_k &=& \sum_{r=2}^k\frac{1}{r!}\sum_{{j_1+\ldots+j_r=k}\atop {j_s\geq 1}}\{{\mathcal W}_{j_1},\{{\mathcal W}_{j_2},\ldots,\{{\mathcal W}_{j_r},{\mathcal L}_\omega\}_M\ldots\}_M\}_M
\\ \nonumber
&+&\sum_{r=1}^{k-1}\frac{1}{r!}\sum_{{j_1+\ldots+j_r=k-1}\atop {j_s\geq 1}}\{{\mathcal W}_{j_1},\{{\mathcal W}_{j_2},\ldots,\{{\mathcal W}_{j_r},{\mathcal V}\}_M\ldots\}_M\}_M , \quad k>1
\end{eqnarray}
Therefore the symbols ${\mathcal W}_k$ and ${\mathcal B}_k$ of $W_k$ and $B_k$ can be found recursively by solving the homological equation:
\begin{equation}
\label{5}
\{{\mathcal W}_k,{\mathcal L}_\omega\}_M +{\mathcal V}_k={\mathcal B}_k, \qquad k=1,\ldots
\end{equation}
under the condition:
\begin{equation}
\label{6}
\{{\mathcal L}_\omega,{\mathcal B}_k\}_M =0.
\end{equation}
Here
$$
{\mathcal W}_k={\mathcal W}_k(\xi,x;\hbar),\;{\mathcal V}_k={\mathcal V}_k(\xi,x;\hbar),\;{\mathcal B}_k={\mathcal B}_k(\xi,x;\hbar).
$$
Notice that, in view of Theorem \ref{Ta1} in the Appendix, \eqref{6} is immediately satisfied if ${\mathcal B}_k={\mathcal B}_k(\xi;\hbar)$ does not depend on $x$. Moreover, by Theorem \ref{Ta1}(2), since ${\mathcal L}_\omega={\mathcal L}_\omega(\xi)=\langle\omega,\xi\rangle$ is linear in $\xi$, we have
$$
\{{\mathcal W}_k,{\mathcal L}_\omega\}_M = \{{\mathcal W}_k,{\mathcal L}_\omega\} = -\langle\nabla_x{\mathcal W}_k,\omega\rangle
$$
and \eqref{5} becomes
\begin{equation}
\label{7}
-\langle\nabla_x{\mathcal W}_k(\xi,x),\omega\rangle + {\mathcal V}_k(\xi,x;\hbar) = {\mathcal B}_k(\xi;\hbar).
\end{equation}
Write now ${\mathcal W}_k(\xi,x;\hbar)$ and ${\mathcal V}_k(\xi,x;\hbar)$ under their Fourier series representations:
$$
{\mathcal W}_k(\xi,x;\hbar)=\sum_{q\in\Z^l}{\mathcal W}_{k,q}(\xi;\hbar)e^{i\langle q,x\rangle}, \qquad {\mathcal V}_k(\xi,x;\hbar)=\sum_{q\in\Z^l}{\mathcal V}_{k,q}(\xi;\hbar)e^{i\langle q,x\rangle}.
$$
Then \eqref{7} in turn becomes:
\begin{equation}\label{8}
-i\sum_{q\neq 0}\langle q,\omega\rangle{\mathcal W}_{k,q}(\xi;\hbar)e^{i\langle q,x\rangle} + \sum_{q\in\Z^l}{\mathcal V}_{k,q}(\xi;\hbar)e^{i\langle q,x\rangle} = {\mathcal B}_k(\xi;\hbar)
\end{equation}
whence, imposing the equality of the Fourier coefficients of both sides, we obtain the solutions
\begin{equation}
\label{sol}
{\mathcal B}_k(\xi,\hbar) = {\mathcal V}_{k,0}(\xi,\hbar), \qquad {\mathcal W}_{k,q}(\xi,\hbar) = \frac{{\mathcal V}_{k,q}(\xi,\hbar)}{i\langle q,\omega\rangle},\quad \forall q\neq 0.
\end{equation}
\subsection{Reality of ${\mathcal B}_k$: the inductive argument}
Denote now ${\mathcal V}_1\equiv {\mathcal V} = {\mathcal V}_\omega$.
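(An elementary illustration, which we add here for concreteness and which is easily checked against \eqref{sol}: take $l=1$ and the toy ${\mathcal P}{\mathcal T}$-symmetric potential ${\mathcal V}_\omega(\xi,x)=if(\xi)\sin x$ with $f$ real and even, so that ${\mathcal V}_{\omega,\pm 1}(\xi)=\pm f(\xi)/2$ satisfies (A3). The first step of the recursion then gives
$$
{\mathcal B}_1={\mathcal V}_{\omega,0}=0,\qquad {\mathcal W}_{1,\pm 1}(\xi)=\frac{\pm f(\xi)/2}{\pm i\omega}=-\frac{if(\xi)}{2\omega},\qquad {\mathcal W}_1(\xi,x)=-\frac{if(\xi)}{\omega}\cos x,
$$
in agreement both with the purely imaginary character of the coefficients ${\mathcal W}_{1,q}$ established below and with the vanishing of the odd terms ${\mathcal B}_{2s+1}$.)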
Since ${\mathcal V}_{\omega,q}(\xi)$ is real $\forall q\in\Z^l$ by assumption, we have $$ {\mathcal B}_1(\xi,\hbar) = {\mathcal V}_{\omega,0}(\xi)\in{\mathcal R} $$ and \begin{equation}\label{solution} {\mathcal W}_{1,q}(\xi,\hbar) = \frac{{\mathcal V}_{\omega,q}(\xi)}{i\langle q,\omega\rangle}\in i{\mathcal R}, \quad\forall q\neq 0. \end{equation} Moreover, since no requirement is imposed on ${\mathcal W}_{1,0}$, we can choose ${\mathcal W}_{1,0}=0$. Now assume inductively: \newline (${\bf A_1}$) $ {\mathcal V}_{j,q}(\xi,\hbar)\in{\mathcal R},\quad\forall j=1,\dots,k-1,\; \forall q\in\Z^l; $ \newline ${\bf (A_2)}$ we can choose ${\mathcal W}_{j,0}=0,\;\forall j=1,\dots,k-1.$ \newline Remark that (${\bf A_1}$) entails \begin{equation} \label{ReIm} {\mathcal W}_{j,q}(\xi,\hbar) = \frac{{\mathcal V}_{j,q}(\xi,\hbar)}{i\langle q,\omega\rangle}\in i{\mathcal R}\,,\;\qquad {\mathcal B}_j(\xi,\hbar) = {\mathcal V}_{j,0}\in{\mathcal R}, \quad \forall j=1,\dots,k-1. \end{equation} Then the following assertions hold: \newline $ {\bf (R_1)}$ $ {\mathcal V}_{k,q}(\xi,\hbar)\in{\mathcal R},\; \forall q\in\Z^l; $ \newline $ {\bf (R_2)}$ we can choose ${\mathcal W}_{k,0}=0$. \newline Remark that $ {\bf (R_1)}$ entails \begin{equation} \label{ReIm1} {\mathcal W}_{k,q}(\xi,\hbar) = \frac{{\mathcal V}_{k,q}(\xi,\hbar)}{i\langle q,\omega\rangle}\in i{\mathcal R}\,; \qquad {\mathcal B}_k(\xi,\hbar) = {\mathcal V}_{k,0}\in{\mathcal R}. \end{equation} In order to prove $ {\bf (R_1)}$ consider the Fourier expansion of ${\mathcal V}_k$ given by (\ref{VW1}) \begin{eqnarray*} && {\mathcal V}_k = \sum_{r=2}^k\frac{1}{r!}\sum_{{j_1+\ldots+j_r=k}\atop {j_s\geq 1}}\{{\mathcal W}_{j_1},\{{\mathcal W}_{j_2},\ldots,\{{\mathcal W}_{j_r},{\mathcal L}_\omega\}_M\ldots\}_M\}_M \\ && +\sum_{r=1}^{k-1}\frac{1}{r!}\sum_{{j_1+\ldots+j_r=k-1}\atop {j_s\geq 1}}\{{\mathcal W}_{j_1},\{{\mathcal W}_{j_2},\ldots,\{{\mathcal W}_{j_r},{\mathcal V}\}_M\ldots\}_M\}_M \\ && =\sum_{q\in\Z^l}\,{\mathcal V}_{k,q}(\xi,\hbar)e^{i\langle q,x\rangle}. \end{eqnarray*} \vskip 5pt\noindent By \eqref{ReIm}, the Fourier coefficients ${\mathcal W}_{j_s,q}$ of each term ${\mathcal W}_{j_s},\; s=1,\dots,r,$ are purely imaginary, and by Theorem \ref{Ta1}(3) each Moyal bracket generates another factor $i$. Therefore $$ \Big(\sum_{{j_1+\ldots+j_r=k}\atop {j_s\geq 1}}\{{\mathcal W}_{j_1},\{{\mathcal W}_{j_2},\ldots,\{{\mathcal W}_{j_r},{\mathcal L}_\omega\}_M\ldots\}_M\}_M\Big)_q(\xi,\hbar) =(i)^{2r}a_{k,q}(\xi,\hbar), \quad a_{k,q}(\xi,\hbar)\in{\mathcal R} $$ $$ \Big(\sum_{{j_1+\ldots+j_r=k-1}\atop {j_s\geq 1}}\{{\mathcal W}_{j_1},\{{\mathcal W}_{j_2},\ldots,\{{\mathcal W}_{j_r},{\mathcal V}\}_M\ldots\}_M\}_M\Big)_q(\xi,\hbar)=(i)^{2r}b_{k,q}(\xi,\hbar), \quad b_{k,q}(\xi,\hbar)\in{\mathcal R} $$ \vskip 4pt\noindent and, as a consequence, ${\mathcal V}_{k,q}(\xi,\hbar)$ is a finite sum of terms of the form $$ (i)^{2r}\bigl[a_{k,q}(\xi,\hbar)+b_{k,q}(\xi,\hbar)\bigr]=(-1)^r\bigl[a_{k,q}(\xi,\hbar)+b_{k,q}(\xi,\hbar)\bigr]\in{\mathcal R}, \quad \forall q\in\Z^l. $$ Hence ${\mathcal B}_k(\xi,\hbar)={\mathcal V}_{k,0}\in{\mathcal R}$. Moreover, the homological equation (\ref {8}) does not involve ${\mathcal W}_{k,0}$; therefore we can always take ${\mathcal W}_{k,0}=0$. This concludes the proof of the induction, and thus of Assertion (1) of Theorem \ref{mainth}. \subsection{Vanishing of the odd terms ${\mathcal B}_{2s+1}$} Let us now prove Assertion (2) of Theorem \ref{mainth}.
This will yield $$ \Sigma(\varepsilon)={\mathcal B}(\xi;\hbar)={\mathcal L}_\omega(\xi)+\varepsilon^2{\mathcal B}_2(\xi,\hbar)+\varepsilon^4{\mathcal B}_4(\xi,\hbar)+\dots. $$ To see this, first recall that ${\mathcal V}_\omega(\xi,x)$ is odd in $x$: ${\mathcal V}_\omega(\xi,-x)=-{\mathcal V}_\omega(\xi,x)$, and let ${\mathcal M}$ denote the set of functions $f:\T^l\to\Bbb C$ with a definite parity (either even or odd). Moreover, $\forall\, f\in{\mathcal M}$ define $$ Jf= \left\{\begin{array}{c} +1, \quad {\rm if}\; f\;{\rm is\; even}, \\ \\ -1, \quad {\rm if}\; f\;{\rm is \;odd}. \end{array}\right. $$ Then $Jf=1$ if and only if $f_q=f_{-q}$ and $Jf=-1$ if and only if $f_q=-f_{-q}, \forall q\in\Z^l$. By assumption ${\mathcal V}_{\omega,q}(\xi)=-{\mathcal V}_{\omega,-q}(\xi), \forall q\in\Z^l, \forall \xi\in{\mathcal R}^l$, i.e. $J{\mathcal V}_{\omega}=-1$, and by (\ref{solution}) $$ J{\mathcal W}_1(\xi,\hbar) = 1,\qquad \forall(\xi,\hbar)\in{\mathcal R}^l\times[0,1]. $$ Now we can prove by induction that \begin{equation}\label{J} J{\mathcal V}_k=(-1)^k,\qquad \forall k=1,2,\dots \end{equation} whence $J{\mathcal V}_{2s+1}=-1$, i.e. ${\mathcal V}_{2s+1}(\xi,x,\hbar)$ is odd in $x$, which entails ${\mathcal B}_{2s+1}={\mathcal V}_{2s+1,0}=0, \forall s=0,1,\dots$. To prove (\ref{J}) inductively first notice that $J{\mathcal V}_{1}=J{\mathcal V}_{\omega}=-1$ and then let us assume that $$ J{\mathcal V}_{j}=(-1)^j,\qquad \forall j=1,\dots,k-1. $$ Then by (\ref{sol}) $$ J{\mathcal W}_{j}=(-1)^{j+1},\qquad \forall j=1,\dots,k-1. $$ Let us examine the parity of the first summand in the r.h.s. of (\ref{VW1}), making use of Theorem \ref{Ta1}(4): $$ J(\{{\mathcal W}_{j_1},\{{\mathcal W}_{j_2},\dots\{{\mathcal W}_{j_r},{\mathcal L}_{\omega}\}_M\dots\}_M\}_M) = (-1)^r(-1)^{j_1+1}\dots(-1)^{j_r+1}=(-1)^k $$ since $J{\mathcal L}_{\omega}=1$ and $j_1+\dots+j_r=k$. Similarly for the second summand in the r.h.s. of (\ref{VW1}) we have $$ J(\{{\mathcal W}_{j_1},\{{\mathcal W}_{j_2},\dots\{{\mathcal W}_{j_r},{\mathcal V}\}_M\dots\}_M\}_M) = (-1)^{r+1}(-1)^{j_1+1}\dots(-1)^{j_r+1}=(-1)^k $$ since $J{\mathcal V}=-1$ and $j_1+\dots+j_r=k-1$. This completes the proof of Assertion (2) and hence of Theorem \ref{mainth}. \newline {\it Proof of Corollary \ref{C1}.} It is proved in \cite{GP} that the convergence of the S-QNF $$ \Sigma(\varepsilon)={\mathcal L}_\omega(\xi)+\sum_{k=1}^\infty {\mathcal B}_k(\xi,\hbar)\varepsilon^k $$ takes place in the $\|\cdot\|_{\rho/2}$-norm, where $\|\cdot\|_\rho$ is the norm defined in \eqref{normarho}. Since (Remark \ref{Ra1}(d) and Appendix A.2) the $\|\cdot\|_{\rho/2}$-norm majorizes the operator norm in $L^2(\T^l)$ of the corresponding Weyl-quantized operators, we can conclude that $$ S(\varepsilon)=L(\omega,\hbar)+\sum_{k=1}^\infty\,B_k\varepsilon^k,\qquad B_{2s+1}=0, \quad\forall s=0,1,\dots, $$ where the convergence takes place in the operator norm sense. Since $B_k=B_k^\ast$, we get $S(\varepsilon)=S(\varepsilon)^\ast$, and the similarity between $H_\varepsilon$ and a self-adjoint operator is therefore proved. \subsection{Proof of Corollary \ref{C2}} By the uniform convergence of the S-QNF with respect to $\hbar\in [0,1]$, it is enough to check that ${\mathcal B}_k(\xi,0)$ is the $k$-th coefficient of the CNF for ${\mathcal H}_\varepsilon(\xi,x)$.
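Heuristically this is clear: for smooth symbols the Moyal bracket differs from the Poisson bracket only at second order in $\hbar$, i.e., at least formally,
$$
\{{\mathcal F},{\mathcal G}\}_M(\xi,x;\hbar)=\{{\mathcal F},{\mathcal G}\}(\xi,x;\hbar)+O(\hbar^2),
$$
so that setting $\hbar=0$ in the recursive homological equations should reproduce the classical recursion of canonical perturbation theory. Let us make this precise.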
\newline Under the present regularity assumptions it is known (see e.g.\ \cite{Sj}, \cite{BGP}) that, for each $k$, ${\mathcal W}_k(\xi,x;\hbar)$, ${\mathcal B}_k(\xi;\hbar)$, ${\mathcal V}_k(\xi,x;\hbar)$ admit an asymptotic expansion in powers of $\hbar$ near $\hbar=0$: $$ {\mathcal W}_k(\xi,x;\hbar)\sim \displaystyle\sum_{j=0}^\infty\,{\mathcal W}_k^{(j)}(\xi,x)\hbar^j;\quad {\mathcal B}_k(\xi;\hbar)\sim \displaystyle\sum_{j=0}^\infty\,{\mathcal B}_k^{(j)}(\xi)\hbar^j;\quad {\mathcal V}_k(\xi,x;\hbar)\sim \displaystyle\sum_{j=0}^\infty\,{\mathcal V}_k^{(j)}(\xi,x)\hbar^j. $$ Let us now prove that the terms of order zero in the above expansions, namely the {\it principal symbols} of ${\mathcal W}_k(\xi,x;\hbar)$, ${\mathcal B}_k(\xi;\hbar)$, ${\mathcal V}_k(\xi,x;\hbar)$, respectively $$ w_k:={\mathcal W}_k^{(0)},\quad b_k={\mathcal B}_k^{(0)},\quad v_k={\mathcal V}_k^{(0)} $$ coincide with the coefficients of order $k$ of the CNF generated by the Hamiltonian family ${\mathcal H}_\varepsilon(\xi,x)={\mathcal L}_\omega(\xi)+\varepsilon {\mathcal V}_\omega(\xi,x)$. In fact, the recursive homological equations (\ref{5}) and (\ref{6}) $$ \{{\mathcal W}_k,{\mathcal L}\}_M +{\mathcal V}_k={\mathcal B}_k, \qquad \{{\mathcal L},{\mathcal B}_k\}_M =0,\quad k=1,\ldots $$ evaluated at $\hbar=0$ become $$ \{w_k,{\mathcal L}\} + v_k=b_k, \qquad \{{\mathcal L},b_k\}=0, \qquad v_1\equiv v\equiv {\mathcal V} $$ \begin{eqnarray} \label{9} v_k &=& \displaystyle\sum_{r=2}^k\frac{1}{r!}\displaystyle\sum_{{j_1+\ldots+j_r=k}\atop {j_s\geq 1}}\{w_{j_1},\{w_{j_2},\ldots,\{w_{j_r},{\mathcal L}\}\ldots\}\} \\ \nonumber &+& \displaystyle\sum_{r=1}^{k-1}\frac{1}{r!}\displaystyle\sum_{{j_1+\ldots+j_r=k-1}\atop {j_s\geq 1}}\{w_{j_1},\{w_{j_2},\ldots,\{w_{j_r},v\}\ldots\}\} \end{eqnarray} where $\{f,g\}$ denotes the Poisson bracket of two observables $f, g\in C^{\infty}({\mathcal R}^l\times\T^l)$. Let us check that this is exactly the recurrence defined by canonical perturbation theory via the Lie transformation algorithm. Look indeed for an $\varepsilon$-dependent family of smooth canonical maps $\Phi_\varepsilon: {\mathcal R}^l\times \T^l \leftrightarrow {\mathcal R}^l\times \T^l$, $ (\xi,x)\mapsto (\eta,y)=\Phi_\varepsilon(\xi,x)$ such that \begin{equation} \label{10} {\mathcal H}_\varepsilon\circ \Phi_\varepsilon^{-1}(\xi,x) ={\mathcal L}(\xi)+\varepsilon b_1(\xi)+\varepsilon^2 b_2(\xi)+\ldots \end{equation} Look for $\Phi_\varepsilon$ as the time-$1$ flow of a smooth Hamiltonian family $w_\varepsilon(\xi,x)$, the {\it generating function}. Then \begin{equation} \label{11} {\mathcal H}_\varepsilon\circ \Phi_\varepsilon^{-1}(\xi,x) ={\mathcal H}_\varepsilon(\xi,x)+\displaystyle\sum_{s=1}^\infty\,\frac{1}{s!}\,\{w_\varepsilon^{(1)},\{w_\varepsilon^{(2)},\ldots\{w_\varepsilon^{(s)},{\mathcal H}_\varepsilon\}\ldots\}\} \end{equation} where $w_\varepsilon^{(r)}=w_\varepsilon,\;\forall r=1,2,\dots$. If we set $$ w_\varepsilon=\varepsilon w_1+\varepsilon^2 w_2+\ldots $$ and require equality between \eqref{10} and \eqref{11} we obtain $$ {b}_k=\{w_k,{\mathcal L}\}+v_k,\quad k\geq 1, \;v_1\equiv v \equiv {\mathcal V} $$ \begin{eqnarray*} v_k &= &\sum_{r=2}^k\frac{1}{r!}\sum_{{j_1+\ldots+j_r=k}\atop {j_s\geq 1}}\{w_{j_1},\{w_{j_2},\ldots,\{w_{j_r},{\mathcal L}\}\ldots\}\} \\ &+& \sum_{r=1}^{k-1}\frac{1}{r!}\sum_{{j_1+\ldots+j_r=k-1}\atop {j_s\geq 1}}\{w_{j_1},\{w_{j_2},\ldots,\{w_{j_r},v\}\ldots\}\} \end{eqnarray*} Condition $\{{\mathcal L},b_k\}=0$ follows from the fact that both ${\mathcal L}(\xi)$ and $b_k(\xi)$ do not depend on $x$.
This concludes the proof of the corollary. \begin{appendix} \section{Moyal brackets and the Weyl quantization} \setcounter{equation}{0} \setcounter{theorem}{0} \subsection{Moyal brackets} \begin{theorem}\label{Ta1} Let ${\mathcal F}={\mathcal F}(\xi,x;\hbar)$ and ${\mathcal G}={\mathcal G}(\xi,x;\hbar)$ belong to $C^{\infty}({\mathcal R}^l\times\T^l\times[0,1]; \Bbb C)$ and vanish exponentially fast as $|\xi|\to\infty$, uniformly with respect to $(x,\hbar)\in\T^l\times[0, 1]$. Consider their Fourier representation \begin{eqnarray*} {\mathcal F}(\xi,x;\hbar)=\int_{{\mathcal R}^l}\sum_{q\in\Z^l}\widehat{{\mathcal F}}_q(p;\hbar)e^{i(\langle p,\xi\rangle +\langle q,x\rangle)}\,dp \\ {\mathcal G}(\xi,x;\hbar)=\int_{{\mathcal R}^l}\sum_{q\in\Z^l}\widehat{{\mathcal G}}_q(p;\hbar)e^{i(\langle p,\xi\rangle +\langle q,x\rangle)}\,dp\,, \end{eqnarray*} where \begin{eqnarray*} {\mathcal F}_q(\xi,\hbar) = (2\pi)^{-l/2}\int_{\T^l}{\mathcal F}(\xi,x,\hbar)e^{-i\langle q,x\rangle}\,dx \\ {\mathcal G}_q(\xi,\hbar) = (2\pi)^{-l/2}\int_{\T^l}{\mathcal G}(\xi,x,\hbar)e^{-i\langle q,x\rangle}\,dx \end{eqnarray*} and \begin{eqnarray*} \widehat{{\mathcal F}}_q(p;\hbar) = (2\pi)^{-l/2}\int_{{\mathcal R}^l}{\mathcal F}_q(\xi,\hbar)e^{-i\langle p,\xi\rangle}\,d\xi \\ \widehat{{\mathcal G}}_q(p;\hbar) = (2\pi)^{-l/2}\int_{{\mathcal R}^l}{\mathcal G}_q(\xi,\hbar)e^{-i\langle p,\xi\rangle}\,d\xi\,. \end{eqnarray*} Then the following assertions hold: \begin{enumerate} \item[(1)] If both ${\mathcal F}$ and ${\mathcal G}$ do not depend on $x$, i.e. ${\mathcal F}(\xi,x;\hbar)={\mathcal F}(\xi;\hbar)$ and ${\mathcal G}(\xi,x;\hbar)={\mathcal G}(\xi;\hbar) $, then $\{{\mathcal F},{\mathcal G}\}_M \equiv 0$. \item[(2)] If ${\mathcal G}(\xi,x;\hbar)=\langle\omega,\xi\rangle$, for a given constant vector $\omega\in{\mathcal R}^l$, i.e. ${\mathcal G}$ does not depend on $x$ and is linear in $\xi$, then $$ \{{\mathcal F},{\mathcal G}\}_M = \{{\mathcal F},{\mathcal G}\} = -\langle\nabla_x{\mathcal F},\omega\rangle\,. $$ \item[(3)] Consider the Fourier expansions of ${\mathcal F}$ and ${\mathcal G}$ in the $x$ variable: \begin{eqnarray*} {\mathcal F}(\xi,x;\hbar)=\sum_{q\in\Z^l}{\mathcal F}_q(\xi;\hbar)e^{i\langle q,x\rangle} \\ {\mathcal G}(\xi,x;\hbar)=\sum_{q\in\Z^l}{\mathcal G}_q(\xi;\hbar)e^{i\langle q,x\rangle} \end{eqnarray*} where, $\forall q\in\Z^l$, \begin{eqnarray*} {\mathcal F}_q(\xi;\hbar)=(2\pi)^{-l/2}\int_{{\mathcal R}^l}\widehat{{\mathcal F}}_q(p;\hbar)e^{i\langle p,\xi\rangle}\,dp \\ {\mathcal G}_q(\xi;\hbar)=(2\pi)^{-l/2}\int_{{\mathcal R}^l}\widehat{{\mathcal G}}_q(p;\hbar)e^{i\langle p,\xi\rangle}\,dp\,. \end{eqnarray*} If ${\mathcal F}_q(\xi;\hbar)\in{\mathcal R}$ and ${\mathcal G}_q(\xi;\hbar)\in{\mathcal R},\;\forall q\in\Z^l$, then the Fourier expansion of $\{{\mathcal F},{\mathcal G}\}_M$ has purely imaginary Fourier coefficients, i.e. $$ (\{{\mathcal F},{\mathcal G}\}_M)_q(\xi;\hbar):= \int_{{\mathcal R}^l}\widehat{(\{{\mathcal F},{\mathcal G}\}_M)}_q(p;\hbar)e^{i\langle p,\xi\rangle}\,dp\in i{\mathcal R}\,. $$ \item[(4)] Let $x\in\T^l\to{\mathcal F}(\xi,x;\hbar)\in\Bbb C$ and $x\in\T^l\to{\mathcal G}(\xi,x;\hbar)\in\Bbb C$ belong to the space ${\mathcal M}$ of the functions with a definite parity (either even or odd) and let $J:{\mathcal M}\to\{-1,1\}$ be defined as in Section 2.3.
Then $$ J\{{\mathcal F},{\mathcal G}\}_M = -(J{\mathcal F})(J{\mathcal G}). $$ \end{enumerate} \end{theorem} To prove the theorem we need the following \begin{lemma}\label{La1} Let ${\mathcal F}={\mathcal F}(\xi,x;\hbar)\in C^{\infty}({\mathcal R}^l\times\T^l\times[0,1]; \Bbb C)$. Then \begin{enumerate} \item[(i)] ${\mathcal F}_q(\xi;\hbar)\in{\mathcal R},\;\forall q\in\Z^l,\,\forall \xi\in{\mathcal R}^l$ if and only if $$ \overline{\widehat{{\mathcal F}}_q(p,\hbar)} = \widehat{{\mathcal F}}_q(-p,\hbar),\quad \forall q\in\Z^l,\;\forall p\in{\mathcal R}^l\,. $$ \item[(ii)] ${\mathcal F}_q(\xi;\hbar)\in i{\mathcal R},\;\forall q\in\Z^l,\,\forall \xi\in{\mathcal R}^l$ if and only if $$ \overline{\widehat{{\mathcal F}}_q(p,\hbar)} = -\widehat{{\mathcal F}}_q(-p,\hbar),\quad \forall q\in\Z^l,\;\forall p\in{\mathcal R}^l\,. $$ \end{enumerate} \end{lemma} {\it Proof of Lemma \ref{La1}.} We prove only (i) because the proof of (ii) is analogous. If ${\mathcal F}_q(\xi;\hbar)\in{\mathcal R},\;\forall q\in\Z^l,\,\forall \xi\in{\mathcal R}^l$, then $$ \overline{\widehat{{\mathcal F}}_q(p,\hbar)} = (2\pi)^{-l/2}\int_{{\mathcal R}^l}{\mathcal F}_q(\xi,\hbar)e^{i\langle p,\xi\rangle}\,d\xi = (2\pi)^{-l/2}\int_{{\mathcal R}^l}{\mathcal F}_q(\xi,\hbar)e^{-i\langle -p,\xi\rangle}\,d\xi = \widehat{{\mathcal F}}_q(-p,\hbar)\,. $$ Conversely, let $\overline{\widehat{{\mathcal F}}_q(p,\hbar)} = \widehat{{\mathcal F}}_q(-p,\hbar),\quad \forall q\in\Z^l,\;\forall p\in{\mathcal R}^l$. Then \begin{eqnarray*} \overline{{\mathcal F}_q(\xi;\hbar)} = (2\pi)^{-l/2}\int_{{\mathcal R}^l}\overline{\widehat{{\mathcal F}}_q(p,\hbar)}e^{-i\langle p,\xi\rangle}\,dp = (2\pi)^{-l/2}\int_{{\mathcal R}^l}\widehat{{\mathcal F}}_q(-p,\hbar)e^{i\langle -p,\xi\rangle}\,dp \\ = (2\pi)^{-l/2}\int_{{\mathcal R}^l}\widehat{{\mathcal F}}_q(p,\hbar)e^{i\langle p,\xi\rangle}\,dp = {\mathcal F}_q(\xi;\hbar), \end{eqnarray*} where to obtain the third equality we have performed the change of variables $p\to -p$ in the integral. Hence ${\mathcal F}_q(\xi;\hbar)\in {\mathcal R},\;\forall q\in\Z^l,\,\forall \xi\in{\mathcal R}^l$ and this completes the proof of the lemma. \par\noindent {\it Proof of Theorem \ref{Ta1}.} \begin{itemize} \item[(1)] If ${\mathcal F}$ and ${\mathcal G}$ do not depend on $x$, then ${\mathcal F}_q(\xi,\hbar)={\mathcal G}_q(\xi,\hbar)=0,\;\forall q\neq 0, \forall \xi\in {\mathcal R}^l$. Therefore all the terms of the expansion in (\ref{M2}) with $q'\neq 0$ vanish. Then $\forall q\in\Z^l$ $$ \widehat{(\{{\mathcal F},{\mathcal G}\}_M)}_q(p,\hbar) = \frac{2}{\hbar} \int_{{\mathcal R}^l}\widehat{{\mathcal F}}_{q}(p-p',\hbar)\widehat{{\mathcal G}}_{0}(p',\hbar)\sin\Big( \frac{\hbar}{2}\langle p',q\rangle\Big)\,dp' $$ vanishes both for $q\neq 0$ and for $q=0$, whence $\{{\mathcal F},{\mathcal G}\}_M\equiv 0$ by (\ref{M1}).
\item[(2)] If ${\mathcal G}(\xi,x;\hbar)=\langle\omega,\xi\rangle$, then by (\ref{M2}) \begin{eqnarray*} \widehat{(\{{\mathcal F},{\mathcal G}\}_M)}_q(p,\hbar) = \frac{2}{\hbar} \int_{{\mathcal R}^l}\widehat{{\mathcal F}}_{q}(p-p',\hbar)\widehat{{\mathcal G}}_{0}(p',\hbar)\sin\Big( \frac{\hbar}{2}\langle p',q\rangle\Big)\,dp' \\ = \frac{2}{\hbar} \int_{{\mathcal R}^l}\widehat{{\mathcal F}}_{q}(p-p',\hbar)\, i\langle\omega, \delta'(p')\rangle\,\sin\Big( \frac{\hbar}{2}\langle p',q\rangle\Big)\,dp' \\ = -\frac{2i}{\hbar}\sum_{j=1}^l\omega_j\frac{\partial}{\partial p_j'}\Big[\widehat{{\mathcal F}}_{q}(p-p',\hbar)\sin\Big( \frac{\hbar}{2}\langle p',q\rangle\Big)\Big]\Big|_{p'=0} \\ = -i\sum_{j=1}^l\omega_jq_j\widehat{{\mathcal F}}_{q}(p,\hbar) = -\widehat{{\mathcal F}}_{q}(p,\hbar)\langle\omega,iq\rangle, \end{eqnarray*} where the Fourier transform $\widehat{{\mathcal G}}_{0}(p',\hbar)$ of ${\mathcal G}_0(\xi,\hbar) = \langle\omega,\xi\rangle$ exists in the distributional sense and is given by $i\langle\omega,\delta'(p')\rangle$, where $\delta'(p')$ denotes the distributional gradient of the $\delta$-function: $$ \int_{{\mathcal R}^l}\delta'(p')f(p')\,dp' = -(\nabla_{p'}f)(0) = -\sum_{j=1}^l\frac{\partial f}{\partial p_j'}\Big|_{p'=0}\,,\qquad\forall f\in{\mathcal S}({\mathcal R}^l). $$ Here ${\mathcal S}({\mathcal R}^l)$ denotes the Schwartz space. Then by (\ref{M1}) \begin{eqnarray*} \{{\mathcal F},{\mathcal G}\}_M(\xi,x;\hbar) = -\int_{{\mathcal R}^l}\sum_{q\in\Z^l}\langle\omega,iq\rangle\widehat{{\mathcal F}}_{q}(p,\hbar) e^{i(\langle p,\xi\rangle + \langle q,x\rangle)}\,dp \\ = - \sum_{q\in\Z^l}\langle\omega,iq\rangle{\mathcal F}_{q}(\xi,\hbar) e^{i \langle q,x\rangle} = -\langle\omega,\nabla_x{\mathcal F}(\xi,x)\rangle. \end{eqnarray*} \item[(3)] By Lemma \ref{La1} (i) we have $\overline{\widehat{{\mathcal F}}_q(p,\hbar)} = \widehat{{\mathcal F}}_q(-p,\hbar)$ and $\overline{\widehat{{\mathcal G}}_q(p,\hbar)} = \widehat{{\mathcal G}}_q(-p,\hbar),\; \forall q\in\Z^l,\;\forall p\in{\mathcal R}^l$. Then, from (\ref{M2}) we obtain $$ \overline{\widehat{(\{{\mathcal F},{\mathcal G}\}_M)}}_q(p,\hbar) = \frac{2}{\hbar} \int_{{\mathcal R}^l}\sum_{q'\in\Z^l}\widehat{{\mathcal F}}_{q-q'}(-p+p',\hbar)\widehat{{\mathcal G}}_{q'}(-p',\hbar)\sin\Big[ \frac{\hbar}{2}\big(\langle p',q\rangle-\langle p,q'\rangle\big)\Big]\,dp' $$ whence, performing the change of variables $p'\to -p'$ in the integral, \begin{eqnarray*} \overline{\widehat{(\{{\mathcal F},{\mathcal G}\}_M)}}_q(p,\hbar) = \frac{2}{\hbar} \int_{{\mathcal R}^l}\sum_{q'\in\Z^l}\widehat{{\mathcal F}}_{q-q'}(-p-p',\hbar)\widehat{{\mathcal G}}_{q'}(p',\hbar)\sin\Big[ \frac{\hbar}{2}\big(-\langle p',q\rangle+\langle -p,q'\rangle\big)\Big]\,dp' \\ = -\frac{2}{\hbar} \int_{{\mathcal R}^l}\sum_{q'\in\Z^l}\widehat{{\mathcal F}}_{q-q'}(-p-p',\hbar)\widehat{{\mathcal G}}_{q'}(p',\hbar)\sin\Big[ \frac{\hbar}{2}\big(\langle p',q\rangle-\langle -p,q'\rangle\big)\Big]\,dp' \\ = - \widehat{(\{{\mathcal F},{\mathcal G}\}_M)}_q(-p,\hbar)\,,\quad\forall q\in\Z^l,\;\forall p\in{\mathcal R}^l. \end{eqnarray*} Then, by Lemma \ref{La1} (ii), $(\{{\mathcal F},{\mathcal G}\}_M)_q(\xi,\hbar)\in i{\mathcal R},\;\forall q\in\Z^l,\;\forall \xi\in{\mathcal R}^l$.
\item[(4)] First of all recall that $J{\mathcal F}=\pm 1$ if and only if ${\mathcal F}_q(\xi,\hbar)=\pm{\mathcal F}_{-q}(\xi,\hbar),\;\forall q\in\Z^l,\;\forall (\xi,\hbar)\in{\mathcal R}^l\times[0,1]$. Then by (\ref{M2}) we have \begin{eqnarray*} \widehat{(\{{\mathcal F},{\mathcal G}\}_M)}_{-q}(p,\hbar) = \frac{2}{\hbar} \int_{{\mathcal R}^l}\sum_{q'\in\Z^l}\widehat{{\mathcal F}}_{-q-q'}(p-p',\hbar)\widehat{{\mathcal G}}_{q'}(p',\hbar)\sin\Big[ \frac{\hbar}{2}\big(-\langle p',q\rangle-\langle p,q'\rangle\big)\Big]\,dp' \\ = \frac{2}{\hbar} \int_{{\mathcal R}^l}\sum_{q'\in\Z^l}\widehat{{\mathcal F}}_{-q+q'}(p-p',\hbar)\widehat{{\mathcal G}}_{-q'}(p',\hbar)\sin\Big[ \frac{\hbar}{2}\big(-\langle p',q\rangle+\langle p,q'\rangle\big)\Big]\,dp' \\ = - \frac{2}{\hbar} \int_{{\mathcal R}^l}\sum_{q'\in\Z^l}\widehat{{\mathcal F}}_{-q+q'}(p-p',\hbar)\widehat{{\mathcal G}}_{-q'}(p',\hbar)\sin\Big[ \frac{\hbar}{2}\big(\langle p',q\rangle-\langle p,q'\rangle\big)\Big]\,dp', \end{eqnarray*} where in the second equality we have performed the change of variables $q'\to -q'$. Assume first that $J{\mathcal F}=J{\mathcal G}$; then ${\mathcal F}_{-q}\,{\mathcal G}_{-q'}\equiv{\mathcal F}_{q}\,{\mathcal G}_{q'}$ and hence $\widehat{{\mathcal F}}_{-q+q'}\,\widehat{{\mathcal G}}_{-q'}\equiv\widehat{{\mathcal F}}_{q-q'}\,\widehat{{\mathcal G}}_{q'},\;\forall q, q'\in\Z^l$. Thus, \begin{eqnarray*} \widehat{(\{{\mathcal F},{\mathcal G}\}_M)}_{-q}(p,\hbar) = - \frac{2}{\hbar} \int_{{\mathcal R}^l}\sum_{q'\in\Z^l}\widehat{{\mathcal F}}_{q-q'}(p-p',\hbar)\widehat{{\mathcal G}}_{q'}(p',\hbar)\sin\Big[ \frac{\hbar}{2}\big(\langle p',q\rangle-\langle p,q'\rangle\big)\Big]\,dp' \\ = -\widehat{(\{{\mathcal F},{\mathcal G}\}_M)}_{q}(p,\hbar), \end{eqnarray*} whence $$ (\{{\mathcal F},{\mathcal G}\}_M)_{-q}(\xi,\hbar) = - (\{{\mathcal F},{\mathcal G}\}_M)_{q}(\xi,\hbar),\quad\forall q\in\Z^l,\;\forall (\xi,\hbar)\in{\mathcal R}^l\times[0,1] $$ and $J\{{\mathcal F},{\mathcal G}\}_M = -1= -(J{\mathcal F})(J{\mathcal G})$. In a similar way we obtain $J\{{\mathcal F},{\mathcal G}\}_M = 1$ if $J{\mathcal F}=-J{\mathcal G}$, and this completes the proof of the theorem. \end{itemize} \subsection{The Weyl quantization} Let us sum up the canonical (Weyl) quantization procedure for functions (classical observables) defined on the phase space ${\mathcal R}^l\times\T^l$. For more detail the reader is referred to \cite{GP}.
\par Let ${\mathcal A}(\xi,x,\hbar):{\mathcal R}^l\times\T^l\times [0,1]\to\Bbb C$ be a family of smooth phase-space functions indexed by $\hbar$ fulfilling the assumptions of Theorem \ref{Ta1}, written under its Fourier representation $$ {\mathcal A}(\xi,x,\hbar)=\int_{{\mathcal R}^l}\sum_{q\in\Z^l}\widehat{{\mathcal A}}_q(p;\hbar)e^{i(\langle p,\xi\rangle +\langle q,x\rangle)}\,dp $$ where, as in Section 1: \begin{eqnarray*} && {\mathcal A}(\xi,x,\hbar)=\sum_{q\in\Z^l}\,{\mathcal A}_q(\xi,\hbar) e^{i\langle q,x\rangle}, \qquad {\mathcal A}_q(\xi,\hbar):=(2\pi)^{-l/2}\int_{\T^l}\,{\mathcal A}(\xi,x;\hbar)e^{-i\langle q,x\rangle}\,dx \\ && \widehat{{\mathcal A}}_q(p;\hbar)=(2\pi)^{-l/2}\int_{{\mathcal R}^l}\,{\mathcal A}_q(\xi;\hbar)e^{-i\langle p,\xi\rangle}\,d\xi \end{eqnarray*} Then the (Weyl) quantization of ${\mathcal A}(\xi,x;\hbar)$ is the operator acting on $L^2(\T^l)$, defined by: \begin{equation} \label{(A1)} (A(\hbar)f)(x):= \int_{{\mathcal R}^l}\sum_{q\in\Z^l}\widehat{{\mathcal A}}_q(p;\hbar)e^{i(\langle q,x\rangle+\langle p,q\rangle\hbar/2 )}f(x+p\hbar)\,dp,\; f\in L^2(\T^l). \end{equation} \begin{remark}\label{Ra1} \begin{enumerate} {\rm \item[(a)] If ${\mathcal A}$ does not depend on $\xi$, ${\mathcal A}(\xi,x,\hbar)={\mathcal A}(x,\hbar)$, (A.1) reduces to the standard {\it multiplicative} action: \begin{eqnarray*} && (A(\hbar)f)(x) =\int_{{\mathcal R}^l}\sum_{q\in\Z^l}{\mathcal A}_q(\hbar)\delta(p)e^{i(\langle q,x\rangle+\langle p,q\rangle \hbar/2)}f(x+\hbar p)\,dp \\ && =\sum_{q\in\Z^l}{\mathcal A}_q(\hbar)e^{i\langle q,x\rangle}f(x)={\mathcal A}(x,\hbar)f(x) \end{eqnarray*} \item[(b)] If ${\mathcal A}$ does not depend on $x$, then $\widehat{{\mathcal A}}_q=0, q\neq 0$; thus $\widehat{{\mathcal A}}_0=\widehat{{\mathcal A}}(p,\hbar)$ and the standard (pseudo) {\it differential action} is recovered: \begin{eqnarray*} (A(\hbar)f)(x)&=&\displaystyle\int_{{\mathcal R}^l}\widehat{{\mathcal A}}(p,\hbar)f(x+\hbar p)\,dp =\int_{{\mathcal R}^l}\sum_{q\in\Z^l}\,\widehat{{\mathcal A}}(p,\hbar)f_qe^{i\langle q,x+\hbar p\rangle}\,dp \\ & =& \sum_{q\in\Z^l}f_q{\mathcal A}(q\hbar,\hbar)e^{i\langle q,x\rangle} = ({\mathcal A}(-i\hbar\nabla_x,\hbar)f)(x), \end{eqnarray*} whence the formula yielding all the eigenvalues of $A$: \begin{equation} \label{A2} \lambda_n(\hbar)=\langle e_n,Ae_n\rangle ={\mathcal A}(n\hbar,\hbar), \end{equation} where $e_n(x)=(2\pi)^{-l/2}e^{i\langle n,x\rangle}$, $n\in\Z^l$, are the normalized exponentials, which form an orthonormal basis of $L^2(\T^l)$. \item[(c)] Let ${\mathcal V}(t,x;\hbar)$ be a complex-valued, smooth function defined on ${\mathcal R}\times\T^l\times[0,1]$ vanishing exponentially fast as $|t|\to\infty$ uniformly w.r.t.
$(x,\hbar)\in\T^l\times[0,1]$, with Fourier expansion \begin{equation} \label{A3} {\mathcal V}(t,x;\hbar)=\int_{{\mathcal R}}\sum_{q\in\Z^l}\widehat{{\mathcal V}}_q(p;\hbar)e^{i(p t +\langle q,x\rangle)}\,dp \end{equation} where, as in Section 1: \begin{eqnarray*} && {\mathcal V}(t,x,\hbar)=\sum_{q\in\Z^l}\,{\mathcal V}_q(t,\hbar) e^{i\langle q,x\rangle}, \qquad {\mathcal V}_q(t,\hbar):=(2\pi)^{-l/2}\int_{\T^l}\,{\mathcal V}(t,x;\hbar)e^{-i\langle q,x\rangle}\,dx \\ && \widehat{{\mathcal V}}_q(p;\hbar)=(2\pi)^{-1/2}\int_{{\mathcal R}}\,{\mathcal V}_q(t;\hbar)e^{-ipt}\,dt \end{eqnarray*} and let the smooth function ${\mathcal V}_\omega(\xi,x;\hbar): {\mathcal R}^l\times\T^l\times [0,1]\to\Bbb C$ be defined as follows: $$ {\mathcal V}_\omega(\xi,x;\hbar):=\left.{\mathcal V}(t,x,\hbar)\right|_{t={\mathcal L}_\omega(\xi)}={\mathcal V}(\langle\omega,\xi\rangle,x;\hbar). $$ Then we have: $$ {\mathcal V}_\omega(\xi,x;\hbar)=\int_{\mathcal R}\,\sum_{q\in\Z^l}\,\widehat{{\mathcal V}}_q(p,\hbar)e^{i(\langle q,x\rangle+p{\mathcal L}_\omega(\xi))}\,dp $$ and (A.1) clearly becomes: \begin{equation} \label{A4} (V_{\omega}(\hbar)f)(x) =\int_{{\mathcal R}}\sum_{q\in\Z^l}\widehat{{\mathcal V}}_q(p;\hbar)e^{i(\langle q,x\rangle+p\langle \omega,q\rangle \hbar/2 )}f(x+p\hbar \omega)\,dp \end{equation} \item[(d)] Let \begin{equation} \label{A5} \|{\mathcal V}_{\omega}\|_{\rho}:=\sup_{\hbar\in[0,1]}\,\sum_{q\in\Z^l}\,e^{\rho |q|}\,\int_{{\mathcal R}}\,e^{\rho |p|}\,|\widehat{{\mathcal V}}_q(p,\hbar)|\,dp<+\infty, \quad \rho\geq 0, \end{equation} and remark that \begin{eqnarray*} \|{\mathcal V}_{\omega}\|_{L^1}:=\sup_{\hbar\in[0,1]}\,\sum_{q\in\Z^l}\,\int_{{\mathcal R}}\,|\widehat{{\mathcal V}}_q(p,\hbar)|\,dp \leq \|{\mathcal V}_{\omega}\|_\rho. \end{eqnarray*} \vskip 4pt\noindent Then $V_{\omega}(\hbar)$ is a bounded operator in $L^2(\T^l)$, uniformly with respect to $\hbar\in [0,1]$, namely: \begin{equation} \label{A6} \sup_{\hbar\in[0,1]} \| V_{\omega}(\hbar)\|_{L^2\to L^2} \leq \|{\mathcal V}_{\omega}\|_{L^1}\leq \|{\mathcal V}_{\omega}\|_\rho \end{equation} because \begin{eqnarray*} && \| V_{\omega}(\hbar)f\|_{L^2}\leq \sum_{q\in\Z^l}\int_{{\mathcal R}}\,|\widehat{{\mathcal V}}_q(p,\hbar)|\,dp \,\|f\|_{L^2} \leq \|{\mathcal V}_{\omega}\|_{L^1}\,\|f\|_{L^2}. \end{eqnarray*} \item[(e)] If the symbol ${\mathcal V}$ is real valued, then its Weyl quantization $V(\hbar)$ is clearly a symmetric operator in $L^2(\T^l)$; if in addition condition (A.5) holds, its boundedness entails its self-adjointness.} \end{enumerate} \end{remark} \end{appendix}
\section{Introduction}\label{s-intro} In this article we are interested in the linear parabolic initial-boundary value problem \begin{alignat}{3} \varepsilon \partial_t u-\nabla \cdot \mu \nabla u & = f_\Omega & \qquad & \text{in }\,J \times (\Omega \setminus \Sigma) , \label{e-parabol}\\ u & = 0 & &\text {on }\, J \times (\partial \Omega \setminus \Gamma), \label{e-Diri}\\ \varepsilon \partial_t u+\nu \cdot \mu \nabla u + b u & = f_\Gamma & & \text{on }\, J \times \Gamma, \label{e-robin}\\ \varepsilon \partial_t u+[\nu_\Sigma \cdot \mu \nabla u] & = f_\Sigma && \text{on }\, J \times \Sigma, \label{e-xi-eq}\\ u(0) & = u_0 & &\text{in }\, \Omega \cup \Gamma, \label{e-initial} \end{alignat} and in its quasilinear variants. Here $J= (0,T)$ is a bounded time interval, $\Omega \subset \field{R}^d$ is a bounded domain, $\Gamma\subseteq \partial\Omega$ is a part of the boundary with outer normal~$\nu$, and $\Sigma\subset \Omega$ is e.g.\ a finite union of hypersurfaces, equipped with a normal field $\nu_\Sigma$. By $[\nu_\Sigma \cdot \mu \nabla u]$ we denote the jump of $\nu_\Sigma \cdot \mu \nabla u$ over $\Sigma$. The case that $\Gamma$ or $\Sigma$ is an empty set is not excluded. We treat a nonsmooth geometry; e.g., it suffices that $\Gamma$ and $\Sigma$ satisfy certain Lipschitz conditions. Nothing is assumed about the Dirichlet part $\partial\Omega \setminus \Gamma$ of the boundary, and the boundary parts $\Gamma$ and $\partial\Omega \setminus \Gamma$ are allowed to meet. Also on the coefficients we impose only low regularity conditions. The (possibly nonsymmetric) coefficient matrix $\mu$ is bounded and uniformly elliptic, $\varepsilon$ is positive, bounded and bounded away from zero, and $b$ only has to belong to an $L^p$-space. The (possibly nonautonomous) inhomogeneities $f_\Omega$, $f_\Gamma$, $f_{\Sigma}$ and the initial value $u_0$ are assumed to be given. Parabolic problems with dynamical boundary conditions are considered by many authors, see e.g.\ \cite{AmE}, \cite{Esc2}, \cite{Hin}, \cite{AQR}, \cite{BBR} and \cite{BD}, but severe assumptions on the data, such as smoothness, are always imposed (compare also \cite{FGGR} and \cite{VV}, where the boundary condition on $J \times \Gamma$ is understood as Wentzell's boundary condition). It is the aim of this work to show that any smoothness assumption on the domain and the coefficient function $\mu$ can be avoided. In particular, the domain $\Omega$ does not need to be a Lipschitz domain. Let us briefly comment on this: a moment's thought shows that many natural domains fail to be Lipschitzian. For example, if one removes from a ball one half of its equatorial plane, then the remainder fails to be Lipschitzian. As another example, consider a pair of pincers as in Figure~1. It is also not Lipschitzian. \begin{figure}[htbp] \centerline{\includegraphics[scale=0.7]{gegenbeispiel}} \caption{\label{fig-gegenbeispiel} A pair of pincers is \emph{not} a Lipschitz domain.} \end{figure} The crucial point is that such objects, obviously, occur in the physical world. In this paper we also allow the inhomogeneities not only to live in the volume of the domain, but to incorporate a part which is supported on the set $\Sigma$ of lower dimension $d-1$. This largely extends the applicability of the theory to real-world problems. The reader may think, e.g., of a heat source which is concentrated on an interface.
Alternatively, one meets such constellations in electricity: surface charge densities induce a jump in the normal component of the dielectric displacement, see e.g.\ \cite[Chapter~1]{Tam}. Our approach to (\ref{e-parabol})--(\ref{e-initial}), which also covers the case that $\Gamma$ or $\Sigma$ is empty, is essentially based on the theory of sesquilinear forms and the suitable incorporation of the boundary conditions into an $L^p$-space. We consider the approach in more detail. The boundary part $\overline{\Gamma}$ is Lipschitz regular, and the interface $\Sigma\subset \Omega$ is a $(d-1)$-set in the sense of Jonsson--Wallin \cite{JW} (cf.\ Assumptions \ref{a-region} and \ref{a-d-1}). For the equations we first treat the case $\varepsilon \equiv 1$, and consider the sesquilinear form \[ {\gt}[u,v] = \int_\Omega \mu \nabla u \cdot \overline {\nabla v} \, d x, \] which is defined on the space $W^{1,2}_\Gamma$ of $W^{1,2}(\Omega)$-functions vanishing on $\partial\Omega\setminus \Gamma$. Note that this reflects the Dirichlet conditions. For all $u\in W^{1,2}_\Gamma$ we define the trace $\tr u$ on $\Gamma\cup \Sigma$ in a suitable sense (based on \cite{JW}), and show that the map $\gJ \colon u \mapsto (u, \tr u)$ is continuous and has dense range from $W^{1,2}_\Gamma$ into $\mathbb L^2 :=L^2(\Omega)\oplus L^2(\Gamma \cup \Sigma;d\mathcal H_{d-1})$ (see Lemma \ref{lLpR202}). Here $\mathcal H_{d-1}$ is the $(d-1)$-dimensional Hausdorff measure. These properties of the trace are a consequence of the regularity of $\Gamma$ and $\Sigma$. As the form ${\gt}$ satisfies an ellipticity condition with respect to $\gJ$, the results in \cite{AE2} imply that ${\gt}$ induces an operator $A_2$ on $\mathbb L^2$. For all $\varphi \in W^{1,2}_\Gamma$ with $\mathfrak J \varphi \in \dom (A_2)$, the constitutive relation reads \begin{equation} \label{e-constitu} \int_{\Omega \cup\Gamma \cup \Sigma } (A_2 \mathfrak J \varphi) \, \mathfrak J\overline \psi \, (dx+d\mathcal H_{d-1}) = \int_\Omega \mu \nabla \varphi \cdot \overline {\nabla \psi} \, d x, \qquad \psi\in W_\Gamma^{1,2}(\Omega). \end{equation} Let us show that $A_2$ describes the spatial derivatives occurring in (\ref{e-parabol}), \eqref{e-robin} and \eqref{e-xi-eq}, respectively, in an adequate manner. The argument is heuristic in general; moreover, within these calculations we identify $\varphi$ with $\mathfrak J \varphi$ in order to make the writing more suggestive. Let $\Lambda$ be a surface which is piecewise $C^1$ and which decomposes $\Omega$ into two subdomains $\Omega_1$ and $\Omega_2$. (A prototypical situation is when $\Omega$ is a circular cylinder, $\Gamma$ is its upper plate, and $\Sigma$ is the midplane of $\Omega$.) First put $\Sigma =\Lambda \cap \Omega$ and assume that the outer normal $\nu_1$ of $\Omega_1$ across $\Lambda$ equals $\nu_\Sigma$ on $\partial \Omega_1\cap \Sigma$. According to \eqref{e-constitu}, for all $\varphi \in \dom(A_2)$ we have \[ \int_{\Omega \cup\Gamma \cup \Sigma } (A_2 \varphi) \, \overline \psi \, (dx+d\mathcal H_{d-1}) = \int_{\Omega_1} \mu \nabla \varphi \cdot \overline{\nabla \psi} \, dx + \int_{\Omega_2} \mu \nabla \varphi \cdot \overline{\nabla \psi} \, dx \] for all $\psi \in W^{1,2}_\Gamma(\Omega)$.
Since $\psi$ vanishes on $\partial \Omega \setminus \Gamma$, one can apply Gauss' theorem to obtain \begin{equation} \label{e-gauss} \int_{\Omega_1} \mu \nabla \varphi \cdot \overline{\nabla \psi} \, dx = \int_{\Omega_1} (-\nabla \cdot \mu \nabla \varphi) \, \overline \psi \, dx + \int_{\partial \Omega_1 \cap \Gamma} (\nu \cdot \mu \nabla \varphi) \, \overline \psi \, d\mathcal H_{d-1} +\int_{\Lambda \cap \Omega} (\nu_1 \cdot \mu \nabla \varphi) \, \overline \psi \, d\mathcal H_{d-1}. \end{equation} An equation, analogous to \eqref{e-gauss}, can also be written for $\Omega_2$. Then the unit normal $\nu_2$ of $\Omega_2$ across $\Lambda$ equals $-\nu_1$ and one deduces \begin{eqnarray} \label{e-compare} \int_{\Omega \cup\Gamma \cup \Sigma } (A_2 \varphi) \, \overline \psi \, (dx+d\mathcal H_{d-1}) = \int_{\Omega } (-\nabla \cdot \mu \nabla \varphi) \, \overline \psi \, dx & + & \int_ {\Gamma} (\nu \cdot \mu \nabla \varphi) \, \overline \psi \, d\mathcal H_{d-1} \\ & + & \int_{\Lambda \cap \Omega} [\nu_\Sigma \cdot \mu \nabla \varphi] \, \overline \psi \, d\mathcal H_{d-1}, \nonumber \end{eqnarray} where $[\nu_\Sigma \cdot \mu \nabla \varphi]=\nu_\Sigma \cdot \big( \mu \nabla \varphi |_{\partial\Omega_1 \cap \Sigma} - \mu \nabla \varphi |_{\partial\Omega_2 \cap \Sigma}\big)$ is the jump in the conormal derivative. Thus, varying $\psi$ suitably and comparing both sides of \eqref{e-compare}, one recognizes that $A_2$ has in fact three `components', namely \begin{enumerate} \item the divergence of the vector field $\mu \nabla \varphi$ on $\Omega\setminus \Sigma$, taking $L^2(\Omega) $-functions as values; \item the conormal derivative on $\Gamma$, taking $L^2(\Gamma;d\mathcal H_{d-1})$-functions as values; \item the jump in the conormal derivative on $\Sigma$, taking $L^2(\Sigma;d\mathcal H_{d-1})$-functions as values. \end{enumerate} If one takes $\Sigma$ as a proper subset of $\Lambda \cap \Omega$ (which still has the $(d-1)$-property), then \eqref{e-compare} leads to the equation \begin{equation*} \int_{ \Sigma} (A_2 \varphi) \, \overline \psi \, d\mathcal H_{d-1} = \int_{\Lambda \cap \Omega} [\nu_\Sigma \cdot \mu \nabla \varphi] \, \overline \psi \, d\mathcal H_{d-1}, \end{equation*} which forces $[\nu_\Sigma \cdot \mu \nabla \varphi]$ to vanish on $(\Lambda \cap \Omega) \setminus \overline{\Sigma}$. Hence the dynamic equations on $\Gamma$ and $\Sigma$ are modelled by the part $L^2(\Gamma \cup \Sigma; d\mathcal H_{d-1})$ of the base space $\mathbb L^2$. The subsequent analysis will show that, in either the elliptic or the parabolic setting, these three components may be prescribed, and the equation indeed has a solution in the functional analytic setting which we will establish. Moreover, the solution depends continuously on the data. The operator $-A_2$ generates a holomorphic, submarkovian $C_0$-semigroup of contractions on $\mathbb L^2$, which may thus be extended to a semigroup of contractions on $\mathbb L^p$ for all $p\in [1,\infty]$. Denoting the corresponding generators by $-A_p$, it turns out that for all $p\in (1,\infty)$ the operator $-\varepsilon^{-1} A_p$ generates a holomorphic $C_0$-semigroup of contractions on a suitably renormed $\LL^p$-space. This has two important consequences.
First, applying an abstract result that is presented e.g.\ in \cite[Proposition 2.2]{LeMX}, we obtain a bounded holomorphic functional calculus for $\varepsilon^{-1} A_p$ with angle strictly smaller than $\frac{\pi}{2}$, and in particular the boundedness of the purely imaginary powers (see Theorem \ref{t-imagin}). Moreover, the pioneering theorem of Lamberton \cite{Lamb} gives us maximal parabolic regularity for $\varepsilon^{-1}A_p$ in Theorem \ref{t-qreg}, which we consider the main result of this work. The introduction of temporal weights as in \cite{PrSi} further makes it possible to reduce the regularity requirements on the initial data almost down to the base space $\mathbb L^p$. This yields the solution of (\ref{e-parabol})--(\ref{e-initial}) in an adequate manner, see Theorem~\ref{t-solution}. Based on these linear results we treat a nondegenerate quasilinear variant of (\ref{e-parabol})--(\ref{e-initial}), even if the right-hand side depends explicitly and discontinuously on time (Theorem \ref{t-semilinear}). Here a difficulty is that the domain of the realization of the operator $-\nabla \cdot \mu \nabla$ on $\mathbb L^p$ is not independent of the coefficients $\mu$. We therefore consider a problem which is obtained when applying the Kirchhoff transform to the original one, and which involves only one fixed operator (see Definition \ref{quasi-solution}). Maximal parabolic regularity then allows us to apply a result of Pr\"uss \cite{pru2} (see also \cite{CL}) to the transformed problem, giving local existence and uniqueness of solutions in a suitable sense. Throughout it is essential that $\dom (A_p^\theta ) \subset \mathbb L^\infty$ for large $p$ and $\theta$ sufficiently close to $1$, which is a consequence of ultracontractivity estimates for the semigroup (see Lemma \ref{lLpR206} and Proposition \ref{pLpR301}). The quasilinear problems may be of relevance in applications: the heat source on the hypersurface can depend on the solution itself, and, additionally, explicitly on time. Let us briefly compare the approach in this paper with those in \cite{Grie2}, \cite{HaR} and \cite{HaR3} for static Robin boundary conditions. There the Banach space under consideration is a negatively indexed Sobolev space of type $H^{-\theta,q}$ or a Sobolev--Morrey space. In contrast to those settings, in $\mathbb L^p$ one may form the dual pairing of the above parabolic equation with the indicator function $\chi_\Lambda$ of suitable subsets $\Lambda \subset \Omega$. Then one may, additionally, apply Gauss' theorem to $\langle -\nabla \cdot \mu \nabla u,\chi_\Lambda \rangle =\int _{\Lambda} -\nabla \cdot \mu \nabla u\, d x +\int_{\Lambda \cap \Sigma}-\nabla \cdot \mu \nabla u\, d\mathcal H_{d-1}$. This makes it possible to recover the underlying physical balance law for the parabolic equation, which is the starting point for the numerical treatment of such problems. For more details we refer to Remark \ref{rem-heuristics-2}. \medskip This paper is organized as follows. In Section \ref{Sec-2} we introduce the spaces $\mathbb L^p$, define an appropriate realization of $- \nabla\cdot \mu \nabla$ and show that it admits a bounded holomorphic functional calculus. In Section \ref{s-parabolic} we show that in this setting (\ref{e-parabol})--(\ref{e-initial}) enjoys maximal parabolic regularity, and in Section \ref{s-semilinear} we treat the quasilinear case. We finish with some concluding remarks in Section \ref{s-conclud}.
\section{Elliptic operators on $\mathbb L^p$}\label{Sec-2} \subsection{Notation} \label{s-nota} Throughout this paper $\mathcal L(X;Y)$ denotes the space of bounded linear operators from $X$ to $Y$, where $X$ and $Y$ are Banach spaces. If $X = Y$, then we abbreviate $\mathcal L(X)$. Note that if $X$ and $Y$ are two Banach spaces such that $X \subset Y$ as vector spaces, and both $X$ and $Y$ are continuously embedded in a Hausdorff locally convex space, then the inclusion map from $X$ into $Y$ is continuous by the closed graph theorem. In the sequel let $\Omega$ be a bounded domain in $\field{R}^d$ with $d >1$ and $\Gamma$ an open part of its boundary $\partial\Omega$, which may be empty. If $p \in [1,\infty)$, then $L^p(\Omega)$ is the space of complex-valued, Lebesgue measurable, $p$-integrable functions on $\Omega$, and for all $\theta \in [0,1]$ we denote by $W^{\theta,p}(\Omega)$ the usual Sobolev--Slobodetskii spaces, see \cite{Gris} or \cite{Maz}. Moreover, $L^\infty(\Omega)$ is the space of Lebesgue measurable, essentially bounded functions on $\Omega$. The $(d-1)$-dimensional Hausdorff measure on $\field{R}^d$ is denoted by $\mathcal H_{d-1}$. We denote by $B( x, r)$ the ball in $\field{R}^d$ centred at $ x$ with radius $r$. \subsection{The function spaces} \label{s-function-spaces} In this subsection we consider the function spaces on which \eqref{e-parabol}--\eqref{e-initial} will be posed. \begin{definition} \label{d-w01p} For all $q \in [1,\infty]$ we define $W_\Gamma^{1,q}$ as the closure in $W^{1,q}(\Omega)$ of the set \[ C_\Gamma^{\infty}(\Omega) \stackrel{\mathrm{def}}{=} \Big\{ u|_{\Omega} : u \in C_c^{\infty}({\field{R}}^d), \, \supp(u) \cap (\partial \Omega \setminus \Gamma) = \emptyset \Big\}. \] \end{definition} Throughout this paper we make the following assumption on $\Gamma$. \begin{assu} \label{a-region} For all $ x \in \overline \Gamma$ there is an open neighbourhood $\mathcal V_ x$ of $x$ and a bi-Lipschitz mapping $F_ x$ from $\mathcal V_ x$ onto the open unit cube $E$ in $\mathbb R^d$, such that $F_ x(x) = 0$ and $F_ x(\Omega \cap \mathcal V_ x)$ is equal to the lower open half cube $E_- = (-1,1)^{d-1} \times (-1,0)$ of $E$. \end{assu} The reader should notice that the domain $\Omega$ does not need to be Lipschitzian. Moreover, nothing is assumed about the boundary of $\Gamma$ within $\partial \Omega$. An important technical tool is an extension operator for the $W^{1,q}_\Gamma$-spaces. \begin{proposition} \label{p-extension} There is an extension operator $\gE \colon L^1(\Omega) \to L^1(\field{R}^d)$ such that the restriction $\gE|_{W^{1,q}_\Gamma}$ maps $W^{1,q}_\Gamma$ continuously into $W^{1,q}(\field{R}^d)$ for all $q \in [1,\infty]$, the restriction $\gE|_{L^q(\Omega)}$ maps $L^q(\Omega)$ continuously into $L^q(\field{R}^d)$ for all $q \in [1,\infty]$ and $\supp \gE u \subset B(0,2R)$ for all $u \in L^1(\Omega)$, where $R = \sup \{ |x| : x \in \Omega \} $. \end{proposition} \begin{proof} The proof is given in \cite[Lemma~3.4]{ERe1} for the case $q=2$, but carries over to all $q \in [1,\infty]$. Moreover, the second assertion is also easily checked. The last statement follows by multiplication with a suitable $C_c^\infty(\field{R}^d)$-function. \end{proof} It turns out that a classical condition from geometric measure theory is tailor-made in order to define a geometric assumption on a $(d-1)$-dimensional shape $\Sigma$ in $\Omega$.
\begin{assu} \label{a-d-1} Let $\Sigma\subset \Omega$ be a $(d-1)$-set in the sense of Jonsson--Wallin \cite[Subsection~VII.1.1]{JW}. Precisely: the set $\Sigma$ is Borel measurable and there exist $c_1,c_2 > 0$ such that \begin{equation} \label{e-measure00} c_1 r^{d-1} \le \mathcal H_{d-1} \bigl (B( x, r) \cap \Sigma \bigr ) \le c_2 r^{d-1} \end{equation} for all $x \in \Sigma$ and $r \in (0,1)$. \end{assu} \begin{rem} \label{r-hypersurf} We emphasize that $\Sigma$ does not have to be closed. Nevertheless, $\Sigma$ has \emph{finite} $(d-1)$-dimensional Hausdorff measure, according to \eqref{e-measure00}. The prototype of $\Sigma$ is the finite union $\bigcup_j\Sigma_j$ of Lipschitzian hypersurfaces. In that case the restriction of the Hausdorff measure $\mathcal H_{d-1}$ to $\Gamma$ or to $\Sigma_j$ can be constructed explicitly in terms of the local bi-Lipschitz charts (compare \cite[Section~3.3.4~C]{EvG}). In particular, if $\Sigma$ is a finite union of Lipschitz graphs, then \eqref{e-measure00} is easily verified using this representation of $\mathcal H_{d-1}$. Moreover, Assumption~\ref{a-d-1} implies for general $\Sigma$ that $\Sigma$ is of ($d$-dimensional) Lebesgue measure $0$. \end{rem} Throughout this paper we always presume Assumptions~\ref{a-region} and \ref{a-d-1}. \begin{definition} We denote by $\rho$ the restriction of the Hausdorff measure $\mathcal H_{d-1}$ to $\Gamma \cup \Sigma$. \end{definition} If $u \in L^1_{\rm loc}(\field{R}^d)$ and $F \subset \field{R}^d$ is a set, then define the function $\tr_F u$ as in \cite[Page~15]{JW} by \[ (\tr_F u)(x) = \lim_{r \to 0} \frac{1}{|B(x,r)|} \, \int_{B(x,r)} u(y)\, d y, \] for all $x \in F$ for which the limit exists. The domain $\dom(\tr_F u)$ of $\tr_F u$ is the set of all $x \in F$ for which this limit exists. \begin{lemma} \label{l-measure} Let $q,r \in [1,\infty)$ and $\theta \in [0,1]$. Let $\gE$ be the extension operator as in Proposition~{\rm \ref{p-extension}}. \begin{statements} \item \label{l-measure-1} If $\frac {1}{q} - \frac {1-\theta}{d} \le \frac {1}{r}$, then $\mathfrak E$ maps $W^{1,q}_\Gamma$ continuously into $W^{\theta,r}(\field{R}^d)$. \item \label{l-measure-1.5} If $\frac {1}{q} - \frac {1-\theta}{d} < \frac {1}{r}$, then $\mathfrak E$ maps $W^{1,q}_\Gamma$ compactly into $W^{\theta,r}(\field{R}^d)$. \item \label{l-measure-2} If $\theta \in (\frac {1}{q},1]$, then the trace map $u \mapsto \tr_{\Gamma \cup \Sigma} u$ is continuous from $W^{\theta,q}(\field{R}^d)$ into $L^q(\Gamma \cup \Sigma; d\rho)$. \end{statements} \end{lemma} \begin{proof} `\ref{l-measure-1}' and `\ref{l-measure-1.5}'. This follows from Proposition~\ref{p-extension}, the support property of $\mathfrak E$ and the usual Sobolev embedding. `\ref{l-measure-2}'. Since $\Gamma$ and $\Sigma$ are disjoint, the natural map from the space $L^q(\Gamma \cup \Sigma; d\rho)$ into $L^q(\Sigma ; d\mathcal H_{d-1})\times L^q( \Gamma; d\mathcal H_{d-1})$ is a linear, topological isomorphism. Therefore, it suffices to show that the trace maps $u \mapsto \tr_\Gamma u$ and $u \mapsto \tr_\Sigma u$ are continuous from $W^{\theta,q}(\field{R}^d)$ into $L^q(\Gamma; d\mathcal H_{d-1})$ and $L^q(\Sigma; d\mathcal H_{d-1})$. It follows from \cite[Chapter~VIII, Proposition~1]{JW} that property \eqref{e-measure00} is inherited by the closure $\overline \Sigma$ of $\Sigma$. Then the trace operator $u \mapsto \tr_{\overline \Sigma} u$ is bounded from $W^{\theta,q}(\field{R}^d)$ into $L^q(\overline { \Sigma}; d\mathcal H_{d-1})$ by \cite[Chapter~V, Theorem~1]{JW}.
But the set difference $\overline { \Sigma}\setminus \Sigma$ is of $\mathcal H_{d-1}$ measure $0$ (see again \cite[Chapter~VIII, Proposition~1]{JW}). Consequently the spaces $L^q(\overline { \Sigma}; d\mathcal H_{d-1})$ and $L^q( { \Sigma}; d\mathcal H_{d-1})$ are identical. Next we consider the set $\Gamma$. Using the notation as in Assumption \ref{a-region}, for every $x \in \overline \Gamma$ the map $F_x$ provides a bi-Lipschitz parametrization of $\partial \Omega \cap \mathcal V_x$, where the parameters run through the upper plate $P:= (-1,1)^{d-1} \times \{0\}$ of the half cube $E_-$. Moreover, the Hausdorff measure $\mathcal H_{d-1}$ on $\partial \Omega \cap \mathcal V_x$ is the surface measure, and the latter is obtained from the Lebesgue measure on $(-1,1)^{d-1} \times \{0\}$ via the bi-Lipschitzian parametrization, see \cite[Section~3.3.4~C]{EvG}. Define $\mathcal W_x = F_x\big ( (-\frac {1}{2},\frac {1}{2})^{d-1} \times \{0\}\big)$. Then $\mathcal W_x \subset \partial \Omega$. There exist $n \in \field{N}$ and $x_1,\ldots,x_n \in \overline \Gamma$ such that $\mathcal W_{x_1},\ldots,\mathcal W_{x_n}$ is a finite cover of $\overline \Gamma$. Obviously, $\overline {\mathcal W_{x_1}},\ldots,\overline {\mathcal W_{x_n}} $ is also a finite cover of $\overline \Gamma$. Moreover, it is not hard to see that $\bigcup_{j=1}^n \overline {\mathcal W_{x_j}}$ is a $(d-1)$-set in the sense of Jonsson--Wallin (compare \cite[Lemma~3.2]{HaR2}). Hence by \cite[Chapter~V, Theorem~1]{JW} there exists a continuous trace operator from $W^{\theta,q}(\field{R}^d)$ into $L^q(\cup_{j=1}^n \overline {\mathcal W_{x_j}}; d\mathcal H_{d-1})$. Combining this operator with the restriction operator to $\Gamma$, one obtains the desired trace operator into $L^q(\Gamma ; d\mathcal H_{d-1})$. \end{proof} For all $u \in L^1_{\rm loc}(\Omega)$ define the function $\tr u$ as in \cite[Section~VIII.1.1]{JW} by \[ \dom(\tr u) = \Big\{ x \in \Gamma \cup \Sigma : \lim_{r \to 0} \frac{1}{|B(x,r) \cap \Omega|} \, \int_{B(x,r) \cap \Omega} u(y)\, dy \;\; \mbox{ exists} \Big\} \] and \[ (\tr u)(x) = \lim_{r \to 0} \frac{1}{|B(x,r) \cap \Omega|} \, \int_{B(x,r) \cap \Omega} u(y)\,dy \] for all $x \in \dom(\tr u)$. The above defined trace enjoys the following mapping properties. \begin{proposition} \label{pLpR201} Let $q,r \in (1,\infty)$ and suppose that $\frac{d-q}{q} < \frac{d-1}{r}$. Then $\tr u \in L^r(\Gamma \cup \Sigma; d\rho)$ for all $u \in W^{1,q}_\Gamma$, and the map $u \mapsto \tr u$ is compact from $W^{1,q}_\Gamma$ into $L^r(\Gamma \cup \Sigma; d\rho)$. \end{proposition} \begin{proof} Let $\gE$ be the extension operator as in Proposition~{\rm \ref{p-extension}}. Then it follows from Lemma~\ref{l-measure} that $u \mapsto \tr_{\Gamma \cup \Sigma} \gE u$ maps $W^{1,q}_\Gamma$ compactly into $L^r(\Gamma \cup \Sigma; d\rho)$. But if $u \in W^{1,q}_\Gamma$, then we claim that \begin{equation}\label{trace-identity} (\tr u)(x) = (\tr_{\Gamma \cup \Sigma} \gE u)(x) \end{equation} for $\mathcal H_{d-1}$-a.e.\ $x \in \Gamma \cup \Sigma$. Obviously, this identity holds for $\mathcal H_{d-1}$-a.e.\ $x\in \Sigma$ since $\Sigma \subset \Omega$. For $\mathcal H_{d-1}$-a.e.\ $x\in \Gamma$ we can argue as in the proof of \cite[Chapter~VIII, Proposition~2]{JW}, where the case $\Gamma = \partial\Omega$ is considered. Indeed, the arguments given there are purely local. 
Since $\gE u \in W^{1,q}(\field{R}^d)$ it follows that for $\mathcal H_{d-1}$-a.e.\ $x\in \Gamma$ there exists a Borel set $E \subset \field{R}^d$ such that $\mathcal H_{d-1}(E\cap B(x,r)) = o(r^{d-1})$ and $(\gE u)(x) = \displaystyle \lim_{y\to x, \; y\notin E} (\gE u)(y)$. Using these properties of $E$, the same arguments as in the last part of the proof given in \cite{JW} establish \eqref{trace-identity}. \end{proof} The space on which (\ref{e-parabol})--(\ref{e-initial}) will be realized is given as follows. \begin{definition} \label{d-lp} For all $p \in [1,\infty]$, denote by $\mathbb L^p$ the Lebesgue space $L^p(\Omega \cup \Gamma; d x+d\rho)$. We denote the space of all real valued functions in $\mathbb L^p$ by $\mathbb L^p_\field{R}$. \end{definition} Observe that there is a natural topological isomorphism between $\mathbb L^p$ and the direct sum $L^p(\Omega) \oplus L^p(\Gamma\cup \Sigma;d\rho)$ and we will identify $\mathbb L^p$ with $L^p(\Omega) \oplus L^p(\Gamma\cup \Sigma;d\rho)$ through this natural map. By Proposition~\ref{pLpR201} we can define the map $\gJ \colon W^{1,2}_\Gamma \to \LL^2$ by \[ \gJ u = (u, \tr u) \in L^2(\Omega) \oplus L^2(\Gamma\cup \Sigma;d\rho) \cong \LL^2 . \] Note that one can always choose some $p > 2$ in Statement~\ref{lLpR202-2} of the next lemma. \begin{lemma} \label{lLpR202} \mbox{} \begin{statements} \item \label{lLpR202-1} The map $\gJ$ is continuous and has dense range. \item \label{lLpR202-2} If $p \in [1,\infty)$ and $(d-2) p < 2 (d-1)$, then $\gJ W^{1,2}_\Gamma \subset \LL^p$. \item \label{lLpR202-3} The map $\gJ$ is compact. \end{statements} \end{lemma} \begin{proof} `\ref{lLpR202-1}'. The continuity follows from Proposition~\ref{pLpR201}. Let $f = (f_\Omega,f_\partial) \in L^2(\Omega) \oplus L^2(\Gamma\cup \Sigma;d\rho)$ and suppose that $(\gJ u,f)_{L^2(\Omega) \oplus L^2(\Gamma\cup \Sigma;d\rho)} = 0$ for all $u \in W^{1,2}_\Gamma$. We show that $f=0$. For all $u \in C_c^\infty(\Omega \setminus \overline \Sigma)$ one has $0 = (\gJ u,f) = \int_\Omega u \, \overline{f_\Omega}\, dx$. Since $C_c^\infty(\Omega \setminus \overline \Sigma)$ is dense in $L^2(\Omega \setminus \overline \Sigma) = L^2(\Omega)$ one deduces that $f_\Omega = 0$. Therefore $0 = \int_{\Gamma\cup \Sigma} \tr u \, \overline{f_\partial} \, d\rho$ for all $u \in W^{1,2}_\Gamma$ and in particular for all $u \in C^\infty_\Gamma(\Omega)$. But $ \{ u|_{\Gamma\cup \Sigma} : u \in C^\infty_\Gamma(\Omega) \} $ is dense in $L^2(\Gamma\cup \Sigma; d\rho)$. So $f_\partial = 0$. `\ref{lLpR202-2}'. If $\gE$ is the extension operator as in Proposition~{\rm \ref{p-extension}} then it follows from Lemma~\ref{l-measure}\ref{l-measure-1} that $\gE$ maps $W^{1,2}_\Gamma$ continuously into $L^p(\field{R}^d)$ for all $p \in [1,\infty)$ with $(d-2) p \leq 2d$. So $W^{1,2}_\Gamma \subset L^p(\Omega)$. Now the statement follows from Proposition~\ref{pLpR201}. `\ref{lLpR202-3}'. It follows immediately from Lemma~\ref{l-measure}\ref{l-measure-1.5} that the restriction $\gE|_\Omega$ maps $W^{1,2}_\Gamma$ compactly into $L^2(\Omega)$. So the embedding of $W^{1,2}_\Gamma$ into $L^2(\Omega)$ is compact. Also the map $\tr$ is compact from $W^{1,2}_\Gamma$ into $L^2(\Gamma\cup \Sigma;d\rho)$ by Proposition~\ref{pLpR201}. Therefore the map $\gJ$ is compact. \end{proof} We end this subsection with a truncation lemma. \begin{lemma} \label{lLpR204} Let $u \in W^{1,2}_\Gamma$ be real-valued. 
Then $u \wedge \mathds{1}_\Omega \in W^{1,2}_\Gamma$ and $\gJ(u \wedge \mathds{1}_\Omega) = (\gJ u) \wedge \mathds{1}_{\Omega \cup \Gamma}$. \end{lemma} \begin{proof} The first statement is shown in the proof of \cite[Theorem~3.1]{ERe1}. The second statement is obvious for real-valued $u\in C_\Gamma^\infty(\Omega)$. Since the maps $u\mapsto \gJ(u \wedge \mathds{1}_\Omega)$ and $u\mapsto (\gJ u) \wedge \mathds{1}_{\Omega \cup \Gamma}$ are continuous on the real version of $W^{1,2}_\Gamma$, the identity carries over to the general case by density. \end{proof} \subsection{The operator on $\LL^p$} \label{SLpS2.2} In this subsection we introduce a differential operator on $\LL^p$ that corresponds to the spatial derivatives in \eqref{e-parabol}, \eqref{e-robin} and \eqref{e-xi-eq}. Throughout the remaining of this paper we adopt the next assumption. \begin{assu} \label{a-coeff} Let $\mu=\bigl \{\mu_{k,l}\bigr \}_{k,l} \colon \Omega \to \mathcal L(\field{R}^d;\field{R}^d)$ be a measurable map from $\Omega$ into the set of real $d\times d$ matrices. We assume that there are $\low{\mu}, \upp{\mu} > 0$ such that \[ \norm{{\mu}( x)}_{\mathcal L(\field{R}^d;\field{R}^d)} \le \upp{\mu}\quad \text{and} \quad \sum_{k,l=1}^d \mu_{k,l}( x)\, \xi_k\, \xi_l\ge \low{\mu} \sum_{k=1}^d \xi_k^2 \] for all $ x \in \Omega $ and $\xi=(\xi_1,\ldots,\xi_d)\in\field{R}^d$. \end{assu} We emphasize that $\mu$ does not have to be symmetric. \begin{definition} \label{d-form} Define the sesquilinear form ${\gt} \colon W^{1,2}_\Gamma \times W^{1,2}_\Gamma \to \field{C}$ by \[ {\gt}[u,v] = \int_\Omega \mu \nabla u \cdot \overline {\nabla v} \, d x . \] \end{definition} We emphasize that the domain of the form $\gt$ is the space $W^{1,2}_\Gamma$, which appropriately incorporates the Dirichlet condition on $\partial \Omega \setminus \Gamma$, compare \cite[Section~1.2]{Cia} or \cite[Section~II.2]{GGZ}. The form $\gt$ is continuous and \begin{equation}\label{J-ellipticity} \mathop{\rm Re} \gt[u,u] + \|\gJ u\|_{\LL^2}^2 \geq (\low{\mu} \wedge 1) \|u\|_{W^{1,2}_\Gamma}^2 \end{equation} for all $u \in W^{1,2}_\Gamma$. Therefore by Lemma \ref{lLpR202}\ref{lLpR202-1} and \cite[Theorem~2.1]{AE2} there exists a unique operator $A_2$ in $\LL^2$ such that for all $\varphi,\psi \in \LL^2$ one has $\varphi \in \dom(A_2)$ and $A_2 \varphi = \psi$ if and only if there exists a $u \in W^{1,2}_\Gamma$ such that $\gJ u = \varphi$ and \begin{equation} \label{e-opdef} \gt[u,v] = (\psi, \gJ v)_{\LL^2} \end{equation} for all $v \in W^{1,2}_\Gamma$. Although the form domain of $\gt$ is $W^{1,2}_\Gamma$, the operator $A_2$ is an operator in $\LL^2$. We refer to the introduction for a discussion of the relation of $A_2$ to the original problem (\ref{e-parabol})--(\ref{e-initial}). \begin{rem} \label{r-identopJ} The construction of $A_2$ generalizes the derivation of an operator from a suitable form $\mathfrak s$ to the case when the form domain $D_\mathfrak s$ is a priori \emph{not} contained in the corresponding Hilbert space $\mathfrak H$ (compare \cite[Section~VI.2]{Kat1} for the classical case). The substitute for the inclusion $D_\mathfrak s \subset \mathfrak H$ is the definition of an appropriate embedding operator $\gJ \colon D_\mathfrak s \to \mathfrak H$. Fortunately, all tools for form methods are still available. \end{rem} \begin{proposition} \label{pLpR203} The operator $A_2$ is $m$-sectorial with vertex $0$ and semi-angle $\arctan \frac{\upp{\mu}}{\low{\mu}}$. Moreover, $A_2$ has compact resolvent. 
\end{proposition} \begin{proof} It follows from \cite[Theorem~2.1]{AE2} that $A_2$ is $m$-sectorial. Let $\varphi \in \dom(A_2)$ and $u \in W^{1,2}_\Gamma$ with $\gJ u = \varphi$. Then $\mathop{\rm Re} ( A_2 \varphi, \varphi)_{\LL^2} = \mathop{\rm Re} \gt[u,u] \geq 0$. Hence the vertex is $0$. Further, one has $\mathop{\rm Re} \gt[u,u] \geq \low{\mu} \int_\Omega |\nabla \mathop{\rm Re} u|^2 +|\nabla \mathop{\rm Im} u|^2\, dx $ and \[ |\!\mathop{\rm Im} \gt[u,u]| \leq 2 \upp{\mu} \int_\Omega |\nabla \mathop{\rm Re} u| |\nabla \mathop{\rm Im} u| \, dx \leq \upp{\mu} \int_\Omega |\nabla \mathop{\rm Re} u|^2 +|\nabla \mathop{\rm Im} u|^2\, dx. \] Thus $|\arg ( A_2 \varphi, \varphi)_{\LL^2} | \leq \arctan \frac{\upp{\mu}}{\low{\mu}}$ if $\varphi \neq 0$. Since the map $\gJ$ is compact by Lemma~\ref{lLpR202}\ref{lLpR202-3}, the generator has compact resolvent by \cite[Lemma~2.7]{AE2}.\end{proof} We continue with the analysis of the operator $A_2$. By Proposition~\ref{pLpR203} the operator $A_2$ is $m$-sectorial with vertex $0$ and semi-angle $\arctan \frac{\upp{\mu}}{\low{\mu}}$. Hence by \cite[Theorem IX.1.24]{Kat1} the operator $-A_2$ generates a semigroup, denoted by $S$, which is holomorphic and contractive on the sector with semi-angle $\arctan \frac{\upp{\mu}}{\low{\mu}}$. \begin{proposition} \label{pLpR205} The semigroup $S$ leaves $\LL^2_\field{R}$ invariant, and it is submarkovian and positive. \end{proposition} \begin{proof} Clearly the set $\LL^2_\field{R}$ is closed and convex in $\LL^2$. Moreover, $\varphi \mapsto \mathop{\rm Re} \varphi$ is the projection from $\LL^2$ onto $\LL^2_\field{R}$ and $\mathop{\rm Re} \gt (u, u - \mathop{\rm Re} u) = 0$ for all $u \in W^{1,2}_\Gamma$. Since the form $\gt$ is accretive, the set $\LL^2_\field{R}$ is invariant under the semigroup by \cite[Proposition~2.9(ii)]{AE2}. Next, let $C = \{ u \in \LL^2 : u \mbox{ is real valued and } u \leq \mathds{1} \} $. Then $C$ is closed and convex. Let $P \colon \LL^2 \to C$ denote the orthogonal projection. Then $P u = (\mathop{\rm Re} u) \wedge \mathds{1}_{\Omega \cup \Gamma}$. Let $u \in W^{1,2}_\Gamma$. By Lemma~\ref{lLpR204} one has $(\mathop{\rm Re} u) \wedge \mathds{1}_\Omega \in W^{1,2}_\Gamma$ and $P\gJ u = \gJ((\mathop{\rm Re} u) \wedge \mathds{1}_\Omega)$. Moreover, an easy calculation gives \[ \mathop{\rm Re} \gt[(\mathop{\rm Re} u) \wedge \mathds{1}_\Omega, u - (\mathop{\rm Re} u) \wedge \mathds{1}_\Omega] = 0 . \] Observing that the form $\gt$ is accretive, it follows from \cite[Proposition~2.9(ii)]{AE2} that $C$ is invariant under the semigroup $S$. Now let $\varphi\in \LL^2 \cap \LL^\infty$ and $t > 0$. There exists an $\alpha \in \field{R}$ such that $\|S_t \varphi\|_{\LL^\infty} = \|\mathop{\rm Re} (e^{i \alpha} S_t \varphi)\|_{\LL^\infty}$. But $\mathop{\rm Re} (e^{i \alpha} S_t \varphi) = S_t \mathop{\rm Re} (e^{i \alpha} \varphi)$. Therefore \[ \|S_t \varphi\|_{\LL^\infty} = \|S_t \mathop{\rm Re} (e^{i \alpha} \varphi)\|_{\LL^\infty} \leq \|\mathop{\rm Re} (e^{i \alpha} \varphi)\|_{\LL^\infty} \leq \|\varphi\|_{\LL^\infty} \] and $S$ is submarkovian. Finally, if $\varphi \in \LL^2_\field{R}$ and $\varphi \leq 0$, then $n \varphi \in C$ for all $n \in \field{N}$. So $S_t(n \varphi) \leq \mathds{1}$ for all $t > 0$ and $n \in \field{N}$. Therefore $S_t \varphi \leq 0$ and $S$ is positive. \end{proof} \begin{corollary} \label{cLpR209} For all $p \in [1,\infty]$ the semigroup $S$ extends consistently to a contraction semigroup $S^{(p)}$ on $\LL^p$.
The semigroup $S^{(p)}$ is strongly continuous for all $p \in [1,\infty)$. \end{corollary} \begin{proof} Observe that if the coefficient matrix $\mu$ satisfies the conditions of Assumption~\ref{a-coeff}, then its transpose satisfies these as well. Thus the dual semigroup $S^*$ shares the same properties as $S$. Now the assertion follows from Proposition \ref{pLpR205} and standard interpolation and duality arguments, see e.g.\ \cite[page 56]{Ouh5}. \end{proof} We denote the generator of $S^{(p)}$ by $-A_p$. Then $-A_p$ is dissipative by the Lumer--Phillips theorem. If no confusion is possible we write $S = S^{(p)}$. \begin{rem} \label{r-invers} It is possible to prove the dissipativity of $-A_p$ also by showing that the form $-\mathfrak t$ is $p$-dissipative, cf. \cite{CiaM}. \end{rem} \begin{lemma} \label{lLpR206} \mbox{} \begin{statements} \item \label{lLpR206-1} The semigroup $S$ is ultracontractive. Moreover, for all $\beta > d-1$ and $\omega >0$ there exists a $c > 0$ such that \[ \|S_t \varphi\|_{\LL^q} \leq c \, t^{-\beta ( \frac{1}{p} - \frac{1}{q} )} e^{\omega t}\|\varphi\|_{\LL^p} \] for all $t > 0$, $\varphi \in \LL^p$ and $p,q \in [1,\infty]$ with $p \leq q$. \item \label{lLpR206-2} If $1 \leq p < q \leq \infty$ and $j \in \field{N}$ are such that $\frac{d-1}{j} \, ( \frac{1}{p} - \frac{1}{q} ) < 1$, then the operator $(A_p + 1)^{-j}$ maps $\LL^p$ continuously into $\LL^q$. \item \label{lLpR206-3} The operator $A_p$ has compact resolvent for all $p \in (1,\infty)$. \item \label{lLpR206-4} If the matrix of coefficients $\mu$ is symmetric, then the operator $A_2$ is self-adjoint and positive. \end{statements} \end{lemma} \begin{proof} `\ref{lLpR206-1}'. Let $r \in (2,\infty)$ be such that $(d-2) r < 2(d-1)$. Then it follows from Lemma~\ref{lLpR202}\ref{lLpR202-2} that $ \gJ W^{1,2}_\Gamma \subset \LL^r$, and the inclusion is continuous by the closed graph theorem. Let $\varphi \in \mathbb L^2$ and $t>0$. Since $S_t \varphi\in \dom (A_2)$, there is a $u \in W^{1,2}_\Gamma$ such that $S_t \varphi = \gJ u$. For given $\omega >0$ one has \begin{eqnarray*} \|S_t \varphi\|_{\mathbb L^r}^2 = \|\gJ u\|_{\mathbb L^r}^2 \leq C\,\|u\|_{W^{1,2}_\Gamma}^2 & \leq & C (\low{\mu} \wedge 1)^{-1} \big( \mathop{\rm Re} \gt[u,u] + \|\gJ u\|_{\LL^2}^2 \big)\\ & = & C (\low{\mu} \wedge 1)^{-1} \big( \mathop{\rm Re} ( A_2 S_t \varphi, S_t \varphi)_{\mathbb L^2} + \|S_t \varphi\|_{\LL^2}^2 \big)\\ & \leq & C' \, t^{-1} e^{2\omega t}\|\varphi\|_{\LL^2}^2, \end{eqnarray*} for suitable $C,C' > 0$, using \eqref{J-ellipticity}, the definition of $A_2$, the Cauchy--Schwarz inequality and the holomorphy and contractivity of $S_t$. Therefore the semigroup $S$ is ultracontractive, and by \cite[Lemma~6.1]{Ouh5} there exists a $c > 0$ such that \[ \|e^{-\omega t}S_t \varphi\|_{\LL^\infty} \leq c \, t^{- \frac{r}{2(r-2)} } \, \|\varphi\|_{\LL^2} \] for all $t > 0$ and $\varphi \in \LL^2$. Now duality and interpolation give Statement~\ref{lLpR206-1}. Statement~\ref{lLpR206-2} follows from \ref{lLpR206-1} and the well-known formula \[ (A_p+1)^{-j} = \frac{1}{(j-1)!}\int_0^\infty t^{j-1}e^{-t} S_t \, dt. \] Statement~\ref{lLpR206-3} is a consequence of Proposition~\ref{pLpR203} and interpolation. The last statement of the lemma is easy to verify. \end{proof} \subsection{Multipliers acting on Lebesgue spaces} \label{s-mult} In order to solve \eqref{e-parabol}--\eqref{e-initial}, we divide \eqref{e-parabol} (at first formally) by $\varepsilon$. 
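In operator form this formal step reads as follows (it is carried out rigorously in the proof of Theorem~\ref{t-solution} below): with $A$ denoting the realization of $-\nabla \cdot \mu \nabla$, dividing the abstract equation $\varepsilon u' + A u = f$ by $\varepsilon$ gives
\[
u' + \varepsilon^{-1} A u = \varepsilon^{-1} f .
\]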
Obviously, one is then confronted with the necessity to investigate the functional analytic properties of operators of the type $\varsigma A_p$, where $\varsigma$ is a bounded strictly positive measurable function. Concerning the generator property of an analytic semigroup in a space $L^p(\Omega)$ this was carried out in \cite{GKR} and concerning maximal parabolic regularity on $L^p(\Omega)$ in \cite{HiebR}. In the latter case the decisive instrument was the insight from \cite{DO} that a suitable multiplicative perturbation does not destroy upper Gaussian estimates, which in turn imply maximal parabolic regularity on $L^p(\Omega)$. Unfortunately, we cannot apply this here, since our Lebesgue space does not only live `on the volume'. But a surprisingly simple trick allows us to overcome the problem in the present context. The next proposition is of independent interest. \begin{proposition} \label{pLpR211} Let $(X,{\mathcal{B}},\lambda)$ be a measure space and let $\tau \colon X \to (0,\infty)$ be a measurable function such that both $\tau$ and $\tau^{-1}$ are bounded. Let $p \in [1,\infty)$ and let $T$ be an operator in $L^p(X,d\lambda)$. \begin{statements} \item \label{pLpR211-1} If $T$ is dissipative on $L^p(X,d\lambda)$, then $\tau T$ is dissipative on $L^p(X,\tau^{-1} d\lambda)$. \item \label{pLpR211-2} If $T$ generates a strongly continuous contraction semigroup on $L^p(X,d\lambda)$, then $\tau T$ generates a strongly continuous contraction semigroup on $L^p(X,\tau^{-1} d\lambda)$. \item \label{pLpR211-2.5} If $p = 2$, $\theta \in (0,\frac{\pi}{2}]$ and $T$ generates a holomorphic semigroup in $L^2(X,d\lambda)$ which is contractive in the sector with semi-angle $\theta$, then $\tau T$ generates a holomorphic semigroup in $L^2(X,\tau^{-1} d\lambda)$ which is contractive in the sector with semi-angle $\theta$. \end{statements} Now suppose that $p = 2$ and $T$ generates a strongly continuous contraction semigroup $S$ on $L^2(X,d\lambda)$. Denote the semigroup generated by $\tau T$ on $L^2(X,\tau^{-1} d\lambda)$ by $S^\tau$. \begin{statements} \setcounter{teller}{3} \item \label{pLpR211-3} If $S$ leaves the real valued functions invariant, then $S^\tau$ also leaves the real valued functions invariant. \item \label{pLpR211-4} If $S$ is positive, then $S^\tau$ is also positive. \item \label{pLpR211-5} Suppose $S$ is submarkovian. Then $S^\tau$ is also submarkovian. Hence for all $q \in [2,\infty)$ the semigroups $S$ and $S^\tau$ extend consistently to a strongly continuous semigroup $S^{(q)}$ and $S^{(\tau,q)}$ on $L^q(X,d\lambda)$ and $L^q(X,\tau^{-1} d\lambda)$, respectively. Let $T_q$ and $T_{\tau,q}$ denote the generators. Then $T_{\tau,q} = \tau T_q$ for all $q \in [2,\infty)$. \end{statements} \end{proposition} \begin{proof} `\ref{pLpR211-1}'. The operator $T$ is dissipative on $L^p(X,d\lambda)$ if and only if \[ \mathop{\rm Re} \int_{\{f \neq 0\}} (T f) \, |f|^{p-2} \, \overline f \, d\lambda \leq 0 \] for all $f \in D(T)$. This implies the dissipativity of $\tau T$ on $L^p(X,\tau^{-1} d\lambda)$. `\ref{pLpR211-2}'. Since $T$ generates a contraction semigroup on $L^p(X,d\lambda)$, it follows that $T$ is dissipative. Therefore $\tau T$ is dissipative on $L^p(X,\tau^{-1} d\lambda)$ by Statement~\ref{pLpR211-1}. So by the Lumer--Phillips theorem it remains to show that the operator $\tau T - 1$ is surjective on $L^p(X,\tau^{-1} d\lambda)$. Let $\delta > 0$ be such that $\tau^{-1} - \delta \geq 0$. 
Then the multiplication operator $-( \tau^{-1} - \delta)$ is dissipative on $L^p(X,d\lambda)$ and has a relative bound equal to zero with respect to $T$. Therefore $T - ( \tau^{-1} - \delta)$ generates a strongly continuous contraction semigroup on $L^p(X,d\lambda)$ by the perturbation result \cite[Theorem~3.7]{Dav1}. Hence $T - \tau^{-1}$ is surjective on $L^p(X,d\lambda)$ by the Lumer--Phillips theorem. But this implies that $\tau T - 1$ is surjective on $L^p(X,\tau^{-1} d\lambda)$. `\ref{pLpR211-2.5}'. For all $\alpha \in (-\theta,\theta)$ the above applies to the operator $e^{i \alpha} T$. Therefore $e^{i \alpha} \tau T$ generates a strongly continuous contraction semigroup on $L^2(X,\tau^{-1} d\lambda)$. Hence by \cite[Theorem IX.1.23]{Kat1} the operator $\tau T$ generates a holomorphic semigroup in $L^2(X,\tau^{-1} d\lambda)$ which is contractive on the sector with semi-angle $\theta$. Now suppose $p = 2$ and $T$ generates a strongly continuous contraction semigroup $S$ on $L^2(X,d\lambda)$. Let $C$ be a closed convex subset of $L^2(X,d\lambda)$. Then $C$ is also closed and convex in $L^2(X,\tau^{-1} d\lambda)$. Since $T$ is $m$-dissipative it follows from \cite[Theorem~2.2]{Ouh3} that $C$ is invariant under $S$ if and only if $\mathop{\rm Re} (T f, f - Pf)_{L^2(X,d\lambda)} \leq 0$ for all $f \in D(T)$, where $P$ is the orthogonal projection in $L^2(X,d\lambda)$ onto~$C$. Similarly, since $\tau T$ is $m$-dissipative on $L^2(X,\tau^{-1} d\lambda)$, the set $C$ is invariant under $S^\tau$ if and only if $\mathop{\rm Re} (\tau T f, f - P^\tau f)_{L^2(X, \tau^{-1} d\lambda)} \leq 0$ for all $f \in D(\tau T)$, where $P^\tau$ is the orthogonal projection in $L^2(X,\tau^{-1} d\lambda)$ onto~$C$. But $D(\tau T) = D(T)$. Hence if $P = P^\tau$, then $C$ is invariant under $S$ if and only if $C$ is invariant under $S^\tau$. Then for the proof of Statement~\ref{pLpR211-3} choose $C = \{ f \in L^2(X,d\lambda) : f \mbox{ is real valued} \} $ and note that the projection is $P f = \mathop{\rm Re} f = P^\tau f$. For the proof of Statement~\ref{pLpR211-4} choose $C = \{ f \in L^2(X,d\lambda) : f \mbox{ is real valued and } f \geq 0 \} $ and note that the projection is $P f = (\mathop{\rm Re} f)^+ = P^\tau f$. For the submarkovian part in the proof of Statement~\ref{pLpR211-5} choose $C = \{ f \in L^2(X,d\lambda) : |f| \leq 1 \mbox{ a.e.} \} $ and note that the projection is $P f = (|f| \wedge \mathds{1}) \sgn f = P^\tau f$. It remains to prove the second part of Statement~\ref{pLpR211-5}. Let $q \in [2,\infty)$. Let $u \in D(T_{\tau,q}) \cap D(T_{\tau,2})$. Write $v = T_{\tau,2} u$. Then $u \in L^2(X, d\lambda) \cap L^q(X, d\lambda)$ and $T_{\tau,q} u = T_{\tau,2} u = v$. So $v \in L^q(X, d\lambda)$ and $\tau^{-1} v \in L^q(X, d\lambda)$ since $\tau^{-1}$ is bounded. Moreover, $T_{\tau,2} = \tau T_2$, so $u \in D(T_2)$ and $T_2 u = \tau^{-1} v$. Therefore \[ t^{-1} (S_t^{(q)} u - u ) = t^{-1} (S_t^{(2)} u - u ) = t^{-1} \int_0^t S_s^{(2)} T_2 u \, ds = t^{-1} \int_0^t S_s^{(q)} T_2 u \, ds \] for $t >0$. As $t\downarrow 0$, the latter term converges to $T_2 u$ in $L^q(X, d\lambda)$ by the strong continuity of $S^{(q)}$. Hence $u \in D(T_q)$ and $T_q u = T_2 u = \tau^{-1} v$. Then $\tau T_q u = v = T_{\tau,q} u$. We proved that \[ D(T_{\tau,q}) \cap D(T_{\tau,2}) \subset D(\tau T_q) \cap D(\tau T_2) \] and $T_{\tau,q} u = \tau T_q u$ for all $u \in D(T_{\tau,q}) \cap D(T_{\tau,2})$. 
Similarly the converse inclusion is valid, so \[ D(T_{\tau,q}) \cap D(T_{\tau,2}) = D(\tau T_q) \cap D(\tau T_2) = D(T_q) \cap D(T_2) . \] We claim that $D(T_{q}) \cap D(T_{2})$ is dense in $D(T_{q})= D(\tau T_q)$. Consider the set \[ {\mathcal{D}} = \{ t^{-1} \int_0^{t} S_s^{(q)} u \, ds : u \in L^q(X, d\lambda)\cap L^2(X, d\lambda), t \in (0,\infty) \} . \] Then ${\mathcal{D}} \subset D(T_q)$. Since $S^{(q)}$ and $S^{(2)}$ are consistent, also ${\mathcal{D}} \subset D(T_2)$. So ${\mathcal{D}} \subset D(T_q) \cap D(T_2)$. Moreover, $\lim_{t \downarrow 0} t^{-1} \int_0^{t} S_s^{(q)} u \, ds = u$ in $L^q(X, d\lambda)$ for all $u \in L^q(X, d\lambda)\cap L^2(X, d\lambda)$ and $L^q(X, d\lambda)\cap L^2(X, d\lambda)$ is dense in $L^q(X, d\lambda)$. Therefore ${\mathcal{D}}$ is dense in $L^q(X, d\lambda)$. Clearly ${\mathcal{D}}$ is invariant under $S^{(q)}$. Hence ${\mathcal{D}}$ is a core for $T_q$ by \cite[Proposition II.1.7]{EN}. This implies that $D(T_{q}) \cap D(T_{2})$ is dense in $D(T_{q})$. The same arguments show that $D(T_{\tau,q}) \cap D(T_{\tau,2})$ is dense in $D(T_{\tau,q})$. Hence $T_{\tau,q} = \tau T_q$. \end{proof} Let $\varsigma \colon \Omega \cup \Gamma \to (0,\infty)$ be a measurable function such that $\varsigma,\varsigma^{-1} \in \LL^\infty$. We write \[ \LL^p_\varsigma := L^p(\Omega \cup \Gamma; \varsigma^{-1} (d x +d\rho)) . \] Proposition~\ref{pLpR211} allows to transfer the conclusion of Corollary~\ref{cLpR209} to the operators $\varsigma A_p$. \begin{theorem} \label{t-mult03-new} For all $p \in [1,\infty)$ the operator $-\varsigma A_p$ generates a strongly continuous positive semigroup $S^{(\varsigma,p)}$ of contractions on the space $\LL_\varsigma^p$. The semigroups are consistent. Moreover, $S^{(\varsigma,2)}$ is holomorphic and contractive in the sector with semi-angle $\arctan \frac{\upp{\mu}}{\low{\mu}}$. \end{theorem} \begin{proof} For $p \geq 2$ all follows from Propositions~\ref{pLpR203}, \ref{pLpR205} and \ref{pLpR211}. The dual of the operator $\varsigma A_2$ on $\LL_\varsigma^2$ is given by $\varsigma A^\#$, where $A^\#$ is the operator obtained with coefficient matrix equal to the transpose of the matrix $\mu$. Hence by Proposition~\ref{pLpR211} the dual semigroup $(S^{(\varsigma,2)})^*$ is submarkovian and extends consistently to a strongly continuous semigroup on $\LL_\varsigma^p$ for all $p \in [2,\infty)$. Then by duality the semigroup $S^{(\varsigma,2)}$ extends consistently to a strongly continuous semigroup on $\LL_\varsigma^p$ for all $p \in [1,2]$. \end{proof} \subsection{Consequences for the operators $\varsigma A_p$ on $\LL^p$.} We have the following abstract properties for $\varsigma A_p$. \begin{theorem} \label{t-imagin} Let $p \in (1,\infty)$. Then one has the following. \begin{statements} \item \label{t-imagin-1} The operator $\varsigma A_p$ admits a bounded holomorphic functional calculus on $\mathbb L^p$, with angle strictly smaller than $\frac{\pi}{2}$. In particular, it admits bounded imaginary powers. \item\label{t-imagin-2} For all $\theta \in (0,1)$ one has \[ (\varsigma A_p+1)^{-\theta} =\frac {\sin \pi \theta}{\pi} \int_0^\infty t^{-\theta} (\varsigma A_p+1+t)^{-1} \, dt. \] \item \label{t-imagin-3} If $\theta\in (0,1]$, then $\dom \bigl ((\varsigma A_p)^\theta\bigr ) = [\mathbb L^p, \dom (\varsigma A_p)]_\theta =\dom(A_p^\theta)$, where $[\cdot,\cdot]_\theta$ denotes complex interpolation. \end{statements} \end{theorem} \begin{proof} `\ref{t-imagin-1}'. 
For all $p \in [1,\infty)$ denote by $S^{(\varsigma,p)}$ the contraction semigroup on $\LL_\varsigma^p$ generated by $- \varsigma A_p$. Then the semigroups $S^{(\varsigma,p)}$ with $p \in [1,\infty)$ are consistent. Moreover, $S^{(\varsigma,2)}$ is holomorphic and bounded on a sector. Let $p \in (1,\infty)$. Then it follows from \cite[Proposition 3.12]{Ouh5} and duality that $S^{(\varsigma,p)}$ is holomorphic and bounded in a sector (which depends on $p$). Also $S^{(\varsigma,p)}$ is a positive contraction semigroup. Hence the operator $S^{(\varsigma,p)}_t$ is contractively regular for all $t > 0$. So by \cite[Proposition 2.2]{LeMX} the operator $\varsigma A_p$ admits a bounded holomorphic functional calculus on $\mathbb L_\varsigma^p$, with angle strictly smaller than $\frac{\pi}{2}$. This is then also the case on $\LL^p$, since $\mathbb L^p = \mathbb L_\varsigma ^p$ as vector spaces, with equivalent norms. `\ref{t-imagin-2}'. For the integral representation see \cite[(4.41)]{Lun}. `\ref{t-imagin-3}'. Since $\varsigma A_p$ admits bounded imaginary powers, it follows from \cite[Theorem~4.17]{Lun} that \[ \dom \bigl ((\varsigma A_p)^\theta\bigr ) = [\mathbb L^p, \dom(\varsigma A_p)]_\theta . \] Since $\dom(\varsigma A_p) = \dom( A_p)$, one has $\dom \bigl ((\varsigma A_p)^\theta\bigr ) = [\mathbb L^p, \dom(A_p)]_\theta$, and the result follows. \end{proof} \section{Linear parabolic equations} \label{s-parabolic} In this section we will draw conclusions for linear parabolic equations, which, in particular, allow to give (\ref{e-parabol})--(\ref{e-initial}) a precise meaning and afterwards to solve it. In the following, $J=(0,T)$ denotes a bounded interval and $X$ a Banach space. Throughout we fix the numbers \[ 1<s<\infty \qquad \mbox{and} \qquad \frac{1}{s} < \alpha \leq 1. \] We introduce the weighted space \[ L_\alpha^s(J; X) = \{u \colon J\to X \;:\; [t\mapsto t^{1-\alpha} u(t)] \in L^s(J;X)\}, \] and the corresponding weighted Sobolev space \[ W_\alpha^{1,s}(J;X) = \{u \in L_\alpha^s(J; X) : u'\in L_\alpha^s(J; X)\}, \] where here and below the time derivative is understood in the sense of $X$-valued distributions (see \cite[Subsection~III.1.1]{Ama2}). These are Banach spaces when equipped with their canonical norm, respectively. Note that $\alpha = 1$ corresponds to the unweighted case, i.e., $L_1^s = L^s$. By \cite[Lemma~2.1]{PrSi} one has $W_\alpha^{1,s}(J; X) \subset W^{1,1}(J; X)$, which implies that each element of $W_\alpha^{1,s}(J;X)$ has a well-defined trace at $t=0$. \begin{definition} \label{d-maxreg} Let $A$ be a closed linear operator on $X$ with dense domain $\dom(A)$. We say that $ A$ has \emph {maximal parabolic $L_\alpha^s(J; X)$-regularity}, if for all $f\in L_\alpha^s(J; X)$ there is a unique solution $u \in W_\alpha^{1,s}(J; X) \cap L_\alpha^s(J; \dom(A))$ of \[ u' + Au = f,\qquad u(0) = 0 . \] We write $M\!R_\alpha^s(J;X)$ for the class of all operators on $X$ with this property. \end{definition} We proceed with some comments concerning maximal parabolic regularity. \begin{enumerate} \item It is shown in \cite[Theorem 2.4]{PrSi} that $A\in M\!R_1^s(J;X)$ if and only if $A\in M\!R_\alpha^s(J;X)$ for all $\alpha \in (\frac{1}{s}, 1]$, i.e., maximal parabolic $L_\alpha^s$-regularity is independent of the weight. (In fact, in \cite{PrSi} only the case $J=(0,\infty)$ is treated, but the arguments given there also apply to bounded $J$.) In this sense it is natural to consider the temporal weights in the context of parabolic problems. 
\item If $A\in M\!R_1^{s_0}(J_0;X)$ for an interval $J_0 = (0,T_0)$, where $T_0 \in (0,\infty)$ and $s_0 \in (1,\infty)$, then $A\in M\!R_\alpha^{s}(J;X)$ for all $T \in (0,\infty)$, $s \in (1,\infty)$ and $\alpha \in (\frac{1}{s}, 1]$. This is shown in \cite[Corollary~5.4 and Theorem~7.1]{dor}. In this spirit, we then simply say that $A$ satisfies maximal parabolic regularity on $X$. \item The notion `maximal parabolic regularity' does not depend on the concrete norm of the Banach space. In other words: an operator $A$, satisfying maximal parabolic regularity on $X$, continues to satisfy maximal parabolic regularity if $X$ is equipped with an equivalent norm. \item If $A$ satisfies maximal parabolic regularity on $X$, then $-A$ generates an analytic $C_0$-semigroup (cf.\ \cite[Corollary~4.4]{dor}). If $X$ is a Hilbert space, then the converse is also true, cf.\ \cite{DeS}. \end{enumerate} For the case of nontrivial initial values, the following has been proved in \cite[Theorem~3.2]{PrSi}. We denote by $(\cdot,\cdot)_{\theta,s}$ the real interpolation functor, cf.\ \cite[Sections~1.3 and 1.6]{Tri}. \begin{proposition} \label{ps} Suppose that $A$ satisfies maximal parabolic regularity on $X$. Then for all $f\in L_\alpha^s(J;X)$ and $u_0\in (X,\dom(A))_{\alpha-\frac{1}{s},s}$ the Cauchy problem \[ u' + Au = f,\qquad u(0) = u_0, \] has a unique solution $u\in W_\alpha^{1,s}(J; X) \cap L_\alpha^s(J; \dom(A))$, and the estimate \begin{equation}\label{cont-dep} \|u'\|_{L_\alpha^s(J; X)} +\|u\|_{L_\alpha^s(J; \dom(A))} \le c\big( \|u_0\|_{(X,\dom(A))_{\alpha-\frac{1}{s},s}} + \|f\|_{L_\alpha^s(J; X)}\big) \end{equation} is valid for some constant $c$, independent of $f$ and $u_0$. \end{proposition} By working in temporally weighted spaces one can thus reduce the regularity of the initial values $u_0$ almost up to the base space $X$. We have the following embeddings for the weighted maximal regularity class. The space of $\gamma$-H\"older continuous functions is denoted by $C^\gamma$. \begin{proposition}\label{extra-reg} If $A$ satisfies maximal parabolic regularity on $X$, then \[ W_\alpha^{1,s}(J; X) \cap L_\alpha^s(J; \dom(A)) \subset BU\!C(\overline{J}; (X, \dom(A))_{\alpha-\frac{1}{s},s}) \cap C(J; (X, \dom(A))_{1-\frac{1}{s},s}). \] Moreover, for every $\theta \in [0,\alpha-\frac {1}{s})$ there is a $\gamma \in (0,1)$ such that \[ W_\alpha^{1,s}(J; X) \cap L_\alpha^s(J; \dom(A)) \subset C^\gamma(\overline{J};[X,\dom(A)]_\theta). \] \end{proposition} \begin{proof} The first inclusion is shown in \cite[Proposition~3.1]{PrSi}. The second one can be proved along the lines of \cite[Lemma~1]{DMRT}. \end{proof} We apply a classical result of Lamberton \cite{Lamb} to the operators $\varsigma A_p$. \begin{theorem} \label{t-qreg} Let $\varsigma \colon \Omega \cup \Gamma \to (0,\infty)$ be a measurable function such that $\varsigma,\varsigma^{-1} \in \LL^\infty$. Then for all $p\in(1,\infty)$ the operator $\varsigma A_p$ satisfies maximal parabolic regularity on $\mathbb L^p$. \end{theorem} \begin{proof} Theorem~\ref{t-mult03-new} states that the semigroup generated by $-\varsigma A_2$ on $\mathbb L^2_\varsigma$ is bounded and analytic, and that it extends consistently to a contractive semigroup on $\mathbb L^q_\varsigma$ for all $q\in [1,\infty]$. Now the result is a consequence of \cite[Corollary~1.1]{Lamb}. \end{proof} In order to include lower order terms into the boundary and interface conditions we need some preparation.
\begin{proposition} \label{pLpR301} Let $p \in (1,\infty)$ and $\theta \in (0,1)$ be such that $d-1 < \theta \, p$. Then one has $\dom\bigl( (\varsigma A_p)^\theta \bigr) \subset \LL^\infty$. \end{proposition} \begin{proof} Since $\dom \bigl((\varsigma A_p)^\theta \bigr) = \dom \bigl( (A_p+1)^\theta \bigr)$ by Theorem~\ref{t-imagin}\ref{t-imagin-3} and \cite[Lemma 4.1.11]{Lun}, it suffices to show that $(A_p +1)^{-\theta}$ maps $\LL^p$ into $\LL^\infty$. In \cite[Section 2.6]{Paz} it is shown that \[ (A_p +1)^{-\theta} = \frac{1}{\Gamma(\theta)} \int_0^\infty t^{\theta-1} \, e^{-t} \, S_t \, dt . \] Now the assertion follows from the estimate of Lemma~\ref{lLpR206}\ref{lLpR206-1}. \end{proof} \begin{corollary} \label{c-gebropot} Suppose $p \in (\frac{d-1}{\alpha - \frac{1}{s}}, \infty)$. Then $(\mathbb L^p,\dom (\varsigma A_p))_{\alpha-\frac {1}{s},s}$ continuously embeds into $ \mathbb L^\infty$. \end{corollary} \begin{proof} Fix $\theta \in (\frac{d-1}{p}, \alpha-\frac {1}{s})$. Then Proposition~\ref{pLpR301} yields $\dom\bigl (( \varsigma A_p)^\theta \bigr ) \subset \mathbb L^\infty$. But \[ (\mathbb L^p,\dom(\varsigma A_p))_{\alpha-\frac {1}{s},s} \subset (\mathbb L^p,\dom(\varsigma A_p))_{\theta,1} \subset [ \mathbb L^p, \dom(\varsigma A_p)]_\theta \] by \cite[Propositions 1.1.3, 1.3.2 and Corollary 2.1.8]{Lun}, and the latter interpolation space equals $\dom\bigl ((A_p)^\theta\bigr )$ by Theorem \ref{t-imagin}\ref{t-imagin-3}. \end{proof} \begin{definition} \label{d-emb} Fix $b \in L^p(\Gamma \cup \Sigma;d\rho)$. Define the operator $B \colon \mathbb L^\infty \to \mathbb L^p $ by \[ B(f_\Omega,f_\partial) = (0, b \, f_\partial) \] for all $f = (f_\Omega,f_\partial) \in L^p(\Omega) \oplus L^p(\Gamma\cup \Sigma;d\rho) \cong \LL^p$. \end{definition} Note that $b$ is allowed to be complex valued. \begin{theorem} \label{t-haupt} Let $p \in (d-1,\infty)$. Then the operator $\varsigma (A_p+B)$ satisfies maximal parabolic regularity on $\mathbb L^p$. \end{theorem} \begin{proof} One deduces from Corollary \ref{c-gebropot} that the operator $\varsigma B$ maps an interpolation space between $\dom(\varsigma A_p)$ and $\mathbb L^p$ continuously into $\mathbb L^p$. Then the result follows from the perturbation theorem \cite[Theorem~6.2]{dor}. \end{proof} \begin{rem} \label{r-expltime} In a somewhat more general setting $B$ may also depend explicitly on time, see \cite{ACFP}. \end{rem} Now we are in a position to solve the parabolic problem (\ref{e-parabol})--(\ref{e-initial}) in terms of the realization of the operator $A_p$. \begin{theorem} \label{t-solution} Let $T \in (0,\infty)$ and set $J=(0,T)$. Let $ p \in (d-1,\infty)$, $s \in (1,\infty)$ and $\alpha \in (\frac{1}{s},1]$. Let $\Omega$ be a bounded domain in $\field{R}^d$ with $d >1$, let $\Gamma$ be an open part of its boundary $\partial\Omega$ and $\Sigma \subset \Omega$. Adopt the Assumptions~{\rm \ref{a-region}}, {\rm \ref{a-d-1}} and {\rm \ref{a-coeff}}. Let $\varepsilon \in \mathbb L^\infty$ be a positive function with a positive essential lower bound and let $b$ be as in Definition~{\rm \ref{d-emb}}. Then the initial value problem \eqref{e-parabol}--\eqref{e-initial} admits a solution in the following sense: for all $f \in L_\alpha^s(J;\mathbb L^p)$ and $u_0 \in (\mathbb L^p,\dom(A_p))_{\alpha-\frac {1}{s},s}$ there is a unique function $u \in W_\alpha^{1,s}(J;\mathbb L^p) \cap L_\alpha^s(J;\dom(A_p))$ satisfying \begin{equation} \label{e-euat} \varepsilon u' +A_p u +Bu=f, \qquad u(0)=u_0.
\end{equation} \end{theorem} \begin{proof} One reformulates \eqref{e-euat} as \[ u' +\varepsilon^{-1} A_p u +\varepsilon^{-1}Bu=\varepsilon^{-1}f, \qquad u(0)=u_0. \] Obviously, $\varepsilon^{-1}f$ satisfies the same assumptions as $f$. Moreover, one has $\dom(A_p)=\dom(\varepsilon^{-1}A_p) = \dom(\varepsilon^{-1} (A_p + B))$, with equivalent norms. This implies that \[ (\mathbb L^p,\dom(A_p))_{\alpha-\frac {1}{s},s} =(\mathbb L^p,\dom(\varepsilon^{-1}(A_p + B)))_{\alpha-\frac {1}{s},s} . \] The assertion then follows from Proposition \ref{ps} and Theorem \ref{t-haupt}. \end{proof} \begin{rem} In the situation of the theorem, the solution depends continuously on the data, due to (\ref{cont-dep}). Proposition \ref{extra-reg} gives further regularity properties of a solution. Moreover, again by (\ref{cont-dep}), it is straightforward to see that the solution depends continuously on the function $\varepsilon$, with respect to the $\mathbb L^\infty$-norm. \end{rem} \begin{rem} \label{r-reell} Since the coefficient function $\mu$ is real valued, the resolvent of $\varsigma A_p$ commutes with complex conjugation on the spaces $\mathbb L^p$. The latter is also true for the semigroup operators $e^{-t\varsigma A_p}$. Thus, the restriction of $\varsigma A_p$ to the real spaces $\mathbb L^p_\field{R}$ also satisfies maximal parabolic regularity. If $B$ is induced by a real valued function, then the same is true for the operator $\varsigma (A_p+B)$. \end{rem} \begin{rem} \label{rem-heuristics-2} At the end of this section, let us give a more detailed, partly heuristic explanation of the real advantage of treating our parabolic equations in the spaces $\mathbb L^p$. When considering the solution $u$ of a parabolic equation $u'+Au =f$ on a Banach space $X$ one can form the dual pairing with any $\psi\in X^*$ to obtain \begin{equation} \label{e-dualpaireq} \frac {\partial }{\partial t} \langle u,\psi \rangle + \langle Au,\psi\rangle = \langle f,\psi \rangle. \end{equation} E.g., if $X=W^{-1,2}(\Omega)$, then one can choose for $\psi$ any element of $W^{1,2}_0(\Omega)$, but \emph{not} an indicator function of a subset of $\Omega$. In our setting, the situation is different: if $X=\mathbb L^p$, then the dual pairing with the indicator function $\chi_U$ of a measurable set $U\subset \Omega$ is admissible. Then \eqref{e-dualpaireq} reads, with $A$ taken as the $\mathbb L^2$-realization $A_2$, \begin{equation} \label{e-balance} \frac {\partial }{\partial t} \int_U u \,(d x +d\rho) +\int_U (A_2u) \,(d x +d\rho) =\int_U f \,(dx +d\rho). \end{equation} Since $A_2u \in \mathbb L^2$ for almost every time point $t$, we are now at least in principle in a position to rewrite $\int_U (A_2u) \,(d x +d\rho)$ as a boundary integral and thus to recover from \eqref{e-balance} the `original' physical balance law for \eqref{e-parabol}--\eqref{e-initial}. Indeed, applying \eqref{e-opdef} with $v\in C_c^\infty(\Omega)$, it follows that the distributional divergence of $\mu \nabla u $ is given by the finite Radon measure induced by $(A_2u|_\Omega, A_2u|_{\Sigma}) \in L^2(\Omega)\times L^2(\Sigma; d \mathcal H_{d-1})$ with respect to $dx + d\mathcal H_{d-1}$ (see also Remark \ref{r-hypersurf}). Under certain further assumptions on $\mu \nabla u$ or $U$ one can apply the generalized Gauss-Green theorems of e.g.
\cite{CTZ}, \cite{Fug} and \cite{Zie1} to obtain \begin{equation}\label{Gauss-Green} \int_U (A_2u) \,(d x +d\rho) = \int_{\partial U} \nu\cdot \mu \nabla u \, d \mathcal H_{d-1}, \end{equation} where $\nu\cdot \mu \nabla u \in L^1( \partial U; d \mathcal H_{d-1})$ is `the generalized normal component of the corresponding flux', see ibidem. Substituting \eqref{Gauss-Green} in \eqref{e-balance} gives the desired balance law, as is classical when $\nabla \cdot \mu \nabla u$ is an $L^2(\Omega)$-function; compare \cite[Chapter~21]{Somm} and \cite{CLolas}. As already mentioned in the introduction, this is the basis for local flux balances, which are crucial for the foundation of Finite Volume methods for the numerical solution of such problems, compare \cite{BRF}, \cite{FuhL} and \cite{gartn}. \end{rem} \section{Quasilinear parabolic equations} \label{s-semilinear} In this section we treat a nondegenerate quasilinear variant of \eqref{e-parabol}--\eqref{e-initial}, including nonlinear terms in the dynamic equations on $\Gamma$ and $\Sigma$, i.e., \begin{alignat}{2} \varepsilon \partial_t \mathfrak b(u)-\nabla \cdot \mu \mathfrak a(u)\nabla u & = F_\Omega(t,u) & \qquad & \text{in }\,J\times (\Omega\setminus \Sigma), \label{quasi-1} \\ u & = 0 & &\text {on }\, J \times (\partial \Omega \setminus \Gamma), \label{quasi-2}\\ \varepsilon \partial_t \mathfrak b(u) +\nu \cdot \mu \mathfrak a(u)\nabla u & = F_\Gamma(t,u) & & \text{on }\, J \times \Gamma, \label{quasi-3}\\ \varepsilon \partial_t \mathfrak b(u) +[\nu_\Sigma \cdot \mu \mathfrak a(u) \nabla u] & = F_\Sigma(t,u) & & \text{on }\, J \times \Sigma, \label{quasi-4} \\ u(T_0) & = u_0 & &\text{in }\, \Omega \cup \Gamma,\label{quasi-5} \end{alignat} where $J=(T_0,T_1)$ is a bounded interval. Interesting examples for the nonlinearities on the left-hand side are e.g.\ when $\mathfrak b$ and $\mathfrak a$ are an exponential, or the Fermi--Dirac distribution function $\mathcal F_{1/2}$, which is given by \[ \mathcal F_{1/2}(s) := \frac{2}{\sqrt{\pi}} \, \int_0^\infty \frac{\sqrt{\xi}}{1 + e^{\xi- s}} \, d \xi. \] Further, in phase separation problems a rigorous formulation as a minimal problem for the free energy reveals that $\mathfrak {a} = \mathfrak {b}^\prime$ is appropriate. This topic has been thoroughly investigated in \cite{Qua}, \cite{QRV}, \cite{GiacL1}, and \cite{GiacL2}, see also \cite{GajS} and \cite{Grie3}. \medskip We consider from now on the real part $\LL_\field{R}^p$ of the spaces $\mathbb L^p$ and the operators $A_p$. For simplicity we write $\mathbb L^p$ for $\LL_\field{R}^p$. As in the linear case we give the quasilinear equation a suitable functional analytic formulation, and within this framework the problem will then be solved (see Definition \ref{quasi-solution} and Theorem \ref{t-semilinear} below). Again throughout this section we fix the numbers \[ 1<s<\infty \qquad \mbox{and} \qquad \frac{1}{s} < \alpha \leq 1. \] We impose the following conditions on the coefficients on the left-hand side of (\ref{quasi-1})--(\ref{quasi-5}). \begin{assu} \label{a-Verteil} The coefficient matrix $\mu$ is real-valued, $\mathfrak b \in W^{2,\infty}_{\text{loc}}(\field{R})$ is such that $\mathfrak b'$ is positive, and $\mathfrak a \in W^{1,\infty}_{\text{loc}}(\field{R})$ is positive and satisfies $\int_0^\infty\mathfrak a(\zeta)\, d\zeta = \infty = \int_{-\infty}^0\mathfrak a(\zeta)\, d\zeta$. \end{assu} Note that we do not require monotonicity for $\mathfrak a$. 
In particular, terms of the form $\mathfrak a(u) = \eta + |u|^m$ with $\eta>0$ and $m\geq 1$, which arise e.g.\ as a regularization of the porous medium equation, can be treated. In general one cannot expect that the domain of the realization of $-\nabla \cdot \mu \mathfrak a(v) \nabla$ on $\mathbb L^p$ as in Section \ref{SLpS2.2} is independent of $v\in L^\infty(\Omega)$. Consider, e.g., the case of a smooth geometry with $\mu \mathfrak a(v)$ equal to a constant on the one hand and a nonsmooth $\mu \mathfrak a (v)$ on the other hand. This observation motivates our definition of a solution of \eqref{quasi-1}--\eqref{quasi-5}, which we describe in the following. We put \[ \mathfrak K (\xi):= \begin{cases} \int_0^\xi \mathfrak a(\zeta) \,d\zeta, \;\text {if} \; \xi \ge 0, \\ -\int_\xi^0 \mathfrak a(\zeta) \,d\zeta, \;\text {if} \; \xi < 0. \end{cases} \] Then the assumptions on $\mathfrak a$ imply that \[ \mathfrak K \colon \field{R}\to \field{R} \text{ is bijective},\quad \mathfrak K, \mathfrak K^{-1} \in W^{1,\infty}_{\text{loc}}(\field{R}), \quad \mathfrak K'=\mathfrak a, \quad \mbox{and} \quad \mathfrak K(0)=0=\mathfrak K^{-1}(0). \] In the sequel we identify the functions $\mathfrak b, \mathfrak K, \mathfrak K^{-1}$ with the Nemytzkii operators they induce. The reformulation of \eqref{quasi-1}--\eqref{quasi-5} is based on the so-called Kirchhoff transform $w = \mathfrak K(u)$. This (formally) gives $\mathfrak a(u) \nabla u = \nabla w$ and $\partial_t (\mathfrak b(u)) = \frac{\mathfrak b'}{\mathfrak a} \, (\mathfrak K^{-1}(w)) \partial_t w$. Since $\mathfrak K(0) = 0$, the problem \eqref{quasi-1}--\eqref{quasi-5} thus transforms into \begin{alignat*}{2} \partial_t w - \eta(w) \nabla \cdot \mu \nabla w & = \eta(w) F_\Omega(t,\mathfrak K^{-1}(w)) & \qquad & \text{in }\,J\times (\Omega\setminus \Sigma), \\ w & = 0 & &\text {on }\, J \times (\partial \Omega \setminus \Gamma), \\ \partial_t w + \eta(w) \nu \cdot \mu \nabla w & = \eta(w) F_\Gamma (t,\mathfrak K^{-1}(w)) & & \text{on }\, J \times \Gamma, \\ \partial_t w + \eta(w)[\nu_\Sigma \cdot \mu \nabla w] & = \eta(w) F_\Sigma(t,\mathfrak K^{-1}(w)) & & \text{on }\, J \times \Sigma, \\ w(T_0) & = \mathfrak K(u_0) & &\text{in }\, \Omega \cup \Gamma, \end{alignat*} where we have set \[ \eta(w) := \varepsilon ^{-1} \, \frac{\mathfrak a}{\mathfrak b'} \, (\mathfrak K^{-1}(w)). \] For all $t\in J$, let us further define the operator \begin{equation}\label{quasi-R} R(t,w) := \begin{cases} \eta(w|_{\Omega})F_\Omega (t,\mathfrak K^{-1}(w|_{\Omega}))\; \text{ on} \;\Omega \setminus \Sigma,\\ \eta(w|_{\Gamma}) F_\Gamma(t,\mathfrak K^{-1}(w|_{\Gamma})) \; \text{ on} \;\Gamma, \\ \eta(w|_{\Sigma}) F_\Sigma (t,\mathfrak K^{-1}(w|_{\Sigma})) \; \text{ on} \;\Sigma, \end{cases} \end{equation} acting on real-valued functions defined on $\Omega\cup \Gamma$. \begin{definition} \label{quasi-solution} Let $p \in (\frac{d-1}{\alpha - \frac{1}{s}}, \infty)$, and let $A_p$ be the realization of $-\nabla \cdot \mu \nabla$ on $\mathbb L^p$ as in Section \ref{SLpS2.2}. We say that $u\in C([T_0,T_1];\mathbb L^\infty)$ is a solution of \eqref{quasi-1}--\eqref{quasi-5} on $J$ if \[ \mathfrak K (u) \in W_\alpha^{1,s}(J; \mathbb L^p)\cap L_\alpha^s(J; \dom(A_p)), \] and if $w = \mathfrak K(u)$ satisfies \begin{equation}\label{quasi-abstract} \partial_t w + \eta(w) A_p w = R(\cdot,w) \quad \text{on }J, \qquad w(T_0) = \mathfrak K (u_0).
\end{equation} \end{definition} If $\mathfrak K(u)$ is as above, then $u\in C([T_0,T_1];\mathbb L^\infty)$ is already a consequence of Proposition \ref{extra-reg}, Corollary \ref{c-gebropot} and the regularity of $\mathfrak K$. Proposition \ref{extra-reg} shows that in fact $u\in C^\gamma([T_0,T_1];\mathbb L^\infty)$ for some $\gamma>0$. For specific choices of $\mathfrak K$ additional regularity may carry over from $\mathfrak K(u)$ to $u$. In any case one has $u(t,\cdot) \to u_0$ as $t\to T_0$ in the $\mathbb L^\infty$-norm. Observe further that in the definition it is necessary that $\mathfrak K (u_0) \in (\mathbb L^p, \dom(A_p))_{\alpha-\frac{1}{s},s}$. It would be interesting to find another description for this condition for a class of nonlinearities $\mathfrak a$. If $\mathfrak a$ is constant, then a solution in the above sense can be defined for all $u_0 \in (\mathbb L^p, \dom(A_p))_{\alpha-\frac{1}{s},s}$. \medskip If $\mathfrak a = \mathfrak b'$, then \eqref{quasi-abstract} is in fact a semilinear problem. This is in particular the case for the phase separation problems from above. \medskip To solve \eqref{quasi-abstract} we intend to use the following abstract existence and uniqueness result, which is proved in \cite{pru2} for the temporally unweighted case $\alpha = 1$. The proof in \cite{pru2} literally carries over to the weighted case $\alpha < 1$. \begin{proposition} \label{p-pruess} Let $X,D$ be Banach spaces such that $D$ embeds continuously and densely into $X$. Assume $\mathcal{A} \colon (X, D)_{\alpha -\frac {1}{s},s} \to \mathcal{L}(D, X)$ and $\mathcal R \colon J \times (X, D)_{\alpha-\frac {1}{s},s} \to X$ are such that $\mathcal R(\cdot,w_0)$ is measurable for all $w_0\in (X, D)_{\alpha-\frac {1}{s},s}$, that $ \mathcal R(\cdot, 0) \in L_\alpha^s(J;X)$ and that for all $M > 0$ there are $C_M > 0$ and $r_M \in L_\alpha^s(J)$ with \[ \| \mathcal A(w_1) - \mathcal A(w_2) \|_{\mathcal L( D,X)} \le C_M \, \| w_1 - w_2\|_{(X,D)_{\alpha-\frac {1}{s},s}} \] and \[ \| \mathcal R(t,w_1) - \mathcal R(t,w_2)\|_X \le r_M(t) \, \| w_1 - w_2 \|_{(X, D)_{\alpha-\frac {1}{s},s}} \qquad \mbox{ for a.e.\ } t \in J, \] for all $w_1,w_2 \in (X, D)_{\alpha-\frac {1}{s},s}$ with $\|w_1\|_{(X, D)_{\alpha-\frac {1}{s},s}} \le M$ and $\|w_2\|_{(X, D)_{\alpha-\frac {1}{s},s}} \le M$. Assume further that for any $w_0\in (X, D)_{\alpha -\frac {1}{s},s}$ the operator $\mathcal A(w_0)$ with domain $D$ on $X$ satisfies maximal parabolic regularity. Then for all $w_0 \in (X, D)_{\alpha-\frac {1}{s},s}$ there are $T^*\in (T_0,T_1]$ and a unique maximal solution $w$ of \[ w'+\mathcal A(w)w= \mathcal R(\cdot,w) \quad \text{on }(T_0,T^*), \qquad w(T_0)=w_0, \] such that $w \in W_\alpha^{1,s}(T_0,T;X) \cap L_\alpha^s(T_0,T; D)$ for all $T\in (T_0,T^*)$. \end{proposition} We apply this result to \eqref{quasi-abstract}. Suppose $\mathfrak b$ and $\mathfrak a$ satisfy Assumption \ref{a-Verteil}. Let $p \in (\frac{d-1}{\alpha - \frac{1}{s}}, \infty)$, $X= \mathbb L^p$, $D = \dom (A_p)$ and $\mathcal A(w) = \eta(w) A_p$ for all $w \in (\mathbb L^p, \dom(A_p))_{\alpha- \frac{1}{s},s}$. Corollary \ref{c-gebropot} implies that \begin{equation}\label{quasi-embed} (\mathbb L^p, \dom(A_p))_{\alpha- \frac{1}{s},s} \subset \mathbb L^\infty. 
\end{equation} Thus if $w_0 \in (\mathbb L^p, \dom(A_p))_{\alpha- \frac{1}{s},s}$ and $\|w_0\|_{(\mathbb L^p, \dom(A_p))_{\alpha- \frac{1}{s},s}} \leq M$ for a given number $M$, then it follows from \eqref{quasi-embed} that the image of $\Omega\cup \Gamma$ under $w_0$ is almost everywhere contained in a compact interval that only depends on $M$. In particular, this gives $\eta(w_0), \eta(w_0)^{-1} \in \mathbb L^\infty$, and the operator $\mathcal A(w_0)$ with domain $\dom(A_p)$ on $\mathbb L^p$ satisfies maximal parabolic regularity by Theorem \ref{t-qreg}. The function $\eta$ is locally Lipschitz continuous on $\field{R}$. Therefore \begin{align*} \|\mathcal A(w_1) - \mathcal A(w_2)\|_{\mathcal L(\dom(A_p),\mathbb L^p)}&\, \leq \|\eta(w_1) - \eta(w_2)\|_{\mathbb L^\infty} \\ &\, \leq C_M\| w_1-w_2\|_{\mathbb L^\infty} \leq C_M'\, \| w_1-w_2\|_{(\mathbb L^p, \dom(A_p))_{\alpha- \frac{1}{s},s}} \end{align*} for all $w_1,w_2 \in (\LL^p,\dom(A_p))_{\alpha-\frac {1}{s},s}$ with $\|w_j\|_{(\LL^p,\dom(A_p))_{\alpha-\frac {1}{s},s}} \le M$ for all $j \in \{ 1,2 \} $, where $C_M'$ also absorbs the norm of the embedding \eqref{quasi-embed}. This verifies the conditions of the above proposition for $\mathcal A$. We next present sufficient conditions for the functions $F_\Omega$, $F_\Gamma$ and $F_\Sigma$ such that the operator $R$, defined in \eqref{quasi-R}, satisfies the conditions for $\mathcal R$ in Proposition~\ref{p-pruess}. \begin{assu} \label{rhs-assu} For all $\xi \in \field{R}$ the mappings $F_\Omega(\cdot,\xi) \colon J\to \field{R}$, $F_\Gamma(\cdot,\xi) \colon J\to \field{R}$ and $F_\Sigma(\cdot,\xi) \colon J\to \field{R}$ are measurable. For all $M>0$ there is $r_M \in L_\alpha^s(J)$ such that \[ |F_\Omega(t,\xi_1) - F_\Omega(t,\xi_2)| \le r_M(t) \, | \xi_1 -\xi_2 | \] for a.e.\ $t \in J$ and $\xi_1,\xi_2 \in \field{R}$ with $|\xi_1|, |\xi_2| \leq M$; and analogous conditions for $F_\Gamma$ and $F_\Sigma$. \end{assu} Under the above assumption, \eqref{quasi-embed} implies that $R(\cdot,w_0) \colon J \to \mathbb L^p$ is measurable for all $w_0\in (\mathbb L^p, \dom(A_p))_{\alpha-\frac{1}{s},s}$ and that $R(\cdot,0)\in L_\alpha^s(J;\mathbb L^p)$. We verify the Lipschitz property for the first component of $R$. If $M > 0$, and $w_1,w_2 \in (\mathbb L^p, \dom(A_p))_{\alpha-\frac{1}{s},s}$ with $\|w_1\|_{(\mathbb L^p, \dom(A_p))_{\alpha-\frac{1}{s},s}} \leq M$ and $\|w_2\|_{(\mathbb L^p, \dom(A_p))_{\alpha-\frac{1}{s},s}} \leq M$, then for a.e.\ $t\in J$ we have \begin{align}\label{quasi-R-est} \|\eta(w_1|_\Omega) F_\Omega(t,\mathfrak K^{-1}&\,(w_1|_\Omega)) - \eta(w_2|_\Omega) F_\Omega (t,\mathfrak K^{-1}(w_2|_\Omega))\|_{L^p(\Omega)}\\ &\, \leq \|\eta(w_1|_\Omega) - \eta(w_2|_\Omega)\|_{L^\infty(\Omega)} \| F_\Omega(t,\mathfrak K^{-1}(w_1|_\Omega))\|_{L^p(\Omega)} \nonumber \\ &\, \qquad + \|\eta(w_2|_\Omega)\|_{L^\infty(\Omega)} \|F_\Omega(t, \mathfrak K^{-1}(w_1|_\Omega)) - F_\Omega(t, \mathfrak K^{-1}(w_2|_\Omega))\|_{L^p(\Omega)} \nonumber\\ &\, \leq C_M \big( \|w_1|_\Omega - w_2|_\Omega\|_{L^\infty(\Omega)} + \widetilde{r}_{M}(t) \|\mathfrak K^{-1}(w_1|_\Omega) - \mathfrak K^{-1}(w_2|_\Omega)\|_{L^p(\Omega)}\big) \nonumber\\ &\,\leq C_M(1+ \widetilde{r}_{M}(t))\|w_1 - w_2\|_{(\mathbb L^p, \dom(A_p))_{\alpha-\frac{1}{s},s}}\nonumber, \end{align} for a suitable $\widetilde{r}_{M}\in L_\alpha^s(J)$. The same arguments apply to the other components of $R$, and thus $R$ satisfies the conditions for $\mathcal R$ in Proposition~\ref{p-pruess}. We have proven the main result of this section.
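Before stating it, let us record, purely as an illustration, what the Kirchhoff transform looks like in the model case $\mathfrak a(\xi) = \eta + |\xi|^m$ with $\eta > 0$ and $m \geq 1$ mentioned at the beginning of this section: a direct computation gives
\[
\mathfrak K(\xi) = \eta\, \xi + \frac{\xi\, |\xi|^{m}}{m+1} ,
\]
so $\mathfrak K' = \mathfrak a \geq \eta > 0$. Hence $\mathfrak K$ is a strictly increasing bijection of $\field{R}$, and $\mathfrak K^{-1}$ is even globally Lipschitz continuous with constant $\eta^{-1}$, in accordance with the general properties of $\mathfrak K$ recorded above.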
\begin{theorem} \label{t-semilinear} Let $p \in (\frac{d-1}{\alpha - \frac{1}{s}}, \infty)$, and suppose that $\Omega$, $\Gamma$, $\Sigma$, and $\varepsilon$ are as in Theorem~{\rm \ref{t-solution}}, that $\mu$, $\mathfrak b$ and $\mathfrak a$ are as in Assumption~{\rm \ref{a-Verteil}}, and that $F_\Omega$, $F_\Gamma$ and $F_\Sigma$ are as in Assumption~{\rm \ref{rhs-assu}}. Then for all $u_0\in \mathbb L^\infty$ with $\mathfrak K(u_0) \in (\mathbb L^p,\dom(A_p))_{\alpha-\frac {1}{s},s}$ there are $T^*= T^*(u_0) \in (T_0,T_1]$ and a unique maximal solution $u \in C([T_0,T^*); \mathbb L^\infty)$ of \eqref{quasi-1}--\eqref{quasi-5} in the sense of Definition~{\rm \ref{quasi-solution}}. This means that for all $T_0 < T < T^*$ we have \[ \mathfrak K (u) \in W_\alpha^{1,s}(T_0,T; \mathbb L^p)\cap L_\alpha^s(T_0,T; \dom(A_p)), \] and $\mathfrak K(u)$ is the unique solution of \begin{equation} \label{quasi-eq} \partial_t w + \eta(w) A_p w = R(\cdot,w) \quad \text{on }(T_0,T), \qquad w(T_0) = \mathfrak K (u_0). \end{equation} \end{theorem} Instead of $F_\Omega$, $F_\Gamma$ and $F_\Sigma$ one can also easily find non-local maps such that the corresponding operator $R$ satisfies the condition of Proposition \ref{p-pruess}. One can take for example (linear or nonlinear) integral operators with suitable kernel properties. Moreover, in our example, $F_\Omega$ maps $L^\infty(\Omega)$ into itself, while $F_\Gamma$ maps $L^\infty(\Gamma)$ into itself, and correspondingly also for $F_\Sigma$, i.e., the mapping $R$ has no crossing terms. This, too, is not necessary in general. The nonlinearity in the elliptic operator may also be a nonlocal operator. This case arises e.g.\ in models for the diffusion of bacteria; see \cite{CC}, \cite{CLovat} and references therein. \medskip We end this section with some comments on the case when \eqref{quasi-1}--\eqref{quasi-5} is semilinear, i.e., when $\mathfrak b = \mathfrak K = \text{id}$, so that $u$ itself solves the realization \eqref{quasi-eq} of the problem. The following is a useful criterion for the global existence of solutions. \begin{proposition} Adopt the assumptions of Theorem~{\rm \ref{t-semilinear}}. Suppose in addition that $\mathfrak b = \mathfrak K = \text{id}$, and let $u\in C([T_0,T^*); \mathbb L^\infty)$ be the maximal solution of \eqref{quasi-1}--\eqref{quasi-5}. If \[ \limsup_{t\to T^*}\|u(t,\cdot)\|_{L^\infty(\Omega)} < \infty,\] then $u$ is a global solution, i.e., $T^* = T_1$ and $u\in W_\alpha^{1,s}(J; \mathbb L^p)\cap L_\alpha^s(J; \dom(A_p))$. \end{proposition} \begin{proof} By Proposition \ref{ps}, for all $T< T^*$ the solution $u$ satisfies \begin{equation}\label{maxreg-nl} \|u'\|_{L_\alpha^s(T_0,T; \mathbb L^p)} +\|u\|_{L_\alpha^s(T_0,T; \dom(A_p))} \le c\big( \|u_0\|_{(\mathbb L^p,\dom(A_p))_{\alpha-\frac{1}{s},s}} + \|R(\cdot, u)\|_{L_\alpha^s(T_0,T; \mathbb L^p)}\big), \end{equation} where $c$ is uniform in~$T$. Observe that $\|u(t,\cdot)\|_{\mathbb L^\infty}\leq \|u(t,\cdot)\|_{ L^\infty(\Omega)}$ for almost every $t$ by the definition of the trace (see Section \ref{s-function-spaces}). Hence $M = \|u\|_{L^\infty(T_0,T^*; \mathbb L^\infty)}< \infty$. Estimates as in \eqref{quasi-R-est} yield \begin{align*} \|R(\cdot, u)\|_{L_\alpha^s(T_0,T^*; \mathbb L^p)} &\, \leq \|R(\cdot,0)\|_{L_\alpha^s(T_0,T^*; \mathbb L^p)} + C_M\big(1+ \|\widetilde{r}_{M}\|_{L_\alpha^s(T_0,T^*)}\big). \end{align*} Therefore the terms on the left-hand side of (\ref{maxreg-nl}) are bounded uniformly in~$T$.
By \cite[Corollary 3.2]{pru2}, this implies $T^* = T_1$.\end{proof} We finally comment on the asymptotics of solutions. \begin{rem} Under the assumptions of Theorem \ref{t-semilinear}, in the autonomous semilinear case the solutions form a local semiflow in the phase space $\dom(A_p^\theta)$, where $\theta$ is sufficiently close to $1$. Since the resolvent of $A_p$ is compact by Lemma~\ref{lLpR206}\ref{lLpR206-3}, the solution semiflow is compact, and bounded orbits are relatively compact. This property is very useful in studying the long-time behaviour of solutions. \end{rem} \section{Concluding remarks} \label{s-conclud} \begin{rem} \label{c-additional term} The realization of \eqref{e-parabol}--\eqref{e-initial} in Section \ref{s-parabolic} still enjoys maximal regularity if one adds a term $bu$ in the dynamic equation on $J \times \Sigma$ and imposes suitable conditions on $b$. \end{rem} \begin{rem} \label{c-couple} Everything can also be done for systems which are coupled in the reaction terms. \end{rem} \begin{rem} \label{r-nonaut} The fundamental result of Pr\"uss (Proposition \ref{p-pruess}) allows one to treat the quasilinear problem \eqref{quasi-1}--\eqref{quasi-5} also in the case where the nonlinearities $\mathfrak b$ and $\mathfrak a$ depend explicitly on time. We did not carry this out here for the sake of technical simplicity. \end{rem} \begin{rem} \label{c-Hoel} If one requires $\Omega$ to be a Lipschitz domain and, additionally, imposes a certain compatibility condition between $\Gamma$ and its complement in the boundary (see \cite{Groe}, \cite{HMRS}), then $(-\nabla \cdot \mu \nabla +1)^{-1}$ maps $\breve W^{-1,q}_\Gamma$, i.e., the anti-dual space of $W^{1,q}_\Gamma$, into a H\"older space, if $q >d$. If $s$ in Theorem \ref{t-solution}/Theorem \ref{t-semilinear} is chosen sufficiently large, then the corresponding solutions are even H\"older continuous in space and time, compare \cite{DMRT}. \end{rem} \begin{rem} \label{c-move} What cannot be treated within this framework is the case where $\Sigma$ moves in $\Omega$ in time. If one wants to include this, the concept of \cite{HaR} should be adequate, see also \cite{HaR3}. \end{rem} \begin{rem} \label{c-null} What also cannot be treated within this framework is the case where the function $\varepsilon$ is not bounded away from $0$, in particular, if it is $0$ on a subset of positive boundary measure. This would e.g.\ affect the case of inhomogeneous Neumann boundary conditions. It is known that the resulting problem is of very different functional analytic quality and requires different methods, see \cite{Nit1}. \end{rem} \section*{Acknowledgments} We wish to thank our colleagues K.~Gr\"oger (Berlin), H. Amann (Z\"urich), H.~Vogt (Clausthal), R.~Nittka (Leipzig) and P.~C.~Kunstmann (Karlsruhe) for valuable discussions on the subject of the paper. We also wish to thank the referee for his critical comments.
\section{Introduction and Main Results} \q Let $T > 0$, $G \subset \mathbb{R}^{n}$ ($n \in \mathbb{N}$) be a given bounded domain with a $C^{2}$ boundary $\G$. Let $\G_0$ be a suitable chosen nonempty subset (to be given later) of $\G$. Put $Q \= (0,T) \t G$, $\Si \= (0,T) \t \G$, and $\Si_0 \= (0,T) \t \G_0$. Let $(\O, {\cal F}, \{{\cal F}_t\}_{t \geq 0}, P)$ be a complete filtered probability space on which a one dimensional standard Brownian motion $\{ B(t) \}_{t\geq 0}$ is defined. Let $H$ be a Banach space. Denote by $L^{2}_{\cal F}(0,T;H)$ the Banach space consisting of all $H$-valued $\{ {\cal F}_t \}_{t\geq 0}$-adapted processes $X(\cdot)$ such that $\mathbb{E}(|X(\cdot)|^2_{L^2(0,T;H)}) < \infty$; by $L^{\infty}_{\cal F}(0,T;H)$ the Banach space consisting of all $H$-valued $\{ {\cal F}_t \}_{t\geq 0}$-adapted bounded processes; and by $L^{2}_{\cal F}(\O;C([0,T];H))$ the Banach space consisting of all $H$-valued $\{ {\cal F}_t \}_{t\geq 0}$-adapted processes $X(\cdot)$ such that $\mathbb{E}(|X(\cdot)|^2_{C([0,T];H)}) < \infty$. All of these spaces are endowed with the canonical norm. Put $$ H_{T} \= L_{\cal F}^2 (\O; C([0,T];H_{0}^1(G))). $$ Let us consider the following stochastic Schr\"{o}dinger equation: \begin{eqnarray}\label{system1} \left\{ \begin{array}{lll} \ds idy + \D ydt = (a_1 \cdot \nabla y + a_2 y + f)dt + (a_3 y + g)dB(t) &\mbox { in } Q, \\ \ns\ds y = 0 &\mbox{ on } \Si, \\ y(0) = y_0 &\mbox{ in } G, \end{array} \right. \end{eqnarray} with initial datum $y_0 \in L^2(\O,\cF_0,P;H_0^1(G))$, suitable coefficients $a_i$ ($i=1,2,3$), and source terms $f$ and $g$. The solution to \eqref{system1} is understood in the following sense. \begin{definition}\label{def1} We call $y\in H_T$ a solution to the equation \eqref{system1} if \\ 1. $y(0) = y_0$ in $G$, P-a.s.;\\ 2. For any $t \in [0,T]$ and $\eta \in H_0^1(G)$, it holds that \begin{eqnarray}\nonumber &\,& \q\int_{G} iy(t,x)\eta(x)dx - \int_{G} iy(0,x)\eta(x)dx\nonumber\\ &\,& = \int_0^t \int_G \[ \nabla y(s,x)\cdot\nabla\eta(x) + \big(a_1 \cdot \nabla y + a_2 y + f\big)\eta(x) \]dxds \nonumber \\ &\,&\q + \int_0^t \int_G (a_3 y + g)\eta(x) dxdB(s), \,\, \mbox { P-a.s. } \nonumber \end{eqnarray} \end{definition} We refer to \cite[Chapter 6]{Prato} for the well-posedness of the equation \eqref{system1} in $H_T$, under suitable assumptions (the assumptions in this paper are enough). Similar to its deterministic counterpart, the stochastic Schr\"{o}dinger equation plays an important role in quantum mechanics. We refer the readers to \cite{Bar,Kol} and the rich references therein for the details of its physical background. The main purpose of this paper is to establish a boundary observability estimate for the equation \eqref{system1} in the following setting. Denote by $\nu(x)$ the unit outward normal vector of $G$ at $x\in \G$. Let $x_0\in\big(\mathbb{R}^n\setminus \overline G\big)$. In what follows, we choose \begin{equation}\label{G0} \G_0=\big\{ x\in \G :\, (x-x_0)\cdot \nu(x)>0 \big\}. \end{equation} We assume that \begin{eqnarray}\label{coai} \left\{\begin{array} {ll} \ds i a_1 \in L_{ \mathcal{F}}^{\infty}(0,T;W_0^{1,\infty}(G;\mathbb{R}^{n})), \\ \ns\ds a_2 \in L_{ \mathcal{F}}^{\infty}(0,T;W^{1,\infty}(G)), \\ \ns \ds a_3 \in L_{\mathcal{ F}}^{\infty}(0,T;W^{1,\infty}(G)), \end{array} \right. \end{eqnarray} and that \begin{eqnarray}\label{fg} \left\{ \begin{array}{ll}\ds f \in L^2_{\mathcal{F}}(0,T;H_0^1(G)), \\ \ns\ds g \in L^2_{\mathcal{F}}(0,T;H^1(G)). \end{array} \right. 
\end{eqnarray} In the sequel, we put \begin{equation}\label{cA} r_1\=|a_1|^2_{L_{ \mathcal{F}}^{\infty}(0,T;W_0^{1,\infty}(G;\mathbb{R}^{n}))} + |a_2|^2_{L_{ \mathcal{F}}^{\infty}(0,T;W^{1,\infty}(G))} + |a_3|^2_{L_{ \mathcal{F}}^{\infty}(0,T;W^{1,\infty}(G))} + 1, \end{equation} and denote by $C$ a generic positive constant depending only on $T$, $G$ and $x_0$, which may change from line to line. Now we state the main result of this paper as follows. \vspace{0.3cm} \begin{theorem}\label{observability} If the conditions \eqref{G0}--\eqref{fg} hold, then any solution of the equation \eqref{system1} satisfies \begin{equation} \label{obser esti2} \begin{array}{ll}\ds \q |y_0|_{L^2(\Omega,{ \mathcal{F}}_0, P; H_0^1(G))} \\ \ns\ds \leq e^{C r_1}\Big(\Big|\frac{\partial y}{\partial \nu}\Big |_{L^2_{ \mathcal{ F}}(0,T;L^2(\Gamma_0))} + |f|_{L^2_{ \mathcal{ F}}(0,T;H_0^1(G))} + |g|_{L^2_{ \mathcal{ F}}(0,T;H^1(G))}\Big). \end{array} \end{equation} \end{theorem} \begin{remark} Since $y$ belongs only to $H_T$, its normal derivative $\frac{\pa y}{\pa\nu}$ may not make sense. Fortunately, due to the hidden regularity of the solution to the equation \eqref{system1}, one can show that $\frac{\pa y}{\pa\nu}$ exists and belongs to $L^2_{\cF}(0,T;L^2(\G))$ (see Proposition \ref{hregularity} for more details). \end{remark} It is well-known that observability estimates (in the spirit of \eqref{obser esti2}) for partial differential equations play a fundamental role in proving the controllability of the dual control systems. There exist many approaches and results addressing the observability estimate for deterministic Schr\"{o}dinger equations. For example, similar results in the spirit of Theorem \ref{observability} are obtained by Carleman estimates (e.g. \cite{Baudouin-Puel,Lasiecka-Triggiani-Zhang,Mercado-Osses-Rosier}), by the classical Rellich-type multiplier approach (\cite{Machtyngier}), by the microlocal analysis approach (\cite{Lebeau,Phung}), and so on. We refer to \cite{Zuazua} for a nice survey in this respect. However, people know very little about the stochastic counterpart. To the best of our knowledge, \cite{Luqi4} is the only published result for this problem, where partial results in this paper have been announced without detailed proofs. Besides its important application to the controllability problem, the observability estimate not only has its own interest (a kind of energy estimate and quantitative uniqueness for the solution) but also has some other important applications. For instance, a typical application of this sort of estimates is to study the state observation problem, that is, to determine the state of a system by a suitable observation. Once the observability is obtained, we may conclude that the state can be uniquely determined from the observed data and continuously depends on it. For instance, once the inequality \eqref{obser esti2} is established, it follows that $y\in H_T$ is determined continuously by $\ds\frac{\pa y}{\pa \nu}\Big|_{(0,T)\times \G_0}$. In Section \ref{Sec app}, we shall consider a state observation problem for semilinear stochastic Schr\"{o}dinger equations. In this paper, we will prove Theorem \ref{observability} by applying the global Carleman estimate (see Theorem \ref{thcarleman est} below). We now introduce the weight functions to be used in our Carleman estimate. Let \begin{equation}\label{psi} \psi(x) = |x-x_0|^2 + \tau, \end{equation} where $\tau$ is a positive constant such that $\psi \geq \frac{5}{6}|\psi|_{L^{\infty}(G)}$. Let $s>0$ and $\l>0$.
Put \begin{equation}\label{lvarphi} \ell = s\frac{e^{4\l \psi} - e^{5\l |\psi|_{L^{\infty}(G)}}}{t^2(T-t)^2}, \qq \varphi = \frac{e^{4\l \psi} }{t^2(T-t)^2},\qq \theta=e^\ell. \end{equation} We have the following global Carleman inequality. \vspace{0.2cm} \begin{theorem}\label{thcarleman est} Assume that the conditions \eqref{G0}--\eqref{cA} hold and that $\ell$, $\varphi$ and $\theta$ are given by \eqref{lvarphi}. Then there is an $s_1>0$ (depending on $r_1$) and a $\l_1>0$ such that for each $s\geq s_1$, $\l\geq \l_1$ and for any solution of the equation \eqref{system1}, it holds that \begin{eqnarray}\label{carleman est} \begin{array}{ll} \ds \q\mathbb{E}\int_Q \theta^2\Big(s^3\l^4\varphi^3 |y|^2 + s\l\varphi |\nabla y|^2\Big) dxdt \\ \ns \ds \leq C \Big\{\mathbb{E}\int_Q \theta^2 \Big(|f|^2 + s^2\l^2\varphi^2 |g|^2 + |\nabla g|^2 \Big)dxdt + \mathbb{E}\int_0^T\int_{\G_0}\theta^2 s\l\varphi\Big| \frac{\pa y}{\pa \nu}\Big|^2d\G dt \Big\}. \end{array} \end{eqnarray} Further, if $g\in L^2_\cF(0,T;H^1(G;\mathbb{R}))$, then \eqref{carleman est} can be strengthened as follows: \begin{eqnarray}\label{carleman est1} \begin{array} {ll} \ds \q\mathbb{E}\int_Q \theta^2\Big(s^3\l^4\varphi^3 |y|^2 + s\l\varphi |\nabla y|^2\Big) dxdt \\ \ns \ds\leq C \Big\{\mathbb{E}\int_Q \theta^2 \Big(|f|^2 + s^2\l^2\varphi^2 |g|^2 \Big)dxdt + \mathbb{E}\int_0^T\int_{\G_0}\theta^2 s\l\varphi\Big| \frac{\pa y}{\pa \nu}\Big|^2d\G dt \Big\}. \end{array} \end{eqnarray} \end{theorem} The Carleman estimate is an important tool for the study of unique continuation properties, stabilization, controllability and inverse problems for deterministic partial differential equations (e.g. \cite{Baudouin-Puel,Lasiecka-Triggiani-Zhang,Mercado-Osses-Rosier,Yamamoto,Zhangxu1,Zuazua}). Although there are numerous results on Carleman estimates for deterministic partial differential equations, very little is known about the corresponding stochastic situation. In fact, as far as we know, \cite{barbu1,Luqi4,Luqi5,Tang-Zhang1,Zhangxu3} are the only five published papers addressing Carleman estimates for stochastic partial differential equations. The references \cite{barbu1,Luqi5,Tang-Zhang1} are devoted to stochastic heat equations, while \cite{Zhangxu3} is concerned with stochastic wave equations. In \cite{Luqi4}, Theorem \ref{thcarleman est} was announced without proof. At first glance, the proof of Theorem \ref{thcarleman est} looks very similar to that of the global Carleman estimate for (stochastic) parabolic equations (see \cite{Fursikov-Imanuvilov1,Tang-Zhang1}). Furthermore, one can find that the ideas behind the proofs in this paper and in \cite{Fursikov-Imanuvilov1,Tang-Zhang1} are analogous. Nevertheless, the specific proofs differ considerably. First, we have to choose different weight functions. Second, we deal with different equations. Such differences lead to considerably different difficulties in the proof of Theorem \ref{thcarleman est}. One cannot simply mimic the proofs in \cite{Fursikov-Imanuvilov1,Tang-Zhang1} to obtain Theorem \ref{thcarleman est}. Indeed, even in the deterministic setting, the proof of the global Carleman estimate for Schr\"{o}dinger equations is much more complicated than those for parabolic and hyperbolic equations (see \cite{Zhangxu5,Lasiecka-Triggiani-Zhang}). The rest of this paper is organized as follows. In Section 2, we give some preliminary results, including an energy estimate and the hidden regularity for solutions of the equation \eqref{system1}.
Section 3 is devoted to establishing a crucial identity for a stochastic Schr\"{o}dinger-like operator. Then, in Section 4, we derive the desired Carleman estimate. Section 5 is devoted to the proof of Theorem \ref{observability}. In Section \ref{Sec app}, as applications of the observability/Carleman estimates developed in this work, we study a state observation problem for semilinear stochastic Schr\"{o}dinger equations and establish a unique continuation property for the solution to the equation \eqref{system1}. Finally, we present some further comments and open problems related to this paper in Section 7. \section{Some preliminaries} In this section, we give some preliminary results which will be used later. To begin with, for the sake of completeness, we give an energy estimate for the equation \eqref{system1}. \vspace{0.1cm} \begin{proposition} \label{Oprop1} Assume that the conditions \eqref{G0}--\eqref{cA} hold. Then every solution $y$ of the equation \eqref{system1} satisfies \begin{equation}\label{energyesti1} \mathbb{E}|y(t)|^2_{H_0^1(G)} \leq e^{C r_1} \Big(\mathbb{E}|y(s)|^2_{H^1_0(G)} + |f|^2_{L^2_{\mathcal{F}}(0,T;H^1_0(G))} + |g|^2_{L^2_{\mathcal{F}}(0,T;H^1(G))}\Big), \end{equation} for any $s, t\in [0, T]$. \end{proposition} \vspace{0.1cm} {\em Proof }: Without loss of generality, we assume that $t <s$. First, we compute $ \mathbb{E}| y(t)|^2_{ L^2(G)} - \mathbb{E}| y(s)|^2_{ L^2(G)}$ and $ \mathbb{E}|\nabla y(t)|^2_{ L^2(G)} - \mathbb{E}|\nabla y(s)|^2_{ L^2(G)}$. The first one reads \begin{equation}\label{Eyt} \begin{array}{ll} \ds \mathbb{E}| y(t)|^2_{ L^2(G)} - \mathbb{E}| y(s)|^2_{ L^2(G)}\\ \ns \ds = -\mathbb{E}\int_t^s\int_G \big(y d\bar{y}+\bar{y}dy + dy d\bar{y}\big)dx\\ \ns\ds = \mathbb{E}\int_t^s\int_G \Big\{ i y\big(\D \bar{y} - a_1\cdot \nabla\bar{y} - a_2 \bar{y} -\bar{f}\big) - i\bar{y}\big(\D y - a_1\cdot\nabla y - a_2 y - f\big) \\ \ns \ds\q - \big(a_3 y + g\big)\big(a_3 \bar{y} + \bar{g}\big) \Big\}dxd\si \\ \ns\ds = \mathbb{E}\int_t^s\int_G \Big\{ i \big[\div(y\nabla\bar{y})-|\nabla y|^2 - \div(| y|^2 a_1) + \div(a_1)|y|^2 - a_2|y|^2 - y\bar{f}\, \big] \\ \ns \ds \q - i \big[\div(\bar{y}\nabla y)-|\nabla y|^2 - \div(| y|^2 a_1) + \div(a_1)|y|^2 - a_2|y|^2 - f\bar{y} \big] \\ \ns \ds\q - (a_3 y + g)(a_3 \bar{y} + \bar{g}) \Big\}dxd\si\\ \ns\ds \leq \mathbb{E}\int_t^s 2\Big[\big(|a_3|_{L^{\infty}(G)}+1\big)|y|^2_{L^2(G)}+ |f|_{L^2(G)}^2 + |g|^2_{L^2(G)}\Big]d\si. \end{array} \end{equation} The second one is \begin{equation}\label{Etyt} \begin{array}{ll} \q\mathbb{E}|\nabla y(t)|^2_{ L^2(G)} - \mathbb{E}|\nabla y(s)|^2_{L^2(G)}\\ \ns\ds = -\mathbb{E}\int_t^s\int_G \big(\nabla y d\n\bar{y} + \nabla \bar{y} d\n y + d\nabla y d\nabla \bar{y}\big)dx\\ \ns \ds = -\mathbb{E}\int_t^s\int_G \Big\{ \div(\nabla y d\bar{y}) - \D y d\bar{y} + \div(\nabla\bar{y} dy) - \D \bar{y}dy + d\nabla y d\nabla \bar{y} \Big\}dx\\ \ns \ds = -\mathbb{E}\int_t^s\int_G \Big\{\D y \Big[i\big(\D\bar{y} - a_1\cdot\nabla \bar{y} -a_2\bar{y} -\bar{f} \big)\Big]-\D\bar{y}\Big[ i\big(\D y - a_1\cdot\nabla y - a_2 y - f\big)\Big]\\ \ns \ds \q +\nabla(a_3 y + g)\nabla(a_3\bar{y}+\bar{g}) \Big\}dxd\si \\ \ns \ds \leq 2\mathbb{E}\int_t^s \Big\{\big(|a_1|^2_{W^{1,\infty}(G;\mathbb{R}^n)}+|a_3|^2_{W^{1,\infty}(G)}+1\big)|\nabla y|^2_{L^2(G)}\\ \ns\ds \q + \big(|a_2|^2_{W^{1,\infty}(G)}+|a_3|^2_{W^{1,\infty}(G)}+1\big)|y|^2_{L^2(G)} +|f|^2_{H_0^1(G)} + |g|^2_{H^1(G)}\Big\}d\si.
\end{array} \end{equation} From \eqref{Eyt} and \eqref{Etyt}, we have that \begin{equation}\label{energyest2} \begin{array}{ll}\ds \q\mathbb{E}| y(t)|^2_{H_0^1(G)} - \mathbb{E}|y(s)|^2_{H_0^1(G)} \\ \ns\ds \leq 2(r_1+1)\mathbb{E}\int_t^s |y(\si)|^2_{H_0^1(G)}d\si + \mathbb{E}\int_t^s \big(|f(\si)|^2_{H_0^1(G)}+|g(\si)|^2_{H^1(G)}\big)d\si. \end{array} \end{equation} From this, and thanks to Gronwall's inequality, we arrive at \begin{equation}\label{energyest3} \mathbb{E}| y(t)|^2_{H_0^1(G)}\leq e^{2(r_1+1)T}\Big\{\mathbb{E}|y(s)|^2_{H_0^1(G)} + \mathbb{E}\int_0^T \big(|f|^2_{H_0^1(G)}+|g|^2_{H^1(G)}\big)d\si\Big\}, \end{equation} which implies the inequality \eqref{energyesti1} immediately. \endproof \vspace{0.1cm} \begin{remark} The proof of this proposition is almost standard. However, one may doubt the validity of the inequality \eqref{energyesti1} for $t<s$, since the equation \eqref{system1} is time irreversible. Fortunately, the inequality \eqref{energyesti1} does hold for $t<s$. In fact, in the stochastic setting one should divide time-irreversible systems into two classes. The first class of time irreversibility is caused by energy dissipation. For such systems, one cannot estimate the energy at time $t$ by that at time $s$ uniformly when $t<s$. A typical example of this kind is the heat equation. The second class of time irreversibility comes from the stochastic noise. Systems of this kind cannot be solved backward, that is, if we prescribe the final data rather than the initial data, then the system is not well-posed (recall that this is the very starting point of backward stochastic differential equations). Stochastic Schr\"{o}dinger equations and stochastic wave equations are typical systems of the second class. For these systems, we can still estimate the energy at time $t$ by that at time $s$ for $t<s$. \end{remark} Next, we give a result concerning the hidden regularity of solutions of the equation \eqref{system1}. It shows that solutions of this equation enjoy a higher regularity on the boundary than the one provided by the classical trace theorem for Sobolev spaces. \vspace{0.2cm} \begin{proposition}\label{hregularity} Assume that the conditions \eqref{G0}--\eqref{cA} hold. Then any solution of the equation \eqref{system1} satisfies \begin{equation} \label{hregularity1} \begin{array}{ll}\ds \q\Big|\frac{\partial y}{\partial \nu}\Big |^2_{L^2_{ \mathcal{ F}}(0,T;L^2(\Gamma_0))}\\ \ns\ds \leq e^{C r_1 } \Big(|y_0|^2_{L^2(\Omega,{ \mathcal{F}}_0, P; H_0^1(G))} +|f|^2_{L^2_{ \mathcal{ F}}(0,T;H_0^1(G))} + |g|^2_{L^2_{ \mathcal{ F}}(0,T;H^1(G))}\Big). \end{array} \end{equation} \end{proposition} \vspace{0.1cm} \begin{remark}\label{rm2} By means of Proposition \ref{hregularity}, we know that $\ds\Big|\frac{\partial y}{\partial \nu}\Big |^2_{L^2_{ \mathcal{ F}}(0,T;L^2(\Gamma_0))}$ makes sense. Compared with Theorem \ref{observability}, Proposition \ref{hregularity} tells us that $\ds\Big|\frac{\partial y}{\partial \nu}\Big |^2_{L^2_{ \mathcal{ F}}(0,T;L^2(\Gamma_0))}$ can be bounded by the initial datum and the nonhomogeneous terms. This result is, in some sense, the converse of Theorem \ref{observability}. \end{remark} To prove Proposition \ref{hregularity}, we first establish a pointwise identity.
For simplicity, here and in the sequel, we adopt the notation $\ds y_i \equiv y_{i}(x) \= \frac{\partial y(x)}{\partial x_i}$, where $x_i$ is the $i$-th coordinate of a generic point $x=(x_1,\cdots, x_n)$ in $\mathbb{R}^{n}$. In a similar manner, we use the notation $z_i$, $v_i$, etc., for the partial derivatives of $z$ and $v$ with respect to $x_i$. \medskip \begin{proposition}\label{prop2} Let $\mu = \mu(x) = (\mu^1,\cdots,\mu^n):\mathbb{R}^n \to \mathbb{R}^n$ be a vector field of class $C^1$ and $z$ an $H^2_{loc}(\mathbb{R}^n)$-valued $\{\mathcal{F}_t\}_{t\geq 0}$-adapted process. Then for a.e. $x \in \mathbb{R}^n$ and P-a.s. $\omega \in \Omega$, it holds that \begin{eqnarray}\label{identity2} \begin{array} {ll} & \ds \mu\cdot\nabla\bar{z}(i dz + \Delta z dt) + \mu\cdot\nabla z(-i d\bar{z} + \Delta \bar{z} dt)\\ \ns =& \ds \nabla\cd \Big[ (\mu\cdot\nabla \bar{z})\nabla z+ (\mu\cdot\nabla z)\nabla \bar{z} - i (z d\bar{z}) \mu - |\nabla z|^2\mu \Big]dt + d(i\mu\cd\nabla \bar{z} z)\\ \ns & \ds - 2\sum_{j,k=1}^n \mu^k_j z_{j}\bar{z}_{k}dt + (\nabla\cdot \mu) |\nabla z|^2 dt + i(\nabla\cdot \mu) z d\bar{z} - i(\mu\cd\nabla d\bar z) dz. \end{array} \end{eqnarray} \end{proposition} \medskip {\em Proof of Proposition \ref{prop2}} : The proof is a direct computation. We have that \begin{eqnarray}\label{h1} \begin{array} {ll} & \ds \sum_{k=1}^n\sum_{j=1}^n \mu^k\bar{z}_k z_{jj}+ \sum_{k=1}^n\sum_{j=1}^n \mu^k z_k \bar{z}_{jj}\\ \ns = & \ds \sum_{k=1}^n\sum_{j=1}^n\Big[(\mu^k\bar{z}_k z_j)_j + (\mu^k z_k\bar{z}_j)_j + \mu^k_k|z_j|^2 - (\mu^k|z_j|^2)_k - 2\mu^k_j \bar{z}_k z_j \Big] \end{array} \end{eqnarray} and that \begin{equation}\label{h2} \begin{array}{ll}\ds \q i\sum_{k=1}^n(\mu^k\bar{z}_k dz-\mu^k z_k d\bar{z})\\ \ns\ds = i\sum_{k=1}^n\Big[\,d(\mu^k\bar{z}_k z) - \mu^k z d\bar z_k - \mu^k d\bar{z}_k dz -(\mu^k zd\bar{z})_k + \mu^k z d\bar z_k + \mu_k^k z d\bar{z}\, \Big]\\ \ns\ds = i\sum_{k=1}^n\Big[\,d(\mu^k\bar{z}_k z) - \mu^k d\bar{z}_k dz -(\mu^k zd\bar{z})_k + \mu_k^k z d\bar{z} \,\Big]. \end{array} \end{equation} Combining \eqref{h1} and \eqref{h2}, we get the equality \eqref{identity2}. \endproof \vspace{0.2cm} By virtue of Proposition \ref{prop2}, the proof of Proposition \ref{hregularity} is standard. We only give a sketch here. \vspace{0.2cm} {\em Sketch of the Proof of Proposition \ref{hregularity}} : Since $\Gamma $ is $ C^2$, one can find a vector field $\mu_0 = (\mu_0^1, \cdots, \mu_0^n) \in C^1(\overline{G};\mathbb{R}^n)$ such that $\mu_0 = \nu$ on $\Gamma$ (see \cite[page 18]{Komornik} for the construction of $\mu_0$). Letting $\mu = \mu_0$ and $z = y$ in Proposition \ref{prop2}, integrating the resulting identity over $Q$, taking the expectation, and performing computations similar to those in \cite{Zhangxu1}, we obtain Proposition \ref{hregularity} immediately. \section{An Identity for a Stochastic Schr\"{o}dinger-like Operator} \q In this section, we obtain an identity for a stochastic Schr\"{o}dinger-like operator, which is similar in spirit to the formula \eqref{identity2} but takes a more complex form and plays a key role in the proof of Theorem \ref{thcarleman est}. \vspace{0.2cm} Let $\b(t,x)\in C^{2}(\mathbb{R}^{1+n};\mathbb{R})$, and let $b^{jk}(t,x)\in C^{1,2}(\mathbb{R}^{1+n};\;\mathbb{R})$ satisfy \begin{equation}\label{bjk} b^{jk}=b^{kj},\qq j,k=1,2,\cdots,n.
\end{equation} Let us define a (formal) second order stochastic partial differential operator $\cP$ as \begin{equation}\label{cp} \ds \cP z \= i\b(t,x)dz+\sum_{j,k=1}^n(b^{jk}(t,x)z_j)_k dt, \q i=\sqrt{-1}. \end{equation} We have the following equality concerning $\cP$: \vspace{0.1cm} \begin{theorem}\label{identity1} Let $\ell,\;\Psi\in C^2(\mathbb{R}^{1+n};\;\mathbb{R})$. Assume that $z$ is an $H^2_{loc}(\mathbb{R}^n,\mathbb{C})$-valued $\{\cF_t\}_{t\geq 0}$-adapted process. Put $ v=\th z$ (recall \eqref{lvarphi} for the definition of $\th$). Then for a.e. $x \in \mathbb{R}^n$ and P-a.s. $\o\in \O$, it holds that \begin{eqnarray}\label{c2a1} \begin{array}{ll} &\th(\cP z\overline {I_1}+\overline{\cP z} I_1)+dM+\div V \\ \ns = & \ds 2|I_1|^2 dt +\sum_{j,k=1}^n c^{jk}(v_k\ov_j+\ov_k v_j) dt +D|v|^2 dt \\ \ns & \ds +i\sum_{j,k=1}^n\[(\b b^{jk}\ell_j)_t+ b^{jk}(\b\ell_t)_j\](\ov_kv-v_k\ov) dt \\ \ns & \ds +i\[\b\Psi+\sum_{j,k=1}^n(\b b^{jk}\ell_j)_k\](\ov dv-vd\ov)\\ & \ds + (\b^2\ell_t)dvd\ov + i\sum_{j,k=1}^n \b b^{jk}\ell_j (dv d\ov_k - dv_kd\ov), \end{array} \end{eqnarray} where \begin{equation}\label{c2a2} \left\{ \begin{array}{ll}\ds I_1\= - i\b \ell_t v - 2\sum_{j,k=1}^n b^{jk}\ell_j v_k + \Psi v, \\ \ns\ds A\=\sum_{j,k=1}^n b^{jk}\ell_j\ell_k-\sum_{j,k=1}^n(b^{jk}\ell_j)_k -\Psi, \end{array} \right. \end{equation} \begin{equation} \label{c2a3} \left\{ \begin{array}{ll}\ds M\=\b^2\ell_t |v|^2 + i\b\sum_{j,k=1}^n b^{jk}\ell_j(\ov_kv-v_k\ov),\\ \ns\ds V\=[V^1,\cdots,V^k,\cdots,V^n],\\ \ns\ds V^k\=-i \b\sum_{j=1}^n\[b^{jk}\ell_j(vd\ov -\ov dv ) + b^{jk}\ell_t(v_j\ov-\ov_jv) dt\]\\ \ns\ds\qq\,\,\,\, - \Psi\sum_{j=1}^n b^{jk}(v_j\ov+\ov_jv) dt + \sum_{j=1}^n b^{jk}(2A\ell_j+\Psi_j)|v|^2 dt \\ \ns\ds\qq\q+\sum_{j,j',k'=1}^n\(2b^{jk'}b^{j'k}-b^{jk}b^{j'k'}\)\ell_j(v_{j'}\ov_{k'}+\ov_{j'}v_{k'}) dt, \end{array} \right. \end{equation} and \begin{equation}\label{cc2a3} \left\{ \begin{array}{ll}\ds c^{jk}\= \sum_{j',k'=1}^n\[2(b^{j'k}\ell_{j'})_{k'}b^{jk'}-(b^{jk}b^{j'k'}\ell_{j'})_{k'}\] - b^{jk}\Psi,\\ \ns\ds D\=(\b^2\ell_t)_t +\sum_{j,k=1}^n(b^{jk}\Psi_k)_j+2\[\sum_{j,k=1}^n(b^{jk}\ell_jA)_k+A\Psi\]. \end{array} \right. \end{equation} \end{theorem} \medskip \begin{remark} Since we only assume that $(b^{jk})_{1\leq j,k\leq n}$ is symmetric, and do not assume that it is positive definite, similarly to \cite{Fu} and based on the identity \eqref{c2a1} in Theorem \ref{identity1}, we can deduce observability estimates not only for the stochastic Schr\"odinger equation, but also for deterministic hyperbolic, Schr\"{o}dinger and plate equations, for which such estimates had been derived via Carleman estimates (see \cite{FYZ}, \cite{Lasiecka-Triggiani-Zhang} and \cite{Zhangxu5}, respectively). \end{remark} {\em Proof of Theorem \ref{identity1}}: The proof is divided into three steps. {\bf Step 1.} By the definition of $v$ and $\theta$, a straightforward computation shows that \begin{eqnarray}\label{th1eq1} \begin{array} {ll} \ds \theta \cP z &= \ds i\b dv - i\b \ell_t v dt + \sum_{j,k=1}^n (b^{jk}v_j)_k dt \\ \ns & \ds \q + \sum_{j,k=1}^n b^{jk}\ell_j \ell_k v dt - 2\sum_{j,k=1}^n b^{jk}\ell_j v_k dt - \sum_{j,k=1}^n (b^{jk}\ell_j)_k v dt \\ \ns &= \ds I_1dt + I_2, \end{array} \end{eqnarray} where \begin{equation}\label{I2} I_2 = i\b dv + \sum_{j,k=1}^n (b^{jk}v_j)_kdt + Avdt. \end{equation} Hence we obtain that \begin{equation}\label{th1eq2} \theta \big(\cP z\overline{I_1} + \overline{\cP z}I_1\big) = 2|I_1|^2dt + (I_1\overline{I_2} + I_2\overline{I_1}).
\end{equation} {\bf Step 2.} In this step, we compute $I_1\overline{I_2} + I_2\overline{I_1}$. Denote the three terms in $I_1$ and $I_2$ by $I_1^j$ and $I_2^j$, $j = 1,2,3$, respectively. Then we have that \begin{equation}\label{th1eq3} \begin{array}{ll}\ds \q I_1^1\overline{I_2^1} + I_2^1 \overline{I_1^1} \\ \ns\ds = -i\b \ell_t v \overline{ (i\b dv)} + i\b dv \overline{ (-i\b \ell_t v)} \\ \ns\ds = -d(\b^2 \ell_t |v|^2) + (\b^2 \ell_t)_t|v|^2dt + \b^2 \ell_t dvd\ov. \end{array} \end{equation} Noting that \begin{eqnarray}\label{th1eq4} \left\{ \begin{array}{lll} \ds 2vd\ov = d(|v|^2) - (\ov dv - vd\ov) - dv d\ov,\\ \ns\ds 2v \ov_k = (|v|^2)_k - (\ov v_k - v\ov_k), \end{array} \right. \end{eqnarray} we find first \begin{equation}\label{th1eq4.1} \begin{array}{ll} \ds & \ds 2i\sum_{j,k=1}^n (\b b^{jk}\ell_j vd\ov)_k \\ \ns=& \ds i \sum_{j,k=1}^n \Big\{\b b^{jk}\ell_j \[ d(|v|^2) - (\ov dv - vd\ov) - dv d\ov \] \Big\}_k \\ \ns = & \ds i \sum_{j,k=1}^n\Big\{ \big(\b b^{jk}\ell_j\big)_k d(|v|^2) + \b b^{jk}\ell_j\big[d(|v|^2)\big]_k - \big[\b b^{jk}\ell_j (\ov dv - vd\ov)\big]_k\\ \ns& \ds \qq\q - \big( \b b^{jk}\ell_j\big)_k dv d\ov - \b b^{jk}\ell_jdv_k d\ov - \b b^{jk}\ell_jdvd\ov_k \Big\}, \end{array} \end{equation} next \begin{equation}\label{th1eq4.2} \begin{array}{ll} \ds \q -2i\sum_{j,k=1}^n \big(\b b^{jk}\ell_j \big)_k vd\ov \\ \ns\ds= -i \sum_{j,k=1}^n \big(\b b^{jk}\ell_j \big)_k \[ d(|v|^2) - (\ov dv - vd\ov) - dv d\ov \] \\ \ns\ds = -i \sum_{j,k=1}^n\[ \big(\b b^{jk}\ell_j\big)_k d(|v|^2) - \big(\b b^{jk}\ell_j \big)_k(\ov dv - vd\ov) \ - \big( \b b^{jk}\ell_j \big)_k dv d\bar v \], \end{array} \end{equation} then \begin{equation}\label{th1eq4.3} \begin{array}{ll}\ds \q -2i \sum_{j,k=1}^n d\big(\b b^{jk}\ell_j v\ov_k \big) \\ \ns\ds = - i \sum_{j,k=1}^n d\Big\{ \b b^{jk}\ell_j \big[(|v|^2)_k - (\ov v_k - v\ov_k) \big]\Big\}\\ \ns\ds = - i \sum_{j,k=1}^n \Big\{\big(\b b^{jk}\ell_j \big)_t (|v|^2)_kdt + \b b^{jk}\ell_j d\big[(|v|^2)_k\big] - d\big[ \b b^{jk}\ell_j (\ov v_k - v\ov_k) \big] \Big\}, \end{array} \end{equation} and that \begin{equation}\label{th1eq4.4} \begin{array}{ll}\ds \q 2i \sum_{j,k=1}^n \big(\b b^{jk}\ell_j \big)_t v\ov_k dt \\ \ns\ds = i \sum_{j,k=1}^n \big(\b b^{jk}\ell_j \big)_t \big[(|v|^2)_k - (\ov v_k - v\ov_k) \big]dt \\ \ns\ds = i \[\sum_{j,k=1}^n \big(\b b^{jk}\ell_j \big)_t (|v|^2)_kdt - \big(\b b^{jk}\ell_j \big)_t (\ov v_k - v\ov_k)dt \].
\end{array} \end{equation} From \eqref{th1eq4.1}--\eqref{th1eq4.4}, we get that \begin{equation}\label{th1eq5} \begin{array}{ll}\ds \q (I_1^2 + I_1^3)\overline{I_2^1} + I_2^1(\overline{I_1^2} + \overline{I_1^3}) \\ \ns\ds = \(- 2\sum_{j,k=1}^n b^{jk}\ell_j v_k + \Psi v\) \overline{ (i\b dv) } + i\b dv \overline{ \( - 2\sum_{j,k=1}^n b^{jk}\ell_j v_k + \Psi v \) }\\ \ns\ds = 2i\sum_{j,k=1}^n \b b^{jk}\ell_j (v_k d\bar v - \bar v_k dv) + i\b\Psi (\bar v dv - vd\bar v) \\ \ns\ds = 2i\sum_{j,k=1}^n \[ \big(\b b^{jk}\ell_j vd\ov\big)_k - \big(\b b^{jk}\ell_j\big)_k vd\ov - \b b^{jk}\ell_j vd\ov_k \] \\ \ns\ds \q -2i \sum_{j,k=1}^n \[ d\big(\b b^{jk}\ell_j v\ov_k\big) - \big(\b b^{jk}\ell_j\big)_t v\ov_k dt - \b b^{jk}\ell_j vd\ov_k \] \end{array} \end{equation} \begin{equation} \begin{array}{ll} \ds\q + 2i\sum_{j,k=1}^n \b b^{jk}\ell_j dv d\ov_k + i\b\Psi (\bar v dv - vd\bar v)\nonumber\\ \ns\ds = -i \sum_{j,k=1}^n \[ \b b^{jk}\ell_j(\ov dv - vd\ov ) \]_k dt -i \sum_{j,k=1}^n d\[ \b b^{jk}\ell_j(v\ov_k - \ov v_k) \] \\ \ns\ds \q - i\sum_{j,k=1}^n (\b b^{jk}\ell_j)_t (\ov v_k - v \ov_k)dt + i\[ \b\Psi + \sum_{j,k=1}^n (\b b^{jk}\ell_j)_k \](\ov dv - vd\ov) \\ \ns\ds \q + i\sum_{j,k=1}^n \b b^{jk}\ell_j (dv d\ov_k - dv_kd\ov). \end{array} \end{equation} Noting that $b^{jk} = b^{kj}$, we have that \begin{equation}\label{th1eq7} \begin{array}{ll}\ds \q I_1^1\overline{I_2^2} + I_2^2 \overline{I_1^1} \\ \ns\ds = -i\b \ell_t v \overline{\sum_{j,k=1}^n(b^{jk}v_j)_k}dt + \sum_{j,k=1}^n(b^{jk}v_j)_k \overline{(-i\b \ell_t v)} \\ \ns\ds = \sum_{j,k=1}^n \[ i\b b^{jk}\ell_t (v_j \ov - \ov_j v)\]_k dt + i \sum_{j,k=1}^n b^{jk}(\b \ell_t)_k (\ov_j v - v_j\ov)dt. \end{array} \end{equation} Utilizing $b^{jk} = b^{kj}$ once more, we find $$ \sum_{j,k,j',k'=1}^n b^{jk}b^{j'k'}\ell_j (v_{j'}\ov_{kk'} + \ov_{j'}v_{kk'})=\sum_{j,k,j',k'=1}^n b^{jk}b^{j'k'}\ell_j (v_{j'k}\ov_{k'} + \ov_{j'k}v_{k'}). $$ Hence, we obtain that \begin{equation}\label{th1eq8} \begin{array}{ll}\ds \q 2\sum_{j,k,j',k'=1}^n b^{jk}b^{j'k'}\ell_j (v_{j'}\ov_{kk'} + \ov_{j'}v_{kk'})dt \\ \ns\ds = \sum_{j,k,j',k'=1}^n b^{jk}b^{j'k'}\ell_j (v_{j'}\ov_{kk'} + \ov_{j'}v_{kk'})dt + \sum_{j,k,j',k'=1}^n b^{jk}b^{j'k'}\ell_j (v_{j'k}\ov_{k'} + \ov_{j'k}v_{k'})dt\\ \ns\ds = \sum_{j,k,j',k'=1}^n b^{jk}b^{j'k'}\ell_j (v_{j'}\ov_{k'} + \ov_{j'}v_{k'})_k dt \\ \ns\ds = \!\sum_{j,k,j',k'=1}^n \!\[ b^{jk}b^{j'k'}\ell_j (v_{j'}\ov_{k'} + \ov_{j'}v_{k'}) \]_kdt - \!\!\sum_{j,k,j',k'=1}^n (b^{jk}b^{j'k'}\ell_j)_k (v_{j'}\ov_{k'} + \ov_{j'}v_{k'})dt.\\ \end{array} \end{equation} By the equality \eqref{th1eq8}, we get that \medskip \begin{equation}\label{th1eq9} \begin{array}{ll}\ds \q I_1^2\overline{I_2^2} + I_2^2 \overline{I_1^2}\\ \ns\ds = - 2\!\sum_{j,k=1}^n b^{jk}\ell_j v_k\! \overline{ \sum_{j,k=1}^n(b^{jk}v_j)_k }dt - 2\!\sum_{j,k=1}^n(b^{jk}v_j)_k \overline{\sum_{j,k=1}^n b^{jk}\ell_j v_k}dt\\ \ns\ds = - 2\!\! \sum_{j,k,j',k'=1}^n \!\!\[ b^{jk}b^{j'k'}\ell_j (v_{j'}\ov_{k}\! +\! \ov_{j'}v_{k}) \]_{k'} dt \!+\! 2\!\!\sum_{j,k,j',k'=1}^n\!\! b^{j'k'}(b^{jk}\ell_j)_{k'} (v_{j'}\ov_{k}\! +\! \ov_{j'}v_{k})dt \\ \ns\ds \q + 2\sum_{j,k,j',k'=1}^n b^{jk}b^{j'k'}\ell_j (v_{j'}\ov_{kk'} + \ov_{j'}v_{kk'})dt \\ \ns\ds = - 2\!\! \sum_{j,k,j',k'=1}^n \!\[ b^{jk}b^{j'k'}\ell_j (v_{j'}\ov_{k} \!+\! \ov_{j'}v_{k}) \]_{k'} dt \!+\! 2\!\!\!\sum_{j,k,j',k'=1}^n\!\! b^{j'k'}(b^{jk}\ell_j)_{k'} (v_{j'}\ov_{k}\! + \!\ov_{j'}v_{k})dt \\ \ns\ds \q + \!\!\!\sum_{j,k,j',k'=1}^n \!\[ b^{jk}b^{j'k'}\ell_j (v_{j'}\ov_{k'} \!+\! 
\ov_{j'}v_{k'}) \]_k dt - \!\!\sum_{j,k,j',k'=1}^n (b^{jk}b^{j'k'}\ell_{j'})_{k'} (v_{j}\ov_{k} + \ov_{j}v_{k})dt. \end{array} \end{equation} Further, it holds that \begin{equation}\label{th1eq10} \begin{array}{ll}\ds \q I_1^3\overline{I_2^2} + I_2^2 \overline{I_1^3} \\ \ns\ds = \Psi v \overline{ \sum_{j,k=1}^n(b^{jk}v_j)_k }dt + \sum_{j,k=1}^n(b^{jk}v_j)_k \overline{ \Psi v }dt \\ \ns\ds = \sum_{j,k=1}^n \[ \Psi b^{jk}(v_j \ov + \ov_j v) \]_k dt - \sum_{j,k=1}^n \Psi b^{jk}(v_j \ov_k + \ov_j v_k)dt \\ \ns\ds\q -\sum_{j,k=1}^n \Psi_k b^{jk} ( v_j \bar v + \bar v_j v) dt \\ \ns\ds = \sum_{j,k=1}^n \[ \Psi b^{jk}(v_j \ov + \ov_j v) \]_k dt - \sum_{j,k=1}^n \Psi b^{jk}(v_j \ov_k + \ov_j v_k)dt \\ \ns\ds\q -\sum_{j,k=1}^n \[ b^{jk}\Psi_k |v|^2 \]_j dt + \sum_{j,k=1}^n (b^{jk}\Psi_k)_j |v|^2dt. \end{array} \end{equation} Finally, we have that \begin{equation}\label{th1eq11} \begin{array}{ll}\ds \q I_1\overline{I_2^3} + I_2^3 \overline{I_1} \\ \ns\ds = I_1\overline{Av}\,dt + Av\,\overline{I_1}\,dt \\ \ns\ds = - 2\sum_{j,k=1}^n (b^{jk}\ell_j A|v|^2)_k dt + 2\[ \sum_{j,k=1}^n (b^{jk}\ell_j A)_k + A\Psi \]|v|^2dt . \end{array} \end{equation} {\bf Step 3.} Combining (\ref{th1eq2})--(\ref{th1eq11}), we conclude the desired formula (\ref{c2a1}). \section{Carleman Estimate for Stochastic Schr\"{o}dinger Equations} This section is devoted to the proof of Theorem \ref{thcarleman est}. \vspace{0.1cm} {\em Proof of Theorem {\ref{thcarleman est}}}: The proof is divided into three steps. \medskip \textbf{Step 1.} We choose $\b = 1$ and $(b^{jk})_{1\leq j,k\leq n}$ to be the identity matrix. Put $$ \d^{jk} = \left\{\begin{array}{ll}\ds 1,&\mbox{ if } j=k,\\ \ns\ds 0,&\mbox{ if } j\neq k.\end{array}\right. $$ Applying Theorem \ref{identity1} to the equation \eqref{system1}, with $\theta$ given by \eqref{lvarphi}, $z$ replaced by $y$ and $v = \theta y$, we obtain that \begin{equation}\label{identity2.1} \begin{array}{ll} \ds\q\theta\cP y {\( i\b \ell_t \bar{v} - 2\sum_{j,k=1}^n b^{jk}\ell_j \bar{v}_k + \Psi \bar{v}\)} + \theta\overline{\cP y} {\(- i\b \ell_t v - 2\sum_{j,k=1}^n b^{jk}\ell_j v_k + \Psi v\)}\\ \ns \ds \q + \;dM + \div V \\ \ns\ds = 2\Big|- i\b \ell_t v - 2\sum_{j,k=1}^n b^{jk}\ell_j v_k + \Psi v\Big|^2dt + \sum_{j,k=1}^nc^{jk}(v_k\ov_j+\ov_k v_j) dt + D|v|^2dt \\ \ns \ds \q + 2i\sum_{j=1}^n (\ell_{jt} + \ell_{tj})(\ov_j v - v_j\ov)dt + i(\Psi + \D \ell)(\ov dv - v d\ov) \\ \ns \ds \q + \ell_t dv d\ov + i\sum_{j=1}^n \ell_j (d\ov_j dv - dv_j d\ov).
\end{array} \end{equation} Here \begin{equation}\label{Id2eq1.1} \begin{array}{ll} M \3n& \ds = \b^2\ell_t |v|^2 + i\b\sum_{j,k=1}^nb^{jk}\ell_j(\ov_kv-v_k\ov)\\ \ns & \ds = \ell_t |v|^2 + i\sum_{j=1}^n \ell_j (\ov_j v - v_j \ov); \end{array} \end{equation} \begin{equation}\label{Id2eq1.2} \begin{array}{ll} A \3n& \ds =\sum_{j,k=1}^nb^{jk}\ell_j\ell_k - \sum_{j,k=1}^n(b^{jk}\ell_j)_k -\Psi \\ \ns & \ds = \sum_{j=1}^n (\ell_j^2 - \ell_{jj}) -\Psi; \end{array} \end{equation} \begin{equation}\label{Id2eq1.3} \begin{array}{ll} D \3n & \ds =(\b^2\ell_t)_t +\sum_{j,k=1}^n(b^{jk}\Psi_k)_j + 2\[\sum_{j,k=1}^n(b^{jk}\ell_j A)_k + A\Psi\]\\ \ns & \ds = \ell_{tt} + \sum_{j=1}^n \Psi_{jj} + 2\sum_{j=1}^n (\ell_j A)_j + 2 A\Psi; \end{array} \end{equation} \begin{equation}\label{Id2eq1.4} \begin{array}{ll} c^{jk} \3n &\ds = \sum_{j',k'=1}^n\[2(b^{j'k}\ell_{j'})_{k'}b^{jk'} - (b^{jk}b^{j'k'}\ell_{j'})_{k'}\] - b^{jk}\Psi\\ \ns & \ds = \[2(b^{kk}\ell_{k})_{j}b^{jj} - \sum_{j'=1}^n (b^{jk}b^{j'j'}\ell_{j'})_{j'} - b^{jk}\Psi\]\\ \ns & \ds = 2\ell_{jk} - \d^{jk}\D \ell - \d^{jk}\Psi; \end{array} \end{equation} and \begin{equation}\label{Id2eq1.5} \begin{array}{ll} V^k \3n &\ds = -i \b\sum_{j=1}^n\[b^{jk}\ell_j(vd\ov -\ov dv ) + b^{jk}\ell_t(v_j\ov-\ov_jv) dt\]\\ \ns & \ds \q - \Psi\sum_{j=1}^n b^{jk}(v_j\ov+\ov_jv) dt + \sum_{j=1}^n b^{jk}(2A\ell_j+\Psi_j)|v|^2 dt \\ \ns & \ds \q +\sum_{j,j',k'=1}^n\(2b^{jk'}b^{j'k}-b^{jk}b^{j'k'}\)\ell_j(v_{j'}\ov_{k'}+\ov_{j'}v_{k'}) dt\\ \ns & \ds = -i\big[ \ell_k(vd\ov - \ov dv) + \ell_t(v_k\ov -\ov_k v)dt \big] - \Psi(v_k\ov + \ov_k v)dt + (2A\ell_k + \Psi_k)|v|^2dt\\ \ns & \ds \q + 2\sum_{j=1}^n \ell_j (\ov_j v_k + v_j \ov_k)dt - 2\sum_{j'=1}^n \ell_k(v_{j'}\ov_{j'})dt. \end{array} \end{equation} \textbf{Step 2.} In this step, we estimate the terms on the right-hand side of the equality \eqref{identity2.1} one by one. First, from the definition of $\ell$ and $\f$ (see \eqref{lvarphi}) and the choice of $\psi$ (see \eqref{psi}), we have that \begin{equation}\label{lt1} \begin{array}{ll}\ds |\ell_t| & \ds = \Big| s\frac{2(2t-T)}{t^3(T-t)^3}\big( e^{4\l\psi} - e^{5\l |\psi|_{L^\infty(G)}} \big) \Big| \\ \ns& \ds \leq \Big| s\frac{2(2t-T)}{t^3(T-t)^3} e^{5\l |\psi|_{L^\infty(G)}} \Big| \\ \ns &\ds \leq \Big| s\frac{C}{t^3(T-t)^3} e^{6\l \psi} \Big|\\ \ns & \ds \leq Cs\varphi^{1+\frac{1}{2}}, \end{array} \end{equation} and that \begin{equation}\label{ltt1} \begin{array}{ll} \ds |\ell_{tt}| & \ds = \Big| s\frac{20t^2 - 20tT + 6T^2}{t^4(T-t)^4} \big( e^{4\l\psi} - e^{5\l |\psi|_{L^\infty(G)}} \big) \Big| \\ \ns& \ds \leq \Big| s\frac{C}{t^4(T-t)^4} e^{5\l |\psi|_{L^\infty(G)}} \Big| \\ \ns& \ds \leq \Big| s\frac{C}{t^4(T-t)^4} e^{8\l \psi } \Big|\\ \ns &\ds \leq Cs\f^2\leq Cs\f^3. \end{array} \end{equation} Below we choose $\Psi = -\D \ell$; then we have that \begin{eqnarray}\label{Id2eq2} A = \sum_{j=1}^n \ell_j^2 = \sum_{j=1}^n \big(4s\l\f \psi_j \big)^2 =16s^2\l^2\varphi^2 |\nabla\psi|^2. \end{eqnarray} Hence, we find \begin{equation}\label{B} \begin{array}{ll}\ds D \3n & \ds = \ell_{tt} + \sum_{j=1}^n \Psi_{jj} + 2\sum_{j=1}^n (\ell_j A)_j + 2 A\Psi \\ \ns & \ds = \ell_{tt} - \D(\D\ell) + 2\sum_{j=1}^n\big(4s\l\f\psi_j \cdot 16s^2\l^2\f^2|\nabla\psi|^2\big)_j - 32s^2\l^2\f^2|\nabla\psi|^2\D \ell \\ \ns & \ds = 384s^3\l^4\varphi^3|\nabla\psi|^4 - \l^4\varphi O(s) - s^3\varphi^3 O(\l^3) + \ell_{tt}.
\end{array} \end{equation} Recalling that $x_0\in (\mathbb{R}^n\setminus \overline G)$, we know that $$|\nabla\psi|>0\;\;\mbox{ in }\overline G.$$ From \eqref{B} and \eqref{ltt1}, we know that there exists a $\l_0>0$ such that for all $\l>\l_0$, one can find a constant $s_0 = s_0(\l_0)$ so that for any $s>s_0$, it holds that \begin{equation}\label{B1} D|v|^2 \geq s^3\l^4\varphi^3|\nabla\psi|^4|v|^2. \end{equation} Since $$ \begin{array}{ll}\ds c^{jk} = 2\ell_{jk} - \d^{jk}\D \ell - \d^{jk}\Psi \\ \ns\ds\q\,\,\, = 32s\l^2\varphi\psi_j \psi_k + 16s\l\varphi\psi_{jk}, \end{array} $$ we see that \begin{equation}\label{cjk} \begin{array}{ll}\ds \q \ds\sum_{j,k=1}^n c^{jk}(v_j\ov_k + v_k\ov_j)\\ \ns \ds = 32s\l^2\varphi\sum_{j,k=1}^n\psi_j \psi_k(v_j\ov_k + v_k\ov_j) + 16s\l\varphi \sum_{j,k=1}^n \psi_{jk}(v_j\ov_k + v_k\ov_j)\\ \ns\ds = 32s\l^2\varphi\[\sum_{j=1}^n(\psi_jv_j)\sum_{k=1}^n (\psi_k \ov_k) + \sum_{k=1}^n(\psi_kv_k)\sum_{j=1}^n (\psi_j \ov_j) \] + 32s\l\varphi \sum_{j=1}^n(v_j\ov_j + \ov_j v_j)\\ \ns \ds = 64s\l^2\varphi |\nabla\psi\cd\nabla v|^2 + 64 s\l\f |\nabla v|^2\\ \ns \ds \geq 64 s\l\f |\nabla v|^2. \end{array} \end{equation} Now we estimate the other terms on the right-hand side of the equality \eqref{identity2.1}. The first one satisfies \begin{eqnarray}\label{ltj} \begin{array} {ll} \ds 2i\sum_{j=1}^n (\ell_{jt} + \ell_{tj})(\ov_j v - v_j\ov) & \ds = 4i\sum_{j=1}^n s\l\psi_j \ell_t(\ov_j v - \ov v_j)\\ \ns & \ds \leq 2 s\varphi |\nabla v|^2 + 2 s\l^2\varphi^3 |\nabla\psi|^2|v|^2. \end{array} \end{eqnarray} The second one reads \begin{eqnarray}\label{liiPsi} i(\Psi + \D \ell)(\ov dv - v d\ov) = 0. \end{eqnarray} For the estimates of the third and the fourth ones, we need to take the mean value, and we get that \begin{equation}\label{dvdov} \begin{array}{ll}\ds \mathbb{E}\big(\ell_t dv d\ov\big) \3n& \ds= \mathbb{E}\big[\ell_t(\theta \ell_t ydt + \theta dy)\overline{(\theta \ell_t ydt + \theta dy)}\big] = \mathbb{E}(\ell_t \theta^2 dy d\bar{y}) \\ \ns & \ds \leq 2s\theta^2 \varphi^{\frac{3}{2}}\mathbb{E}( a_3^2|y|^2 + g^2)dt. \end{array} \end{equation} Here we have utilized the inequality \eqref{lt1}. \vspace{0.1cm} Since $$ \begin{array}{ll}\ds \mathbb{E}(d\ov_j dv) & = \mathbb{E}\big[\overline{\big( \theta \ell_t y dt + \theta dy \big)}_j \big( \theta \ell_t y dt + \theta dy \big)\big] \\ \ns& \ds = \mathbb{E} \big[\, \overline{(\theta dy)}_j (\theta dy) \big]\\ \ns& \ds = \mathbb{E} \big[\, \overline{\big( s\l\f\psi_j\theta dy + \theta dy_j \big)}\theta dy \big]\\ \ns & \ds = s\l\f\psi_j\theta^2 \mathbb{E}d\bar y dy + \theta^2 \mathbb{E}d\bar y_j dy \\ \ns & \ds = s\l\f\psi_j\theta^2 \mathbb{E}|a_3y + g|^2dt + \theta^2 \mathbb{E}\big[\,\overline{ (a_3 y + g) }_j (a_3 y + g) \big]dt \end{array} $$ and $$ \begin{array}{ll}\ds \q\theta^2 \mathbb{E}\big[\,\overline{ (a_3 y + g) }_j (a_3 y + g) \big]dt\\ \ns\ds = \theta^2 \mathbb{E}\big[(\overline{a_3 y})_j (a_3 y) + (\overline{a_3 y})_j g + (a_3 y )\bar g_j + g\bar g_j \big] dt\\ \ns\ds =\theta^2 \mathbb{E}\big[(\overline{a_3 y})_j (a_3 y) + (\overline{a_3 y})_j g + g\bar g_j \big] dt + [\mE\theta^2(a_3 y )\bar g]_j \\ \ns\ds \q - s\l\f\psi_j\theta^2\mE(a_3 y \bar g)-\th^2\mE[(a_3y)_j\bar g], \end{array} $$ we get that $$ \begin{array}{ll}\ds \mathbb{E}(d\ov_j dv) \3n&\ds= s\l\f\psi_j\theta^2 \mathbb{E}|a_3y + g|^2dt + \theta^2 \mathbb{E}\big[(\overline{a_3 y})_j (a_3 y) + (\overline{a_3 y})_j g + g\bar g_j \big] dt\\ \ns&\ds \q + \mE(\theta^2 a_3 y \bar g)_j - s\l\f\psi_j\theta^2\mE(a_3 y \bar g)-\th^2\mE[(a_3y)_j\bar g].
\end{array} $$ Similarly, we can get that $$ \begin{array}{ll}\ds \mathbb{E}(dv_j d\ov)\3n& \ds= s\l\f\psi_j\theta^2 \mathbb{E}|a_3y + g|^2dt + \theta^2 \mathbb{E}\big[(\overline{a_3 y}) (a_3 y)_j + (a_3 y )_j\bar g + g_j\bar g \big] dt\\ \ns&\ds \q + \mE(\theta^2 \overline{a_3 y} g)_j - s\l\f\psi_j\theta^2\mE(\overline{a_3 y} g)-\th^2\mE[(\overline{a_3 y})_j g]. \end{array} $$ Therefore, the fourth one satisfies \begin{equation}\label{dvjdv} \begin{array}{ll} \ds \q i\mathbb{E}\sum_{j=1}^n \ell_j (d\ov_j dv - dv_j d\ov)\\ \ns\ds = s\l\varphi\sum_{j=1}^n \psi_j \[\mathbb{E}\big(d\ov_j dv\big) - \mathbb{E}\big(dv_j d\ov\big) \] \\ \ns \ds = s\l\varphi \sum_{j=1}^n \psi_j \theta^2 \mathbb{E}\Big\{\big[(\overline{a_3 y})_j (a_3 y) + (\overline{a_3 y})_j g + g\bar g_j - s\l\f\psi_j a_3 y \bar g - (a_3y)_j \bar g\big]\\ \ns \ds \q - \big[(\overline{a_3 y}) (a_3 y)_j + (a_3 y )_j\bar g + g_j\bar g - s\l\f\psi_j (\overline{a_3 y} g)- [(\overline{a_3 y})_j g]\big]\Big\} dt \\ \ns\ds \q + s\l\varphi \sum_{j=1}^n \psi_j \mathbb{E}\big(\th^2 a_3 y \bar g - \theta^2 \overline{a_3 y} g \big)_j. \end{array} \end{equation} \textbf{Step 3.} Integrating the equality \eqref{identity2.1} over $Q$, taking the mean value on both sides, and noting \eqref{Id2eq2}--\eqref{dvjdv}, we obtain that \begin{equation}\label{inep1} \begin{array}{ll} \ds \q\mathbb{E}\int_Q \Big(s^3\l^4\varphi^3 |v|^2 \!+\! s\l\varphi |\nabla v|^2\Big) dxdt + 2\mathbb{E}\int_Q \Big|\!- i\b \ell_t v - 2\!\!\sum_{j,k=1}^n b^{jk}\ell_j v_k + \!\Psi v\Big|^2dxdt\\ \ns \ds \leq \mathbb{E}\int_Q \Big\{ \theta\cP y {\Big( i\b \ell_t \bar{v}\! -\! 2\!\! \sum_{j,k=1}^n\! b^{jk}\ell_j \bar{v}_k + \!\Psi \bar{v}\Big)} + \theta\overline{\cP y} {\Big(\!-\! i\b \ell_t v\! - \!2\! \sum_{j,k=1}^n\! b^{jk}\ell_j v_k + \!\Psi v\Big)} \Big\}dx\\ \ns \ds \q +\; C\mathbb{E}\int_Q \theta^2\Big[s^2\l^2 \varphi^2(a_3^2|y|^2 + g^2) + a_3^2|\nabla y|^2 + |\nabla a_3|^2 y^2 + |\nabla g|^2\Big] dxdt\\ \ns\ds \q + \;\mathbb{E}\int_Q dM dx + \mathbb{E}\int_Q \div V dx. \end{array} \end{equation} Now we analyze the terms on the right-hand side of the inequality \eqref{inep1} one by one. The first term satisfies \begin{equation}\label{intprin} \begin{array}{ll} \ds \mathbb{E}\int_Q \Big\{ \theta\cP y {\Big( i\b \ell_t \bar{v} - 2\sum_{j,k=1}^n b^{jk}\ell_j \bar{v}_k + \Psi \bar{v}\Big)}\\ \ns \ds \q +\; \theta\overline{\cP y} {\Big(- i\b \ell_t v - 2\sum_{j,k=1}^n b^{jk}\ell_j v_k + \Psi v\Big)} \Big\}dx \\ \ns \ds = \ds \mathbb{E}\int_Q \Big\{ \theta (a_1 \cdot \nabla y + a_2 y + f) {\( i\b \ell_t \bar{v} - 2\sum_{j,k=1}^n b^{jk}\ell_j \bar{v}_k + \Psi \bar{v}\)}\\ \ns\ds \q +\; \theta {(a_1 \cdot \nabla \bar{y} + \overline{a_2 y} + \bar{f})} {\Big(- i\b \ell_t v - 2\sum_{j,k=1}^n b^{jk}\ell_j v_k + \Psi v\Big)} \Big\}dxdt\nonumber\\ \ns \ds \leq 2\mathbb{E}\int_Q \Big\{\theta^2\big|a_1 \cdot \nabla y + a_2 y + f\big|^2 + \Big|- i\b \ell_t v - 2\sum_{j,k=1}^n b^{jk}\ell_j v_k + \Psi v\Big|^2 \Big\}dxdt. \end{array} \end{equation} From the definition of $\theta$, we know that $\ell\to-\infty$ as $t\to 0^+$ and as $t\to T^-$, and hence $v(0)=v(T)=0$. Therefore, it holds that \begin{equation}\label{idm} \int_Q dM dx = 0.
\end{equation} By means of Stokes' Theorem, we have that \begin{eqnarray}\label{intV} \begin{array} {ll} \ds \mathbb{E}\int_Q \div V dx \3n&\ds = \ds \mathbb{E}\int_{\Si} 2\sum_{k=1}^n\sum_{j=1}^n\Big[ \ell_j\big(\ov_j v_k + v_j \ov_k\big)\nu^k - \ell_k \nu_k v_j \ov_j \Big]d\Si\\ \ns &\ds = \ds \mathbb{E}\int_{\Si} \Big(4\sum_{j=1}^n \ell_j \nu_j \Big| \frac{\pa v}{\pa \nu} \Big|^2 - 2\sum_{k=1}^n \ell_k \nu_k \Big| \frac{\pa v}{\pa \nu} \Big|^2\Big) d\Si\\ \ns &= \ds \mathbb{E}\int_{\Si} 2\sum_{k=1}^n \ell_k \nu_k \Big| \frac{\pa v}{\pa \nu} \Big|^2 d\Si \\ \ns &\ds \leq C\mathbb{E}\int_0^T \int_{\G_0} \theta^2 s\l\varphi \Big| \frac{\pa y}{\pa \nu} \Big|^2 d\G dt. \end{array} \end{eqnarray} By (\ref{inep1})--(\ref{intV}), we have that \begin{eqnarray}\label{car1} \begin{array}{ll} \q \ds \mathbb{E}\int_Q \Big(s^3\l^4\varphi^3 |v|^2 + s\l\varphi |\nabla v|^2\Big) dxdt \\ \ns \ds \leq C\,\mathbb{E}\int_Q \theta^2 |a_1 \cdot \nabla y + a_2 y + f|^2 dxdt + C\,\mathbb{E}\int_0^T\int_{\G_0}\theta^2 s\l\varphi\Big| \frac{\pa y}{\pa \nu}\Big|^2d\G dt\\ \ns \ds \q +\, C\mathbb{E}\int_Q \theta^2\Big[s^2\l^2 \varphi^2\big(a_3^2|y|^2 + g^2\big) + a_3^2|\nabla y|^2 + |\nabla a_3|^2 y^2 + |\nabla g|^2\Big] dxdt. \end{array} \end{eqnarray} Noting that $y_i = \theta^{-1}(v_i - \ell_i v) = \theta^{-1}(v_i - 4s\l\varphi\psi_i v)$, we get \begin{equation}\label{vtoy} \theta^2\big(|\nabla y|^2 + s^2\l^2\varphi^2 |y|^2\big)\leq C\big(|\nabla v|^2 + s^2\l^2\varphi^2 |v|^2\big). \end{equation} Therefore, it follows from (\ref{car1}) that \begin{equation}\label{car2} \begin{array}{ll} \ds \q\mathbb{E}\int_Q \Big(s^3\l^4\varphi^3 |y|^2 + s\l\varphi |\nabla y|^2\Big) dxdt \\ \ns \ds \leq C\mathbb{E}\int_Q \theta^2\Big( |a_1|^2 |\nabla y|^2 + a_2^2 |y|^2 + |f|^2\Big) dxdt + C\mathbb{E}\int_0^T\int_{\G_0}\theta^2 s\l\varphi\Big| \frac{\pa y}{\pa \nu}\Big|^2d\G dt \\ \ns \ds \q + C\mathbb{E}\int_Q \theta^2\Big[s^2\l^2 \varphi^2\big(a_3^2|y|^2 + g^2\big) + a_3^2|\nabla y|^2 + |\nabla a_3|^2 y^2 + |\nabla g|^2\Big] dxdt. \end{array} \end{equation} Taking $\l_1 =\l_0$ and $s_1 = \max(s_0, Cr_1)$, and utilizing the inequality \eqref{car2}, we conclude the desired inequality \eqref{carleman est}. On the other hand, if $g\in L^2_\cF(0,T;H^1(G;\mathbb{R}))$, then $g\bar g_j - g_j\bar g=0$ for $j=1,\cds,n$. Thus, from \eqref{Id2eq2}--\eqref{dvjdv}, we get \begin{equation}\label{inep1z} \begin{array}{ll} \ds \q\mathbb{E}\int_Q \Big(s^3\l^4\varphi^3 |v|^2 + s\l\varphi |\nabla v|^2\Big) dxdt + 2\mathbb{E}\int_Q \Big|\!- i\b \ell_t v \!- \!2\!\sum_{j,k=1}^n b^{jk}\ell_j v_k \! +\! \Psi v\Big|^2dxdt\\ \ns \ds \leq \mathbb{E}\int_Q \Big\{ \theta\cP y {\Big( i\b \ell_t \bar{v} \!-\! 2\!\!\sum_{j,k=1}^n b^{jk}\ell_j \bar{v}_k + \Psi \bar{v}\Big)} + \theta\overline{\cP y} {\Big(\!\!-\! i\b \ell_t v\! -\! 2\!\!\sum_{j,k=1}^n b^{jk}\ell_j v_k\! + \!\Psi v\Big)} \Big\}dx\\ \ns \ds \q +\; C\mathbb{E}\int_Q \theta^2\Big[s^2\l^2 \varphi^2\big(a_3^2|y|^2 + g^2\big) + a_3^2|\nabla y|^2 + |\nabla a_3|^2 y^2 \Big] dxdt + \mathbb{E}\int_Q dM dx \\ \ns\ds \qq + \mathbb{E}\int_Q \div V dx.
\end{array} \end{equation} Then, by a similar argument, we find that \begin{equation}\label{car2z} \begin{array}{ll} \ds \q\mathbb{E}\int_Q \Big(s^3\l^4\varphi^3 |y|^2 + s\l\varphi |\nabla y|^2\Big) dxdt \\ \ns \ds \leq C\mathbb{E}\int_Q \theta^2\Big( |a_1|^2 |\nabla y|^2 + a_2^2 |y|^2 + |f|^2\Big) dxdt + C\mathbb{E}\int_0^T\int_{\G_0}\theta^2 s\l\varphi\Big| \frac{\pa y}{\pa \nu}\Big|^2d\G dt \\ \ns \ds \q + C\mathbb{E}\int_Q \theta^2\Big[s^2\l^2 \varphi^2\big(a_3^2|y|^2 + g^2\big) + a_3^2|\nabla y|^2 + |\nabla a_3|^2 y^2 \Big] dxdt. \end{array} \end{equation} Now taking $\l_1 =\l_0$ and $s_1 = \max(s_0, Cr_1)$, and using the inequality \eqref{car2z}, we obtain the desired inequality \eqref{carleman est1}. \section{Proof of Theorem \ref{observability}} In this section, we prove Theorem \ref{observability} by means of Theorem \ref{thcarleman est}. {\em Proof of Theorem \ref{observability}}: By the definition of $\ell$ and $\theta$ (see \eqref{lvarphi}), it holds that \begin{eqnarray}\label{final1} \begin{array}{ll} \ds \q\mathbb{E}\int_Q \theta^2\Big(\varphi^3 |y|^2 + \varphi |\nabla y|^2\Big) dxdt\\ \ns \ds \geq \min_{x\in\overline{G}}\Big(\varphi\Big(\frac{T}{2},x\Big) \theta^2\Big(\frac{T}{4},x\Big)\Big)\mathbb{E}\int_{\frac{T}{4}}^{\frac{3T}{4}}\int_G\big(|y|^2+|\nabla y|^2\big)dxdt, \end{array} \end{eqnarray} \begin{equation}\label{final2} \begin{array}{ll} \ds \q\mathbb{E}\int_Q \theta^2\big(|f|^2 + \varphi^2|g|^2 + |\nabla g|^2\big)dxdt \\ \ns \ds\leq \max_{(x,t)\in \overline{Q}}\big(\varphi^2(t,x)\theta^2(t,x)\big)\mathbb{E}\int_Q\big(|f|^2 + |g|^2 + |\nabla g|^2\big)dxdt \end{array} \end{equation} and that \begin{equation}\label{final3} \mathbb{E}\int_0^T\int_{\G_0}\theta^2 \varphi\Big| \frac{\pa y}{\pa \nu}\Big|^2d\G dt \leq \max_{(x,t)\in \overline{Q}}\big(\varphi(t,x)\theta^2(t,x)\big)\mathbb{E}\int_0^T\int_{\G_0} \Big| \frac{\pa y}{\pa \nu}\Big|^2d\G dt. \end{equation} From \eqref{carleman est} and \eqref{final1}--(\ref{final3}), we deduce that \begin{equation}\label{final4} \begin{array}{ll} \ds \q\mathbb{E}\int_{\frac{T}{4}}^{\frac{3T}{4}}\int_G\big(|y|^2+|\nabla y|^2\big)dxdt\\ \ns \ds\leq C r_1 \frac{\max_{(x,t)\in \overline{Q}}\Big(\varphi^2(t,x)\theta^2(t,x)\Big)}{\min_{x\in\overline{G}}\Big(\varphi(\frac{T}{2},x)\theta^2(\frac{T}{4},x)\Big)}\\ \ns \ds \q\times\left\{ \mathbb{E}\int_Q\big(|f|^2 + |g|^2 + |\nabla g|^2\big)dxdt + \mathbb{E}\int_0^T\int_{\G_0} \Big| \frac{\pa y}{\pa \nu}\Big|^2d\G dt\right\}\\ \ns \ds \leq e^{ Cr_1 }\left\{ \mathbb{E}\int_Q\big(|f|^2 + |g|^2 + |\nabla g|^2\big)dxdt + \mathbb{E}\int_0^T\int_{\G_0} \Big| \frac{\pa y}{\pa \nu}\Big|^2d\G dt\right\}. \end{array} \end{equation} Utilizing (\ref{final4}) and (\ref{energyesti1}), we obtain that \begin{equation}\label{final5} \begin{array}{ll} \ds \q\mathbb{E}\int_G\big(|y_0|^2 + |\nabla y_0|^2\big)dx \\ \ns \ds \leq e^{C r_1 }\left\{ \mathbb{E}\int_Q\big(|f|^2 + |\nabla f|^2 + |g|^2 + |\nabla g|^2\big)dxdt + \mathbb{E}\int_0^T\int_{\G_0} \Big| \frac{\pa y}{\pa \nu}\Big|^2d\G dt\right\}, \end{array} \end{equation} which implies Theorem \ref{observability} immediately.\endproof \medskip \section{Two applications}\label{Sec app} This section is devoted to applications of the observability/Carleman estimates established in Theorems \ref{observability} and \ref{thcarleman est}. We first study a state observation problem for semilinear stochastic Schr\"{o}dinger equations.
Let us consider the following equation: \begin{equation}\label{system2} \!\!\left\{ \begin{array}{ll} \ds idz + \D zdt=\big[\,a_1 \cdot \nabla z + a_2 z +F_1(|z|)\big]dt + \big[a_3 z +F_2(|z|)\big] dB(t)&\mbox{ in } Q,\\ \ns\ds z=0&\mbox{ on }\Si,\\ \ns\ds z(0)=z_0 &\mbox{ in }G. \end{array} \right. \end{equation} Here $a_i$ ($i=1,2,3$) are given as in \eqref{coai}; $F_1(\cd)\in C^1(\mathbb{R}; \mathbb{C})$ with $F_1(0)=0$ and $F_2(\cd)\in C^1(\mathbb{R}; \mathbb{R})$ are two known globally Lipschitz continuous functions with Lipschitz constant $L$, while the initial datum $z_0\in L^2(\O,\cF_0,P; H_0^1(G))$ is unknown. The solution to the equation \eqref{system2} is understood similarly to Definition \ref{def1}. \begin{remark} From the choice of $F_1$ and $F_2$, one can easily show that the equation \eqref{system2} admits a unique solution $z\in H_T$ by a standard fixed point argument. We omit the proof here. \end{remark} The state observation problem associated to the equation \eqref{system2} is as follows. \vspace{0.1cm} \begin{itemize} \item {\bf Identifiability}. Is the solution $z\in H_T$ (to \eqref{system2}) determined uniquely by the observation $\ds\frac{\pa z}{\pa\nu}\Big|_{(0,T)\times \G_0}$? \vspace{0.1cm} \item {\bf Stability}. Assume that two solutions $z$ and $\hat z$ (to \eqref{system2}) are given, and let $\ds\frac{\pa z}{\pa\nu}\Big|_{(0,T)\times \G_0}$ and $\ds\frac{\pa \hat z}{\pa\nu}\Big|_{(0,T)\times \G_0}$ be the corresponding observations. Can we find a positive constant $C$ such that $$ \| z-\hat z \| \leq C\Big\| \frac{\pa z}{\pa\nu}-\frac{\pa \hat z}{\pa\nu} \Big\|, $$ with appropriate norms on both sides? \vspace{0.1cm} \item {\bf Reconstruction}. Is it possible to reconstruct the solution $z\in H_T$ of \eqref{system2}, in some sense, from the observation $\ds\frac{\pa z}{\pa\nu}\Big|_{(0,T)\times \G_0}$? \end{itemize} The state observation problem for systems governed by deterministic partial differential equations has been studied extensively (see \cite{Kli,Li1,Yamamoto} and the rich references therein). However, the stochastic case has attracted very little attention. To the best of our knowledge, \cite{Zhangxu3} is the only published paper addressing this topic. In that paper, the author studied the state observation problem for semilinear stochastic wave equations. By means of Theorem \ref{observability}, we can give positive answers to the first and second questions above. We claim that $\frac{\pa z}{\pa\nu}|_{(0,T)\times \G_0}\in L^2_\cF(0,T;L^2(\G_0))$ (and therefore the observation makes sense). Indeed, from the choice of $F_1$, it follows that $$ \begin{array}{ll}\ds \mathbb{E}\int_0^T\!\int_G \big|\n \big(F_1(|z|)\big)\big|^2dxdt \3n& \ds= \mathbb{E}\int_0^T\!\int_G \big| F_1' (|z|)\n |z| \big|^2dxdt \leq L^2\mathbb{E}\int_0^T\!\int_G \big|\n |z|\big|^2dxdt\\ \ns&\ds \leq L^2\mathbb{E}\int_0^T\int_G \big|\n z\big|^2dxdt, \end{array} $$ and $$ F_1(|z(t,\cd)|)=0 \q\mbox{ on } \G \mbox{ for a.e. }\ t\in [0,T]. $$ Hence, $$ F_1(|z|)\in L^2_{\cF}(0,T;H_0^1(G)) \mbox{ for any } z\in H_{T}. $$ Similarly, $$ F_2(|z|)\in L^2_{\cF}(0,T;H^1(G)) \mbox{ for any } z\in H_{T}. $$ Consequently, by Proposition~\ref{hregularity}, we find that $\frac{\pa z}{\pa\nu}|_{(0,T)\times\G_0}\in L^2_\cF(0,T;L^2(\G_0))$. Now, we define a nonlinear map as follows: $$ \left\{ \begin{array}{ll}\ds \cM:\ L^2(\O,\cF_0,P;H_0^1(G))\to L^2_{\cF}(0,T;L^2(\G_0)),\\ \ns\ds \cM(z_0)= {\frac{\pa z}{\pa \nu}}\Big|_{(0,T)\times \G_0}, \end{array} \right. $$ where $z$ solves the equation \eqref{system2}.
We have the following result. \begin{theorem}\label{th2} There exists a constant $\wt C=\wt C(L,T,G)>0$ such that for any $z_0, \hat z_0\in L^2(\O,\cF_0,P;H_0^1(G))$, it holds that \begin{equation}\label{th2eq1} |z_0-\hat z_0|_{L^2(\O,\cF_0,P;L^2(G))} \le \wt C|\cM(z_0) -\cM(\hat z_0)|_{L^2_{\cF}(0,T;L^2(\G_0))}, \end{equation} where $\hat z=\hat z(\cd\, ;\hat z_0)\in H_{T}$ is the solution to \eqref{system2} with $z_0$ replaced by $\hat z_0$. \end{theorem} \begin{remark} From the well-posedness of the equation \eqref{system2}, Theorem~\ref{th2} indicates that the state $z(t)$ of \eqref{system2} (for $t\in [0,T]$) can be uniquely determined from the observed boundary data $\ds{\frac{\pa z}{\pa \nu}} \Big|_{(0,T)\t\G_0}$, $P$-a.s., and depends continuously on it. Therefore, we answer the first and second questions of the state observation problem for the system \eqref{system2} positively. \end{remark} {\it Proof of Theorem~\ref{th2}}\,: Set $$ y=z-\hat z. $$ Then, it is easy to see that $y$ satisfies $$ \left\{ \begin{array}{ll}\ds idy + \D y dt = \big[ a_1 \cdot \nabla y + a_2 y +F_1(|z|)-F_1(|\hat z|) \big]dt \\ \ns\ds \hspace{2.2cm} + \big[ a_3 y + F_2(|z|)-F_2(|\hat z|) \big]dB(t) &\mbox{ in } Q,\\ \ns\ds y=0 &\mbox{ on }\Si,\\ \ns\ds y(0)=z_0-\hat z_0 &\mbox{ in } G. \end{array} \right. $$ Also, it is clear that $$ F_1(|z|)-F_1(|\hat z|)\in L^2_{\cF}(0,T;H_0^1(G)) $$ and $$ F_2(|z|)-F_2(|\hat z|)\in L^2_{\cF}(0,T;H^1(G)). $$ Hence, we know that $y$ solves the equation \eqref{system1} with $$ \left\{ \begin{array}{ll}\ds f=F_1(|z|)-F_1(|\hat z|),\\ \ns\ds g=F_2(|z|)-F_2(|\hat z|). \end{array} \right. $$ By means of the inequality \eqref{carleman est1} in Theorem \ref{thcarleman est}, there exist an $s_1>0$ and a $\l_1>0$ so that for all $s\geq s_1$ and $\l\geq \l_1$, it holds that $$ \begin{array}{ll}\ds \q\mathbb{E}\int_Q \theta^2\Big(s^3\l^4\varphi^3 |y|^2 + s\l\varphi |\nabla y|^2\Big) dxdt \\ \ns \ds \leq C \Big\{\mathbb{E}\int_Q \theta^2 \Big(|f|^2 + s^2\l^2\varphi^2 |g|^2 \Big)dxdt + \mathbb{E}\int_0^T\int_{\G_0}\theta^2 s\l\varphi\Big| \frac{\pa y}{\pa \nu}\Big|^2d\G dt \Big\}. \end{array} $$ By the choice of $f$, we see that $$ \begin{array}{ll}\ds \mathbb{E}\int_Q \theta^2 |f|^2dxdt \3n&\ds\leq \mathbb{E}\int_Q \theta^2 |F_1(|z|)-F_1(|\hat z|)|^2dxdt \leq L^2\mathbb{E}\int_Q \theta^2 (|z|-|\hat z|)^2dxdt\\ \ns&\ds \leq L^2\mathbb{E}\int_Q \theta^2 |z - \hat z|^2dxdt \leq L^2\mathbb{E}\int_Q \theta^2 |y|^2dxdt. \end{array} $$ Similarly, $$ s^2\l^2\mathbb{E}\int_Q \theta^2 \f^2 |g|^2dxdt \leq L^2 s^2\l^2\mathbb{E}\int_Q \theta^2\f^2 |y|^2dxdt. $$ Hence, we obtain that $$ \begin{array}{ll}\ds \q\mathbb{E}\int_Q \theta^2\Big(s^3\l^4\varphi^3 |y|^2 + s\l\varphi |\nabla y|^2\Big) dxdt \\ \ns \ds \leq C\Big\{L^2 \mathbb{E}\int_Q \theta^2 \Big(|y|^2 + s^2\l^2\varphi^2 |y|^2 \Big)dxdt + \mathbb{E}\int_0^T\int_{\G_0}\theta^2 s\l\varphi\Big| \frac{\pa y}{\pa \nu}\Big|^2d\G dt \Big\}. \end{array} $$ Thus, there is a $\l_2\geq \max\{\l_1, CL\}$ such that for all $s\geq s_1$ and $\l\geq \l_2$, it holds that \begin{equation}\label{10.30eq1} \mathbb{E}\int_Q \theta^2\Big(s^3\l^4\varphi^3 |y|^2 + s\l\varphi |\nabla y|^2\Big) dxdt \leq C \mathbb{E}\int_0^T\int_{\G_0}\theta^2 s\l\varphi\Big| \frac{\pa y}{\pa \nu}\Big|^2d\G dt.
\end{equation} Further, similarly to the proof of the inequality \eqref{Eyt}, we can obtain that, for any $0\leq t \leq s\leq T$, \begin{equation}\label{Eyt1} \begin{array}{ll}\ds \mathbb{E}| y(t)|^2_{ L^2(G)} - \mathbb{E}| y(s)|^2_{ L^2(G)}\3n &\ds \leq 2\mathbb{E}\int_t^s\int_G \Big[ |f|^2 + |g|^2 \Big]dxd\si\\ \ns&\ds \leq CL^2\mathbb{E}\int_t^s\int_G |y|^2 dxd\si. \end{array} \end{equation} Then, by Gronwall's inequality, we find that \begin{equation}\label{Eyt2} \mathbb{E}| y(t)|^2_{ L^2(G)} \leq e^{CL^2}\mathbb{E} | y(s)|^2_{ L^2(G)}, \mbox{ for any } 0\leq t\leq s \leq T. \end{equation} Combining \eqref{10.30eq1} and \eqref{Eyt2}, and arguing similarly to the derivation of the inequality \eqref{final5}, we obtain the inequality \eqref{th2eq1}. \endproof \vspace{0.2cm} Now we consider the unique continuation property for the equation \eqref{system1}. There are numerous works on the unique continuation property for deterministic partial differential equations. The study in this respect began at the very beginning of the 20th century, while a climax appeared in the 1950s--70s. The most powerful tool in that period was the local Carleman estimate (see \cite{Hor1} for example). Nevertheless, most of the works at that time were devoted to the local unique continuation property. In the past 20 years, motivated by control and inverse problems for partial differential equations, the study of the global unique continuation property has been very active (see \cite{Castro-Zuazua,Zhangxu1,Zhang-Zuazua} and the rich references therein). Compared with the fruitful works on the unique continuation property in the deterministic setting, there exist few results for stochastic partial differential equations. As far as we know, \cite{Zhangxu4,Zhangxu2} are the only two published articles addressing this topic, and there is no result on the global unique continuation property for stochastic Schr\"{o}dinger equations in the previous literature. We remark that the powerful approach based on local Carleman estimates in the deterministic setting is very hard to apply to the stochastic counterpart. Indeed, the usual way of employing local Carleman estimates for unique continuation requires localizing the problem. Unfortunately, one cannot simply localize the problem as usual in the stochastic situation, since the usual localization technique may change the adaptedness of solutions, which is a key feature in the stochastic setting. In this paper, as a consequence of Theorem \ref{observability} (which is based on the global Carleman estimate established in Theorem \ref{thcarleman est}), we obtain the following unique continuation property for solutions to the equation \eqref{system1}. \vspace{0.1cm} \begin{theorem}\label{ucp} For any $\e>0$, let $$O_\e([0,T]\t\G_0)\=\Big\{(t,x)\in Q :\,\dist\big((t,x),[0,T]\t\G_0\big)\leq \e \Big\}.$$ Let $f=g=0$, $P$-a.s. If a solution $y$ of the equation \eqref{system1} satisfies \bel{zx11} y = 0\ \ \hbox{ in }O_\e([0,T]\t\G_0), \ P\hbox{-a.s.}, \ee then $y=0$ in $Q$, $P$-a.s. \end{theorem} {\em Proof}\,: By \eqref{zx11}, we see that $\ds\frac{\pa y}{\pa\nu}=0$ on $(0,T)\t\G_0$, $P$-a.s. Hence, by means of Theorem \ref{observability}, we find that $y(0)=0$ in $L^2(\O,\cF_0,P;H_0^1(G))$. Consequently, we conclude that $y=0$ in $Q$, $P$-a.s. \endproof \section{Further comments and open problems} The subject of this paper is full of open problems.
Some of them seem to be particularly relevant and may require important new ideas and further developments: \begin{itemize} \item {\bf Observability estimate for backward stochastic Schr\"{o}dinger equations.} Compared with Theorem \ref{observability}, it is more interesting and difficult to establish the boundary observability estimate for backward stochastic Schr\"{o}dinger equations. More precisely, let us consider the following backward stochastic Schr\"{o}dinger equation: \begin{equation}\label{bsystem1} \!\!\left\{ \begin{array}{lll} \ds idu + \D u dt = (a_1\cd \nabla u + a_2 u + f)dt + (a_3 u + U+ g)dB(t) &\mbox{ in } Q,\\ \ns\ds u = 0 &\mbox{ on } \Si,\\ \ns\ds u(T) = u_T &\mbox{ in } G. \end{array} \right. \end{equation} Here the final state is $u_T\in L^2(\O,\cF_T,P;H_0^1(G))$, $\{\cF_t\}_{t\geq 0}$ is the natural filtration generated by $\{B(t)\}_{t\geq 0}$, and the solution is the pair $(u, U)$. We expect the following result:\\ {\it Under the assumptions \eqref{G0}--\eqref{cA}, any solution of the equation \eqref{bsystem1} satisfies \begin{equation} \label{bobser esti2} \begin{array}{ll}\ds \q |u_T|_{L^2(\Omega,{ \mathcal{F}}_T, P; H_0^1(G))} \\ \ns\ds \leq e^{C r_1} \Big(\Big|\frac{\partial u}{\partial \nu}\Big |_{L^2_{ \mathcal{ F}}(0,T;L^2(\Gamma_0))} + |f|_{L^2_{ \mathcal{ F}}(0,T;H_0^1(G))} + |g|_{L^2_{ \mathcal{ F}}(0,T;H^1(G))}\Big), \end{array} \end{equation} or at least, \begin{equation} \label{bobser esti3} \begin{array}{ll}\ds \q |u(0)|_{L^2(\Omega,{ \mathcal{F}}_0, P; H_0^1(G))} \\ \ns\ds \leq e^{C r_1} \Big(\Big|\frac{\partial u}{\partial \nu}\Big |_{L^2_{ \mathcal{ F}}(0,T;L^2(\Gamma_0))} + |f|_{L^2_{ \mathcal{ F}}(0,T;H_0^1(G))} + |g|_{L^2_{ \mathcal{ F}}(0,T;H^1(G))}\Big). \end{array} \end{equation}} Unfortunately, following the method in this paper, one could obtain only an inequality of the following form: \begin{equation} \label{bobser esti4} \begin{array}{ll}\ds \q |u_T|_{L^2(\Omega,{ \mathcal{F}}_T, P; H_0^1(G))} \\ \ns\ds \leq e^{C r_1} \Big(\Big|\frac{\partial u}{\partial \nu}\Big |_{L^2_{ \mathcal{ F}}(0,T;L^2(\Gamma_0))} + |U|_{L^2_\cF(0,T;H^1(G))} + |f|_{L^2_{\mathcal{ F}}(0,T;H_0^1(G))}\\ \ns\ds \qq\qq + |g|_{L^2_{ \mathcal{ F}}(0,T;H^1(G))}\Big). \end{array} \end{equation} It seems to us that getting rid of the undesired term $|U|_{L^2_\cF(0,T;H^1(G))}$ in the inequality \eqref{bobser esti4} is a very challenging task. \item {\bf Construction of the solution $z$ from the observation.} In this paper, we only answer the first and the second questions of the state observation problem. The third one is still open. Since the equation \eqref{system2} is time-irreversible, some efficient approaches (see \cite{Li1} for example), which work well for time-reversible systems, become invalid. On the other hand, we may consider the following minimization problem: {\it Find a $\bar z_0\in L^2(\O,\cF_0,P;H_0^1(G))$ such that $$ \| \frac{\pa \bar z}{\pa\nu} - h \|_{L^2_\cF(0,T;L^2(\G_0))}=\min_{z_0\in L^2(\O,\cF_0,P;H_0^1(G))}\| \frac{\pa z}{\pa\nu} - h \|_{L^2_\cF(0,T;L^2(\G_0))}, $$ where $h\in L^2_\cF(0,T;L^2(\G_0))$ is the observation and $z$ (\resp$\bar z$) is the solution to the equation \eqref{system2} with initial datum $z_0$ (\resp$\bar z_0$).} \no It seems that one may utilize methods from optimization theory to study the construction of $z_0$. Because of the stochastic nature of the problem, this is an interesting but difficult task, and the detailed analysis is beyond the scope of this paper.
\item{\bf Unique continuation property with less restrictive conditions.} In this paper, we show that, under the condition \eqref{zx11}, $y=0$ in $Q$, $P$-a.s. Compared to the classical unique continuation results for deterministic Schr\"{o}dinger equations with time-independent coefficients (see \cite{Es1,Lebeau} for example), the condition \eqref{zx11} is too restrictive. It would be quite interesting, but probably challenging, to determine whether the result in \cite{Es1} holds for stochastic Schr\"{o}dinger equations. In fact, as far as we know, it is not even known whether the results in \cite{Es1,Lebeau} hold for deterministic Schr\"{o}dinger equations with time-dependent lower-order coefficients, which constitute a particular case of the equation \eqref{system1}. \end{itemize} \section*{Acknowledgments} This paper is an improved version of one chapter of the author's Ph.D. thesis (\cite{Luqi2}), completed at Sichuan University under the guidance of Professor Xu Zhang. The author would like to take this opportunity to thank him deeply for his help. The author also highly appreciates the anonymous referees for their constructive comments.
\section{Introduction} \label{sec:intro} In recent years, a strong, interdisciplinary effort towards the realization of topological phases of matter has evolved, bringing in particular topological insulators~\cite{hasan_colloquium:_2010} and superconductors~\cite{qi_topological_2011} into focus. Here, chiral $p_x + i\,p_y$ superconductors in two dimensions (2D) are of particular interest, as the appearance of robust Majorana modes might play an important role in realizing fault-tolerant quantum computation~\cite{nayak_non-abelian_2008}. Currently, Majorana modes are discussed in 1D nanowires (see e.g.~\cite{mourik_signatures_2012, albrecht_exponential_2016}) and other solid-state systems; for example, ${\rm Sr}_2{\rm RuO}_4$ has been proposed to form a $p$-wave superconductor~\cite{kallin_chiral_2012}. Fine-tuned control of the possible Majorana modes in these systems seems difficult, and it is highly desirable to introduce additional platforms to realize topological superfluids with good control of their properties. Ultracold atom systems could be promising candidates for this application~\cite{goldman_topological_2016}. Additionally, using multi-species systems is a familiar pathway for broadening the range of accessible physical questions and, in particular, realizing mixed-dimensional systems~\cite{nishida_universal_2008}. Following these ideas, it was shown as early as ten years ago that a two-species Fermi gas with one species confined in a 2D plane and immersed in a 3D Fermi sea of the other species could lead to $p_x + i\,p_y$ superfluidity facilitated by inter-species $s$-wave interaction~\cite{nishida_induced_2009}. Later, using Berezinskii-Kosterlitz-Thouless theory, it was found that Fermi-Bose mixtures in mixed dimensions also support this topological superfluid~\cite{wu_topological_2016}. More detailed calculations revealed that, when higher-order contributions are considered, the $p$-wave gap can be even larger than previously expected~\cite{caracanhas_fermibose_2017}. The authors of~\cite{caracanhas_fermibose_2017} also provide detailed calculations of the transition behavior of the two-species fermionic ytterbium (${}^{173}$Yb) and bosonic lithium (${}^7$Li) system. Assuming suitable inter-species scattering lengths, critical temperatures on the order of $0.07\, T_{\rm F}$, with $T_{\rm F}$ the Fermi temperature, are predicted. These limits are certainly challenging. However, they are not too low for state-of-the-art experiments~\cite{navon_equation_2010}. Indeed, the present work reports on the realization and characterization of a quantum degenerate, mixed-dimensional ${}^{173}$Yb-${}^7$Li\ system. Much effort has been spent on understanding the interactions in the sister system of bosonic ${}^{174}$Yb\ and fermionic ${}^6$Li~\cite{dowd_magnetic_2015, schafer_spectroscopic_2017}, where even the realization of a double superfluid was reported~\cite{roy_two-element_2017}. In this context, our work strives to expand on these efforts by reversing the roles of the two elements, switching bosonic Yb for its fermionic isotope and fermionic Li for its bosonic one. To complete the picture, we also report on a quantum degenerate ${}^{174}$Yb-${}^7$Li\ mixture. The present paper is organized as follows. In Sec.~\ref{sec:expt}, we introduce the experimental setup and details for reaching double quantum degeneracy.
Section~\ref{sec:results} summarizes our first characterization of the mixtures obtained, that is, the measurement of the inter-species scattering lengths (Sec.~\ref{sec:thermalization}), the creation of a mixed-dimensional system (Sec.~\ref{sec:modulationspectroscopy}), and the high-resolution spectroscopy of the mixture in a 3D optical lattice (Sec.~\ref{sec:3P0spectroscopy}). A final discussion of the results in Sec.~\ref{sec:discussion} concludes our work. \section{Experiment} \label{sec:expt} The experiment proceeds along similar lines as our earlier works~\cite{konishi_collisional_2016}. In brief, a hot atomic beam is formed from a dual-species oven heated to about $350~^\circ {\rm C}$, which contains Yb and Li in natural abundance as well as enriched ${}^6$Li. The isotope shifts of the optical transition frequencies are, for Yb, in the few GHz range and, for Li, in the 10 GHz range. Thus, isotope-selective slowing and cooling of both species is possible with only slight adjustments of the cooling laser frequencies. We first slow down Yb in a Zeeman slower operating on the strong ${}^1\mathrm{S}_0\,$-$^1{\rm P}_1$ transition, followed by a magneto-optical trap (MOT) on the narrow ${}^1\mathrm{S}_0\,$-$^3{\rm P}_1$ intercombination line. In a second step, Li is slowed and trapped using the D2 transition. We note that the hyperfine splitting of the ${}^7$Li($^2{\rm S}_{1/2}$) ground state is, at $803.5~\mathrm{MHz}$, much larger than that of ${}^6$Li\ at $228.2~\mathrm{MHz}$. Therefore, the addition of a $^2{\rm S}_{1/2}(F = 1)$-$^2{\rm P}_{3/2}$ repumper laser to the standard $^2{\rm S}_{1/2}(F = 2)$-$^2{\rm P}_{3/2}$ Zeeman slowing light is crucial for slowing sufficient numbers of ${}^7$Li\ atoms for loading into the MOT. Compression of the MOT followed by reduction of MOT beam detunings and intensities further cools the atomic clouds. This improves phase matching for loading into our crossed far-off-resonance trap (FORT), where forced evaporative (Yb) and sympathetic (Li) cooling are performed. For the case of a ${}^{174}$Yb-${}^7$Li\ mixture, we load the Yb and Li MOT for $14~\mathrm{s}$ and $0.5~\mathrm{s}$, respectively, which typically results in $60\times10^5$ Yb atoms at $95~\mu{\rm K}$ and $1.5\times10^5$ Li atoms at $210~\mu{\rm K}$ in the crossed FORT at the beginning of the evaporation ramp. For ${}^{173}$Yb-${}^7$Li, the atom numbers after $20~\mathrm{s}$ and $0.3~\mathrm{s}$ of loading are $46\times10^5$ for Yb at $80~\mu{\rm K}$ and $0.7\times10^5$ for Li at $115~\mu{\rm K}$. During forced evaporation and sympathetic cooling, the crossed FORT lasers are reduced from their initial powers to the final values in individually optimized ramps within $7.65~\mathrm{s}$ ($9.8~\mathrm{s}$) for ${}^{174}$Yb-${}^7$Li\ (${}^{173}$Yb-${}^7$Li). All experiments reported in the present work were performed at low magnetic bias fields, typically a few $100\,\mathrm{mG}$. The development of the phase-space density (PSD) and atom number ($N$) during evaporation, normalized by their respective initial values PSD$_0$ and $N_0$, is shown for both mixtures in Fig.~\ref{fig:evap}. From the double logarithmic representation, we see that the initial stages of the evaporation are well described by a power law, and we extract $\gamma = -{\rm d\,ln} ({\rm PSD}/{\rm PSD}_0)/{\rm d\,ln} (N/N_0)$, a measure of the evaporation efficiency~\cite{ketterle_evaporative_1996}, by a linear fit to the data.
For the ${}^{174}$Yb-${}^7$Li\ mixture, $\gamma_{\rm Yb} = 2.9(1)$ and $\gamma_{\rm Li} = 6.5(2)$ are obtained. Similarly, in the ${}^{173}$Yb-${}^7$Li\ evaporation sequence, $\gamma_{\rm Yb} = 2.7(1)$ and $\gamma_{\rm Li} = 12.8(6)$ are observed. The cooling efficiencies for both Yb isotopes are similar, while sympathetic cooling of ${}^7$Li\ seems to proceed more effectively in combination with ${}^{173}$Yb. In the latter case, saturation of the ${}^{173}$Yb\ PSD, in particular, is also visible. The generally large $\gamma_{\rm Li}$ are a result of the sympathetic cooling, where the Li PSD is increased not at the expense of the number of Li atoms, but at the cost of the coolant species, Yb. \begin{figure}[tb] \centering \includegraphics[width=7.5cm]{figure1} \caption{The path to double quantum degeneracy in mixtures of ${}^{174}$Yb-${}^7$Li\ (upper panel) and ${}^{173}$Yb-${}^7$Li\ (lower panel). In each case, the development of the normalized phase-space density (PSD$/$PSD$_0$) and the normalized atom numbers ($N/N_0$) during forced evaporation and sympathetic cooling are shown as points. PSD$_0$ and $N_0$ are the respective values at the beginning of the evaporation ramp. Error bars account for uncertainties in the cloud and trap parameters. The trajectories for Yb (blue) and Li (red) can be roughly described and fitted by power laws (straight lines). See the main text for details on their interpretation.} \label{fig:evap} \end{figure} At the end of the evaporation ramp, the trap frequencies for both mixtures are $(\omega_x,\omega_y,\omega_z) = 2\pi \times (38, 58, 221)~\mathrm{Hz}$ for Yb and correspondingly $2\pi \times (250, 395, 1795)~\mathrm{Hz}$ for Li, where $z$ denotes the vertical direction. (The uncertainties of these values are on the order of 10\%.) The obtained quantum degenerate ${}^{173}$Yb-${}^7$Li\ mixture is shown in Fig.~\ref{fig:173Yb7Li}. We have $N \approx 62\,000$ ${}^{173}$Yb\ atoms at $T = 87~{\rm nK}$, and from a fit of the fugacity of the Fermi gas distribution we find $T/T_{\rm F} \approx 0.4$, where $T_{\rm F}$ is the Fermi temperature. No optical pumping has been applied to ${}^{173}$Yb\ during evaporation, and we expect approximately equal populations of the six spin ground states. A precise determination of the condensate fraction using an unrestricted bimodal fit for the ${}^7$Li\ cloud is difficult. We therefore opted to fix the temperature to the value obtained from the fit to the ${}^{173}$Yb\ cloud and obtain $N_{\rm BEC} \approx 4\,200$ atoms in the Bose-Einstein condensate (BEC) and $N_{\rm th} \approx 7\,400$ atoms in the thermal component for ${}^7$Li. The time-of-flight absorption image of ${}^7$Li, see right panels of Fig.~\ref{fig:173Yb7Li}, shows a slight fragmentation of the atomic cloud, the origin of which is not yet understood. By application of a magnetic field gradient after release of the atoms from the trap, we determined that all Li atoms are actually spin polarized in the $m_F = 0$ state and that remaining field gradients cannot cause this splitting. We therefore surmise that the probable cause is some unresolved dynamics occurring as the crossed FORT is being turned off. Turning our attention to the Bose-Bose ${}^{174}$Yb-${}^7$Li\ mixture (not shown), we report $N_{\rm BEC} \approx 76\,000$, $N_{\rm th} \approx 47\,000$, $T = 110~{\rm nK}$ for ${}^{174}$Yb\ and $N_{\rm BEC} \approx 12\,000$, $N_{\rm th} \approx 14\,000$ for ${}^7$Li, again assuming equal temperatures for both species.
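As a side note, extracting the evaporation efficiency $\gamma$ defined above is numerically straightforward. The following is a minimal Python sketch of the linear fit in the double logarithmic representation; the trajectory values are purely illustrative and are not our measured data:
\begin{verbatim}
import numpy as np

# Illustrative (not measured) evaporation trajectory: atom number and
# phase-space density, both normalized to their initial values.
N_over_N0     = np.array([1.0, 0.80, 0.55, 0.35, 0.20, 0.10])
PSD_over_PSD0 = np.array([1.0, 1.9,  5.5,  21.0, 105., 780.])

# gamma = -d ln(PSD/PSD_0) / d ln(N/N_0), estimated by a linear fit
# of ln(PSD/PSD_0) versus ln(N/N_0), as in Fig. 1.
slope, _ = np.polyfit(np.log(N_over_N0), np.log(PSD_over_PSD0), 1)
print(f"evaporation efficiency gamma = {-slope:.2f}")
\end{verbatim}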
\begin{figure}[tb] \centering \includegraphics[width=7.5cm]{figure2_173} \caption{A quantum degenerate mixture of ${}^{173}$Yb\ (left panels) and ${}^7$Li\ (right panels). The clouds have been imaged $15~\mathrm{ms}$ ($5~\mathrm{ms}$) after release from the trap for Yb (Li). The top panels show false-color representations of the obtained absorption images. The lower panels show projections of the data (points) and fit results (lines) on the horizontal axis. Fermionic ${}^{173}$Yb\ was fitted by a Fermi gas distribution and bosonic ${}^7$Li\ by a bimodal distribution (the dashed line gives the thermal component). The apparent discrepancy between fit and data seen in the Li projection is due to some Li atoms being ejected from the main cloud and giving rise to an additional contribution to the projection; see the main text for further discussion.} \label{fig:173Yb7Li} \end{figure} \section{Analysis and results} \label{sec:results} Towards the long-term goal of forming a topological superfluid, three main ingredients are of particular importance~\cite{caracanhas_fermibose_2017}: (i) the creation of a Fermi-Bose quantum degenerate mixture at only a few nK, (ii) the formation of a mixed-dimensional system, and (iii) information on the inter-species $s$-wave scattering length. In the present initial survey, the experimental setup was not designed to reach the required temperatures to address the first point. Instead, we concentrate on the remaining points by both determining the inter-species background scattering length, Sec.~\ref{sec:thermalization}, and realizing and probing two different mixed-dimensional systems, Secs.~\ref{sec:modulationspectroscopy} and~\ref{sec:3P0spectroscopy}. \subsection{Inter-species scattering length} \label{sec:thermalization} Similar to earlier determinations of the ${}^{174}$Yb-${}^6$Li\ inter-species scattering length~\cite{ivanov_sympathetic_2011, hara_quantum_2011}, we perform cross-thermalization measurements on cold, but still thermal, samples of the ${}^{174}$Yb-${}^7$Li\ and ${}^{173}$Yb-${}^7$Li\ mixtures. The experiment starts by loading either mixture into the crossed FORT and confining the sample for $3~\mathrm{s}$ to reach a steady state at $60$--$80~\mu{\rm K}$. Then, by slight power modulation of the horizontal FORT laser beam at $9.5~\mathrm{kHz}$, the Li sample is selectively heated to $150$--$200~\mu{\rm K}$. By keeping the temperature-imbalanced mixture in the trap for a variable time before releasing it, and taking a series of absorption images at different expansion times, we gain access to the temperatures; the result for the ${}^{173}$Yb-${}^7$Li\ case is shown in Fig.~\ref{fig:thermalization}. During the measured thermalization times, the Yb temperature $T_{\rm Yb}$ is found to vary little, while the Li temperature $T_{\rm Li}$ quickly approaches $T_{\rm Yb}$. This shows that Yb can be treated as a sufficiently large heat bath at constant temperature. We attribute the gap in final temperatures to a small residual miscalibration of the two independent imaging systems. A control measurement in which Yb is blasted away by resonant ${}^1\mathrm{S}_0\,$-$^1{\rm P}_1$ light at zero holding time confirms that all cooling of the Li is due to Yb. \begin{figure}[tb] \centering \includegraphics[width=7.5cm]{figure3} \caption{Overview of experimental data to determine the ${}^{173}$Yb-${}^7$Li\ inter-species scattering length.
After preparation of a temperature-imbalanced thermal mixture, the temperatures of Yb (blue points) and Li (red points) have been measured for thermalization times of up to $6~\mathrm{s}$. For reference, the temperature evolution of a Li-only sample has also been taken (yellow points). While the temperature of Li in the mixture has been fitted by an exponential decay (red line), the other data have been approximated by straight-line fits (blue, yellow lines). For each thermalization time, the individual temperatures have been estimated from a series of measurements at different expansion times, and the error bars correspond to the uncertainties in the temperature estimation from these data.} \label{fig:thermalization} \end{figure} Standard cross-thermalization analysis~\cite{ivanov_sympathetic_2011} then gives access to the modulus of the inter-species scattering length. In the error analysis, uncertainties of $20\%$ in the atom numbers, $10\%$ in the temperatures, and $30\%$ in the densities are considered. For the Fermi-Bose mixture, $|a_{\rm bg}({}^{173}{\rm Yb}$-${}^{7}{\rm Li})|=(1.16 \pm 0.18)~\mathrm{nm}$ is obtained. In the corresponding experiment for the Bose-Bose mixture, we find $|a_{\rm bg}({}^{174}{\rm Yb}$-${}^{7}{\rm Li})|=(1.11 \pm 0.17)~\mathrm{nm}$. These values should be compared to previous calculations, albeit done for different Li hyperfine states, where scattering lengths of $+1.80~\mathrm{nm}$ and $+1.74~\mathrm{nm}$ have been reported~\cite{brue_magnetically_2012}. This shows that while good order-of-magnitude agreement is achievable, the details of the inter-species interaction potentials are quite challenging to model correctly. \subsection{Band structure of ${}^{173}$Yb\ in a 1D optical lattice} \label{sec:modulationspectroscopy} The envisioned realization of a $p_x + i\,p_y$ superfluid depends heavily on the formation of a novel mixed-dimensional system. Here, by means of a strong 1D optical lattice, we realize an array of 2D ${}^{173}$Yb\ fermionic systems in a 3D ${}^7$Li\ bosonic bath. The 1D optical lattice is formed by two horizontally counter-propagating laser beams~\cite{konishi_collisional_2016} with wavelength $\lambda_{\rm L} = 532~\mathrm{nm}$. To reduce the impact of the differential gravitational sag between Yb and Li in this and the following experiments, a compensating beam at the same wavelength, focused slightly above the atoms, is utilized~\cite{konishi_collisional_2016}. The lattice depth for Yb is set to $15~E_R^{\mathrm{Yb}}$, with $E_R^{\mathrm{Yb}} = \hbar^2 k_{532}^2/(2m_{\rm Yb})$ being the Yb recoil energy, where $m_{\rm Yb}$ is the Yb atomic mass and $k_{532} = 2\pi/\lambda_{\rm L}$. In this situation, the Li lattice depth is only $0.7~E_R^{\mathrm{Li}}$, which is too shallow to support a bound state in the optical lattice, permitting the Li atoms to still move freely in 3D space. To confirm formation of the mixed-dimensional system, we then perform lattice modulation spectroscopy~\cite{heinze_multiband_2011} to reveal the Yb band structure. After adiabatically ramping up the optical lattice, its depth is modulated by about $5~E_R^{\mathrm{Yb}}$ for $0.3~\mathrm{ms}$ and then ramped down to zero in $0.2~\mathrm{ms}$, converting higher-band excitations via band mapping into real momenta that can be observed by time-of-flight absorption imaging. The process is schematically depicted in Fig.~\ref{fig:modulation}(a), where the first four bands of the 1D lattice for Yb are shown.
The fermions initially occupy the complete first band (lower dots), and modulation of the lattice at the proper frequency creates a particle-hole pair (filled and empty circles), predominantly between the first and third bands. Experimental data are shown in Fig.~\ref{fig:modulation}(b), where the momentum distribution after time-of-flight is plotted versus the modulation frequency. The formation of particle-hole pairs is visible as areas of increased and decreased density. The black lines give the expected structure for a lattice depth of $14~E_R^{\mathrm{Yb}}$, which might indicate that the previous calibration of the lattice depth slightly overestimated the actual potential depth. Also, the weak signals at $\pm 4\, \hbar k_{\rm 532}$ could indicate imperfections in the band mapping procedure. A repetition of the experiment with ${}^7$Li\ blasted away before ramping up the optical lattice (not shown) yields identical results. \begin{figure}[tb] \centering \includegraphics[width=7.5cm]{figure4} \caption{Lattice modulation spectroscopy of a double quantum degenerate ${}^{173}$Yb-${}^7$Li\ mixture in a deep 1D optical lattice. (a) Calculated first four bands of the lattice at $14~E_R^{\mathrm{Yb}}$ lattice depth (purple to orange lines). The population of the first band by Yb fermions (filled circles) and the excitation of a particle to the third band by lattice modulation (arrows), creating a hole in the first band population (open circle), are schematically indicated. (b) Experimental modulation spectroscopy data. The momentum distribution recorded by a band mapping procedure for different modulation frequencies reveals the creation of particle-hole pairs in the first and third lattice bands. The expected momentum-frequency dependence for this process is indicated (black lines). In the false-color representation, blue (red) corresponds to a lower (higher) momentum population probability.} \label{fig:modulation} \end{figure} \subsection{Spectroscopy in a 3D optical lattice} \label{sec:3P0spectroscopy} We now proceed to further reduce the dimensionality of Yb in the mixed-dimensional Yb-${}^7$Li\ systems. This can be done in almost the same manner for the ${}^{173}$Yb-${}^7$Li\ Fermi-Bose and the ${}^{174}$Yb-${}^7$Li\ Bose-Bose mixtures. For experimental ease, we choose here the ${}^{174}$Yb-${}^7$Li\ system and load it into a 3D cubic optical lattice at $15\,E_R^{\mathrm{Yb}}$, where Yb forms a Mott-insulating state while Li remains non-localized. In the case of ${}^{174}$Yb, by exploiting the narrow ${}^{174}$Yb(${}^1\mathrm{S}_0 \rightarrow {}^3\mathrm{P}_2$) transition, we are able to energetically distinguish lattice sites with different Yb occupation numbers, as previously demonstrated for both a pure ${}^{174}$Yb\ system~\cite{kato_laser_2016} and the ${}^{174}$Yb-${}^6$Li\ mixture~\cite{konishi_collisional_2016, schafer_spectroscopic_2017}. The results are summarized in Fig.~\ref{fig:spectroscopy}, where a measurement in the presence of ${}^7$Li\ (red) is compared to a situation where the Li atoms were removed from the trap just before loading the ${}^{174}$Yb\ atoms into the lattice (blue). The well-resolved resonances corresponding to occupation numbers up to $n = 4$ demonstrate the successful formation of a Mott-insulating state in the presence of a ${}^7$Li\ bosonic background gas. We also note that possible shifts of the resonance frequencies due to different interaction strengths of the Yb ${}^3\mathrm{P}_2$ and ${}^1\mathrm{S}_0$ states with Li have not been observed.
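As a practical aside, the recoil energies used as depth units in Secs.~\ref{sec:modulationspectroscopy} and~\ref{sec:3P0spectroscopy} follow directly from $E_R = \hbar^2 k^2/(2m)$. The following minimal Python sketch evaluates them for the lattice wavelength and atomic masses stated above; the closing comment simply restates the depth values quoted in the text:
\begin{verbatim}
import numpy as np

hbar = 1.054571817e-34   # J s
h    = 6.62607015e-34    # J s
u    = 1.66053906660e-27 # kg (atomic mass unit)

k = 2 * np.pi / 532e-9   # lattice wave number k_532

def recoil_kHz(mass_in_u):
    """Recoil energy E_R = hbar^2 k^2 / (2 m), expressed in kHz."""
    return hbar**2 * k**2 / (2 * mass_in_u * u) / h / 1e3

E_R_Yb, E_R_Li = recoil_kHz(174), recoil_kHz(7)
print(f"E_R(Yb)/h = {E_R_Yb:.1f} kHz, E_R(Li)/h = {E_R_Li:.1f} kHz")
# ~4 kHz for Yb versus ~100 kHz for Li: the mass ratio alone makes the
# Li recoil unit ~25 times larger, which is why a deep 15 E_R(Yb)
# lattice for Yb coexists with a mere 0.7 E_R(Li) lattice for Li
# (the two species additionally see different polarizabilities).
\end{verbatim}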
\begin{figure}[tb] \centering \includegraphics[width=7.5cm]{figure5} \caption{Measured ${}^1\mathrm{S}_0 \rightarrow {}^3\mathrm{P}_2(m_J = 0)$ excitation spectrum (red points) of ${}^{174}$Yb\ with ${}^7$Li\ in a 3D optical lattice at $15~E_R^{\mathrm{Yb}}$. At this lattice depth, Li is not localized, while Yb forms a Mott-insulator state and atoms in lattice sites with different occupation numbers ($n = 1,2,3,4$) are energetically separated due to the interatomic interaction. For comparison, the experiment was repeated with the Li atoms removed before excitation (blue points). No significantly different excitation behavior is found. The lines are Lorentzian fits to the resonances.} \label{fig:spectroscopy} \end{figure} \section{Discussion and Conclusion} \label{sec:discussion} With the present set of experimental results, we demonstrate the realization of doubly quantum degenerate mixtures of either ${}^{174}$Yb\ or ${}^{173}$Yb\ and ${}^7$Li. In cross-thermalization measurements, the background elastic scattering lengths have been determined. They roughly agree with earlier theoretical considerations and, similar to the corresponding mixtures involving ${}^6$Li, are generally quite small. By loading the Fermi-Bose mixture into a 1D optical lattice, a mixed-dimensional regime has been achieved, as confirmed by measurements of the ${}^{173}$Yb\ band structure and of the ${}^{174}$Yb\ Mott-insulator state. Thus, an important step towards topological $p_x + i\,p_y$ superfluids has been taken. As expected, inter-species interaction effects are found to be small, and enhancement mechanisms such as suitable Feshbach resonances will be advantageous for reaching the required Fermi-Bose interaction strengths. The current setup strictly relies on sympathetic cooling to reach the Li quantum degenerate regime. It is desirable to improve this situation by, e.g., implementing Li gray-molasses cooling techniques~\cite{burchianti_efficient_2014} and possibly using ${}^7$Li\ Feshbach resonances to reduce the initial temperature and to enhance the evaporation efficiency. Finally, suitable lattice geometries that support the necessary bosonic excitations, while remaining experimentally feasible to realize, need to be explored. The experiments detailed in the present work serve to establish additional quantum degenerate mixtures in the toolbox of ultracold atomic physics. It is the first large mass-imbalance Bose-Fermi system enabling the realization of mixed-dimensional geometries with a 3D bosonic background and a fermionic component of reduced dimensionality. \section*{Acknowledgments} This work was supported by the Grant-in-Aid for Scientific Research of JSPS Grants No.\ JP25220711, No.\ JP17H06138, No.\ 18H05405, and No.\ 18H05228, JST CREST Grant No.\ JPMJCR1673 and the Impulsing Paradigm Change through Disruptive Technologies (ImPACT) program by the Cabinet Office, Government of Japan. \input{Yb7Li.bbl} \end{document}
\section{Entanglement entropy of an eigenstate} \label{app1} All the properties of eigenstates of quadratic models are encoded in $L\times L$ one-body correlation matrices. Together, they form a $2L\times2L$ matrix $iJ$, which is a linear complex structure \begin{align}\label{eq:lcs} iJ&=\left(\begin{array}{c|c} \langle m|\hat f_i^\dagger \hat f^{}_j - \hat f^{}_j \hat f_i^\dagger|m\rangle & \langle m|\hat f_i^\dagger \hat f_j^\dagger - \hat f_j^\dagger \hat f_i^\dagger|m\rangle\\ \hline \langle m|\hat f^{}_i \hat f^{}_j - \hat f^{}_j \hat f^{}_i|m\rangle & \langle m| \hat f^{}_i \hat f_j^\dagger - \hat f_j^\dagger \hat f^{}_i|m\rangle \end{array}\right) . \end{align} In the quantum Ising model, eigenstates belong either to the even or the odd particle-number sector. Each sector has a set of allowed $k$ vectors, which we denote as ${\cal K}^{+} $ (even sector) and ${\cal K}^{-}$ (odd sector) \cite{vidmar16}. The matrix elements of $iJ$ in Eq.~(\ref{eq:lcs}) are:\\ (i) If $|m\rangle$ belongs to the even sector \begin{align} \langle m | \hat f_j^\dagger \hat f^{}_l |m\rangle = & - \frac{1}{L} \sum_{k \in {\cal K}^{+}} N_k \cos[k(j-l)] u_k^2 \nonumber \\ & + \frac{1}{2L} \sum_{k \in {\cal K}^{+}} N_k e^{ik(j-l)} + \frac{1}{2} \delta_{j,l} \end{align} and \begin{equation} \langle m | \hat f_j^\dagger \hat f_l^\dagger |m\rangle = \frac{i}{L} \sum_{k \in {\cal K}^{+}} N_k \sin[k(j-l)] u_k v_k \, , \end{equation} where ${\cal K}^{+} = \{ \pi/L + n 2\pi/L \; | \; n = 0, ..., L/2-1 \}$. \noindent (ii) If $|m\rangle$ belongs to the odd sector \begin{align} \label{def_matele_odd1} \langle m | \hat f_j^\dagger \hat f^{}_l |m\rangle = & - \frac{1}{L} \sum_{k \in {\cal K}^{-}} N_k \cos[k(j-l)] u_k^2 \nonumber \\ & + \frac{1}{2L} \sum_{k \in {\cal K}^{-} \backslash \{ 0,\pi \}} N_k e^{ik(j-l)} + \frac{1}{2} \delta_{j,l} \end{align} and \begin{equation} \label{def_matele_odd2} \langle m | \hat f_j^\dagger \hat f_l^\dagger |m\rangle = \frac{i}{L} \sum_{k \in {\cal K}^{-} \backslash \{ 0,\pi \}} N_k \sin[k(j-l)] u_k v_k \, , \end{equation} where ${\cal K}^{-} = \{ n 2\pi/L \; | \; n = 0, ..., L/2-1 \}$. Note that in two of the three sums over $k$ in Eqs.~(\ref{def_matele_odd1}) and~(\ref{def_matele_odd2}), the vectors $k=0$ and $k=\pi$ are excluded from the sum. Correlations of a subsystem $A$ containing $L_A$ sites are encoded in the restricted complex structure $[iJ ]_{A}$, the $2L_A \times 2L_A$ matrix obtained by restricting the matrix $iJ$ in Eq.~(\ref{eq:lcs}) to the entries with $j,l\in A$. The entanglement entropy of subsystem $A$ in eigenstate $|m\rangle$ can be computed as~\cite{vidmar_hackl_17} \begin{align}\label{def_Salpha} S_m = - \mathrm{Tr} \left\{ \left(\frac{1\!\!1+[iJ]_A}{2}\right)\ln \left(\frac{1\!\!1+[iJ]_A}{2}\right) \right\}. \end{align} We diagonalize the matrix $[iJ]_A$ numerically for each eigenstate to calculate $S_m$, and then average over all eigenstates $|m\rangle$ to obtain the spectral average $S$ that is reported in the main text. \section{Derivation of Eq.~(\ref{trace_h1})} \label{app2} Since we express the spectral average of ${\rm Tr}[iJ]_A^2$ in the quantum Ising model as the mean of spectral averages over all eigenstates with periodic (using $k \in {\cal K}^{-}$) and antiperiodic (using $k \in {\cal K}^{+}$) boundary conditions, we can express Eq.~(\ref{def_tr2}) at $h=1$ as \begin{equation} \label{def_trace2} \langle {\rm Tr}[iJ]_A^2 \rangle = 2L_A f - \frac{1}{L^2} \sum_{k \in {\cal K}^{+} \cup {\cal K}^{-} \backslash \{ \pi \}} \frac{1}{2} \frac{\sin^2(L_A k)}{[1+\cos(k)]} \, .
\end{equation} Here, $k=\pi$ is excluded from the sum since $u_{\pi} = 0$~\cite{vidmar16}. By inserting $k=\pi$ back into the sum in Eq.~(\ref{def_trace2}), we get \begin{equation} \label{def_trace3} \langle {\rm Tr}[iJ]_A^2 \rangle = 2L_A f - \frac{1}{L^2} \sum_{k \in {\cal K}^{+} \cup {\cal K}^{-} } \frac{1}{2} \frac{\sin^2(L_A k)}{[1+\cos(k)]} + f^2 \, . \end{equation} Moreover, realizing that \begin{equation} \frac{1}{2L} \sum_{k \in {\cal K}^{+} \cup {\cal K}^{-} } \frac{\sin^2(L_A k)}{[1+\cos(k)]} = L_A \, , \end{equation} we arrive at Eq.~(\ref{trace_h1}) in the main text. \end{document}
\subsection{Restricting Guards in \boldmath$\text{rPrompt-LDL}$} We say that a guard~$r$ is test-free if it does not contain tests as atoms, but only propositional formulas over the atomic propositions. A formula is test-free if each of its guards is test-free. In the remainder, we only consider test-free formulas. As the adaptations made to define $\mathcal{R}^{\textsc{rpd}}_i$ are only concerned with tests, they can be ignored when reasoning about test-free formulas. \begin{remark} Let $r$ be a test-free guard. Then, $\mathcal{R}^{\textsc{rpd}}_i(w,k,r)$ is independent of~$i$ and~$k$ for every trace~$w$. \end{remark} Hence, in the following, we use $\mathcal{R}(w,r)$ (as defined for $\text{LDL}$) instead of $\mathcal{R}^{\textsc{rpd}}_i(w,k,r)$, since the definitions coincide for test-free guards. We say that a test-free guard~$r$ is limit-matching if we have $\size{\mathcal{R}(w,r)} = \infty$ for every trace~$w$. This is well-defined due to the previous remark. Again, a test-free formula is limit-matching if each of its guards is limit-matching. \begin{lemma} \label{lemma-syntaxeffective} The problem \myquot{Given a test-free formula~$\varphi$, is $\varphi$ limit-matching?} is in~$\textsc{{PSpace}}$. \end{lemma} \begin{proof} The problem is in $\textsc{{PSpace}}$ if one can decide in polynomial space whether a single test-free guard is limit-matching. Hence, let $r$ be such a guard, which is limit-matching if and only if infinitely many prefixes of each trace~$w$ match $r$. An application of König's Lemma yields that the latter condition is equivalent to each $w$ being Büchi-accepted by $\mathfrak{G}_r$. Due to test-freeness, $\mathfrak{G}_r$ can indeed be seen as a Büchi automaton with $\epsilon$-transitions. Hence, $r$ is limit-matching if and only if $\mathfrak{G}_r$ is universal, which can be decided in polynomial space~\cite{SistlaVardiWolper85} (after eliminating $\epsilon$-transitions). As the automaton is of the same size as the guard and can be constructed efficiently, this concludes the proof.\qed \end{proof} \begin{example} Recall the formula~$\varphi = \bboxdot{((\neg t)^*\,; t\,; (\neg t)^*\,; t )^*} \promptddiamonddot{\mathtt{tt}^*} s$ from Example~\ref{example-rpromptldl}. It is test-free, but not limit-matching, as traces with finitely many $t$ only have finitely many $((\neg t)^*\,; t\,; (\neg t)^*\,; t )^*$-matches. Nevertheless, test-free and limit-matching $\text{rPrompt-LDL}$ formulas can make use of arbitrary modulo counting, a significant advance in expressiveness over classical $\text{LTL}$, thus witnessing the usefulness of the fragment. For example, the formula~$\bboxdot{r}\promptddiamonddot{r}s$ with $r = (\mathtt{tt}\,;\mathtt{tt})^*$ expresses, when evaluated with respect to a bound~$k$, that the distance between synchronizations at even positions is bounded by $k$, i.e., we use the test-free limit-matching guards to \myquot{filter out} the odd positions. \end{example} Let us note that for $\text{LDL}$, the test-free fragment has the same expressive power as full $\text{LDL}$, albeit potentially less succinct. This claim follows easily from translating Büchi automata into $\text{LDL}$ formulas, which results in test-free formulas. In the following, we consider the model checking and the synthesis problem for test-free limit-matching formulas. To this end, we proceed as in the case of $\text{rPrompt-LTL}$: We reduce these problems to those for $\text{Prompt-LDL}$, i.e., we present a reduction-based translation to Büchi automata.
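To make the universality check behind Lemma~\ref{lemma-syntaxeffective} more concrete, consider the special case in which the test-free guard is already given as a complete deterministic finite automaton (the determinized guard automaton is used in the proof of Theorem~\ref{thm-prldl2pldl} below anyway). Then, the guard fails to be limit-matching exactly if some cycle reachable from the initial state avoids all final states, since such a cycle induces a trace with only finitely many matches. The following Python sketch (the DFA encoding is ours and purely illustrative) implements this graph-theoretic criterion; note that it operates on the determinized automaton, which may be exponentially larger than the guard, whereas the $\textsc{{PSpace}}$ bound of Lemma~\ref{lemma-syntaxeffective} avoids determinization.
\begin{verbatim}
def is_limit_matching(delta, init, final):
    """Decide limit-matching for a test-free guard given as a complete
    DFA: delta[q][a] is the successor state, final the accepting set.
    The guard is limit-matching iff no cycle reachable from init stays
    entirely outside the final states."""
    # States reachable from the initial state.
    reachable, todo = {init}, [init]
    while todo:
        for q2 in delta[todo.pop()].values():
            if q2 not in reachable:
                reachable.add(q2)
                todo.append(q2)
    # Cycle search (DFS) in the subgraph of reachable non-final states.
    nonfinal = reachable - set(final)
    WHITE, GRAY, BLACK = 0, 1, 2
    color = dict.fromkeys(nonfinal, WHITE)

    def has_cycle(q):
        color[q] = GRAY
        for q2 in delta[q].values():
            if q2 in nonfinal and (
                color[q2] == GRAY or (color[q2] == WHITE and has_cycle(q2))
            ):
                return True
        color[q] = BLACK
        return False

    return not any(color[q] == WHITE and has_cycle(q) for q in nonfinal)

# r = (tt ; tt)^*, with the alphabet 2^P collapsed to one letter "A":
# every even-length prefix matches, so r is limit-matching.
print(is_limit_matching({0: {"A": 1}, 1: {"A": 0}}, init=0, final={0}))
\end{verbatim}
With this criterion in mind, we now turn to the announced reduction to $\text{Prompt-LDL}$.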
Since we only consider limit-matching formulas, we do not have to deal with the case of a guard having only finitely many matches. On the other hand, we have to \myquot{split} guards to capture the semantics of the robust box operator (recall the discussion in Section~\ref{sec-towardsrpldl}). Here, we exploit the fact that the formula under consideration is test-free. The main technical result on this fragment states that the logic can be derobustified, i.e., translated into $\text{Prompt-LDL}$. \begin{theorem} \label{thm-prldl2pldl} For every test-free limit-matching $\text{rPrompt-LDL}$ formula~$\varphi$ and every $\beta \in \bool_{4}$, there is a $\text{Prompt-LDL}$ formula~$\varphi_\beta$ such that $\rpromptldleval(w, k, \varphi) \succeq \beta$ if and only if $\promptldleval(w, k, \varphi_\beta) = 1$. \end{theorem} \begin{proof} Before we present the translation, we need to explain how to \myquot{split} guards, which is necessary to implement the semantics of the robust box operator (recall the discussion in Section~\ref{sec-towardsrpldl}). For example, we have to check that almost all $r$-matches are $\psi$-satisfying for some guard~$r$ and some subformula~$\psi$. In $\text{LTL}$, \myquot{almost all} is expressed by $\Diamond\Box$. We will use the analogous $\text{LDL}$ operators, i.e., a formula of the form~$\ddiamond{ \cdot} \bbox{\cdot }$. But now we need guards~$r_0$ and $r_1$ for the diamond and the box operator so that the concatenation $r_0r_1$ is equivalent to $r$. To this end, we transform $r$ into a deterministic automaton and then have such a pair of guards for every intermediate state that can be reached by the automaton. Ultimately, we then end up with a disjunction of formulas of the form~$\ddiamond{ \cdot} \bbox{\cdot }$. Let $r$ be a test-free guard. Applying Lemma~\ref{lemma-guards2automata} to $r$ yields an $\epsilon$-NFA~$\mathfrak{G}_r$ without tests. Hence, eliminating $\epsilon$-transitions and determinizing the resulting automaton yields a deterministic finite automaton~$\mathfrak{D}_r$ such that $\pref{w}{j}$ is accepted by $\mathfrak{D}_r$ if and only if $j \in \mathcal{R}(w,r)$. Furthermore, due to test-freeness, acceptance of $\pref{w}{j}$ by $\mathfrak{D}_r$ only depends on the prefix~$\pref{w}{j}$ of $w$, but not on the corresponding suffix~$\suff{w}{j}$. This property, which underlies the following construction, does not hold true for guards with tests. Now, let $Q$ be the set of states of $\mathfrak{D}_r$, $q_\init$ the initial state, and $F$ the set of final states. Then, one can efficiently construct regular expressions (i.e., guards)~$r_{q_\init,q}$ and $r_{q,F}$ such that $w \in (\pow{P})^*$ is in the language of $r_{q_\init,q}$ (of $r_{q,F}$) if and only if the unique run of $\mathfrak{D}_r$ starting in $q_\init$ (in $q$) ends in $q$ (in $F$). Now, we are ready to construct~$\varphi_\beta$. Again, the case~$\beta = 0000$ is trivial. Hence, we assume $\beta \succ 0000$ in the following. We proceed by induction over the construction of the formula: \begin{itemize} \item $p_\beta = p$ and $(\neg p)_\beta = \neg p$ for all atomic propositions~$p \in P$ and all $\beta \succ 0000$. \item $(\varphi_0 \wedge \varphi_1)_\beta = (\varphi_0)_\beta \wedge (\varphi_1)_\beta$ for all $\beta \succ 0000$. \item $(\varphi_0 \vee \varphi_1)_\beta = (\varphi_0)_\beta \vee (\varphi_1)_\beta$ for all $\beta \succ 0000$.
\item $(\ddiamonddot{r}\varphi )_\beta = \ddiamond{r}(\varphi_\beta)$ for all $\beta \succ 0000$, \item $(\bboxdot{r} \varphi)_{1111} = \bbox{r}(\varphi_{1111})$, \item $(\bboxdot{r} \varphi)_{0111} = \bigvee_{q \in Q} \ddiamond{r_{q_\init,q}}\bbox{r_{q,F}}(\varphi_{0111})$, where $\mathfrak{D}_r = (Q, \pow{P}, q_\init, \delta, F)$, \item $(\bboxdot{r} \varphi)_{0011} = \bigwedge_{q \in Q} \bbox{r_{q_\init,q}}\ddiamond{r_{q,F}} (\varphi_{0011})$, where $\mathfrak{D}_r = (Q, \pow{P}, q_\init, \delta, F)$, \item $(\bboxdot{r} \varphi)_{0001} = \ddiamond{r} (\varphi_{0001})$, and \item $(\promptddiamonddot{r} \varphi )_\beta = \promptddiamond{r} (\varphi_\beta)$ for all $\beta \succ 0000$. \end{itemize} A straightforward induction over the construction of $\varphi$, relying on the fact that $\varphi$ is limit-matching, yields the correctness of the translation. The fact that $\varphi$ is limit-matching explains the construction of $(\bboxdot{r} \varphi)_{\beta}$, which only has to implement the first case (\myquot{$\size{\mathcal{R}(w,r)} = \infty$}) of the definition of the semantics. \qed \end{proof} Now, the model checking and the synthesis problem for $\text{rPrompt-LDL}$, which are defined as expected, can be solved by reducing them to their analogues for $\text{Prompt-LDL}$ (cf.~Section~\ref{sec-rprompt}). We obtain the following results. \begin{corollary} The $\text{rPrompt-LDL}$ model checking and synthesis problems are decidable for the test-free limit-matching fragment. \end{corollary} We refrain from specifying the exact complexity of the algorithms, as we conjecture them to be several exponentials away from optimal algorithms: The guards~$r_{q_\init,q}$ and $r_{q,F}$ are already of doubly-exponential size and we still have to translate the formula~$\varphi_\beta$ containing these guards into (deterministic) automata to solve the problems. Note that our approach for the fragment, which relies on a translation to $\text{Prompt-LDL}$, cannot easily be extended to formulas with tests and to formulas with non-limit-matching guards. The existence of tests complicates the construction of the deterministic automaton required to \myquot{split} the guards. Consider, for example, the guard~$(\varphi_0?\,; a \,; \varphi_0' ) + (\varphi_1?\,; a \,; \varphi_1' )$: after processing an $a$, depending on which tests hold true before the $a$, the automaton still has to distinguish whether $\varphi_0'$ or $\varphi_1'$ has to hold after processing the $a$. Implementing this requires non-determinism that cannot be resolved while only reading a prefix of a trace. Complicating the situation even further, the lack of negations in prompt logics does not allow us to \myquot{disambiguate} the guard. Similarly, allowing non-limit-matching guards requires us to implement the full case distinction in the definition of the semantics of the robust box operator. However, implementing a case distinction in $\text{Prompt-LDL}$ is again complicated by the lack of negations. \subsection{Translating Guards into Automata} \label{subsec-markedautomata} Recall that $P$ is the (finite) set of atomic propositions. An automaton with tests~$\mathfrak{G} = (Q, \pow{P}, q_\init, \delta, F, t)$ consists of a finite set~$Q$ of states, the alphabet~$\pow{P}$, an initial state~$q_\init \in Q$, a transition function~$\delta \colon Q \times (\pow{P}\cup\set{\epsilon}) \rightarrow \pow{Q}$, a set~$F$ of final states, and a partial function~$t$, which assigns to states~$q \in Q$ an $\text{rLDL}$ formula~$t(q)$.
These should be thought of as the analogue of tests, i.e., if $t(q)$ is defined, then a run visiting $q$ is only successful if the word that remains to be processed from $q$ onwards satisfies the formula $t(q)$. We write $q \xrightarrow{a} q'$ if $q' \in \delta(q, a)$ for $a \in \pow{P} \cup \set{\epsilon}$. An $\epsilon$-path~$\pi$ from $q$ to $q'$ in $\mathfrak{G}$ is a sequence~$\pi = q_1 \cdots q_k$ of $k \ge 1$ states with $q =q_1 \xrightarrow{\epsilon} \cdots \xrightarrow{\epsilon} q_k = q'$. Let $t(\pi) = \set{t(q_i) \mid 1 \le i \le k}$ denote the set of tests visited by $\pi$ and let $\Pi(q, q')$ denote the set of all $\epsilon$-paths from $q$ to $q'$. A run of $\mathfrak{G}$ on $w(0) \cdots w(j-1) \in (\pow{P})^*$ is a sequence~$q_0 q_1 \cdots q_j$ of states such that $q_0 = q_\init$ and for every $j'$ in the range~$0 \le j' \le j-1$ there is a state~$q_{j'}'$ reachable from $q_{j'}$ via an $\epsilon$-path~$\pi_{j'}$ and such that $q_{j'+1} \in \delta(q_{j'}', w(j'))$. The run is accepting if there is a $q_{j}' \in F$ reachable from $q_j$ via an $\epsilon$-path~$\pi_j$. This definition of runs is slightly unusual, but equivalent to the standard one, and simplifies our reasoning below. Also, the definition is oblivious to the tests assigned by~$t$. To take them into account, we define for $i \in \set{1,2,3,4}$ \begin{multline*} \mathcal{R}^{\textsc{rd}}_i(w,\mathfrak{G}) = \{j \mid \text{$\mathfrak{G}$ has an accepting run on $\pref{w}{j}$ with $\epsilon$-paths $\pi_0, \ldots, \pi_j$ s.t.}\\ \text{$\rldleval_i(\suff{w}{j'}, \bigwedge t(\pi_{j'}))=1$ for every $j'$ in the range~$0 \le j' \le j$\}}. \end{multline*} Here, $\bigwedge t(\pi_{j'})$ is the conjunction of all formulas in $t(\pi_{j'})$. Every guard (which is just a regular expression with tests) can be turned into an equivalent automaton with tests via a straightforward generalization of the classical Thompson construction, which turns classical regular expressions into $\epsilon$-NFAs. \begin{lemma}[\cite{FaymonvilleZimmermann17}] \label{lemma-guards2automata} Every guard~$r$ can be translated into an automaton with tests~$\mathfrak{G}_r$ such that $\mathcal{R}^{\textsc{rd}}_i(w,r) = \mathcal{R}^{\textsc{rd}}_i(w,\mathfrak{G}_r)$ for every $i \in \set{1,2,3,4}$ and with $\size{\mathfrak{G}_r} \in \bigo(\size{r})$. Furthermore, all final states of $\mathfrak{G}_r$ are terminal, i.e., they have no outgoing transitions. \end{lemma} The automaton~$\mathfrak{G}_r$ is independent of $i$, as this value only determines how tests are evaluated. These are handled \myquot{externally} in the definition of the semantics. Having thus demonstrated how to turn guards into automata, we now demonstrate how to do the same for \text{rLDL}\ formulas. \subsection{Translating rLDL into Alternating Automata} \label{subsec-alternatingautomata} In this subsection, we translate $\text{rLDL}$ formulas into alternating parity automata, which are known to be translatable into Büchi automata of exponential size. Hence, the linear translation from $\text{rLDL}$ to alternating parity automata we are about to present implies the exponential compilation property for $\text{rLDL}$. An alternating parity automaton~$\mathfrak{A} = (Q,\Sigma,q_\init,\delta, \Omega)$ consists of a finite set~$Q$ of states, an alphabet~$\Sigma$, an initial state~$q_\init \in Q$, a transition function~$\delta \colon Q \times \Sigma \to \mathcal{B}^{+}(Q)$, and a coloring~$\Omega \colon Q \rightarrow \nats$ of the states.
Here, $\mathcal{B}^{+}(Q)$ denotes the set of positive Boolean combinations over $Q$, which contains in particular the formulas $\mathtt{tt}$ (true) and $\mathtt{ff}$ (false). A run of $\mathfrak{A}$ on $w = w(0) w(1) w(2) \cdots \in \Sigma^\omega$ is a directed graph~$\rho = (V, E)$ where $V \subseteq Q \times \nats$ and $((q,n),(q',n')) \in E$ implies $n' = n +1$, such that $(q_\init, 0) \in V$, and such that for all $(q, n) \in V$ we have $\suc{ \rho}{(q,n)} \models \delta(q, w(n))$. Here, $\suc{\rho}{(q,n)}$ denotes the set of successors of $(q,n)$ in $\rho$ projected to $Q$. A run~$\rho$ is accepting if all infinite paths (projected to $Q$) through $\rho$ satisfy the (max) parity condition, i.e., the maximal color occurring infinitely often on the path is even. The language~$L(\mathfrak{A})$ contains all $w \in \Sigma^\omega$ for which $\mathfrak{A}$ has an accepting run. Alternating parity automata are easily seen to be closed under all Boolean operations. Fix automata~$\mathfrak{A}_0 = (Q_0,\Sigma,q_\init^0,\delta_0, \Omega_0) $ and $\mathfrak{A}_1= (Q_1,\Sigma,q_\init^1,\delta_1, \Omega_1)$. \begin{itemize} \item $(Q_0, \Sigma, q_\init^0, \overline{\delta_0}, \overline{\Omega})$ recognizes $\Sigma^\omega \setminus L(\mathfrak{A}_0)$, where $\overline{\Omega}(q) = \Omega_0(q)+1$ and where $\overline{\delta_0}$ is the dual of $\delta_0$, i.e., $\overline{\delta_0}(q, A)$ is obtained from $\delta_0(q, A)$ by replacing each disjunction by a conjunction, each conjunction by a disjunction, each $\mathtt{tt}$ by $\mathtt{ff}$, and each $\mathtt{ff}$ by $\mathtt{tt}$. \item The disjoint union of $\mathfrak{A}_0$ and $\mathfrak{A}_1$ with a fresh initial state~$q_\init$ and $\delta(q_\init, A) = \delta_0(q_\init^0, A) \wedge \delta_1(q_\init^1, A)$ recognizes $L(\mathfrak{A}_0) \cap L(\mathfrak{A}_1)$. \item The disjoint union of $\mathfrak{A}_0$ and $\mathfrak{A}_1$ with a fresh initial state~$q_\init$ and $\delta(q_\init, A) = \delta_0(q_\init^0, A) \vee \delta_1(q_\init^1, A)$ recognizes $L(\mathfrak{A}_0) \cup L(\mathfrak{A}_1)$. \end{itemize} The latter two constructions can obviously be generalized to unions and intersections of arbitrary arity while still only requiring a single fresh state. We prove the following lemma, which implies Theorem~\ref{theorem-translation-oldcor}, as alternating parity automata can be translated into non-deterministic Büchi automata of exponential size. \begin{lemma} \label{lemma-translation-alternating} For every $\text{rLDL}$ formula~$\varphi$ and every $\beta \in \bool_4$, there is an alternating parity automaton~$\mathfrak{A}_{\varphi, \beta}$ with $\bigo(\size{\varphi})$ states recognizing the language~$\set{w \in (\pow{P})^\omega \mid \rldleval(w,\varphi) \succeq \beta} $. \end{lemma} \begin{proof} We first construct the desired automaton by structural induction over the construction of $\varphi$. Then, we estimate its size. We begin by noting that $\mathfrak{A}_{\varphi,0000}$ is trivial for every formula~$\varphi$, as it has to accept every input. Hence, we only consider $\beta \succ 0000$ in the remainder of the proof. For an atomic proposition~$p \in P$, $\mathfrak{A}_{p,\beta}$ for $\beta \succ 0000$ is an automaton that accepts exactly those $w$ with $p \in w(0)$. Such an automaton can easily be constructed. Now, consider a negation~$\varphi = \neg \varphi'$: by definition, we have $\rldleval(w,\varphi) = 0000$ if $\rldleval(w, \varphi') = 1111$, and $\rldleval(w, \varphi) = 1111$ if $\rldleval(w, \varphi') \neq 1111$.
Thus, $\mathfrak{A}_{\varphi, \beta}$ for $\beta \succ 0000$ has to accept the language~$\set{w \mid \rldleval(w, \varphi') \neq 1111}$, which is the complement of the language of $\mathfrak{A}_{\varphi',1111}$. Next, consider a conjunction~$\varphi = \varphi_0 \wedge \varphi_1$. To this end, recall that $\rldleval(w, \varphi) = \min\set{\rldleval(w, \varphi_0) , \rldleval(w, \varphi_1)}$. Hence, $\mathfrak{A}_{\varphi,\beta}$ has to recognize the language \[\bigcup_{\substack{\beta_0,\beta_1 \in \bool_4\\ \min\set{\beta_0,\beta_1} \succeq \beta}} L(\mathfrak{A}_{\varphi_0,\beta_0}) \cap L(\mathfrak{A}_{\varphi_1,\beta_1}) .\] Such an automaton can be constructed using the closure operations described above.\footnote{The description of the language (and thus the automaton) can be simplified by exploiting the fact that $\beta \preceq \beta'$ implies $L(\mathfrak{A}_{\varphi,\beta'}) \subseteq L(\mathfrak{A}_{\varphi,\beta})$ for every $\varphi$. The same is true for disjunction and implication.} The construction for a disjunction~$\varphi = \varphi_0 \vee \varphi_1$ is dual to the conjunction: we have $\rldleval(w, \varphi) = \max\set{\rldleval(w, \varphi_0) , \rldleval(w, \varphi_1)}$ and thus construct $\mathfrak{A}_{\varphi,\beta}$ such that it recognizes the language \[\bigcup_{\substack{\beta_0,\beta_1 \in \bool_4\\ \max\set{\beta_0,\beta_1} \succeq \beta}} L(\mathfrak{A}_{\varphi_0,\beta_0}) \cap L(\mathfrak{A}_{\varphi_1,\beta_1}) .\] For an implication~$\varphi = \varphi_0 \implies \varphi_1$, we again implement the semantics via Boolean combinations of automata. Recall that $\rldleval(w, \varphi_0 \implies \varphi_1)$ is equal to $1111$ if $\rldleval(w,\varphi_0) \preceq \rldleval(w, \varphi_1)$. Otherwise, it is equal to $\rldleval(w, \varphi_1)$. Hence, we construct $\mathfrak{A}_{\varphi,\beta}$ so that it recognizes the language \[\left(\bigcup_{\substack{\beta_0,\beta_1 \in \bool_4\\ \beta_0 \preceq \beta_1}} L(\mathfrak{A}_{\varphi_0,\beta_0}) \cap L(\mathfrak{A}_{\varphi_1,\beta_1}) \right) \cup \left(\bigcup_{\substack{\beta_0,\beta_1 \in \bool_4\\ \beta_0 \succ \beta_1 \succeq \beta}} L(\mathfrak{A}_{\varphi_0,\beta_0}) \cap L(\mathfrak{A}_{\varphi_1,\beta_1})\right).\] The left part covers all cases in which the implication evaluates to $1111$. Due to $1111 \succeq \beta$ for every $\beta$, this part is equal for all automata. The right part covers all other cases, which depend on $\beta$. Now, we turn to the constructions for the guarded temporal operators, which are more involved as we have to combine automata for guards, for the tests occurring in them, and for formulas. We follow the general construction presented by Faymonville and Zimmermann~\cite{FaymonvilleZimmermann17}, but generalize it to deal with the richer truth values underlying the robust semantics. First, we consider a diamond formula~$\varphi = \ddiamonddot{r}\varphi'$ with tests~$\theta_1, \ldots, \theta_n$ in $r$. Recall that we have $\rldleval(w, \varphi) = b_1 b_2 b_3 b_4$ where $b_i = \max_{j \in \mathcal{R}^{\textsc{rd}}_i(w,r)} \rldleval_i(\suff{w}{j}, \varphi')$ for all~$i \in \set{1,2,3,4}$. Thus, $\mathfrak{A}_{\varphi, \beta}$ has to accept $w$ if and only if $w$ has an $r$-match of degree~$\beta$ that is $\varphi'$-satisfying of degree~$\beta$. By induction hypothesis, we have automata~$\mathfrak{A}_{\varphi',\beta}$ and $\mathfrak{A}_{\theta_j, \beta}$ for every test~$\theta_j$ in $r$.
Also, we have an $\epsilon$-NFA with tests~$\mathfrak{G}_r$ equivalent to~$r$ due to Lemma~\ref{lemma-guards2automata}. We combine these automata into the alternating automaton~$\mathfrak{A}_{\varphi,\beta}$ by non-deterministically guessing a (finite) run of $\mathfrak{G}_r$. Whenever the run encounters a final state, the automaton may jump to the initial state of $\mathfrak{A}_{\varphi',\beta}$ and then behave like that automaton. Furthermore, while simulating $\mathfrak{G}_r$, $\mathfrak{A}_{\varphi,\beta}$ also has to verify that the tests occurring along the guessed run of~$\mathfrak{G}_r$ hold true by universally spawning copies of $\mathfrak{A}_{\theta_j,\beta}$ each time a state labeled with $\theta_j$ is traversed. Since we do not allow for $\epsilon$-transitions in alternating automata, we have to eliminate the $\epsilon$-transitions of $\mathfrak{G}_r$ during the construction of~$\mathfrak{A}_{\varphi, \beta}$. Finally, in order to prevent~$\mathfrak{A}_{\varphi, \beta}$ from simulating~$\mathfrak{G}_r$ ad infinitum, the states copied from $\mathfrak{G}_r$ are assigned an odd color, which forces the jump to $\mathfrak{A}_{\varphi', \beta}$ to be executed eventually. Formally, we define $\mathfrak{A}_{\varphi,\beta} = (Q, \pow{P}, q_\init, \delta, \Omega)$ where \begin{itemize} \item $Q$ is the disjoint union of the sets of states of the automata~$\mathfrak{G}_r$, $\mathfrak{A}_{\theta_j, \beta}$ for $j \in \set{1,\ldots,n}$, and $\mathfrak{A}_{\varphi', \beta}$, \item $q_\init$ is the initial state of $\mathfrak{G}_r$, \item $\Omega$ coincides with the colorings of the automata~$\mathfrak{A}_{\theta_j,\beta}$ and $\mathfrak{A}_{\varphi',\beta}$ on their states and assigns color~$1$ to every state of $\mathfrak{G}_r$, \end{itemize} and where $\delta$ is defined as follows: if $q$ is a state of $\mathfrak{G}_r$, then \[\delta(q, A) = \begin{cases} \bigvee_{q' \in Q^r}\bigvee_{\pi \in \Pi(q,q')} \bigvee_{p \in \delta^r (q', A)} (p \wedge \bigwedge_{\theta_j \in t(\pi)} \delta^j(q_\init^j, A))& \\ \hspace{4.cm}\vee &\\ \bigvee_{q' \in F^r}\bigvee_{\pi \in \Pi(q,q')} (\delta'(q_\init', A) \wedge \bigwedge_{\theta_j \in t(\pi)} \delta^j(q_\init^j, A)) & \\ \end{cases}\] where $q_\init^j$ and $q_\init'$ are the initial states of $\mathfrak{A}_{\theta_j,\beta}$ and $\mathfrak{A}_{\varphi',\beta}$, respectively, where $Q^r$ ($F^r$) is the set of (final) states of $\mathfrak{G}_r$, where $\delta^r$, $\delta'$, and $\delta^j$ are the transition functions of $\mathfrak{G}_r$, $\mathfrak{A}_{\varphi',\beta}$, and $\mathfrak{A}_{\theta_j,\beta}$, respectively, and where the sets $\Pi(q,q')$ of $\epsilon$-paths are induced by $\mathfrak{G}_r$. Furthermore, for states~$q$ of $\mathfrak{A}_{\varphi',\beta}$, we define $\delta(q, A) = \delta'(q, A) $ and for states~$q$ of $\mathfrak{A}_{\theta_j,\beta}$ we define $\delta(q, A) = \delta^j(q, A)$. The resulting automaton accepts $w$ if and only if $w$ has at least one $r$-match of degree~$\beta$ that is $\varphi'$-satisfying of degree~$\beta$ (cf.~\cite{FaymonvilleZimmermann17}). Finally, we consider the box operator, which requires the most involved construction due to the case distinction defining the $b_i'$ and the subsequent maximization to obtain the~$b_i$. First, recall that the semantics of the box operator is not dual to the semantics of the diamond operator. Nevertheless, the dual construction of the one for the diamond operator is useful as a building block.
We first present this construction before tackling the construction for the box operator. In the dual construction, one interprets $\mathfrak{G}_r$ as a universal automaton whose transitions are ignored if the test on the source of the transition fails. Furthermore, each visit to a final state spawns a copy of the automaton~$\mathfrak{A}_{\varphi',\beta}$, as every $r$-match has to be $\varphi'$-satisfying. Thus, the states of $\mathfrak{G}_r$ are now accepting, as all $r$-matches have to be considered, and the automata for the tests are dualized in order to check for the failure of the test. Formally, this approach yields the alternating parity automaton~$(Q, \pow{P}, q_\init, \delta, \Omega)$ where $Q$ and $q_\init$ are as above, where \[\delta(q, A) = \begin{cases} \bigwedge_{q' \in Q^r}\bigwedge_{\pi \in \Pi(q,q')} \bigwedge_{p \in \delta^r (q', A)} (p \vee \bigvee_{\theta_j \in t(\pi)} \delta^j(q_\init^j, A))& \\ \hspace{4.cm}\wedge & \\ \bigwedge_{q' \in F^r}\bigwedge_{\pi \in \Pi(q,q')} (\delta'(q_\init', A) \vee \bigvee_{\theta_j \in t(\pi)} \delta^j(q_\init^j, A)) & \\ \end{cases}\] for states~$q$ of $\mathfrak{G}_r$, where $q_\init^j$ and $q_\init'$ are the initial states of $\mathfrak{A}_{\theta_j,\beta}$ and $\mathfrak{A}_{\varphi',\beta}$, respectively, where $\delta(q, A) = \delta'(q, A) $ for states~$q$ of $\mathfrak{A}_{\varphi',\beta}$, and where $\delta(q, A) = \overline{\delta^j}(q, A)$ for states~$q$ of $\mathfrak{A}_{\theta_j,\beta}$. Here, we use the fact that the final states of $\mathfrak{G}_r$ have no outgoing transitions, which implies that no match is missed by contracting an $\epsilon$-path. Finally, states from $\mathfrak{G}_r$ have color~$0$, states from $\mathfrak{A}_{\varphi',\beta}$ keep their color, and the colors from the automata~$\mathfrak{A}_{\theta_j,\beta}$ are incremented by one. Recall that dualizing the transition relation and incrementing the colors of the automata~$\mathfrak{A}_{\theta_j,\beta}$ amounts to complementation. This allows runs of $\mathfrak{G}_r$ to terminate if a test does not hold true. The resulting automaton accepts a trace if and only if every $r$-match of degree~$\beta$ is $\varphi'$-satisfying of degree~$\beta$ (cf.~\cite{FaymonvilleZimmermann17}). Now, we fix~$\varphi = \bboxdot{r}\varphi'$. Recall that we have $\rldleval(w, \varphi) = b_1 b_2 b_3 b_4$ with $b_i = \max\set{b_1', \ldots, b_i'}$ for some bits~$b_i'$. The maximization is easily implemented using the Boolean closure properties of alternating automata provided that we have automata checking that some bit~$b_i'$ is equal to one. Two cases are trivial: Indeed, we have $b_1' = 1$ if and only if every $r$-match of degree~$1111$ is $\varphi'$-satisfying of degree~$1111$. This property is checked by the dual automaton constructed above. Furthermore, $b_4' = 1$ if and only if $ \rldleval(w, \ddiamonddot{r}\varphi') \succeq 0001$ or if there is no $r$-match of degree~$0001$. The former language is recognized by $\mathfrak{A}_{\ddiamonddot{r}\varphi',0001}$, the latter one by an automaton we construct below. We then combine these two automata to obtain $\mathfrak{A}_{\varphi,0001}$. Hence, it remains to consider $b_2'$ and $b_3'$, which are both defined by a case distinction over the number of $r$-matches of the trace. These case distinctions are implemented using alternation.
To this end, we first show how to test for the three cases, i.e., we argue that the following languages are recognizable by alternating parity automata, where $i \in \set{1,2,3,4}$: \begin{enumerate} \item $L_i^\emptyset(r) = \set{w \in (\pow{P})^\omega \mid \size{\mathcal{R}^{\textsc{rd}}_i(w,r)} = 0}$. \item $L_i^f(r) = \set{w \in (\pow{P})^\omega \mid 0 < \size{\mathcal{R}^{\textsc{rd}}_i(w,r) } < \infty}$. \item $L_i^\infty(r) = \set{w \in (\pow{P})^\omega \mid \size{\mathcal{R}^{\textsc{rd}}_i(w,r)} = \infty}$. \end{enumerate} Let $\theta_1, \ldots, \theta_n$ be the tests in $r$. By induction hypothesis, we have alternating parity automata~$\mathfrak{A}_{\theta_j, \beta}$ for every $\theta_j$ and every truth value~$\beta$. The first case is already solved, as we have $\mathcal{R}^{\textsc{rd}}_i(w,r) = \emptyset$ if and only if $\rldleval_i(w,\ddiamonddot{r}\mathtt{tt} ) = 0$, which is in turn equivalent to $\rldleval(w, \ddiamonddot{r}\mathtt{tt}) \prec \itotruthvalue{i} $, i.e., the complement of the automaton $\mathfrak{A}_{\ddiamonddot{r}\mathtt{tt}, \itotruthvalue{i} }$ recognizes $L_i^\emptyset(r)$. Next, we construct an automaton for the language~$L_i^\infty(r)$. Then, the automaton for~$L_i^f(r)$ is obtained as the intersection of the complement automata for the other two languages (for the given $r$ and $i$). Thus, we need to construct an automaton that accepts~$w$ if there are infinitely many $r$-matches of degree~$\itotruthvalue{i}$ or greater. The construction of an automaton for $L_i^\infty(r)$ is more involved than the previous one, as the automaton~$\mathfrak{G}_r$ checking for matches with $r$ is non-deterministic. Nevertheless, we show that standard arguments about such automata still yield the desired result. We say that an infinite sequence~$q_0 q_1 q_2 \cdots$ of states is an (infinity) witness for $w$ if $q_0$ is the initial state of~$\mathfrak{G}_r$ and if for every $j$, there is an accepting run of $\mathfrak{G}_r$ on some prefix of $w$ such that the tests on the associated $\epsilon$-paths hold w.r.t.~$\rldleval_i$, and such that $q_0 \cdots q_j$ is a prefix of this run. An application of König's Lemma shows that $\mathcal{R}^{\textsc{rd}}_i(w,r)$ is infinite if and only if $\mathfrak{G}_r$ has a witness for $w$. Thus, the automaton recognizing $L_i^\infty(r)$ has to find such a witness for $w$ while processing $w$. This is implemented as follows: we start with $\mathfrak{G}_r$, eliminate $\epsilon$-transitions non-deterministically as above and spawn a copy of $\mathfrak{A}_{\theta_j, \itotruthvalue{i} }$ when traversing a state with test~$\theta_j$. Furthermore, every time a letter is processed, another copy of $\mathfrak{G}_r$ is spawned (with a disjoint set of states). The coloring of the original copy is chosen such that the automaton has to process infinitely many letters and such that the disjoint copies have to reach a final state of $\mathfrak{G}_r$. Hence, the resulting automaton accepts $w$ if and only if there is a witness for~$w$. We leave the details to the industrious reader and just note that we have now constructed all automata we need to capture the cases in the case distinction. Extending the construction just presented also allows us to construct an automaton that accepts a trace~$w$ if and only if it has infinitely many $\varphi'$-satisfying $r$-matches (both of degree~$\itotruthvalue{i}$). 
To this end, the copies spawned to check for matches do not terminate in an accepting sink, but instead spawn a copy of $\mathfrak{A}_{\varphi', \itotruthvalue{i} }$ to check for satisfaction of $\varphi'$. Similarly, we can construct an automaton that accepts a trace~$w$ if and only if it has infinitely many $r$-matches of degree~$\itotruthvalue{i}$ that are \emph{not} $\varphi'$-satisfying of degree~$\itotruthvalue{i}$. Again, we leave the details to the reader. These automata also allow us to construct an automaton that accepts a trace~$w$ if and only if $\mathcal{R}^{\textsc{rd}}_i(w,r)$ is infinite and almost all $r$-matches in $\mathcal{R}^{\textsc{rd}}_i(w,r)$ are $\varphi'$-satisfying (both of degree~$\itotruthvalue{i}$). This automaton is obtained by taking the automaton checking for infinitely many $\varphi'$-satisfying $r$-matches (both of degree~$\itotruthvalue{i}$) and intersecting it with the complement of the one checking for infinitely many $r$-matches that are not $\varphi'$-satisfying of degree~$\itotruthvalue{i}$. Combining the automata checking the cases of the case distinction with the automata checking for $\varphi'$-satisfaction yields the desired automata for $b_2'$ and $b_3'$: A case distinction is easily implemented using the Boolean closure properties and all necessary auxiliary automata have been constructed above. It remains to argue that $\mathfrak{A}_{\varphi, \beta}$ is of linear size in $\size{\varphi}$. To this end, we say that an alternating parity automaton~$(Q', \Sigma, q_\init', \delta', \Omega')$ is a subautomaton of $(Q, \Sigma, q_\init, \delta, \Omega)$ if $Q' \subseteq Q$, $\delta'(q, A) = \delta(q,A)$ for every $q \in Q'$ and every $A \in \Sigma$, and $\Omega'(q) = \Omega(q) $ for every $q\in Q'$. Inspecting the construction above shows that an automaton~$\mathfrak{A}_{\varphi, \beta}$ is built from automata for immediate subformulas (w.r.t.\ all truth values if necessary), a test automaton (if applicable), and a constant number of fresh states. Furthermore, if formulas share subformulas, then the construction can also share these subautomata. Hence, we obtain the desired linear upper bound on the size of $\mathfrak{A}_{\varphi, \beta}$.\qed \end{proof} It is not immediately clear that equivalent Büchi automata as in Theorem~\ref{theorem-translation-oldcor} can be constructed efficiently, as the definition of the alternating automaton involves $\epsilon$-paths of arbitrary length. However, these can be restricted to simple paths, which are of bounded length. Then, as is done for the similar construction for $\text{PLDL}$~\cite{FaymonvilleZimmermann17}, one can show that the Büchi automata can be constructed on-the-fly in polynomial space. This is sufficient for our applications later on. Furthermore, as non-deterministic Büchi automata can be translated into deterministic parity automata (see, e.g., \cite{GraedelThomasWilke02} for definitions), we obtain the following corollary of Theorem~\ref{theorem-translation-oldcor}. \begin{corollary} \label{corollary:rldl2detparity} Let $\varphi$ be an $\text{rLDL}$ formula, $n = \size{\varphi}$, and $\beta \in \bool_4$. There is a deterministic parity automaton~$\mathfrak{P}_{\varphi, \beta}$ with $2^{2^{\bigo(n \log n)}}$ states and with $2^{\bigo(n \log n)}$ colors recognizing the language~$\set{w \in (\pow{P})^\omega \mid \rldleval(w,\varphi) \succeq \beta}$. \end{corollary} \subsubsection*{Our Contributions} We develop logics that address more than one shortcoming of $\text{LTL}$ at a time.
See Figure~\ref{fig:logics} for an overview. \begin{wrapfigure}{R}{.45\textwidth} \centering \vspace{-.4cm} \begin{tikzpicture}[thick] \draw[rounded corners,fill=black!10,draw=white] (-3.3, .5) |- (.9,2.25) |- (3.1,3.5) |- (0,.5) -- cycle; \node[align=center] (ltl) at (0,0.8) {\text{LTL}}; \begin{scope}[shift={(0,1.75)}] \node[align=center] (rltl) at (-2.25,0) {\text{rLTL}(\ensuremath{\Boxdot, \Diamonddot})}; \node[align=center] (promptltl) at (0,0) {\text{Prompt-LTL}}; \node[align=center] (ldl) at (2,0) {\text{LDL}}; \end{scope} \begin{scope}[shift={(0,3)}] \node[align=center] (rpromptltl) at (-2.1,0) {\text{rPrompt-LTL}}; \node[align=center] (rldl) at (0,0) {\text{rLDL}}; \node[align=center] (promptldl) at (2,0) {\text{Prompt-LDL}}; \end{scope} \node[align=center] (rpromptldl) at (0,4.2) {\text{rPrompt-LDL}}; \path[-stealth,] (ltl) edge[dashed] (rltl) edge[dashed] (promptltl) edge[dashed] (ldl) (rltl) edge[out=90,in=-90] (rpromptltl) edge[out=30,in=-90] (rldl) (promptltl) edge[out=150,in=-90] (rpromptltl) edge[dashed,out=30,in=-90] (promptldl) (ldl) edge[out=135,in=-90] (rldl) edge[dashed] (promptldl) (rpromptltl) edge[out=30,in=-90] (rpromptldl) (rldl) edge (rpromptldl) (promptldl) edge[out=150,in=-90] (rpromptldl); \end{tikzpicture} \caption{The logics studied in this work. Existing logics are marked gray; influences are indicated by dashed arrows.} \label{fig:logics} \end{wrapfigure} In Section~\ref{sec-rprompt}, we ``robustify'' $\text{Prompt-LTL}$. More precisely, we introduce a novel logic, named $\text{rPrompt-LTL}$, by extending the five-valued semantics from robust $\text{LTL}$ to $\text{Prompt-LTL}$. Our main result here shows that $\text{rPrompt-LTL}$ retains the exponential compilation property. Then, in Section~\ref{sec-rldl}, we ``robustify'' $\text{LDL}$: we introduce a novel logic, named $\text{rLDL}$, by lifting the five-valued semantics of robust $\text{LTL}$ to $\text{LDL}$. Our main result shows that $\text{rLDL}$ also retains the exponential compilation property. Hence, one can indeed combine any two of the three extensions of $\text{LTL}$ while still preserving the desirable algorithmic properties of $\text{LTL}$. In particular, let us stress again that all the highly sophisticated algorithmic backends developed for $\text{LTL}$ are applicable to these novel logics as well, e.g., we show that the verification problem and the synthesis problem for each of these logics are solvable without an (asymptotic) increase in complexity. Tabuada and Neider gave two proofs showing that robust $\text{LTL}$ has the exponential compilation property. The first is based on a translation of robust $\text{LTL}$ into equivalent Büchi automata of exponential size, while the second is based on a polynomial translation of robust $\text{LTL}$ into (standard) $\text{LTL}$, which is known to be translatable into equivalent Büchi automata of exponential size. We refer to those two approaches as the \emph{direct} approach and the \emph{reduction-based} approach. To obtain our results mentioned above, we need to generalize both. To prove the exponential compilation property for $\text{rLDL}$, we generalize the direct approach by exhibiting a direct translation of $\text{rLDL}$ into Büchi automata via alternating automata.
In contrast, to prove the exponential compilation property for $\text{rPrompt-LTL}$, we present a generalization of the reduction-based approach translating $\text{rPrompt-LTL}$ into equivalent $\text{Prompt-LTL}$ formulas of linear size, which have the exponential compilation property. Finally, in Section~\ref{sec-towardsrpldl}, we discuss the combination of all three aspects. Recall that we present a direct translation to automata for $\text{rLDL}$ and a reduction-based one for $\text{rPrompt-LTL}$. For reasons we discuss in Section~\ref{sec-towardsrpldl}, it is challenging to develop a reduction from $\text{rLDL}$ to $\text{LDL}$ or a direct translation for $\text{rPrompt-LTL}$ that witnesses the exponential compilation property. Hence, both approaches seem inadequate to deal with the combination of all three extensions. Ultimately, we leave the question of whether the logic combining all three aspects has the exponential compilation property for future work. Proofs omitted due to space restrictions can be found in the appendix. \section{Introduction} \label{sec-intro} \input{content/intro} \section{Preliminaries} \label{sec-prel} \input{content/prel} \subsection{Robust Linear Temporal Logic} \label{subsec-briefrltl} \input{content/prel-rltl} \subsection{Linear Dynamic Logic} \label{subsec-briefldl} \input{content/prel-ldl} \subsection{Prompt Linear Temporal Logic} \label{subsec-briefprompt} \input{content/prel-prompt} \section{Robust and Prompt Linear Temporal Logic} \label{sec-rprompt} \input{content/rprompt} \subsection{Model Checking} \label{subsec-rpromptresults-mc} \input{content/rprompt-mc} \subsection{Synthesis} \label{subsec-rpromptresults-synt} \input{content/rprompt-synt} \section{Robust Linear Dynamic Logic} \label{sec-rldl} \input{content/rldl} \subsection{Expressiveness} \label{subsec-rldl-expressiveness} \input{content/rldl-expressiveness} \subsection{Model Checking and Synthesis} \label{subsec-rldl-modelchecking} \input{content/rldl-mcsynt} \section{Towards Robust and Prompt Linear Dynamic Logic} \label{sec-towardsrpldl} \input{content/towardsrpromptldl} \section{Conclusion} \label{sec-conc} \input{content/conclusion} \bibliographystyle{splncs03}
\section{Introduction} Let $H^2(\D^2)$ be the Hardy space over the bidisk $\D^2$. If we denote the variables by $z_1$ and $z_2$, then $H^2(\D^2)$ can be identified with $H^2(z_1)\otimes H^2(z_2)$, where $H^2(z)$ is the Hardy space over the unit disk $\D$ with variable denoted by $z$. Let $M_{z_1}$ and $M_{z_2}$ be the multiplication operators with symbols $z_1$ and $z_2$, respectively. A closed subspace $\HM$ of $H^2(\D^2)$ is called a submodule if $\HM$ is invariant under $M_{z_1}$ and $M_{z_2}$. It is easy to see that a submodule is indeed a module over the polynomial ring $\C[z_1,z_2]$ with module action defined by multiplication of functions. We denote the lattice of submodules by $Lat(H^2(\D^2))$. Beurling's theorem fully characterizes the submodules of the classical Hardy space $H^2(\D)$: any submodule of $H^2(\D)$ is of the form $\theta H^2(\D)$ for some inner function $\theta$. If we denote by $R_z$ and $S_z$ the restriction of $M_z$ to $\HM$ and the compression of $M_z$ to $\HM^\perp = H^2(\D) \ominus \HM$, respectively, then it is not hard to check that $R_z$ and $S_z$ are Fredholm operators, and their indices are $-1$ and $0$, respectively. However, submodules of $H^2(\D^2)$ are complicated (\cite{Ru69}) and admit no similar characterization. The research on $H^2(\D^2)$ is ongoing. One approach to this problem is to study some relatively simple submodules, in the hope that this study will generate concepts and techniques for the general picture. In analogy with the operators $R_z$ and $S_z$ on $H^2(\D)$, we are interested in the operator pairs $(R_{z_1},R_{z_2})$ and $(S_{z_1},S_{z_2})$ on $H^2(\D^2)$. It is clear that $(R_{z_1},R_{z_2})$ is a pair of commuting isometries, and $(S_{z_1},S_{z_2})$ is a pair of commuting contractions. These pairs contain much information about $\HM$ and they are the subjects of many recent studies. Suppose $\HM$ is a submodule of $H^2(\D^2)$, i.e. $\HM \in Lat(H^2(\D^2))$. Let $$C = I - R_{z_1}R_{z_1}^* - R_{z_2}R_{z_2}^* + R_{z_1}R_{z_2}R_{z_1}^*R_{z_2}^*.$$ $C$ is called the core operator or defect operator for $\HM$ (\cite{GY04}). $\HM$ is called a Hilbert-Schmidt submodule if the core operator $C$ is Hilbert-Schmidt. Hilbert-Schmidt submodules have many good properties and have been studied extensively in the literature, see e.g. \cite{III17, Ya99, Ya01, Ya04, Ya05} and the references therein. In particular, it was shown in \cite{Ya05} that $C^2$ is unitarily equivalent to $$\left( \begin{matrix} [R_{z_1}^*,R_{z_1}][R_{z_2}^*,R_{z_2}][R_{z_1}^*,R_{z_1}]&0\\ 0&[R_{z_1}^*,R_{z_2}][R_{z_2}^*,R_{z_1}] \end{matrix} \right). $$ This implies that $C$ is Hilbert-Schmidt (or compact) if and only if $[R_{z_1}^*,R_{z_1}][R_{z_2}^*,R_{z_2}]$ and $[R_{z_1}^*,R_{z_2}]$ are both Hilbert-Schmidt (or compact). It is known that if $C$ is Hilbert-Schmidt, then the pairs $(R_{z_1},R_{z_2})$ and $(S_{z_1},S_{z_2})$ are Fredholm. Almost all known examples of submodules are Hilbert-Schmidt. The only known examples of non-Hilbert-Schmidt submodules are the submodules $\HM$ with $\dim \HM\ominus (z_1\HM + z_2\HM) = \infty$, in which case $[R_{z_1}^*,R_{z_1}][R_{z_2}^*,R_{z_2}]$ is not compact (\cite{Ya01}). Further, if $\HM$ is Hilbert-Schmidt then it can be shown that $z_1\HM + z_2\HM$ is closed. It is not clear whether this is true for all submodules $\HM$. For $\lambda \in \D^2$, let \[\ind_\lambda \HM = \dim \HM \ominus ((z_1-\lambda_1)\HM + (z_2-\lambda_2)\HM).\] The integer $\ind_\lambda \HM$ is called the index of $\HM$ at $\lambda$.
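For orientation, consider the simplest case $\HM = H^2(\D^2)$ (an elementary computation, included here only as an illustration): for $\lambda \in \D^2$, every $f \in H^2(\D^2)$ decomposes as \[ f(z_1,z_2) = f(\lambda_1,\lambda_2) + (z_1-\lambda_1)\,\frac{f(z_1,\lambda_2) - f(\lambda_1,\lambda_2)}{z_1 - \lambda_1} + (z_2-\lambda_2)\,\frac{f(z_1,z_2) - f(z_1,\lambda_2)}{z_2 - \lambda_2}, \] where both difference quotients again belong to $H^2(\D^2)$. Hence $(z_1-\lambda_1)H^2(\D^2) + (z_2-\lambda_2)H^2(\D^2)$ is exactly the space of functions vanishing at $\lambda$, and $\ind_\lambda H^2(\D^2) = 1$ for every $\lambda \in \D^2$.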
The index captures important information about $\HM$ and was studied in \cite{LR}. It is not hard to see that $\ind_\lambda \HM$ is less than or equal to the rank of $\HM$, so if there exists a sequence $\lambda_n \in \D^2$ such that $\ind_{\lambda_n}\HM$ tends to infinity, then $\HM$ is not finitely generated. It is conjectured in \cite{Ya99} that every finitely generated submodule is Hilbert-Schmidt. This paper confirms the conjecture for submodules containing the function $z_1 - \varphi(z_2)$, where $\varphi$ is a finite Blaschke product. In 2008, the second and the third author studied the submodule $\HM$ generated by $z_1 - \varphi(z_2)$, where $\varphi$ is an inner function (\cite{IY08}), and showed that $\HM = [z_1 - \varphi(z_2)]$ is Hilbert-Schmidt. Moreover, the quotient module $H^2(\D^2) \ominus [z_1 - \varphi(z_2)]$ can be identified with $(H^2(z_2) \ominus \varphi H^2(z_2)) \otimes L^2_a(\D)$ and $S_{z_1}$ is unitarily equivalent to $I \otimes M_z$ on $(H^2(z_2) \ominus \varphi H^2(z_2)) \otimes L^2_a(\D)$, where $L^2_a(\D)$ is the Bergman space. When $\varphi(z_2) = z_2$, this recovers the well-known fact that $S_{z_i} (i = 1, 2)$ on $H^2(\D^2) \ominus [z_1 - z_2]$ is unitarily equivalent to the Bergman shift. In this paper, we look at submodules $\HM$ which contain $z_1 - \varphi(z_2)$, where $\varphi$ is a finite Blaschke product. We obtain a necessary and sufficient condition for such $\HM$ to be Hilbert-Schmidt. As an application, submodules which contain $z_1 - z_2$ are fully characterized. The main result of the paper is the following theorem. \begin{theorem}\label{hlsmnmcdv} Let $\varphi$ be a finite Blaschke product and $\HM \in Lat(H^2(\D^2))$ contain $z_1 - \varphi(z_2)$. Then $\HM$ is a Hilbert-Schmidt submodule if and only if $\HM$ is finitely generated. \end{theorem} In Section 2, we define and study the fringe operator $F_\lambda$, where $\lambda \in \D^2$, and show that $F_\lambda$ is Fredholm if and only if the pair $(R_{z_1}-\lambda_1,R_{z_2}-\lambda_2)$ is Fredholm. This result will be used in the proof of Theorem \ref{hlsmnmcdv2} and Proposition \ref{dmfass}. In Section 3, we prove Theorem \ref{hlsmnmcdv}. When submodules contain $z_1 - z_2$, we also determine the dimensions of the cohomology vector spaces for the pairs $(R_{z_1}-\lambda_1,R_{z_2}-\lambda_2)$ and $(S_{z_1}-\lambda_1,S_{z_2}-\lambda_2), \lambda \in \D^2$ (see Proposition \ref{dmfass}). \section{Fringe operator} Suppose $\HM$ is a submodule of $H^2(\D^2)$. For $\lambda \in \D^2$, we define the fringe operator $F_\lambda$ on $\HM \ominus (z_1 - \lambda_1) \HM$ by $$F_\lambda f = P_{\lambda_1} M_{z_2 - \lambda_2} f,\quad f \in \HM \ominus (z_1 - \lambda_1) \HM,$$ where $P_{ \lambda_1}$ is the orthogonal projection from $\HM$ to $\HM \ominus (z_1 - \lambda_1) \HM$. The fringe operator was introduced and studied by the third author in \cite{Ya01}, where the fringe operator $F_{(0,0)}$ was mainly investigated. Let $\varphi_{\lambda_i}(z) = \varphi_{\lambda_i} (z_i) = \frac{z_i - \lambda_i}{1 - \overline{\lambda_i}z_i}, i =1, 2$, and define $\widetilde{F_\lambda} f = P_{\lambda_1} M_{\varphi_{\lambda_2}} f$ for $f \in \ran P_{\lambda_1}$. Then one verifies that $\ran \widetilde{F_\lambda} = \ran F_\lambda$ and $\ker \widetilde{F_\lambda} = \ker F_\lambda$. Let $R_{\varphi_{\lambda_i}} = M_{\varphi_{\lambda_i}}|\HM$ and $P_\HE$ stand for the orthogonal projection from $H^2(\D^2)$ to the closed subspace $\HE$. The following lemma and proposition generalize corresponding facts in \cite{Ya01}.
\begin{lemma} $\ran F_\lambda = [(z_1 - \lambda_1)\HM + (z_2 - \lambda_2)\HM]\ominus (z_1 - \lambda_1) \HM$. \end{lemma} \begin{proof} If $g \in \HM \ominus (z_1 - \lambda_1) \HM$, then $$F_\lambda g = (z_2 - \lambda_2)g - (P_\HM - P_{\lambda_1}) (z_2 - \lambda_2)g \in (z_1 - \lambda_1)\HM + (z_2 - \lambda_2)\HM.$$ Conversely, let $h = (z_1 - \lambda_1) f + (z_2 - \lambda_2) g \in [(z_1 - \lambda_1)\HM + (z_2 - \lambda_2)\HM]\ominus (z_1 - \lambda_1) \HM$. If $g \in (z_1 - \lambda_1)\HM$, then it is clear that $h = 0$ and $F_\lambda 0 = 0$. So suppose $g \in \HM \ominus (z_1 - \lambda_1) \HM$. Note that \begin{align*} h &= (z_1 - \lambda_1) f + (z_2 - \lambda_2) g\\ & = (z_1 - \lambda_1) f + P_{\lambda_1} (z_2 - \lambda_2) g + (P_\HM - P_{\lambda_1}) (z_2 - \lambda_2) g, \end{align*} and $(z_1 - \lambda_1) f + (P_\HM - P_{\lambda_1}) (z_2 - \lambda_2) g \in (z_1 - \lambda_1) \HM$. This implies $$(z_1 - \lambda_1) f + (P_\HM - P_{\lambda_1}) (z_2 - \lambda_2) g = 0.$$ It then follows that $$F_\lambda g = (z_2 - \lambda_2)g - (P_\HM - P_{\lambda_1}) (z_2 - \lambda_2)g = (z_2 - \lambda_2)g + (z_1 - \lambda_1) f = h.$$ The proof is complete. \end{proof} It follows from the above lemma that $\ker F_\lambda^* = \HM \ominus [(z_1 - \lambda_1)\HM + (z_2 - \lambda_2)\HM]$ and $\dim \ker F_\lambda^* = \ind_\lambda \HM$. The following two propositions will be used in the proof of Proposition \ref{indeve}. Let $P_{ \lambda_2}$ be the orthogonal projection from $\HM$ to $\HM \ominus (z_2 - \lambda_2) \HM$. For convenience, we let $p = P_\HM$. \begin{prop}\label{rlspoaci} For $f \in \HM \ominus (z_1 - \lambda_1) \HM$, we have\\ (i) $f - \widetilde{F_\lambda}^*\widetilde{F_\lambda} f= [R_{\varphi_{\lambda_2}}^*, R_{\varphi_{\lambda_1}}] [R_{\varphi_{\lambda_1}}^*,R_{\varphi_{\lambda_2}}]f$;\\ (ii) $f - \widetilde{F_\lambda}\widetilde{F_\lambda}^* f= [R_{\varphi_{\lambda_1}}^*, R_{\varphi_{\lambda_1}}] [R_{\varphi_{\lambda_2}}^*,R_{\varphi_{\lambda_2}}]f$. \end{prop} \begin{proof} (i) If $f \in (z_1 - \lambda_1)\HM$, then $[R_{\varphi_{\lambda_1}}^*,R_{\varphi_{\lambda_2}}] f = 0$. Thus $[R_{\varphi_{\lambda_1}}^*,R_{\varphi_{\lambda_2}}] = [R_{\varphi_{\lambda_1}}^*,R_{\varphi_{\lambda_2}}] P_{\lambda_1}$. Since $R_{\varphi_{\lambda_1}}^* P_{\lambda_1} = 0$, we have \begin{align*} &[R_{\varphi_{\lambda_1}}^*,R_{\varphi_{\lambda_2}}] = [R_{\varphi_{\lambda_1}}^*,R_{\varphi_{\lambda_2}}] P_{\lambda_1}\\ & = R_{\varphi_{\lambda_1}}^*R_{\varphi_{\lambda_2}} P_{\lambda_1}\\ & = p M_{\varphi_{\lambda_1}}^* M_{\varphi_{\lambda_2}}P_{\lambda_1}. \end{align*} Hence \begin{align}\label{dcpipt} [R_{\varphi_{\lambda_2}}^*, R_{\varphi_{\lambda_1}}] [R_{\varphi_{\lambda_1}}^*,R_{\varphi_{\lambda_2}}] = P_{\lambda_1} M_{\varphi_{\lambda_2}}^* M_{\varphi_{\lambda_1}} p M_{\varphi_{\lambda_1}}^* M_{\varphi_{\lambda_2}}P_{\lambda_1}. 
\end{align} On the other hand, for $f \in \HM \ominus (z_1 - \lambda_1) \HM$, \begin{align}\label{fmftsft} & f - \widetilde{F_\lambda}^*\widetilde{F_\lambda}f = f - P_{\lambda_1} M_{\varphi_{\lambda_2}}^* P_{\lambda_1} M_{\varphi_{\lambda_2}} f \notag\\ & = f - [P_{\lambda_1} f - P_{\lambda_1} M_{\varphi_{\lambda_2}}^* (p -P_{\lambda_1}) M_{\varphi_{\lambda_2}} f] \notag\\ & = P_{\lambda_1} M_{\varphi_{\lambda_2}}^* (p -P_{\lambda_1}) M_{\varphi_{\lambda_2}} f \notag\\ & = P_{\lambda_1} M_{\varphi_{\lambda_2}}^* M_{\varphi_{\lambda_1}} p M_{\varphi_{\lambda_1}}^* M_{\varphi_{\lambda_2}} f, \end{align} where in the last equality we used $(p -P_{\lambda_1}) M_{\varphi_{\lambda_2}} f = M_{\varphi_{\lambda_1}} p M_{\varphi_{\lambda_1}}^* M_{\varphi_{\lambda_2}} f$. Therefore the conclusion follows from (\ref{dcpipt}) and (\ref{fmftsft}). (ii) Note that $P_{\lambda_1} M_{\varphi_{\lambda_2}}^*P_{\lambda_1} = p M_{\varphi_{\lambda_2}}^*P_{\lambda_1}$. Hence for $f \in \HM \ominus (z_1 - \lambda_1) \HM$, \begin{align*} f - \widetilde{F_\lambda}\widetilde{F_\lambda}^* f& = f - P_{\lambda_1} M_{\varphi_{\lambda_2}}P_{\lambda_1} M_{\varphi_{\lambda_2}}^*f\\ & = P_{\lambda_1}f - P_{\lambda_1} M_{\varphi_{\lambda_2}}p M_{\varphi_{\lambda_2}}^*f\\ & = P_{\lambda_1} [p - pM_{\varphi_{\lambda_2}}p M_{\varphi_{\lambda_2}}^*p]P_{\lambda_1} f\\ & = P_{\lambda_1} P_{\lambda_2} f. \end{align*} Since $[R_{\varphi_{\lambda_i}}^*, R_{\varphi_{\lambda_i}}] = P_{\lambda_i}$, $i = 1, 2$, are the projections onto $\HM \ominus (z_i - \lambda_i)\HM$, the assertion follows from the above equation. \end{proof} If $[R_{z_1}^*, R_{z_2}]$ is compact, then $[R_{\varphi_{\lambda_1}}^*,R_{\varphi_{\lambda_2}}]$ is compact for every $\lambda \in \D^2$. Hence in this case the above proposition implies that for every $\lambda \in \D^2$, the fringe operator $F_\lambda = F_{(\lambda_1,0)} - \lambda_2$ is left semi-Fredholm. Similarly, for $\lambda \in \D^2$ we let $G_\lambda$ and $\widetilde{G_\lambda}$ be defined by $G_\lambda f = P_{\lambda_2} M_{z_1 - \lambda_1} f$ and $\widetilde{G_\lambda} f = P_{\lambda_2} M_{\varphi_{\lambda_1}} f$ for $f \in \ran P_{\lambda_2}$. Then $G_\lambda$ and $\widetilde{G_\lambda}$ have the same range and kernel. The following result is thus parallel to Proposition \ref{rlspoaci}. \begin{prop}\label{srsfgl} (i) $\ran G_\lambda = [(z_1 - \lambda_1)\HM + (z_2 - \lambda_2)\HM]\ominus (z_2 - \lambda_2) \HM$;\\ (ii) for $f \in \HM \ominus (z_2 - \lambda_2) \HM$, $f - \widetilde{G_\lambda}^*\widetilde{G_\lambda} f= [R_{\varphi_{\lambda_1}}^*,R_{\varphi_{\lambda_2}}] [R_{\varphi_{\lambda_2}}^*, R_{\varphi_{\lambda_1}}] f$;\\ (iii) for $f \in \HM \ominus (z_2 - \lambda_2) \HM$, $f - \widetilde{G_\lambda}\widetilde{G_\lambda}^* f=[R_{\varphi_{\lambda_2}}^*,R_{\varphi_{\lambda_2}}] [R_{\varphi_{\lambda_1}}^*, R_{\varphi_{\lambda_1}}] f$. \end{prop} Now we discuss the Koszul complex of the pair $R = (R_{z_1}, R_{z_2})$. The Koszul complex of $R$ is defined by $$K(R): 0 \xrightarrow{\partial_R^{-1}} \HM \xrightarrow{\partial_R^{0}} \HM \oplus \HM \xrightarrow{\partial_R^{1}} \HM \xrightarrow{\partial_R^{2}} 0,$$ where $\partial_R^{0} f = (R_{z_1} f, R_{z_2}f)$ and $\partial_R^{1} (f, g) = R_{z_1}g - R_{z_2} f$. The pair $R$ is called a Fredholm pair if all the maps have closed range and the cohomology vector space $\ker \partial_R^{i}/\ran \partial_R^{i-1}$ is finite dimensional for $i = 0, 1$ and $2$ (see \cite{Cu81, GRS05}).
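Before defining the index, it may help to compute the Koszul complex in the simplest example $\HM = H^2(\D^2)$, $R = (M_{z_1}, M_{z_2})$ and $\lambda = (0,0)$ (this computation is elementary and included only for orientation). Clearly $\ker \partial_R^0 = 0$, as $M_{z_1}$ is injective. If $(f,g) \in \ker \partial_R^1$, i.e. $z_1 g = z_2 f$, then setting $z_1 = 0$ gives $f(0, z_2) = 0$, so $f = z_1 h$ for some $h \in H^2(\D^2)$, and similarly $g = z_2 h$ with the same $h$; thus $\ker \partial_R^1 = \ran \partial_R^0$. Finally, \[ H^2(\D^2) \ominus (z_1 H^2(\D^2) + z_2 H^2(\D^2)) = \C, \] the constants, so all three cohomology spaces are finite dimensional and $R$ is a Fredholm pair.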
If $R$ is a Fredholm pair, then the index of $R$ is defined by $$\ind R = \sum_{i=0}^2 (-1)^i \dim (\ker \partial_R^{i}/\ran \partial_R^{i-1}) = \ind_{(0,0)}\HM - \dim (\ker \partial_R^{1}/\ran \partial_R^{0}).$$ The essential Taylor spectrum of $R$ is defined to be $$\sigma_e(R) = \{\lambda \in \C^2: R - \lambda ~~\text{is not Fredholm}\}.$$ For $\lambda \in \D^2$, we have $$\ker \partial_{R - \lambda}^1 = \{(f,g): (z_1 - \lambda_1)g = (z_2 - \lambda_2)f, f, g \in \HM\},$$ $$\ran \partial_{R - \lambda}^0 = \{((z_1 - \lambda_1)f, (z_2 - \lambda_2)f): f \in \HM\}.$$ Let $\ker \widetilde{\partial}^1 = \{(f,g): \varphi_{\lambda_1}g = \varphi_{\lambda_2}f, f, g \in \HM\}$ and $\ran \widetilde{\partial}^0 = \{(\varphi_{\lambda_1}f, \varphi_{\lambda_2}f): f \in \HM\}$. It is easy to see that the map $U: \ker \partial_{R - \lambda}^1 \rightarrow \ker \widetilde{\partial}^1$ defined by $$U (f,g) = ((1 - \overline{\lambda_2}z_2)f, (1 - \overline{\lambda_1}z_1)g)$$ is one-to-one and onto. Observe that $$U(\ker \partial_{R - \lambda}^1 / \ran \partial_{R - \lambda}^0) = \ker \widetilde{\partial}^1 / \ran \widetilde{\partial}^0,$$ thus $\ker \partial_{R - \lambda}^1 \ominus \ran \partial_{R - \lambda}^0$ is isomorphic to $\ker \widetilde{\partial}^1 \ominus \ran \widetilde{\partial}^0$. We show in the following that $\ker F_\lambda$ is isomorphic to $\ker \partial_{R - \lambda}^1 \ominus \ran \partial_{R - \lambda}^0$. \begin{lemma}\label{rlbfadots} Let $\ker \widetilde{\partial}^1$ and $\ran \widetilde{\partial}^0$ be as above. Then \begin{align*} \ker \widetilde{\partial}^1 \ominus \ran \widetilde{\partial}^0 &= \{(f,g): g = M_{\varphi_{\lambda_1}}^* M_{\varphi_{\lambda_2}}f, f \in \ran P_{\lambda_1}, M_{\varphi_{\lambda_2}} f \in \varphi_{\lambda_1} \HM\}\\ & = \{(f,g): f = M_{\varphi_{\lambda_2}}^* M_{\varphi_{\lambda_1}}g, g \in \ran P_{\lambda_2}, M_{\varphi_{\lambda_1}} g \in \varphi_{\lambda_2} \HM\}. \end{align*} \end{lemma} \begin{proof} We prove the first equality; the other follows from a similar argument. We first show that the set $\HN = \{(f,g): g = M_{\varphi_{\lambda_1}}^* M_{\varphi_{\lambda_2}}f, f \in \ran P_{\lambda_1}, M_{\varphi_{\lambda_2}} f \in \varphi_{\lambda_1} \HM\}$ is contained in $\ker \widetilde{\partial}^1 \ominus \ran \widetilde{\partial}^0$. Let $(f,g) \in \HN$. Then $M_{\varphi_{\lambda_2}} f \in \varphi_{\lambda_1} \HM$ and $g = M_{\varphi_{\lambda_1}}^* M_{\varphi_{\lambda_2}}f$. So $\varphi_{\lambda_1} g = \varphi_{\lambda_2}f$, i.e. $(f,g) \in \ker \widetilde{\partial}^1$. Note that for $h \in \HM$ we have $(\varphi_{\lambda_1} h, \varphi_{\lambda_2} h) \in \ran \widetilde{\partial}^0$, and \begin{align}\label{inpbtfo} \langle(f,g), (\varphi_{\lambda_1} h, \varphi_{\lambda_2} h)\rangle& = \langle f, \varphi_{\lambda_1} h\rangle + \langle g, \varphi_{\lambda_2} h\rangle\\ & = 2 \langle f, \varphi_{\lambda_1} h\rangle \notag\\ & = 0. \notag \end{align} It thus follows that $\HN \subseteq \ker \widetilde{\partial}^1 \ominus \ran \widetilde{\partial}^0$. Conversely, if $(f,g) \in \ker \widetilde{\partial}^1 \ominus \ran \widetilde{\partial}^0$, then $\varphi_{\lambda_1} g = \varphi_{\lambda_2}f \in \varphi_{\lambda_1} \HM$. So $g = M_{\varphi_{\lambda_1}}^* M_{\varphi_{\lambda_2}}f$. Using (\ref{inpbtfo}) we conclude that $f \in \ran P_{\lambda_1}$. Thus $\ker \widetilde{\partial}^1 \ominus \ran \widetilde{\partial}^0 \subseteq \HN$, and hence they are the same.
\end{proof} Since $\ker F_\lambda = \ker\widetilde{F_\lambda} = \{f \in \ran P_{\lambda_1}: M_{\varphi_{\lambda_2}} f \in \varphi_{\lambda_1} \HM\}$, the above lemma implies that $\ker F_\lambda$ is isomorphic to $\ker \widetilde{\partial}^1 \ominus \ran \widetilde{\partial}^0$, and hence $\ker F_\lambda$ is isomorphic to $\ker \partial_{R - \lambda}^1 \ominus \ran \partial_{R - \lambda}^0$. Recall that $\ker F_\lambda^* = \HM \ominus [(z_1 - \lambda_1)\HM + (z_2 - \lambda_2)\HM]$. It follows that if $F_\lambda$ is left semi-Fredholm, then \begin{align}\label{indbfoaf} \ind F_\lambda &= \dim \ker F_\lambda - \dim \ker F_\lambda^*\notag\\ & = \dim (\ker \partial_{R - \lambda}^1 \ominus \ran \partial_{R - \lambda}^0) - \ind_\lambda \HM. \end{align} Thus $F_\lambda$ is Fredholm if and only if $R- \lambda$ is Fredholm, in which case the above equation implies \begin{align}\label{rlbindofr} \ind F_\lambda = -\ind (R - \lambda). \end{align} Next we look at the Koszul complex of $S = (S_{z_1}, S_{z_2})$, where $S_{z_i} = P_{\HM^\perp}M_{z_i}|\HM^\perp, i = 1, 2$. The Koszul complex of $S$ is defined similarly by $$K(S): 0 \xrightarrow{\partial_S^{-1}} \HM^\perp \xrightarrow{\partial_S^{0}} \HM^\perp\oplus\HM^\perp \xrightarrow{\partial_S^{1}} \HM^\perp \xrightarrow{\partial_S^{2}} 0.$$ The pair $S$ is a Fredholm pair if the vector spaces $\ker \partial_S^{i}/\ran \partial_S^{i-1}$ are finite dimensional. If $S$ is a Fredholm pair, then the index of $S$ is \begin{align}\label{indos} \ind S &= \sum_{i=0}^2 (-1)^i \dim (\ker \partial_S^{i}/\ran \partial_S^{i-1}) \notag\\ & = \dim \ker \partial_S^0 - \dim (\ker \partial_S^{1}/\ran \partial_S^{0}) + \dim (\ran \partial_S^{1})^\perp. \end{align} For earlier work on the index of $(S_{z_1}, S_{z_2})$ we refer readers to \cite{Ya06, LYY11} and the references therein. Observe that \begin{align*} \ker \partial_{S-\lambda}^{0} &= \{f \in \HM^\perp: S_{z_1 - \lambda_1} f = S_{z_2 - \lambda_2} f = 0\} \\ & = \ker S_{\varphi_{\lambda_1}} \cap \ker S_{\varphi_{\lambda_2}}. \end{align*} We show in the following that $\ker F_\lambda$ is isomorphic to $\ker \partial_{S-\lambda}^{0}$. \begin{lemma}\label{isobkfaks} $\ker \widetilde{F_\lambda} = M_{\varphi_{\lambda_1}} \ker \partial_{S-\lambda}^{0}$. \end{lemma} \begin{proof} Let $f \in \ker \partial_{S-\lambda}^{0}$. Then $\varphi_{\lambda_i} f \in \HM, i =1, 2$ and $\varphi_{\lambda_1} f \perp \varphi_{\lambda_1} \HM$. Hence $\varphi_{\lambda_1} f \in \ran P_{\lambda_1}$. Note that $\varphi_{\lambda_2} f \in \HM$ implies that $\varphi_{\lambda_2} \varphi_{\lambda_1} f \in \varphi_{\lambda_1} \HM$. We thus conclude that $\varphi_{\lambda_1} f \in \ker \widetilde{F_\lambda}$, and so $M_{\varphi_{\lambda_1}} \ker \partial_{S-\lambda}^{0} \subseteq \ker \widetilde{F_\lambda}$. For containment in the other direction, if $f \in \ker \widetilde{F_\lambda}$, then $\varphi_{\lambda_2} f \in \varphi_{\lambda_1} \HM$. This implies $f(\lambda_1, \cdot) = 0$ and $\frac{f}{\varphi_{\lambda_1}} \in \HM^\perp$. From $f \in \ran P_{\lambda_1}$ and $\varphi_{\lambda_2} \frac{f}{\varphi_{\lambda_1}} \in \HM$, we obtain $S_{\varphi_{\lambda_1}}\frac{f}{\varphi_{\lambda_1}} = S_{\varphi_{\lambda_2}}\frac{f}{\varphi_{\lambda_1}} = 0$. Therefore $\frac{f}{\varphi_{\lambda_1}} \in \ker \partial_{S-\lambda}^{0}$, and $\ker \widetilde{F_\lambda} \subseteq M_{\varphi_{\lambda_1}} \ker \partial_{S-\lambda}^{0}$. So $\ker \widetilde{F_\lambda} = M_{\varphi_{\lambda_1}} \ker \partial_{S-\lambda}^{0}$.
\end{proof} Recall that $\ker F_\lambda$ is isomorphic to $\ker \partial_{R - \lambda}^1 \ominus \ran \partial_{R - \lambda}^0$; we thus obtain the following lemma. \begin{lemma}\label{isosps} Let $\HM \in Lat(H^2(\D^2))$. Then the spaces $\ker F_\lambda, \ker \partial_{R - \lambda}^1 \ominus \ran \partial_{R - \lambda}^0$ and $\ker \partial_{S-\lambda}^{0}$ are isomorphic for each $\lambda \in \D^2$. \end{lemma} \section{Hilbert-Schmidtness} In this section, we study the Hilbert-Schmidtness of submodules containing some particular functions and prove our main theorem. \subsection{Submodules containing $z_1 - \varphi(z_2)$} In this subsection, we consider the submodules which contain $z_1 - \varphi(z_2)$, where $\varphi$ is an inner function. Let $\varphi$ be an inner function and $M_\varphi = [z_1 - \varphi(z_2)]$ be the submodule generated by $z_1 - \varphi(z_2)$. The submodule $M_\varphi$ was studied by the second and the third author in \cite{IY08}. Let $\{\lambda_k(z_2)\}$ be an orthonormal basis of $K_\varphi(z_2) = H^2(z_2) \ominus \varphi H^2(z_2)$, $$e_j = \frac{z_2^j + z_2^{j-1}z_1 + \cdots + z_1^j}{\sqrt{j+1}}, j \geq 0$$ and $E_{k,j} = \lambda_k(z_2) e_j(z_1, \varphi(z_2))$. Let $S_{z_1}^\varphi = P_{M_\varphi^\perp}M_{z_1}|M_\varphi^\perp$ and define the operator $$V: H^2(\D^2) \ominus M_\varphi \rightarrow K_\varphi(z_2) \otimes L^2_a(\D)$$ by $V(E_{k,j}) = \lambda_k(z_2) \sqrt{j+1}z^j$. It is shown in \cite{IY08} that $\{E_{k,j}\}$ is an orthonormal basis of $H^2(\D^2) \ominus M_\varphi$, $V$ is a unitary operator and $$V S_{z_1}^\varphi = (I\otimes M_z) V,$$ i.e., $S_{z_1}^\varphi$ is unitarily equivalent to $I\otimes M_z$. It is clear that $I\otimes M_z$ is a Fredholm operator on $K_\varphi(z_2) \otimes L^2_a(\D)$ if and only if $K_\varphi(z_2)$ is finite dimensional, or equivalently, if and only if $\varphi$ is a finite Blaschke product. Now we take a look at a submodule $\HM$ which contains $z_1 - \varphi(z_2)$ (but not necessarily generated by it) and study its Hilbert-Schmidtness under the assumption that $\varphi$ is a finite Blaschke product. Observe that in this case there exists a closed subspace $\HM_1 \subseteq H^2(\D^2) \ominus M_\varphi$ such that $\HM = \HM_1 \oplus M_\varphi$. We extend $V$ to be zero on $M_\varphi$ and denote the new operator also by $V$; then $V^*: K_\varphi(z_2) \otimes L^2_a(\D) \rightarrow H^2(\D^2)$ is an isometry with range $H^2(\D^2) \ominus M_\varphi$ and $V: H^2(\D^2) \rightarrow K_\varphi(z_2) \otimes L^2_a(\D)$ is a partial isometry. Let $\HN = V\HM = V \HM_1$. Then clearly $\HN$ is invariant under $I\otimes M_z$. Define $S_{z_i} = P_{\HM^\perp} M_{z_i} |_{\HM^\perp}, i = 1, 2$, and $S_\HN = P_{\HN^\perp} (I \otimes M_z) |_{\HN^\perp}$. We will see that $S_{z_1}$ is unitarily equivalent to $S_\HN$. Since it is well known that submodules $\HM$ with $\dim \HM^\perp < \infty$ are Hilbert-Schmidt, we assume in the sequel that $\dim \HM^\perp = \infty$. \begin{lemma}\label{clrfsnczp} Let $\HM \in Lat(H^2(\D^2))$ contain $z_1 - \varphi(z_2)$ and $\HN = V\HM$, where $\varphi$ is a finite Blaschke product. Then for every $\lambda\in \D$ the operator $S_\HN-\lambda$ has closed range. \end{lemma} \begin{proof} It is equivalent to show that $\ran (S_\HN^*-\overline{\lambda})$ is closed. We only verify the case for $\lambda=0$ since the general case is similar. It is clear that $I\otimes M_z^*: K_\varphi(z_2) \otimes L^2_a(\D) \rightarrow K_\varphi(z_2) \otimes L^2_a(\D)$ has closed range and $\ker (I\otimes M_z^*) = K_\varphi(z_2)$.
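To see the claim about the kernel, recall that on the Bergman space we have $\ker M_z^* = \C$, the constants; hence \[ \ker(I \otimes M_z^*) = K_\varphi(z_2) \otimes \C, \] which we identify with $K_\varphi(z_2)$.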
Observe that $S_\HN^* = (I\otimes M_z^*) |_{\HN^\perp}$. We then have $$S_\HN^* \HN^\perp = (I\otimes M_z^*) (\HN^\perp + K_\varphi(z_2)) = (I\otimes M_z^*) [(\HN^\perp + K_\varphi(z_2)) \ominus K_\varphi(z_2)].$$ Since $\varphi$ is a finite Blaschke product, we have $\dim K_\varphi(z_2) < \infty$. Thus $\HN^\perp + K_\varphi(z_2)$ is closed, and so $S_\HN^* \HN^\perp$ is closed. The proof is complete. \end{proof} Since $\ker S_\HN^* = K_\varphi(z_2) \cap \HN^\perp$, we conclude from the above lemma that $S_\HN$ is a semi-Fredholm operator. \begin{lemma}\label{sFosnzcp} Let $\HM \in Lat(H^2(\D^2))$ contain $z_1 - \varphi(z_2)$ and $\HN = V\HM$, where $\varphi$ is a finite Blaschke product. Then for every $\lambda \in \D$, the operator $S_\HN - \lambda$ is semi-Fredholm with \[\ind (S_\HN - \lambda) = \dim (\HN \ominus (I\otimes M_z)\HN) - \dim K_\varphi.\] \end{lemma} \begin{proof} In view of Lemma \ref{clrfsnczp} we only need to consider the index of $S_\HN - \lambda$. To this end we write \begin{equation*} I \otimes M_z = \left( \begin{array}{cc} (I \otimes M_z)|_{\HN} & A\\ 0 & S_\HN\\ \end{array}\right) \end{equation*} with respect to the decomposition $K_\varphi(z_2) \otimes L^2_a(\D) = \HN \oplus \HN^\perp$. Then for $\lambda \in \D$, we have \begin{align}\label{madnanp} I \otimes M_z - \lambda& = \begin{pmatrix} (I \otimes M_z)|_{\HN} - \lambda & A\\ 0 & S_\HN - \lambda\\ \end{pmatrix}\\ &= \begin{pmatrix} I & 0\\ 0 & S_\HN - \lambda\\ \end{pmatrix} \begin{pmatrix} I & A\\ 0 & I\\ \end{pmatrix} \begin{pmatrix} (I \otimes M_z)|_{\HN} - \lambda & 0\\ 0 & I\\ \end{pmatrix}.\notag \end{align} It is clear that $ \begin{pmatrix} I & A\\ 0 & I\\ \end{pmatrix}$ is invertible. Since the Fredholm index of a product equals the sum of the indices, we obtain \begin{align*} -\dim K_\varphi &= \ind (I \otimes M_z - \lambda) = \ind (S_\HN - \lambda) + \ind ((I \otimes M_z)|_{\HN} - \lambda). \end{align*} Since $(I \otimes M_z)|_{\HN} - \lambda$ is known to be semi-Fredholm for every $\lambda\in \D$ and $\D$ is path connected, we have \[ \ind ((I \otimes M_z)|_{\HN} - \lambda)=\ind ((I \otimes M_z)|_{\HN})= \dim (\HN \ominus (I\otimes M_z)\HN).\] Thus we have \[\ind (S_\HN - \lambda) = \dim (\HN \ominus (I\otimes M_z)\HN) - \dim K_\varphi,\] when all the numbers involved are finite. Furthermore, if $\dim (\HN \ominus (I\otimes M_z)\HN) = \infty$, then $(I \otimes M_z)|_{\HN} - \lambda$ is not a Fredholm operator, so in the Calkin algebra its image is not invertible. Hence (\ref{madnanp}) implies $S_\HN - \lambda$ is not a Fredholm operator, i.e., $\ind (S_\HN - \lambda) = \infty$. The proof is complete. \end{proof} Recall that $S_{z_i} = P_{\HM^\perp} M_{z_i} |\HM^\perp, i = 1, 2$. Now we determine the essential spectrum for $S_{z_1}$. \begin{lemma}\label{espfszozp} Let $\HM \in Lat(H^2(\D^2))$ contain $z_1 - \varphi(z_2)$ and $\HN = V\HM$, where $\varphi$ is a finite Blaschke product.\\ (i) If $\dim (\HN \ominus (I\otimes M_z)\HN) = \infty$, then $\sigma_e(S_{z_1}) = \overline{\D}$.\\ (ii) If $\dim (\HN \ominus (I\otimes M_z)\HN) < \infty$, then $\sigma_e(S_{z_1}) \subseteq \T$. \end{lemma} \begin{proof} Recall that $V^*: K_\varphi(z_2) \otimes L^2_a(\D) \rightarrow H^2(\D^2)$ is an isometry with range $H^2(\D^2) \ominus M_\varphi$. So $VV^* = I$ and $V^*V = P_{M_\varphi^\perp}$. Since $\HM = \HM_1 \oplus M_\varphi$ for some $\HM_1 \subseteq H^2(\D^2) \ominus M_\varphi$, and $V^* \HN = \HM_1$, we conclude that $V^* (\HN^\perp) = \HM^\perp$.
Recall also that $V S_{z_1}^\varphi = (I\otimes M_z) V$; it then follows that \begin{align*} S_{z_1}^\varphi V^* &= P_{M_\varphi^\perp}M_{z_1}P_{M_\varphi^\perp} V^*\\ &= V^*(V S_{z_1}^\varphi)V^* = V^*[(I\otimes M_z) V]V^*\\ & = V^*(I\otimes M_z). \end{align*} Thus $$S_{z_1}V^*|\HN^\perp = V^*|\HN^\perp P_{\HN^\perp} (I\otimes M_z) |\HN^\perp = V^*|\HN^\perp S_\HN.$$ So $S_{z_1}$ is unitarily equivalent to $S_\HN$. The assertions then follow from Lemma \ref{sFosnzcp}. \end{proof} We need the following theorem from \cite{Ya01} to study the Hilbert-Schmidtness of a submodule. \begin{theorem}[\cite{Ya01}]\label{grshcm} Let $\HM \subseteq H^2(\D^2)$ be a submodule. If $\D$ is not a subset of $\sigma_e(S_{z_1}) \cap \sigma_e(S_{z_2})$, then $\HM$ is a Hilbert-Schmidt submodule. \end{theorem} The following result is immediate. \begin{corollary}\label{mrischszp} Let $\HM \in Lat(H^2(\D^2))$ contain $z_1 - \varphi(z_2)$ and $\HN = V\HM$, where $\varphi$ is a finite Blaschke product. If $\dim (\HN \ominus (I\otimes M_z)\HN) < \infty$, then $\HM$ is a Hilbert-Schmidt submodule. \end{corollary} \begin{proof} If $\dim (\HN \ominus (I\otimes M_z)\HN) < \infty$, then Lemma \ref{espfszozp} (ii) ensures that $\sigma_e(S_{z_1}) \subseteq \T$. Thus by Theorem \ref{grshcm}, we conclude that $\HM$ is a Hilbert-Schmidt submodule. \end{proof} Before we prove Theorem \ref{hlsmnmcdv}, we need some lemmas. \begin{lemma}\label{dmineqlm} Let $\HM \in Lat(H^2(\D^2))$ contain $z_1 - \varphi(z_2)$, where $\varphi$ is an inner function. Then $$\dim (\HM \ominus (z_1\HM + \varphi(z_2)\HM)) \leq \dim (H^2(\D^2) \ominus [z_1, \varphi(z_2)]) \rank \HM.$$ \end{lemma} \begin{proof} If $\dim (H^2(\D^2) \ominus [z_1, \varphi(z_2)])$ or $\rank \HM$ is infinite, then there is nothing to prove. So suppose $\dim (H^2(\D^2) \ominus [z_1, \varphi(z_2)])$ and $\rank \HM$ are finite. Suppose $\{e_i\}$ is an orthonormal basis of $H^2(\D^2) \ominus [z_1, \varphi(z_2)]$, and $\HM = [f_1, f_2, \cdots, f_{n+1}]$, where $f_j \in H^2(\D^2)$ for $j = 1, \cdots, n$ and $f_{n+1} = z_1 - \varphi(z_2)$. Let $P_\varphi$ be the orthogonal projection onto $\HM \ominus (z_1\HM + \varphi(z_2)\HM)$. We claim that $\HM \ominus (z_1\HM + \varphi(z_2)\HM)$ is contained in $\text{span}\{P_{\varphi} (e_if_j), i, j \geq 1\}$. The conclusion will then follow from this claim. Now we prove the claim. Suppose $g \in \HM \ominus (z_1\HM + \varphi(z_2)\HM)$ and $g$ is orthogonal to $\text{span}\{P_{\varphi} (e_if_j), i, j \geq 1\}$. Then for any polynomial $h$, there are $h_1 \in H^2(\D^2) \ominus [z_1, \varphi(z_2)], h_2 \in [z_1, \varphi(z_2)]$ such that $h = h_1 + h_2$. Note that $P_\varphi(h_1 f_j)$ is in $\text{span}\{P_{\varphi} (e_if_j), i, j \geq 1\}$ and $h_2f_j$ is in the closure of $z_1\HM + \varphi(z_2)\HM$. Thus $$\langle g, hf_j\rangle = \langle g, h_1 f_j\rangle + \langle g, h_2f_j\rangle = 0.$$ Since $\HM$ is generated by $\{f_j\}$, we conclude that $g = 0$. So the claim holds and the proof is complete. \end{proof} \begin{prop}\label{indeve} Let $\HM \in Lat(H^2(\D^2))$. If $[R_{z_1}^*,R_{z_2}]$ is compact, then $\forall \lambda, \eta \in \D^2$, $F_\lambda$ and $F_\eta$ are left semi-Fredholm operators and $\ind F_\lambda = \ind F_\eta$. \end{prop} \begin{proof} If $[R_{z_1}^*,R_{z_2}]$ is compact, then $[R_{\varphi_{\lambda_1}}^*,R_{\varphi_{\lambda_2}}]$ is compact for any $\lambda \in \D^2$. Hence Propositions \ref{rlspoaci} and \ref{srsfgl} ensure that $F_\lambda$ and $G_\eta$ are left semi-Fredholm operators.
Since $F_\lambda = F_{(\lambda_1,0)} - \lambda_2, G_\eta = G_{(0,\eta_2)} - \eta_1$, it follows that $$\ind F_\lambda = \ind F_{(\lambda_1, \eta_2)}, \quad \ind G_\eta = \ind G_{(\lambda_1, \eta_2)}.$$ Note that $F_\lambda$ and $G_\lambda$ have the same cokernel, and Lemma \ref{rlbfadots} implies that $\ker F_\lambda$ and $\ker G_\lambda$ are isomorphic. Therefore the conclusion follows from the above two equations. \end{proof} \begin{lemma}\label{hsipfgcp} Let $\HM \in Lat(H^2(\D^2))$ contain $z_1 - \varphi(z_2)$, where $\varphi$ is a finite Blaschke product. If $\HM$ is a Hilbert-Schmidt submodule, then the space $\HM \ominus (z_1\HM + \varphi(z_2)\HM)$ is finite dimensional. \end{lemma} \begin{proof} If $\HM$ is a Hilbert-Schmidt submodule, then $[R_{z_1}^*,R_{z_1}][R_{z_2}^*,R_{z_2}]$ and $[R_{z_1}^*,R_{z_2}]$ are Hilbert-Schmidt operators. It then follows from Propositions \ref{rlspoaci} and \ref{indeve} that $(z_1 - \lambda_1)\HM + (z_2 - \lambda_2)\HM$ is closed and $\ind_\lambda\HM = \dim (\HM \ominus ((z_1 - \lambda_1)\HM + (z_2 - \lambda_2)\HM)) < \infty$. Without loss of generality, suppose $\varphi(0) = 0$, i.e. $\varphi(z_2) = z_2 \psi(z_2)$, where $\psi(z_2)$ is a finite Blaschke product. Note that by induction, we only need to prove the case when $\psi(z_2)$ is a M\"{o}bius transform. So suppose $\psi(z_2) = \frac{\alpha - z_2}{1-\overline{\alpha}z_2} =: \phi_\alpha(z_2)$. Now we show $\dim(\HM \ominus (z_1\HM + z_2\phi_\alpha(z_2)\HM)) < \infty$. Notice that $\dim(\HM \ominus (z_1\HM + \phi_\alpha(z_2)\HM)) < \infty$. Define $$T: \HM \Big/ (z_1\HM + \phi_\alpha(z_2)\HM) \rightarrow (z_1\HM + z_2\HM)\Big/(z_1\HM + z_2\phi_\alpha(z_2)\HM)$$ by $T([g])=[z_2 g], g \in \HM$. A direct verification shows that $T$ is well defined and onto. Thus $\dim((z_1\HM + z_2\HM)/(z_1\HM + z_2\phi_\alpha(z_2)\HM)) < \infty$. Since $$\HM \ominus (z_1\HM + z_2\phi_\alpha(z_2)\HM) = [\HM \ominus (z_1\HM + z_2\HM)] \oplus [(z_1\HM + z_2\HM) \ominus (z_1\HM + z_2\phi_\alpha(z_2)\HM)],$$ we obtain that $\dim(\HM \ominus (z_1\HM + z_2\phi_\alpha(z_2)\HM)) < \infty$. The proof is complete. \end{proof} Now we can prove Theorem \ref{hlsmnmcdv}. \begin{proof}[Proof of Theorem \ref{hlsmnmcdv}] Suppose $\HM$ is finitely generated. Since $H^2(\D^2) \ominus [z_1, \varphi(z_2)] = K_\varphi(z_2)$, Lemma \ref{dmineqlm} asserts that $\dim (\HM \ominus (z_1\HM + \varphi(z_2)\HM))$ is finite. Let $\HN = V\HM$. It is not difficult to check that $V^*(\HN \ominus (I\otimes M_z)\HN) \subseteq \HM \ominus (z_1\HM + \varphi(z_2)\HM)$. Thus $\dim(\HN \ominus (I\otimes M_z)\HN) < \infty$. Hence by Corollary \ref{mrischszp}, the submodule $\HM$ is Hilbert-Schmidt. For the necessity, if $\HM$ is a Hilbert-Schmidt submodule, then Lemma \ref{hsipfgcp} implies $\dim(\HM \ominus (z_1\HM + \varphi(z_2)\HM))< \infty$. Thus $\dim(\HN \ominus (I\otimes M_z)\HN) < \infty$. Note that $K_\varphi(z_2) \otimes L^2_a(\D)$ is isomorphic to $\C^k \otimes L^2_a(\D)$, where $k$ is the order of $\varphi$. By Theorem 3.6 in \cite{Sh01}, we have $\HN = [\HN \ominus (I\otimes M_z)\HN]$. Therefore $\HN$ is finitely generated. One verifies that $\HM = [V^*(\HN \ominus (I\otimes M_z)\HN), z_1-\varphi(z_2)]$. So $\HM$ is finitely generated. \end{proof} \begin{corollary}\label{nchsfgszp} Let $\HM = [f_1, \cdots, f_n, z_1 - \varphi(z_2)]$, where $\varphi$ is a finite Blaschke product and $f_j \in H^2(\D^2), j = 1, \cdots, n$, are arbitrary. Then $\HM$ is a Hilbert-Schmidt submodule.
\end{corollary} \subsection{Submodules containing $z_1 - z_2$} In this subsection, we consider the special case $\varphi(z_2) = z_2$ and fully characterize the submodules containing $z_1 - z_2$. In this case, since the space $K_\varphi(z_2) \otimes L^2_a(\D) = L^2_a(\D)$, we can write out the operators $V$ and $V^*$ more explicitly. Indeed, $$V: H^2(\D^2) \rightarrow L^2_a(\D)$$ is the operator defined by $Vf(\lambda) = f(\lambda, \lambda)$, and \[V^*g(z_1,z_2) = \frac{1}{z_2 - z_1} \int_{z_1}^{z_2} g(\lambda) d\lambda.\] One checks that $\ker V = [z_1 - z_2]$ and $V^*$ is an isometry. Suppose $\HN \in Lat(M_z, L^2_a(\D))$ and let $\HM = \tau(\HN) := V^* \HN + \ker V$; then $\HM \in Lat(H^2(\D^2))$. Note that $\tau$ defines a one-to-one correspondence between $Lat(M_z, L^2_a(\D))$ and submodules in $Lat(H^2(\D^2))$ that contain $\ker V$ (\cite{Ri87}). Let $\HM_0 = [z_1 - z_2]$. Then $P_{\HM_0^\perp}M_{z_1}|_{\HM_0^\perp} = P_{\HM_0^\perp}M_{z_2}|_{\HM_0^\perp}$, and $P_{\HM_0^\perp}M_{z_1}|_{\HM_0^\perp}$ is unitarily equivalent to the Bergman shift $M_z$ on the Bergman space $L^2_a(\D)$. In fact, $P_{\HM_0^\perp}M_{z_1}|_{\HM_0^\perp} V^* = V^* M_z$ on $L^2_a(\D)$ (see also \cite{DP89, GSZZ09}). The following lemma is proved in \cite{LR}. \begin{lemma}[\cite{LR}]\label{rbhanthbs} Let $\HM \in Lat(H^2(\D^2))$ contain $z_1 - z_2$, and $\HN = V\HM$. Then for every $\lambda \in \D$, we have $\HM = [V^*(\HN \ominus (z-\lambda)\HN), z_1 - z_2]$ and $\ind \HN \leq \ind_{(\lambda,\lambda)}\HM \leq \ind \HN + 1$, where $\ind \HN = \dim (\HN \ominus z\HN)$. \end{lemma} In fact, suppose $\HM \in Lat(H^2(\D^2))$ contains $z_1 - z_2$ and $\HN = V\HM$; since $\HN = [\HN \ominus (z-\lambda)\HN]$ (\cite{ARS96}), we have $\HM = [V^*(\HN \ominus (z-\lambda)\HN), z_1 - z_2], \lambda \in \D$. Note that for $f, g \in \HM, h \in \HN \ominus (z-\lambda)\HN$, $$\langle V^*h, (z_1 - \lambda)f + (z_2 - \lambda)g\rangle_{H^2(\D^2)} = \langle h, (z-\lambda)V(f+g)\rangle_{L^2_a} = 0.$$ So $V^*(\HN \ominus (z-\lambda)\HN) \subseteq \HM \ominus ((z_1 - \lambda)\HM + (z_2 - \lambda)\HM)$. Since $\ind_{(\lambda,\lambda)}\HM$ is less than or equal to the rank of $\HM$ and $\dim (\HN \ominus (z-\lambda)\HN) = \dim (\HN \ominus z\HN) = \ind \HN$, we immediately obtain $\ind \HN \leq \ind_{(\lambda,\lambda)}\HM \leq \ind \HN + 1$. Thus if $\HM \in Lat(H^2(\D^2))$ contains $z_1 - z_2$, then $\HM$ is finitely generated if and only if $\HN$ is finitely generated, which is equivalent to the condition that $\ind_{(0,0)}\HM < \infty$. By Lemmas \ref{espfszozp} and \ref{rbhanthbs}, we obtain the following result. \begin{lemma}\label{untebsamr} Let $\HM \in Lat(H^2(\D^2))$ contain $z_1 - z_2$, and $\HN = V\HM$.\\ (i) If $\ind_{(0,0)}\HM = \infty$, then $\sigma_e(S_{z_i}) = \overline{\D}, i = 1, 2$.\\ (ii) If $\ind_{(0,0)}\HM < \infty$, then $\sigma_e(S_{z_i}) \subseteq \T, i = 1, 2$. \end{lemma} We need the following theorem from \cite{GRS05} to prove Theorem \ref{hlsmnmcdv2}. For $f \in H^2(\D^2)$, we write $Z(f) = \{\lambda \in \D^2: f(\lambda) = 0\}$ and $Z(\HM) = \bigcap_{f\in \HM} Z(f)$. \begin{theorem}[\cite{GRS05}]\label{GRS05np} If a submodule $\HM$ of $H^2(\D^2)$ contains a nonzero bounded function $\varphi$, then $$\sigma_e(R) \cap \D^2 \subseteq Z(\varphi)$$ and for every $\lambda \in \D^2 \setminus \sigma_e(R)$ the pair $R - \lambda$ has Fredholm index 1.
In fact, for all $\lambda \in \D^2 \setminus Z(\varphi)$ we have $$\dim \HM /((z_1-\lambda_1)\HM + (z_2-\lambda_2)\HM) = 1.$$ \end{theorem} Now we can prove the following theorem. \begin{theorem}\label{hlsmnmcdv2} Let $\HM \in Lat(H^2(\D^2))$ contain $z_1 - z_2$. The following are equivalent.\\ (i) $\ind_{(0,0)}\HM < \infty$.\\ (ii) $\HM$ is a Hilbert-Schmidt submodule.\\ (iii) $[R_{z_1}^*,R_{z_2}]$ is compact.\\ (iv) $F_\lambda$ is a semi-Fredholm operator for some $\lambda = (\lambda_0, \lambda_0) \in \D^2$. \end{theorem} \begin{proof} (i) implies (ii). If $\ind_{(0,0)}\HM < \infty$, then Lemma \ref{untebsamr} ensures that $\sigma_e(S_{z_i}) \subseteq \T$. It then follows from Theorem \ref{grshcm} that $\HM$ is a Hilbert-Schmidt submodule.\\ (ii) implies (iii). This follows from the definition.\\ (iii) implies (iv). If $[R_{z_1}^*, R_{z_2}]$ is compact, then Proposition \ref{indeve} asserts that $F_\lambda$ is a semi-Fredholm operator for all $\lambda \in \D^2$.\\ (iv) implies (i). Suppose $F_\lambda$ is semi-Fredholm for some $\lambda = (\lambda_0, \lambda_0)$. Note that Theorem \ref{GRS05np} and (\ref{rlbindofr}) imply that for $\lambda = (\lambda_1,\lambda_2) \in \D^2$ with $\lambda_1 \neq \lambda_2$, $1 = \ind (R - \lambda) = -\ind F_\lambda$. Since $F_{(\lambda_0, \mu)} = F_{(\lambda_0, 0)} - \mu$ depends continuously on $\mu$ and the semi-Fredholm index is locally constant, it thus follows that $F_{(\lambda_0, \lambda_0)}$ is Fredholm. So $\dim \ker F_{(\lambda_0, \lambda_0)}^* = \ind_{(\lambda_0, \lambda_0)} \HM < \infty$. Then Lemma \ref{rbhanthbs} ensures that $\ind_{\lambda}\HM < \infty$, $\forall \lambda \in \D^2$ with $\lambda_1 = \lambda_2$. In particular, $\ind_{(0,0)}\HM < \infty$. The proof is complete. \end{proof} The equivalence of (i) and (iv) in the above theorem generalizes Theorem 2.9 in \cite{III}. \begin{corollary}\label{hlsmmcdvc} Let $\HM \in Lat(H^2(\D^2))$ contain $z_1 - z_2$. Then $\HM$ is a Hilbert-Schmidt submodule if and only if $\sigma_e(S_{z_1}) \neq \overline{\D}$. \end{corollary} \begin{proof} If $\sigma_e(S_{z_1}) \neq \overline{\D}$, then by Theorem \ref{grshcm}, we conclude that $\HM$ is a Hilbert-Schmidt submodule. Conversely, if $\HM$ is a Hilbert-Schmidt submodule, then by Theorem \ref{hlsmnmcdv} and Lemma \ref{untebsamr}, we get the assertion. \end{proof} The following result characterizes the Fredholmness of the pairs $R - \lambda$ and $S - \lambda$ for $\lambda \in \D^2$. \begin{prop}\label{fhnotrs} Let $\HM \in Lat(H^2(\D^2))$ contain $z_1 - z_2$. Then the following are equivalent.\\ (i) $\ind_{(0,0)}\HM < \infty$.\\ (ii) $\forall \lambda \in \D^2$ the pair $R - \lambda$ is Fredholm with index 1.\\ (iii) $\forall \lambda \in \D^2$ the pair $S - \lambda$ is Fredholm with index 0. \end{prop} \begin{proof} By Lemma \ref{isosps} and Theorem \ref{hlsmnmcdv2}, we see that (ii) implying (i) and (iii) implying (i) hold. It is left to show that (i) implies (ii) and (iii). If $\ind_{(0,0)}\HM < \infty$, then Theorem \ref{hlsmnmcdv2} ensures that $\HM$ is a Hilbert-Schmidt submodule. Thus $R - \lambda$ and $S - \lambda$ are Fredholm pairs for $\lambda \in \D^2$ (\cite{Ya01, Ya06}). For $\lambda \in \D^2$ with $\lambda_1 \neq \lambda_2$, the pairs $R - \lambda$ and $S - \lambda$ are Fredholm with index $1$ and $0$, respectively (\cite{GRS05}); since the index of a Fredholm pair is locally constant and $\D^2$ is connected, the assertion follows. \end{proof} Let $\HM \in Lat(H^2(\D^2))$ contain $z_1 - z_2$. It is proved in \cite{LR} that $\ran \partial_{R-\lambda}^1$ is closed for $\lambda \in \D^2$. It is also proved in \cite[Lemma 2.6]{III} that $\ran \partial_{R}^1 = z_1 \HM + z_2 \HM$ is closed.
We use a similar argument as in \cite[Lemma 2.6]{III} to prove the closedness of $\ran \partial_{R-\lambda}^1$ in the following. Note that this result holds even when $\ind_{(0,0)}\HM = \infty$. \begin{lemma}\label{clsfdrd} Let $\HM \in Lat(H^2(\D^2))$ contain $z_1 - z_2$ and $\HN = V \HM$. Then for $\lambda = (\lambda_0,\lambda_0) \in \D^2$, $\ran \partial_{R-\lambda}^1 = (z_1 - \lambda_0)\HM + (z_2 - \lambda_0)\HM$ is closed. \end{lemma} \begin{proof} Since $[z_1 - z_2]$ is generated by the polynomial $z_1 - z_2$, it is a Hilbert-Schmidt submodule (\cite{Ya99}). So for $\lambda = (\lambda_0,\lambda_0) \in \D^2$, $(z_1 - \lambda_0)[z_1 - z_2] + (z_2 - \lambda_0)[z_1 - z_2]$ is closed and \begin{align}\label{hsicark} [z_1 - z_2] = (z_1 - \lambda_0)[z_1 - z_2] + (z_2 - \lambda_0)[z_1 - z_2] + \C (z_1 - z_2). \end{align} Note that $\HM = V^*\HN \oplus [z_1 - z_2]$. Let $L_0 = (z_1 - \lambda_0)[z_1 - z_2] + (z_2 - \lambda_0)[z_1 - z_2]$; we then have \begin{align}\label{idfdrlz} (z_1 - \lambda_0)\HM + (z_2 - \lambda_0)\HM = (z_1 - \lambda_0)V^*\HN + (z_2 - \lambda_0)V^*\HN + L_0. \end{align} Notice that \begin{align*} &V\left((z_1 - \lambda_0)\HM + (z_2 - \lambda_0)\HM\right) = (z-\lambda_0)\HN = V(V^*(z-\lambda_0)\HN)\\ & = V(V^*(z-\lambda_0)\HN \oplus L_0) = V(V^*(z-\lambda_0)\HN \oplus [z_1 - z_2]). \end{align*} It follows from (\ref{hsicark}) and (\ref{idfdrlz}) that \begin{align}\label{cotibsa} V^*(z-\lambda_0)\HN \oplus L_0 \subseteq (z_1 - \lambda_0)\HM + (z_2 - \lambda_0)\HM \subseteq V^*(z-\lambda_0)\HN \oplus [z_1 - z_2]. \end{align} It is known that $(z-\lambda_0)\HN$ is closed, thus $V^*(z-\lambda_0)\HN$ is closed, so $V^*(z-\lambda_0)\HN \oplus L_0$ and $V^*(z-\lambda_0)\HN \oplus [z_1 - z_2]$ are closed. Since $$V^*(z-\lambda_0)\HN \oplus [z_1 - z_2] = V^*(z-\lambda_0)\HN \oplus L_0 + \C(z_1 - z_2),$$ we conclude from (\ref{cotibsa}) that $(z_1 - \lambda_0)\HM + (z_2 - \lambda_0)\HM$ is closed. \end{proof} A similar result holds for the pair $S - \lambda$. \begin{lemma}\label{clotosf} Let $\HM \in Lat(H^2(\D^2))$ contain $z_1 - z_2$. Then for $\lambda = (\lambda_0,\lambda_0) \in \D^2$, $\ran \partial_{S-\lambda}^0$ and $\ran \partial_{S-\lambda}^1$ are closed. \end{lemma} \begin{proof} We prove the lemma for $\lambda = (0,0)$; the other cases follow by a similar argument. First we show $\ran \partial_S^0$ is closed. It is equivalent to show $\ran \partial_S^{0*}$ is closed. Let $\HN = V\HM$; then $\HM = V^*\HN + \ker V$ and $\HM^\perp = V^*(\HN^\perp)$. Note that $\ran \partial_S^{0*} = M_{z_1}^* \HM^\perp + M_{z_2}^* \HM^\perp$. So \begin{align*} \ran \partial_S^{0*} &= M_{z_1}^* V^*(\HN^\perp) + M_{z_2}^* V^*(\HN^\perp)\\ & = V^* M_z^* (\HN^\perp) + V^* M_z^* (\HN^\perp)\\ & = V^* M_z^* (\HN^\perp). \end{align*} Therefore $\ran \partial_S^{0*}$ is closed. Next we show $\ran \partial_S^1$ is closed. It is equivalent to show that $\ran \partial_S^{1*}$ is closed. Note that $$\partial_S^{1*} = \left( \begin{matrix} -M_{z_2}^*|_{\HM^\perp}\\ M_{z_1}^*|_{\HM^\perp} \end{matrix} \right) :\HM^\perp \rightarrow \left( \begin{matrix} \HM^\perp\\ \HM^\perp \end{matrix} \right).$$ Let $\Lambda^* = \left( \begin{matrix} -M_{z_2}^*\\ M_{z_1}^* \end{matrix} \right)$. Since $\Lambda^*: H^2(\D^2)\rightarrow \left( \begin{matrix} H^2(\D^2)\\ H^2(\D^2) \end{matrix} \right)$ has closed range and $\ker \Lambda^* = \C$, applying the same reasoning as in Lemma \ref{clrfsnczp}, we see that $\ran \partial_S^{1*}$ is closed.
\end{proof} The following two lemmas are needed to study the dimensions of the cohomology spaces for the pairs $R - \lambda$ and $S - \lambda$. \begin{lemma}\label{indfnnlz} Let $\HN \in Lat(M_z,L^2_a(\D))$.\\ (i) If $\lambda_0 \in Z(\HN)$, let $\HN_0 = \HN /\varphi_0$, where $\varphi_0(z) = \frac{z-\lambda_0}{1-\overline{\lambda_0}z}$; then $\ind \HN_0 = \ind \HN$.\\ (ii) If $\lambda_0 \not\in Z(\HN)$, let $\HN_1 = \{f \in \HN: f(\lambda_0) = 0\}$; then $\ind (\HN_1/\varphi_0) = \ind \HN$. \end{lemma} \begin{proof} (i) Let $U_0$ be the operator on $L^2_a(\D)$ defined by $U_0 f (z)= f(-\varphi_0(z)) \frac{1-|\lambda_0|^2}{(1-\overline{\lambda_0}z)^2}$; then $U_0$ is a unitary operator. Note that $U_0 \HN_0 = (U_0 \HN)/z$, so $\ind \HN_0 = \ind ((U_0 \HN)/z)$. By \cite[Lemma 2.1]{III} or \cite{Ja83} or \cite[Proposition 3]{Zhu98}, we have $\ind ((U_0 \HN)/z) = \ind (U_0 \HN)$. Thus $\ind \HN_0 = \ind (U_0 \HN) = \ind \HN$. (ii) Since $U_0 (\HN_1/\varphi_0) = (U_0 \HN_1)/z$, it follows that $\ind (\HN_1/\varphi_0) = \ind ((U_0 \HN_1)/z)$. Notice that $U_0 \HN_1 = \{g \in U_0 \HN: g(0) = 0\}$, so $(U_0 \HN_1)/z = \{h \in L^2_a(\D): zh \in U_0 \HN\}$. Then \cite[Proposition 5]{Zhu98} implies that $\ind ((U_0 \HN_1)/z) = \ind U_0 \HN$. Hence $\ind (\HN_1/\varphi_0) = \ind \HN$. \end{proof} \begin{lemma}\label{dimfmsfm} Let $\HM \in Lat(H^2(\D^2))$ contain $z_1 - z_2$ and $\HN = V\HM$. Then for $\lambda = (\lambda_0,\lambda_0) \in \D^2$, $$ \dim (\ker \partial_{S-\lambda}^{1} \ominus \ran \partial_{S-\lambda}^{0}) = \begin{cases} \ind \HN + 1, \lambda \in Z(\HM),\\ \ind \HN - 1, \lambda \not\in Z(\HM). \end{cases} $$ \end{lemma} \begin{proof} Suppose $\lambda = (\lambda_0,\lambda_0) \in \D^2$ and write $q = P_{\HM^\perp}$ for short. Note that $$\ker \partial_{S-\lambda}^1 = \{(f,g): q(z_1-\lambda_1)g = q(z_2-\lambda_2) f, f, g \in \HM^\perp\},$$ $$\ran \partial_{S-\lambda}^0 = \{(q(z_1-\lambda_1) f, q(z_2-\lambda_2) f): f \in \HM^\perp\}.$$ Let $$\Lambda^1 = \{(f,g): S_{\varphi_{\lambda_1}}g = S_{\varphi_{\lambda_2}} f, f, g \in \HM^\perp\},$$ $$\Lambda^0 = \{(S_{\varphi_{\lambda_1}} f, S_{\varphi_{\lambda_2}} f): f \in \HM^\perp\}.$$ We define $W: \ker \partial_{S-\lambda}^1 \rightarrow \Lambda^1$ by $$W(f,g) = (q(1-\overline{\lambda_2}z_2 )f, q(1-\overline{\lambda_1}z_1) g).$$ It is not difficult to verify that $W$ is one-to-one and onto, and $W (\ker \partial_{S-\lambda}^1 / \ran \partial_{S-\lambda}^0) = \Lambda^1 / \Lambda^0$. Thus $\dim (\ker \partial_{S-\lambda}^1 / \ran \partial_{S-\lambda}^0) = \dim \Lambda^1 / \Lambda^0$. Notice that $$\Lambda^1 \ominus \Lambda^0 = \{(f,g): S_{\varphi_{\lambda_1}}g = S_{\varphi_{\lambda_2}} f, M_{\varphi_{\lambda_1}}^* f + M_{\varphi_{\lambda_2}}^* g = 0, f, g \in \HM^\perp\}.$$ Let $\varphi_0(z) = \frac{z-\lambda_0}{1-\overline{\lambda_0}z}$, let $S_\HN^* = M_{\varphi_{0}}^*|_{\HN^\perp}$ on $\HN^\perp$, and set \[I = \{(f_1, g_1): S_\HN g_1 = S_\HN f_1, M_{\varphi_0}^* (f_1 + g_1) = 0, f_1, g_1 \in \HN^\perp\}.\] We define the map $$T: \Lambda^1 \ominus \Lambda^0 \rightarrow I$$ by sending $(f,g)$ to $(Vf, Vg)$. Then $T$ is one-to-one and onto. Thus $\dim (\Lambda^1 \ominus \Lambda^0) = \dim I$. Now we determine $\dim I$. (i) If $\lambda_0 \in Z(\HN)$, then $(\frac{1}{(1-\overline{\lambda_0}z)^2},\frac{1}{(1-\overline{\lambda_0}z)^2}) \in I$. Let $\HN_0 = \HN / \varphi_0$ and let $(f_1, g_1) \in I$; then $\varphi_0 g_1 - \varphi_0 f_1 = h_1$ for some $h_1 \in \HN$. Thus $g_1 - f_1 = h_1 / \varphi_0 \in \HN^\perp \cap \HN_0$. Now define $$A: I \rightarrow \HN^\perp\cap\HN_0$$ by $A(f_1,g_1) = g_1 - f_1$.
If $A(f_1,g_1) = g_1 - f_1 = 0$, then from $M_{\varphi_0}^* (f_1 + g_1) = 0$ we have $f_1 = g_1 = c\frac{1}{(1-\overline{\lambda_0}z)^2}$, $c \in \C$. On the other hand, for $h_1/\varphi_0 \in \HN^\perp \cap \HN_0$, let $f_1 = \frac{-h_1}{2\varphi_0}$ and $g_1 = \frac{h_1}{2\varphi_0}$. Then $(f_1,g_1) \in I$ and $A (f_1,g_1) = h_1/\varphi_0$. Hence $A$ is onto. Therefore $\dim I = 1 + \dim (\HN^\perp \cap \HN_0)$. Note that $\HN_0 = \left(\HN_0 \ominus \varphi_0\HN_0\right) \oplus \HN$, so $\dim I = 1+ \ind \HN_0$. It then follows from Lemma \ref{indfnnlz} that $\dim I = 1 + \ind \HN$.

(ii) If $\lambda_0 \not\in Z(\HN)$, let $Q_\HN$ be the projection onto $\HN$ and $\HN_1 = \{h\in \HN: h(\lambda_0) = 0\}$; then $\HN_1 = \HN \ominus \C Q_\HN \frac{1}{(1-\overline{\lambda_0}z)^2}$. Let $(f_1, g_1) \in I$; then $\varphi_0 g_1 - \varphi_0 f_1 = h_1$ for some $h_1 \in \HN$. Hence $h_1 \in \HN_1$ and $h_1 / \varphi_0 \in \HN_1/\varphi_0 \cap \HN^\perp$. Similarly, we define
$$X: I \rightarrow \HN^\perp\cap\HN_1/\varphi_0$$
by $X(f_1,g_1) = g_1 - f_1$. Then one checks that $X$ is one-to-one and onto. So $\dim I = \dim (\HN^\perp \cap \HN_1/\varphi_0)$. Notice that $\HN_1/\varphi_0 = (\HN_1/\varphi_0 \ominus \HN) \oplus (\HN \ominus \HN_1) \oplus \HN_1$, thus
\[\dim I = \dim (\HN^\perp \cap \HN_1/\varphi_0) = \dim (\HN_1/\varphi_0 \ominus \HN_1) - 1.\]
Lemma \ref{indfnnlz} then ensures that $\dim I = \dim (\HN_1/\varphi_0 \ominus \HN_1) - 1 = \ind \HN - 1$. The proof is complete.
\end{proof}
Now we determine the dimensions of the cohomology vector spaces for the pairs $R - \lambda$ and $S - \lambda$.
\begin{prop}\label{dmfass} Let $\HM \in Lat(H^2(\D^2))$ contain $z_1 - z_2$. Then for $\lambda \in \D^2$,\\
(i) $$\dim\ker F_\lambda = \dim (\ker \partial_{R - \lambda}^1 \ominus \ran \partial_{R - \lambda}^0) = \dim \ker \partial_{S-\lambda}^{0} = \ind_\lambda \HM - 1;$$
(ii) $$ \dim (\ker \partial_{S-\lambda}^{1} \ominus\ran \partial_{S-\lambda}^{0}) = \begin{cases} \ind_\lambda \HM, & \lambda \in Z(\HM),\\ \ind_\lambda \HM - 1, & \lambda \not\in Z(\HM); \end{cases} $$
(iii) $$ \dim(\ran \partial_{S-\lambda}^{1})^\perp = \begin{cases} 1, & \lambda \in Z(\HM),\\ 0, & \lambda \not\in Z(\HM). \end{cases} $$
\end{prop}
\begin{proof} Note that $(\ran \partial_{S-\lambda}^{1})^\perp = \C \frac{1}{(1-\overline{\lambda_1}z_1)(1-\overline{\lambda_2}z_2)} \cap \HM^\perp$, thus (iii) is true. Recall from \cite{GRS05} that for $\lambda \in \D^2$ with $\lambda_1 \neq \lambda_2$, $R - \lambda$ and $S - \lambda$ are Fredholm with index $1$ and $0$, respectively. Recall also that $\ind (R- \lambda) = \ind_\lambda \HM - \dim (\ker \partial_{R - \lambda}^1 \ominus \ran \partial_{R - \lambda}^0)$. Therefore Lemma \ref{isosps} implies that (i) is true for $\lambda \in \D^2$ with $\lambda_1 \neq \lambda_2$. Since $\ind (S - \lambda) = 0$ for $\lambda \in \D^2$ with $\lambda_1 \neq \lambda_2$, we conclude that (ii) also holds for $\lambda \in \D^2$ with $\lambda_1 \neq \lambda_2$.

Now we consider $\lambda\in \D^2$ with $\lambda_1 = \lambda_2$. Suppose $\lambda = (\lambda_0,\lambda_0) \in \D^2$; we have two cases. If $\ind_\lambda \HM < \infty$, then $\ind_{(0,0)}\HM < \infty$. Thus Proposition \ref{fhnotrs} asserts that $R - \lambda$ and $S - \lambda$ are Fredholm with index $1$ and $0$, respectively. So the same argument as above implies that (i) and (ii) hold in this case. If $\ind_\lambda \HM = \infty$, then $\ind \HN = \infty$. Hence by Lemma \ref{dimfmsfm}, we get $\dim (\ker \partial_{S-\lambda}^{1} \ominus\ran \partial_{S-\lambda}^{0}) = \infty$.
So (ii) is true. Next we show that $\dim\ker F_\lambda = \infty$. Suppose $\dim\ker F_\lambda < \infty$. Lemma \ref{clsfdrd} ensures that $\ran \partial_{R-\lambda}^1$ is closed. Thus $\ran F_\lambda$ is closed and $F_\lambda$ is semi-Fredholm. Then Theorem \ref{hlsmnmcdv2} shows $\ind_{(0,0)}\HM < \infty$, and so $\ind_\lambda \HM < \infty$. This is a contradiction. So (i) holds and the proof is complete.
\end{proof}
The following corollary is an immediate consequence of the above proposition.
\begin{corollary}\label{eqftqiny} Let $\HM \in Lat(H^2(\D^2))$ contain $z_1 - z_2$. Then the following are equivalent.\\
(i) $\ind_{(0,0)}\HM = \infty$.\\
(ii) $\dim \ker \partial_{S-\lambda}^0 = \infty, \forall \lambda = (\lambda_0,\lambda_0)\in \D^2$.\\
(iii) $\dim (\ker \partial_{S-\lambda}^{1} \ominus\ran \partial_{S-\lambda}^{0}) = \infty, \forall \lambda = (\lambda_0,\lambda_0)\in \D^2$.
\end{corollary}
Before ending the paper, let us take another look at Theorem 1.1. Let $\varphi(z_2)=\prod_{j=1}^n\frac{z_2-\lambda_j}{1-\overline{\lambda_j}z_2}$ be a finite Blaschke product. Since $|\lambda_j|<1$ for each $j$, $q(z_2)=\prod_{j=1}^n(1-\overline{\lambda_j}z_2)$ is a polynomial such that $|q(z_2)|\geq \prod_{j=1}^n(1-|\lambda_j|)>0$ on ${\mathbb D}$. Hence a submodule $\HM$ contains $z_1-\varphi(z_2)$ if and only if it contains the polynomial $z_1q(z_2)-\prod_{j=1}^n(z_2-\lambda_j)$. The next conjecture is thus a natural weakening of that in \cite{Ya99}.\\
{\bf Conjecture}. Let $\HM$ be a submodule that contains a nontrivial polynomial. Then $\HM$ is Hilbert-Schmidt if and only if it is finitely generated.
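To illustrate the reduction preceding the conjecture, take $n=1$: then $\varphi(z_2)=\frac{z_2-\lambda_1}{1-\overline{\lambda_1}z_2}$ and
\[
z_1-\varphi(z_2)=\frac{z_1(1-\overline{\lambda_1}z_2)-(z_2-\lambda_1)}{1-\overline{\lambda_1}z_2},
\]
so a submodule contains $z_1-\varphi(z_2)$ precisely when it contains the polynomial $z_1-\overline{\lambda_1}z_1z_2-z_2+\lambda_1$; for $\lambda_1=0$ this recovers the polynomial $z_1-z_2$ studied throughout this section.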
\section{Introduction}
Supernova remnants (SNRs) are generally believed to be prime candidates for the production of both the hadronic and electronic components of galactic cosmic rays (CRs) via the diffusive shock acceleration (DSA) mechanism (see e.g. \cite{malkov01} for a review). While the main aspects of the theory are well understood, the key issue related to electrons is the so-called injection problem which, despite certain theoretical attempts (see e.g. \cite{levinson94, bykov99}), remains an open question. The injection of electrons is a serious challenge because the electron gyroradius is small compared to the shock thickness, which is of the order of the proton gyroradius. In fact this is a more general problem, related not only to DSA but also to other electron acceleration mechanisms, e.g. different scenarios of stochastic acceleration \cite{fermiII}. In this paper we explore whether the pool of suprathermal electrons and positrons related to the decay products of the radioactive nuclei $^{56}$Ni and $^{44}$Ti can serve as an effective injector for further acceleration of electrons in SNRs by the forward and reverse shocks.

It is well established that supernova ejecta contain a huge amount of radioactive nuclei. The decays of these unstable nuclei have been proposed as a source of low-energy positrons (see e.g. ref. \cite{chan93, martin10}) responsible for the 0.511 MeV annihilation line observed from the direction of the Galactic Center. In the case of the core-collapse supernova Cas~A, approximately 0.1$M_{\odot}$ of $^{56}$Ni was ejected just after the explosion \cite{krause08}. The nuclei $^{56}$Ni decay with a half-life $t_{1/2}=6.1$~days into $^{56}$Co. Over the first years after the explosion, the decay products of $^{56}$Co ($t_{1/2}=77$ days) support the supernova optical light emission. At later epochs, less abundant radioactive nuclei with longer lifetimes contribute to the production of low-energy suprathermal electrons, positrons, and gamma-rays. In particular, the detection of characteristic gamma-ray \cite{iyudin94} and hard X-ray lines \cite{renaud06} from $^{44}$Ti gives a robust estimate of the total mass of radioactive $^{44}$Ti ($t_{1/2}=63$ years) produced in Cas~A: $2\cdot 10^{-4}M_{\odot}$. Recently a comparable amount of $^{44}$Ti has been found also in the youngest galactic supernova remnant, SNR~G1.9+0.3 \cite{borkowski10}.

\begin{figure}[t] \includegraphics[width=6.0cm,angle=270]{fig1.eps} \caption{Schematic view of a young supernova remnant in the context of the ``radioactive origin'' of relativistic electrons. The forward shock propagates outward in the circumstellar medium, while the reverse shock propagates into the ejecta (gray color), outward in the laboratory frame and inward in the frame of the expanding ejecta. Pre-existing energetic electrons are produced in the circumstellar medium via the Compton scattering of gamma-rays from the decay of $^{56}$Co. The radioactive decays of $^{44}$Ti provide energetic electrons and positrons in the ejecta.} \end{figure}

Cas A, an approximately 300-year-old remnant, shows bright broad-band emission extending from radio to gamma-rays. This emission consists of both thermal and nonthermal components, indicating the presence of hot thermal plasma, a strong magnetic field, relativistic electrons, and likely also protons, accelerated up to multi-TeV energies.
All these components constitute a significant fraction of the bulk-motion kinetic energy of the shell expanding with a speed of 4000 to 6000 km s$^{-1}$ \cite{patnaude09}. Most likely, acceleration of electrons takes place at both the forward and reverse shocks. Thin non-thermal X-ray filaments detected at the periphery of the remnant \cite{gotthelf01} reveal the presence of a strong $\sim$1~mG magnetic field \cite{voelk05} and multi-TeV electrons accelerated at the forward shock of Cas A. Synchrotron X-rays are produced at both the reverse and forward shocks \cite{helder08}. The time variations of synchrotron X-radiation found for a number of filaments and knots associated with the reverse shock indicate that the magnetic field in these compact structures is also very large, close to 1~mG \cite{Uchi08}. Because of the large magnetic fields, gamma-ray production via inverse Compton scattering is strongly suppressed, except for some regions at the reverse shock with relatively small magnetic field. Even so, the total energy in protons, assuming that the detected GeV \cite{abdo10} and TeV gamma-rays \cite{aharonian01,albert07,acciari10} are of purely hadronic origin, does not significantly exceed $10^{49}$ erg \cite{abdo10}. On the other hand, the bright synchrotron radio emission of Cas~A indicates the existence of a huge amount of relativistic electrons accelerated by the forward and reverse shocks, with total energy as large as $10^{48} \ \rm erg$ \cite{atoyan00}. That constitutes approximately a $10^{-3}$ fraction of the explosion (mechanical) energy. A significant fraction of this energy is contained in the compact radio-knots of Cas~A \cite{tuffs86, anderson95}, where the pressure of relativistic electrons is comparable to the thermal pressure of the shell.

\section{Production of energetic electrons and positrons}
Below we assume that the radioactive elements are distributed uniformly throughout the ejecta. Although these elements are synthesized predominantly in the core of the ejecta, during the explosion they can become well mixed in the ejecta. The ratio of the number density of energetic MeV positrons $n_+$ from the $\beta$-decay of $^{44}$Ti to the baryonic density of the ejecta $n_{ej}$ is given by
\begin{equation} \frac {n_+}{n_{ej}}=0.94\frac {M_{Ti}}{44M_{ej}}\left[ 1-\exp \left( -\frac {t\ln 2}{t_{1/2}}\right) \right] . \end{equation}
Here $t$ is the time since the supernova explosion, $M_{ej}$ is the mass of the ejecta, and $M_{Ti}$ is the total mass of the ejected nuclei $^{44}$Ti. Eq. (1) takes into account that positrons appear in $94\%$ of the $^{44}$Sc decays. The rate of Coulomb energy losses of electrons and positrons is described as
\[ \frac{\dot{E}}{E}=\frac {4\pi r_e^2m_ec^4n_{ej}}{vE}\Lambda \left< \frac ZA\right> = \frac {r_e^2m_ec^4}{vE}\Lambda \frac {3(k-3)M_{ej}}{2km_pV_{ej}^3t^3} \sim \]
\begin{equation} \frac {0.025}{t}\frac {m_ec^3}{vE}\frac {\Lambda }{40} \left( \frac {M_{ej}}{M_{\odot }}\right) ^{5/2} \left( \frac {E_{SN}}{10^{51}\ \mathrm{erg}}\right) ^{-3/2}\left( \frac {t}{63\ \mathrm{yr}}\right) ^{-2}. \end{equation}
Here $r_e$ is the classical electron radius, $m_p$ and $m_e$ are the proton and electron masses, respectively, $\Lambda \simeq 40$ is the Coulomb logarithm in fully ionized plasma, $E_{SN}$ is the total energy of the explosion, and $V_{ej} = \left( 10(k-5)E_{SN} /3(k-3)M_{ej}\right) ^{1/2}$ is the characteristic velocity of ejecta with a power-law density distribution characterized by the index $k \sim 10$ \cite{chevalier82b}.
In Eq. (2) the mean ratio of the atomic number to the mass number $\left< \frac ZA\right>$ is taken to be 0.5. Note that, in addition to positrons with energy $E_+\sim$1 MeV, one electron of energy $E_-\sim$0.1 MeV is produced per $^{44}$Ti decay. However, because of the difference in energies, the positrons have a better chance of being accelerated before they are thermalized. Therefore the fraction of the accelerated positrons is $n_+/(n_++ n_-) \geq 1/2$. For supernova explosions with small ejecta masses, $M_{ej}<5M_{\odot}$, the energy losses of positrons from decays of $^{44}$Ti are not significant (see also \cite{martin10}). For larger ejecta masses, the energetic positrons are thermalized before they are injected into the reverse shock. In any case, these particles cannot travel far enough to approach the forward shock. In this regard, $^{44}$Ti cannot provide electrons and positrons for acceleration by the forward shock.

Nevertheless, the forward shock can be supplied with suprathermal electrons in a different (indirect) way, related to the Compton scattering of MeV gamma-rays, the products of $^{56}$Co decays. The number density of energetic electrons of Compton origin produced by MeV gamma-rays from $^{56}$Co decays in the circumstellar medium with number density $n$ is estimated as
\begin{equation} \frac {n_-}{n}=\xi _{\gamma }\frac {M_{Ni}}{56m_p}\frac {\sigma _{\rm T}}{4\pi r^2} \sim 1.2\cdot 10^{-7}\xi _{\gamma }\frac {M_{Ni}}{M_{\odot }}r^{-2}_{\rm pc}. \end{equation}
Here $\sigma _{\rm T}$ is the Thomson cross-section, $r$ is the distance from the center of the supernova explosion, and $\xi _{\gamma }$ is the fraction of gamma-rays which escape the expanding ejecta. For photons of energy $E \sim 0.5$ MeV the cross-section for Compton scattering is $\sigma _{\rm C}=0.4\sigma _{\rm T}$. Eq. (3) takes into account that a single decay of $^{56}$Co produces on average 2.5 gamma-ray photons. We should note that a similar idea for the production of energetic electrons in SNRs, via the Compton scattering of gamma-rays from the annihilation of $^{56}$Co decay positrons, was suggested earlier by Bychkov \cite{bychkov77}. This gives an additional 0.5 gamma-ray photons per decay of $^{56}$Co.

In the interstellar medium, the timescale of the Coulomb and ionization losses of energetic electrons is of the order of $10^5$ years. During $300$ years they cannot diffuse away beyond 3 pc, given that the diffusion coefficient that characterizes their propagation does not exceed the standard value of the diffusion coefficient in the interstellar medium, $D \sim 10^{28}$ cm$^2$ s$^{-1}$. Therefore they will be picked up by the arriving SNR shock. The fraction of gamma-rays that escape the supernova ejecta is determined by the optical depth $\tau$:
\[ \tau =\left< \frac ZA\right> \sigma _{\rm C}\int n_{ej}dr =\frac {3(k-3)M_{ej}\sigma _{\rm C}}{4\pi (k-1)m_pV^2_{ej}t^2}\left< \frac ZA\right> \sim \]
\begin{equation} 0.6\left( \frac {M_{ej}}{M_{\odot }}\right) ^2 \left( \frac {E_{SN}}{10^{51}\ \mathrm{erg}}\right) ^{-1} \left( \frac {t}{77~\mathrm{days}}\right) ^{-2}. \end{equation}
In order to escape the ejecta without significant loss of energy, the Compton optical depth for gamma-rays $\tau$ should not significantly exceed 1. This determines the time $t$ and the corresponding amount of $^{56}$Co that has not yet decayed. As follows from Eq. (4), gamma-rays from decays of $^{56}$Co can escape the ejecta only if the ejecta mass does not exceed several solar masses.
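The estimates in Eqs. (1), (3), and (4) are simple enough to evaluate directly. The following minimal Python sketch (an illustrative transcription, not part of the original analysis) does so for Cas~A-like parameters; the ejecta mass and the escape fraction $\xi_\gamma$ adopted below are assumptions made only for illustration.
\begin{verbatim}
import math

M_Ti, M_Ni = 2.0e-4, 0.1   # solar masses, values quoted in the text
M_ej = 2.0                 # assumed ejecta mass (illustrative)
E_sn = 1.0                 # explosion energy in units of 1e51 erg

# Eq. (1): positron-to-baryon ratio from 44Ti decays at t = 300 yr
t, t_half = 300.0, 63.0    # years
ratio = 0.94 * M_Ti / (44.0 * M_ej) \
        * (1.0 - math.exp(-t * math.log(2.0) / t_half))
print("n+/n_ej ~ %.1e" % ratio)                    # ~ 2e-6

# Eq. (3): Compton-electron fraction at r = 1 pc (xi_gamma assumed)
xi_gamma, r_pc = 0.5, 1.0
print("n-/n    ~ %.1e" % (1.2e-7 * xi_gamma * M_Ni / r_pc**2))

# Eq. (4): Compton optical depth of the ejecta at t = 77 days
tau = 0.6 * M_ej**2 / E_sn
print("tau     ~ %.1f" % tau)                      # ~ 2.4 here
\end{verbatim}
Since $\tau\propto t^{-2}$, for $M_{ej}=2M_\odot$ the optical depth falls below unity within a couple of half-lives after the explosion, in line with the statement above.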
For larger ejecta masses, the contribution of gamma-rays from longer-lived isotopes, e.g. $^{57}$Co ($t_{1/2}=272$ days, mass $\sim 0.003\ M_{\odot }$ \cite{meyer95}), becomes more important. Note that for any reasonable parameters, the Compton optical depth in the interstellar medium is much smaller than one (even on galactic scales), therefore only a small fraction of the energy released in $^{56}$Co decays is transferred to energetic electrons in the circumstellar medium. The main fraction of the energy goes to the heating of the ejecta.

\section{Acceleration of electrons}
At a plane non-modified shock with compression ratio $\sigma$, the far-upstream and downstream momentum distributions of particles, $F_0(p)$ and $F(p)$, respectively, are related as
\begin{equation} F(p)=\gamma \int ^p_0\frac {dp'}{p'}\left( \frac {p'}{p}\right) ^{\gamma }F_0(p'). \end{equation}
Here $\gamma =3\sigma/(\sigma -1)$ is the Krymsky index. Let us now assume that suprathermal electrons with a mean energy $E_{\mathrm{inj}}$ are injected into the plane shock. For a non-modified strong shock with compression ratio $\sigma =4$ we have the following expression for the pressure of accelerated electrons:
\begin{equation} P_-=\frac 43n_-E_{\mathrm{inj}}\ln \frac {E_{\max }}{E_{\mathrm{inj}}}. \end{equation}
Here $E_{\max }$ is the maximum energy of electrons accelerated at the shock. In young SNRs $E_{\max }$ is of the order of $10-100$ TeV. Using the number density given by Eq. (1), we can estimate the ratio of the pressure of positrons $P_+$ to the ram pressure of the reverse shock, $\rho u_r^2$, propagating at $t\gg t_{1/2}$ into the ejecta with a speed $u_r$:
\[ \frac {P_+}{\rho u_r^2}=\frac {4}{3}\frac {0.94M_{Ti}}{44M_{ej}} \frac{E_{\mathrm{inj}}}{m_pu_r^2}\ln \frac {E_{\max }}{E_{\mathrm{inj}}} \sim \]
\begin{equation} 2.7\frac {M_{Ti}}{M_{ej}}E_{\mathrm{inj}}^{\mathrm{MeV}} \left( \frac {u_r}{10^3\mathrm{km}\ \mathrm{s}^{-1}}\right) ^{-2} \ln \frac {E_{\max }}{E_{\mathrm{inj}}}. \end{equation}
A similar estimate for the ratio of the electron pressure to the ram pressure $\rho u_f^2$ of the forward shock propagating in the circumstellar medium with a speed $u_f$ gives
\[ \frac {P_-}{\rho u_f^2}=\frac {4}{3}\xi _{\gamma }\frac {M_{Ni}}{56m_{p}}\frac {\sigma _{\rm T}}{4\pi r^2} \frac{E_{\mathrm{inj}}}{m_pu_f^2}\ln \frac {E_{\max }}{E_{\mathrm{inj}}}\sim \]
\begin{equation} 1.5\cdot 10^{-5}\xi _{\gamma }\frac {M_{Ni}}{M_{\odot }}E_{\mathrm{inj}}^{\mathrm{MeV}} r^{-2}_{\mathrm{pc}} \left( \frac {u_f}{10^3\mathrm{km}\ \mathrm{s}^{-1}}\right) ^{-2} \ln \frac {E_{\max }}{E_{\mathrm{inj}}}. \end{equation}
It follows from these equations that the ratio of the electron pressure to the ram pressure can vary within a broad range, from $10^{-7}$ to $10^{-3}$, depending on several principal model parameters. We assume that electrons are injected with their original energy of $\sim$1 MeV. However, their energy can be significantly larger if the particles are pre-accelerated in the upstream regions of the shocks.

\section{Pre-acceleration of electrons}
High-energy particles accelerated at strong shocks excite plasma waves and produce small-scale shocks and turbulence in the upstream region. The turbulence may amplify magnetic fields at the shocks of young SNRs \cite{bell04}. Also, the dissipation of the turbulence results in substantial gas heating upstream of the shock. The latter limits the total compression ratio of the shock modified by cosmic rays.
This is an important feature of modern nonlinear shock acceleration models (see ref. \cite{malkov01} for a review). Under these conditions, some pre-acceleration of energetic electrons via the stochastic (second-order Fermi) mechanism, which also energizes thermal electrons and ions in this region, seems rather plausible. Note that in principle the stochastic acceleration can also be realized via an ensemble of random shocks. We should also emphasize that there is an essential difference between the pre-existing energetic (supra-thermal) electrons and those which, in principle, could be injected at the shock front from the thermal pool. While the pre-existing energetic electrons pass through the whole extended turbulent region upstream of the shock, the particles injected at the shock front occupy a narrow region at the shock. That is why pre-acceleration of the latter electrons is not significant. The reacceleration of sub-keV electrons from the thermal pool of the upstream plasma is also problematic because of strong Coulomb losses (see Eq. (2)).

The energy $E_{\mathrm{inj}}$ is determined by the efficiency of stochastic acceleration upstream of the shock. The rate of stochastic (second-order) acceleration is $\tau _{st}^{-1}\sim u_t^2/D$ while the rate of DSA is $\tau _{D}^{-1}\sim u^2/D$, where $u_t$ is the velocity of turbulence (plasma waves) and $D$ is the diffusion coefficient. The maximum energy of protons is of the order of 100 TeV in young SNRs. Then for $u_t/u\sim 0.1$, the maximum energy attainable through the stochastic mechanism is expected to be $E_{\mathrm{inj}}\sim 1$ TeV. However, this should be considered as an optimistic upper limit, given that the diffusion coefficient for the low-energy particles in the turbulent region upstream of the shock can be significantly larger than the Bohm diffusion coefficient. A more realistic estimate is given below.

We shall consider the reacceleration of particles by multiple small-scale shocks in the upstream region of the SNR shock. A particle is picked up by the small-scale shocks, accelerated, and advected downstream where it loses energy adiabatically. Then the particle is picked up by the next small-scale shock, {\it etc}. The energy density of relativistic electrons just downstream of a small-scale shock can be found after integration of Eq. (5). Because of the adiabatic expansion in the downstream region, this value drops by a factor of $\sigma _s^{4/3}$, where $\sigma _s$ is the compression ratio of the small-scale shock. So the energy density $\epsilon _-$ after one acceleration cycle is
\begin{equation} \frac {\epsilon _-}{\epsilon _0}=\frac {\gamma _s}{\gamma _s-4}\sigma _s^{-4/3}= \frac {3\sigma _s}{4-\sigma _s}\sigma _s^{-4/3}. \end{equation}
Here $\epsilon _0$ is the electron energy density at the beginning of the cycle. It is interesting to compare the relative change of the electron energy density to the relative change of the gas pressure $P$. Using the Rankine-Hugoniot conditions we find
\begin{equation} \frac P{P_0}=\frac {4\sigma _s-1}{4-\sigma _s}\sigma _s^{-5/3}. \end{equation}
Here $P_0$ is the gas pressure at the beginning of the cycle. One can see that the relative changes of the electron energy density and of the gas pressure are similar. For example, for $\sigma _s=3$ we have ${\epsilon _- }/{\epsilon _0}=2.08$ and ${P}/{P _0}=1.76$. For weaker shocks, the change of the electron energy density is higher than the change of the gas pressure.
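The per-cycle gains in Eqs. (9) and (10) are easy to tabulate; the short Python sketch below (an illustrative transcription, not part of the original analysis) reproduces the values quoted above for $\sigma_s=3$ and shows the trend for weaker shocks.
\begin{verbatim}
# Per-cycle gains behind a small-scale shock, Eqs. (9) and (10)
def electron_gain(s):     # eps_-/eps_0 = 3s/(4-s) * s^(-4/3)
    return 3.0 * s / (4.0 - s) * s ** (-4.0 / 3.0)

def pressure_gain(s):     # P/P_0 = (4s-1)/(4-s) * s^(-5/3)
    return (4.0 * s - 1.0) / (4.0 - s) * s ** (-5.0 / 3.0)

for s in (1.5, 2.0, 2.5, 3.0):
    print("sigma_s=%.1f  eps/eps0=%.2f  P/P0=%.2f"
          % (s, electron_gain(s), pressure_gain(s)))
# sigma_s=3.0 gives 2.08 and 1.76, the values quoted in the text
\end{verbatim}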
This means that after many cycles, the relative change of the gas pressure is comparable to or smaller than the change of the electron energy density. In other words, the gas heating in the upstream region of the SNR shock is accompanied by a similar or stronger electron reacceleration. Although the gas heating cannot be directly estimated from observations of SNRs, one can place a lower bound on it by assuming non-negligible amplification of the magnetic field. Numerical studies of the Bell instability show that the energy density of the heated gas is comparable to or higher than the energy of the amplified magnetic field \cite{bell04,zirakashvili08,riquelme09}. Namely, within the synchrotron-loss interpretation of thin X-ray filaments in young SNRs (see e.g. ref. \cite{voelk05}), the field in the upstream region can be amplified by a factor of 5 to 10. Since the magnetic energy density scales as the square of the field, the gas pressure should then increase by a factor as large as 100. A similar level of gas heating is needed to limit the strong shock modification and to avoid the appearance of concave CR spectra (see e.g. ref. \cite{malkov01}).

It is sufficient to have 8 cycles to provide a 100-fold increase of the gas pressure at shocks with $\sigma _s=3$. The corresponding increase of $E_{\mathrm{inj}}=\epsilon _-/n_-$ is several hundred. The modeling of the Bell instability with DSA \cite{zirakashvili08} shows that the upstream region of a young SNR, of width $L\sim 10^{18}$ cm, is filled with supersonic MHD turbulence with Mach number 3-4, while the distance between small-scale shocks is $l\sim 10^{16}$ cm. For these parameters and for turbulent motions with $u_t/u\sim 0.1$, the expected number of cycles is $Lu_t/lu\sim 10$.

One should note that the pre-accelerated electrons may have an impact on the upstream turbulence and thus regulate their own acceleration efficiency. In particular, a higher number density of pre-existing electrons would lower the energy $E_{\mathrm{inj}}$. Under these conditions, the energy density of pre-accelerated electrons may be of the order of the energy density of the upstream turbulence. The latter is believed to be several percent of the ram pressure $\rho u^2$ at cosmic-ray modified shocks. So the upper limit on the energy density of pre-accelerated electrons in Eq. (6) is $n_-E_{\mathrm{inj}}\sim 10^{-2}\rho u^2$. Even for a modest energy $E_{\mathrm{inj}}=100$ MeV, one can obtain, according to Eq. (7), a quite high ratio $P_+/\rho u_r^2\sim 0.1$. The shock may be slightly modified by the pressure of accelerated electrons and positrons!

\section{Applications to SNRs}
The picture discussed above, of pre-acceleration of electrons and positrons supplied by the decay products of short-lived radioactive elements, can be relevant to the reverse shock of Cas A. This can explain why the pressure of energetic positrons (electrons) in the shocked ejecta is comparable to the gas pressure in the supernova shell. The same could also be true for the radio-knots if they are fast-moving clumps of the shocked ejecta. At the present epoch, the pressure of energetic electrons at the forward shock of Cas~A is not very high, as follows from Eq. (8). However, most likely it was much higher in the past, when the radius of the remnant was smaller than 0.1 pc. Since the forward shock of Cas~A propagates in the dense stellar wind of the supernova progenitor with a density profile $\sim r^{-2}$, the accelerated electrons have been produced mainly in the past, when the synchrotron cooling in the amplified field was significant.
Now these electrons are located inside the forward shock. This can explain the rather steep radio spectrum of Cas A. In Cas~A, the energy of pre-accelerated electrons $E_{\rm inj}$ cannot exceed 100-200 MeV, otherwise this would be in conflict with the observed synchrotron radio spectrum. The spectral flattening seen at 20 MHz \cite{baars77} can be attributed, for a magnetic field at the reverse shock of the order of 100-200 $\mu$G, to a low-energy cut-off in the electron spectrum at 100 MeV. The magnetic field at the forward shock of Cas A is larger. However, since the radio-emitting electrons were accelerated in this region in the past, because of adiabatic losses their low-energy cut-off is now located below 100 MeV.

We conclude that the high radio brightness of Cas~A is caused by the dense stellar wind in which the forward shock propagates, and by a relatively high amount of radioactive $^{44}$Ti, the decay of which provides supra-thermal electrons and positrons for further acceleration by the reverse shock. This is in contrast to other historical young SNRs like Tycho, Kepler, and SN1006. They are the results of type Ia supernova explosions in a uniform medium. Therefore, in these objects the electrons accelerated by forward shocks are produced predominantly at later epochs. In addition, the ejecta of Cas~A, because of the dense stellar wind, have been shocked very early, likely just after the explosion. The radiative instabilities operating in the shocked ejecta could result in the formation of ejecta clumps \cite{hwang03,hwang09}, which are presently observed as radio-knots.

Since the reverse shock of Cas~A contains about 1$\%$ of the explosion energy, the energy fraction of electrons and positrons is close to $10^{-3}$. The electrons accelerated at the forward shock have a similar energetics. So we expect that in Cas~A approximately a $10^{-3}$ fraction of the supernova energy is transferred to the accelerated electrons and positrons. This conclusion is in agreement with estimates based on radio observations \cite{atoyan00}.

The energy fraction of $10^{-3}$ found for positrons in the reverse shock of Cas~A is expected to be the same for all young core-collapse supernovae. However, GeV positrons leave the remnant only at late stages, when its radius becomes a factor of 10 larger than the radius at the transition to the Sedov phase, when the positrons were accelerated. Since the energy of particles drops adiabatically (inversely proportional to the remnant's radius), the energy fraction of positrons will be reduced down to $10^{-4}$. The luminosity in galactic CR positrons at multi-GeV energies, based on the recent measurements of the Pamela collaboration \cite{adriani09}, is close to $10^{38}$ erg s$^{-1}$. Given the overall mechanical power of galactic core-collapse supernovae, $10^{42}$ erg s$^{-1}$, our model can explain the flux of the primary CR positrons by reverse shocks of young SNRs without invoking other source populations (for a review of different potential sources of galactic CR positrons see \cite{Positrons}). It is important to note that our model applied to Cas~A predicts a positron-to-electron ratio close to 1. The reason is that (i) the estimated energetics of leptons in the forward and reverse shocks of Cas A based on radio observations are comparable, and (ii) our model implies that while electrons are accelerated in the forward shock, in the reverse shock the content of positrons is equal to or larger than the content of electrons.
If so, Cas A, as well as other young SNRs, alone cannot provide the total flux of galactic CR electrons. In fact this is a model-independent statement based on estimates of the numbers of electrons in young SNRs. For old SNRs the situation is different. While the reverse shocks disappear in these objects, the forward shocks continue to accelerate electrons (although to modest energies, $E \leq 1$~TeV). In our model, the electrons produced via the Compton scattering of gamma-rays from $^{56}$Co are accelerated by forward shocks in old type Ia SNRs expanding in the uniform medium. According to the observed light curves, the ejecta of a type Ia supernova contain $\sim$0.6$M_{\odot }$ of $^{56}$Ni just after the supernova explosion. The mechanical power of galactic type Ia supernovae is of the order of $3\cdot 10^{41}$ erg s$^{-1}$, corresponding to approximately one supernova per century. On the other hand, the production rate of galactic CR electrons is close to $10^{39}$ erg s$^{-1}$ \cite{berezinsky90}. So a fraction of $0.3\%$ of the energy of Ia supernovae must be transferred to CR electrons. A similar ratio of the cosmic-ray electron pressure to the ram pressure is estimated for an old remnant with a radius of $30$ pc and a shock speed of $300$ km s$^{-1}$ if $E_{\mathrm{inj}}=3$ GeV (see Eq. (8)). The required higher value of $E_{\mathrm{inj}}$ can be explained by the lower number density of the circumstellar medium where the Ia supernova explosions occur.

We should note that another source of suprathermal electrons at supernova shocks has recently been suggested by Morlino \cite{morlino11}. Partially ionized multi-GeV ions accelerated at the shock can produce multi-MeV electrons via photo-ionization by the optical Galactic emission. The fraction $\eta =n_-/n$ of the corresponding electrons is estimated as $\eta \sim 0.1x_{He}\gamma ^{-1}u^2/c^2$ at cosmic-ray modified shocks. Here $x_{He}\sim 0.1$ is the fraction of Helium in the interstellar medium, $\gamma \sim I_{He}/\epsilon _{ph}$ is the gamma-factor of a He$^{+}$ ion ionized by Galactic optical photons with energy $\epsilon _{ph}$, and $I_{He}=54$ eV is the ionization potential of Helium. This results in $\eta \sim 10^{-4}u^2/c^2$ in young SNRs, where $\gamma \sim 100$ and ions are photo-ionized by eV optical photons, and $\eta \sim 10^{-3}u^2/c^2$ in old remnants, where $\gamma \sim 10$ and ions are photo-ionized by ultraviolet photons. These numbers are comparable to or higher than the numbers given by Eq. (3). Even without any preacceleration by MHD turbulence this mechanism results in an electron-to-proton ratio $K_{ep}\sim x_{He}m_e/m_p\sim 10^{-4}$. Although the preacceleration of these electrons is more problematic because they are produced closer to the shock by 10-100 GeV ions, it is not excluded. Then the corresponding injection energy necessary for the explanation of galactic cosmic-ray electrons can be below 1 GeV, closer to $E_{\mathrm{inj}}=100$ MeV, as argued above for the reverse shock of Cas A.

Finally, in the context of the proposed model, one can expect a harder CR positron spectrum. The positrons of higher energies leave the remnant earlier and are subject to lower adiabatic losses in comparison with the positrons of lower energies. This effect does not have an impact on the spectra of electrons accelerated predominantly by forward shocks in old SNRs. The harder source spectrum of positrons is in agreement with the recent Pamela measurements \cite{adriani09}.
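A short numerical cross-check of the budget estimates discussed above, using only the round numbers quoted in the text, may be useful:
\begin{verbatim}
# Order-of-magnitude check of the energy budgets quoted in the text
L_pos, P_cc = 1e38, 1e42   # CR positron luminosity; core-collapse SN power
print("positron fraction ~ %.0e" % (L_pos / P_cc))         # ~ 1e-4

L_el, P_Ia = 1e39, 3e41    # CR electron production rate; type Ia SN power
print("electron fraction ~ %.1f%%" % (100 * L_el / P_Ia))  # ~ 0.3%

# Morlino's photo-ionization fraction: eta ~ 0.1 * x_He / gamma * (u/c)^2
x_He = 0.1
for gamma, label in ((100.0, "young SNR"), (10.0, "old SNR")):
    print("%s: eta ~ %.0e (u/c)^2" % (label, 0.1 * x_He / gamma))
\end{verbatim}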
According to the scenario proposed in this paper, only forward shocks of young SNRs produced by supernova explosions with small ejecta masses, $M_{ej}<2M_{\odot }$, can contain a large amount of accelerated electrons. The relevant SNRs belong to the Ia/b/c and, probably, IIb (like Cas~A) type supernovae. Note that RX J1713.7-3946, the young SNR brightest in TeV gamma-rays, most likely belongs to the Ib/c type with a small ejecta mass \cite{zirakashvili10a}. In the case of IIP supernovae with large ejecta masses, gamma-rays from $^{56}$Co decay cannot effectively escape the ejecta and ``feed'' the forward shock with suprathermal electrons for further acceleration. If so, we should expect forward shocks of IIP SNRs to be dim in radio and non-thermal X-rays. On the other hand, a large amount of electrons and positrons from decays of $^{44}$Ti can be accelerated at reverse shocks of young SNRs of all types, including the most frequent IIP supernovae. In this regard, the youngest galactic SNR G1.9+0.3 is of special interest. It shows both a large content of $^{44}$Ti and ongoing acceleration of electrons by the reverse shock \cite{borkowski10}, two key components required in our model.

\section{Summary}
The ``radioactive'' origin of electron injection, related to both the forward and reverse shocks, seems to be a natural scenario in SNRs, with the following key components: 1) the energetic positrons (and possibly also electrons) from $^{44}$Ti decay are accelerated at reverse shocks of young SNRs; 2) the energetic electrons from the Compton scattering of $^{56}$Co-decay gamma-rays are accelerated at forward shocks of both old and young SNRs of type Ia/b/c and IIb; 3) a modest pre-acceleration (presumably of stochastic origin) to energies $E_{\mathrm{inj}}\sim 0.1$ GeV in the upstream regions of the forward and reverse shocks is a necessary condition in Cas A for explaining the energetics of relativistic electrons; 4) the proposed scenario can explain not only the overall flux of galactic CR electrons produced by SNRs, but also the recently reported tendency of a gradual increase of the positron-to-electron ratio with energy.
\section{Introduction}
Photoionization is the main source of electrons and ions in the dayside upper atmospheres of planets. Photoelectrons, generated in the photoionization process, can have enough kinetic energy to ionize the atmospheric constituents and produce secondary electrons. Similarly, energetic electrons precipitating along the magnetic field lines into the auroral atmosphere of a planet can ionize the medium, producing secondary electrons. Besides ionization, the electron energy is lost in excitation, attachment, and dissociation. Hence, the study of electron energy deposition in an atmosphere is an important aspect in understanding processes like aurora, dayglow, and nightglow [\textit{e.g., Bhardwaj and Gladstone}, 2000; \textit{Fox et al.}, 2008]. To model the electron energy degradation in an atmosphere one has to first compile cross sections for the various loss processes, and then develop an electron energy apportionment method, which will distribute the electron energy among the different loss channels.

The study of electron energy degradation in {$\mathrm{CO_2}$}\ is of fundamental interest in various fields of science. {$\mathrm{CO_2}$}\ is one of the most important molecules in our solar system. It comprises more than 90\% of the atmospheres of Venus and Mars. It is also used in lasers and in gaseous discharge or low-power plasma devices. Electron energy degradation in {$\mathrm{CO_2}$}\ gas has important applications to Mars and Venus. Earlier results from the Mariner satellites and the Mars 3 and Mars 4 spacecraft confirmed the presence of an ionosphere on Mars, and also detected various emission features on Mars [\textit{e.g., Barth et al.}, 1971; \textit{Dementyeva et al.}, 1972], which have been studied in detail by recent SPICAM ultraviolet spectrometer observations aboard Mars Express [\emph{e.g., Bertaux et al.}, 2006; \emph{Leblanc et al.}, 2006]. Emissions from Venus have been studied quite extensively by Pioneer Venus [\textit{e.g., Fox and Bougher}, 1991] and by the ongoing Venus Express [\textit{e.g., Bertaux et al.}, 2007]. Electron impact excitation and dissociative excitation of {$\mathrm{CO_2}$}\ are the key processes in the production of several emissions on Mars and Venus.

In this paper we present a Monte Carlo model which describes the energy degradation of $\leq$1000 eV electrons in an atmosphere of {$\mathrm{CO_2}$}. Earlier studies of electron degradation in {$\mathrm{CO_2}$}\ have been carried out by \textit{Sawada et al.} [1972], \textit{Green et al.} [1977], and \textit{Fox and Dalgarno} [1979]. Monte Carlo methods are a class of numerical methods based on stochastic techniques. Though time consuming, the Monte Carlo method, due to its probabilistic nature, is an excellent technique for studying the energy degradation of particles, provided a sufficient sample size is taken. Hence, Monte Carlo methods have been widely used in problems dealing with energetic particle degradation in gases and in applications to planetary atmospheres [\textit{e.g.}, \emph{Cicerone and Bowhill}, 1971; \emph{Ashihara}, 1978; \emph{Green et al.}, 1977, 1985; \emph{Singhal et al.}, 1980; \emph{Singhal and Green}, 1981; \emph{Singhal and Bhardwaj}, 1991; \emph{Bhardwaj and Singhal}, 1993; \emph{Michael and Bhardwaj}, 2000; \emph{Bhardwaj and Michael}, 1999a, b; \textit{Shematovich et al.}, 2008].

In section 2, we present a compilation of all the e-{$\mathrm{CO_2}$}\ loss-process cross sections available to date, fitted with simple analytical forms.
These analytically fitted cross sections can be easily used in the Monte Carlo model, which is presented in section 3. The output of the Monte Carlo simulation is employed to generate a ``yield spectrum,'' which is presented in section 4. The concept of the yield spectrum was first introduced by \emph{Green et al.} [1977] and further developed by many workers [\textit{e.g.}, \emph{Green and Singhal}, 1979; \emph{Singhal and Green}, 1981; \textit{Singhal and Haider}, 1984; \emph{Green et al.}, 1985; \emph{Singhal and Bhardwaj}, 1991; \emph{Bhardwaj and Singhal}, 1993; \emph{Bhardwaj and Michael}, 1999a]. The yield spectrum embodies the information about the electron degradation processes and can be used to calculate the ``yield'' for any inelastic event. The numerical yield spectrum is represented in an analytical form, resulting in an analytical yield spectrum (AYS). The AYS and its comparison with the numerical yield spectrum are also presented in section 4. In sections 5 and 7, we present the calculated mean energy per ion pair and the efficiencies for inelastic processes, respectively, obtained using the AYS, and compare them with those obtained by using the numerical yield spectra. The energy distribution of secondary and tertiary electrons produced during ionization events is presented in section 6. A summary of the paper is presented in section 8.

\section{Cross sections}
\subsection{Total}
The laboratory-measured total scattering cross section (TCS) is available between 0.1 eV and 5000 eV. The TCS for e-CO$_2$ collisions has been measured by several authors in different energy ranges -- \emph{Ferch et al.} [1981] in the energy range 0.007-4.5 eV, \emph{Buckman et al.} [1987] 0.1-5 eV, \emph{Szmytkowski et al.} [1987] 0.5-3000 eV, \emph{Kimura et al.} [1997] 0.8-500 eV, \emph{Kwan et al.} [1983] 1-500 eV, and \emph{Garcia and Manero} [1996] 400-5000 eV. At low energies, the TCS of \emph{Szmytkowski et al.} [1987], \emph{Buckman et al.} [1987], and \emph{Ferch et al.} [1981] are in agreement to within 10\%. Recently, \emph{Zecca et al.} [2002] have determined the best values of the TCS. In the lowest energy range ($<$1 eV) \emph{Zecca et al.} [2002] adopted the experimental data of \emph{Ferch et al.} [1981] and \emph{Buckman et al.} [1987], which are in good agreement with each other. In the 1-1000 eV energy range, \emph{Zecca et al.} [2002] averaged the cross sections obtained by \emph{Szmytkowski et al.} [1987], \emph{Kimura et al.} [1997], and \emph{Kwan et al.} [1983], with equal weight, to obtain the recommended values, which are in good agreement with \emph{Garcia and Manero} [1996] at higher ($>$400 eV) energies. In his review, \emph{Itikawa} [2002] has recommended the TCS of \emph{Zecca et al.} [2002]. The TCS reaches a maximum value of $60\times10^{-16}$ cm$^2$ at 0.1 eV [\emph{Ferch et al.}, 1981; \emph{Buckman et al.}, 1987]; it then goes through a minimum of $5.5\times10^{-16}$ cm$^2$ at 1.9 eV [\emph{Szmytkowski et al.}, 1987]. A resonance structure is also present at $\sim$3.8 eV.

\subsection{Elastic}
\subsubsection{Differential elastic}
The differential elastic scattering cross section (DCS) for e-CO$_2$ collisions has been measured by many authors [cf. review by \textit{Itikawa}, 2002; \textit{Karwasz et al.}, 2001]. In the 1-4 eV energy range, the DCS values of \emph{Gibson et al.} [1999] and \emph{Tanaka et al.} [1998] are in good agreement at forward angles ($\leq$50{$^\circ$}); however, at larger angles they differ by 20-30\%.
Overall, at most energies there is good agreement in shape between these two sets of DCS. At 30, 40, and 50 eV, the DCS measurements of \emph{Gibson et al.} [1999], \emph{Kanik et al.} [1989], and \emph{Tanaka et al.} [1998] are in reasonable accord, within the uncertainties of each measurement, and at 50 eV the DCS of \emph{Gibson et al.} [1999] and \emph{Register et al.} [1980] are consistent. At 100 eV, the measured DCS values of \emph{Iga et al.} [1999] are in good agreement with \emph{Kanik et al.} [1989] and \emph{Tanaka et al.} [1998]. We have taken the DCS values from \emph{Tanaka et al.} [1998] in the 1-100 eV range; however, the values at 40, 50, 70, 80, and 90 eV are taken from \emph{Kanik et al.} [1989], which agree well with the cross sections of \emph{Tanaka et al.} [1998] in the entire energy range. The DCS values in the 200-400 eV range are taken from \emph{Iga et al.} [1999], and those in 500-1000 eV are taken from \emph{Iga et al.} [1984]. In Table 1, we present the DCS values used in this work.

\subsubsection{Total elastic}
Based on the DCS measured by \emph{Register et al.} [1980], \emph{Tanaka et al.} [1998], and \emph{Gibson et al.} [1999], \emph{Buckman et al.} [2002] have determined the total elastic cross section in the 1-100 eV range with an estimated uncertainty of $\pm30\%$. \emph{Shirai et al.} [2001] have reported the recommended elastic cross section up to 1000 eV by considering the beam data of \emph{Iga et al.} [1999]. \emph{Itikawa} [2002] has recommended the elastic cross section of \emph{Buckman et al.} [2002] in the energy range 1-60 eV, and that of \emph{Shirai et al.} [2001] in the energy range 100-1000 eV. The two data sets merge smoothly. We have taken the total elastic cross section as recommended by \emph{Itikawa} [2002]. The total elastic cross section is fitted using the semi-empirical formula [\emph{Bhardwaj and Michael}, 1999a]:
\begin{eqnarray} \sigma(E) = \frac{1}{A_1+B_1E}+\frac{1}{A_2+B_2E}+ \frac{2}{E}\frac{\sqrt{A_1A_2}}{A_2B_1-A_1B_2} \ln\frac{(1+B_1E/A_1)}{(1+B_2E/A_2)}, \end{eqnarray}
where $A_1, B_1, A_2,$ and $B_2$ are the fitting parameters, whose values are $8.090\times10^{-16}$ $\mathrm{\AA}^{-2}$, $2.184\times10^{-2}$ $\mathrm{\AA}^{-2}$ keV, $0.92$ $\mathrm{\AA}^{-2}$, and $5.0\times10^{-4}$ $\mathrm{\AA}^{-2}$ keV, respectively, and $E$ is the energy of the electron in eV. The lower limit of the fit is 30 eV, and the fitted cross section is shown in Figure 1. At energies below 30 eV it is difficult to fit the cross section using the above equation due to the resonance structure present at low energies ($\sim$4 eV), and hence these values are fed numerically into the Monte Carlo model.

\subsection{Dissociative electron attachment}
The dissociative attachment process in e-CO$_2$ collisions, which mainly occurs at energies $<$12 eV, leads to the formation of the negative ions O$^{-}$, O$_2^-$, and C$^{-}$. \emph{Rapp and Briglia} [1965] measured absolute values of the total cross section for the production of negative ions from CO$_2$. \emph{Orient and Srivastava} [1983] obtained the cross section for the production of O$^{-}$ ions and showed that it is the dominant anion. Their values are in agreement with those of \emph{Rapp and Briglia} [1965] within the uncertainty of the cross sections ($\pm20\%$) and the energy scale ($\pm0.1$ eV). \emph{Spence and Schulz} [1974] measured the cross sections for the production of C$^{-}$ and O$_2^-$ ions.
The cross section for O$_2^-$ production has two peaks of the order of $10^{-24}$ cm$^2$, at 11.3 and 12.9 eV, while the cross section for C$^{-}$ production has three peaks with the largest value of $\sim$$2\times 10^{-21}$ cm$^2$. The cross sections for O$_2^-$ and C$^{-}$ are small compared to that of O$^{-}$, and hence are not considered in our study. We have adopted the cross section values of \emph{Rapp and Briglia} [1965] for the production of O$^{-}$ ions from CO$_2$. The cross section shows a double-peak structure -- peaks at 4.1 and 8.3 eV, with the latter peak value ($4.28\times 10^{-19}$ cm$^2$) about 2.5 times the value of the former peak. The cross section for each peak has been fitted with the following analytical form [\textit{Bhardwaj and Michael}, 1999a]:
\begin{equation} \sigma(E)=\frac{Ae^{t}/U}{(1+e^{t})^2}. \end{equation}
Here $t=(E-W_p)/U$, where $W_p$ is the energy at the peak. The values of the overall normalization parameter $A$ and the effective width parameter $U$ for each of the peaks, along with the parameter $W_p$ and the threshold energy $W_{th}$, are presented in Table 2. The fitted cross sections along with the laboratory measurements are given in Figure 2.

\subsection{Ionization}
The ionization and dissociative ionization of CO$_2$ by electron impact produce singly and doubly ionized ions (CO$_2^+$, CO$^+$, C$^+$, O$^+$, C$^{++}$, O$^{++}$, and CO$_2^{++}$). The cross sections for these processes have been reported by \emph{Rapp and Englander-Golden} [1965], \emph{Shyn and Sharp} [1979], \emph{Orient and Srivastava} [1987], \emph{Tian and Vidal} [1998], and \textit{Straub et al.} [1996]. Recently, \emph{McConkey et al.} [2008] have reviewed the electron impact dissociation cross sections for {$\mathrm{CO_2}$}. For the total ionization cross section, the measurements of \emph{Orient and Srivastava} [1987], \emph{Tian and Vidal} [1998], and \emph{Straub et al.} [1996] agree within the error limits with the values of \emph{Rapp and Englander-Golden} [1965] up to 1000 eV, and with the data of \emph{Shyn and Sharp} [1979] in the energy range 50-400 eV. \emph{Tian and Vidal} [1998] have also measured the cross sections for double and triple ionization of {$\mathrm{CO_2}$}\ due to electron impact. After a survey of the available experimental data, \emph{Lindsay and Mangan} [2002] suggested recommended values of the ionization cross sections. Their partial cross sections are based on the measurements of \emph{Straub et al.} [1996]. For the total ionization cross section below 25 eV, \emph{Lindsay and Mangan} [2002] adopted the values of \emph{Rapp and Englander-Golden} [1965]. At energies above 25 eV, they reported uncertainties of 5\% for the partial cross sections for the production of CO$_2^+$, CO$^+$, C$^+$, and O$^+$, and for the total ionization cross section. The cross sections at energies below 25 eV have uncertainties of 7\%. There are also uncertainties in the appearance energies of the fragment ions CO$^+$, C$^+$, O$^+$, C$^{++}$, and O$^{++}$. We have taken the appearance energies for the fragment ions from \emph{Itikawa} [2002]. We have used the dissociative and direct ionization cross sections recommended by \emph{Lindsay and Mangan} [2002] [cf. \emph{Itikawa}, 2002; \textit{McConkey et al.}, 2008]. The CO$_2^+$ ion can be produced in four electronic states, viz., X$^2\Pi_g$, A$^2\Pi_u$, B$^2\Sigma_u^+$, and C$^2\Sigma_g^+$.
Cross sections for the A$^2\Pi_u$ and B$^2\Sigma_u^+$ states have been taken from \textit{Itikawa} [2002], while the cross sections for the X$^2\Pi_g$ and C$^2\Sigma_g^+$ states have been taken from \textit{Jackman et al.} [1977]. For double ionization, the cross sections for (CO$^+$,O$^+$), (C$^+$,O$^+$), and (O$^+$,O$^+$) production have been taken from \emph{Tian and Vidal} [1998] up to 600 eV; these cross sections have not been added to the total ionization cross section because they are already accounted for in the cross sections for the formation of CO$^+$, C$^+$, and O$^+$ ions. All these cross sections have been fitted using the analytical expression [\emph{Jackman et al.}, 1977; \textit{Bhardwaj and Michael}, 1999a]
\begin{equation} \sigma(E)=A\Gamma \left[\arctan\left(\frac{T_M-T_0}{\Gamma}\right) + \arctan\left(\frac{T_0}{\Gamma} \right)\right], \end{equation}
where
$$ A(E)=\left[\frac{K}{E+K_B}\right]\ln\left[\frac{E}{J}+J_B+\frac{J_C}{E} \right]; $$
$$ \Gamma(E)=\Gamma_S\left[\frac{E}{E+\Gamma_B}\right]; $$
$$ T_0(E)=T_S-\left[\frac{T_A}{E+T_B}\right]; \qquad T_M=\frac{E-I}{2}. $$
Here $E$ is the incident energy in eV, $I$ is the fitting ionization potential in eV, which is generally close to the threshold potential ($W_{th}$), and $\sigma$ is in units of $10^{-16}$ cm$^2$. This form gives the asymptotic behavior $\sigma(E)\propto E^{-1}\ln E$ at high energies, which is expected from the Born approximation. The fitting parameters are presented in Table 3. The fitted cross sections for single and double ionization are shown in Figures 3 and 4, respectively.

\subsection{Excitation cross sections}
\subsubsection{Vibrational excitation}
CO$_2$ is a linear triatomic molecule which has three normal modes of vibration, \textit{i.e.}, a bending mode (0 n 0), a symmetric stretching mode (n 0 0), and an asymmetric stretching mode (0 0 n), with excitation energies of 83 meV, 172 meV, and 291 meV, respectively [\textit{Kochem et al.}, 1985]. The infrared-active (010) bending and (001) asymmetric stretching modes in the near-to-threshold region follow the Born approximation. The structure near the threshold of vibrational excitation in CO$_2$ has been investigated by \textit{Kochem et al.} [1985]. Vibrationally inelastic DCS at impact energies above 4 eV have been measured by \textit{Register et al.} [1980] for scattering angles $10^\circ-140^\circ$ and impact energies of 4, 10, 20, and 50 eV, and by \textit{Johnstone et al.} [1995] for only one scattering angle ($20^\circ$) in the energy region 1 to 7.5 eV. \textit{Nakamura} [1995] determined the vibrational cross section using a swarm experiment. \textit{Kitajima et al.} [2001] made measurements of the DCS for the electron impact excitation of CO$_2$ for the (010), (100), (001), and (020) vibrational modes over the scattering angles $20^\circ-130^\circ$ and the energy range 1.5-30 eV (except at 4 eV, where the smallest angle was extended to $10^\circ$), and assigned an uncertainty of 30\% to their measurements. Their DCS are consistent with the results of previous beam-type measurements. \textit{Itikawa} [2002] has extrapolated the DCS of \textit{Kitajima et al.} [2001] to obtain the total vibrational cross sections for the three modes, which are presented in Figure 1. In our studies we have taken the cross sections for the three fundamental vibrational modes (010), (100), and (001) from \textit{Itikawa} [2002]. There are other modes as well, but their cross sections are small compared to these three fundamental modes.
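The analytic fit forms of equations (1)-(3) are straightforward to evaluate numerically. A minimal Python sketch is given below; it is an illustrative transcription only, and the parameter values, apart from those of equation (1) quoted in section 2.2.2, must be supplied from Tables 2 and 3.
\begin{verbatim}
import math

def sigma_elastic(E, a1, b1, a2, b2):
    # Eq. (1): semi-empirical total elastic fit (valid above 30 eV)
    log_term = math.log((1.0 + b1 * E / a1) / (1.0 + b2 * E / a2))
    return (1.0 / (a1 + b1 * E) + 1.0 / (a2 + b2 * E)
            + (2.0 / E) * math.sqrt(a1 * a2)
              / (a2 * b1 - a1 * b2) * log_term)

def sigma_attachment_peak(E, A, U, Wp):
    # Eq. (2): one attachment peak; A, U, Wp from Table 2
    t = (E - Wp) / U
    return A * math.exp(t) / (U * (1.0 + math.exp(t)) ** 2)

def sigma_ionization(E, K, KB, J, JB, JC, GS, GB, TS, TA, TB, I):
    # Eq. (3): Jackman-type ionization fit; parameters from Table 3
    A = K / (E + KB) * math.log(E / J + JB + JC / E)
    Gamma = GS * E / (E + GB)
    T0 = TS - TA / (E + TB)
    TM = (E - I) / 2.0
    return A * Gamma * (math.atan((TM - T0) / Gamma)
                        + math.atan(T0 / Gamma))
\end{verbatim}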
\subsubsection{Electronic excitation}
There are several features in the optical and electron scattering spectrum of {$\mathrm{CO_2}$}\ in the energy loss range between 7 and 11 eV [\emph{Herzberg}, 1966; \emph{Rabalais et al.}, 1971; \emph{Hall et al.}, 1973]. Except for the Rydberg states, there is still no definite consensus about the structure and assignment of the excited electronic states of {$\mathrm{CO_2}$}. In the energy loss spectra of {$\mathrm{CO_2}$}, \emph{Green et al.} [2002] have found four clearly distinct peaks at 10.98, 11.05, 11.16, and 11.40 eV, with an uncertainty of 30\% in their results. \emph{Itikawa} [2002] in his review paper has recommended the DCS of \emph{Green et al.} [2002] for the excitation of the 10.8-11.5 eV energy loss states. Recently, \emph{Kawahara et al.} [2008] have given the integral cross sections for the electronic states $^1\Sigma_u^+$ and $^1\Pi_u$ of {$\mathrm{CO_2}$}, based on the DCS measurements of \emph{Green et al.} [2002], in the energy range 20-200 eV. Theoretical calculations of the electronic structure have also been made by several authors [\emph{Nakatsuji}, 1983; \emph{Spielfiedel et al.}, 1992; \emph{Buenker et al.}, 2000; \emph{Lee et al.}, 1999]. Using the distorted-wave method, \emph{Lee and McKoy} [1983] calculated the cross sections for the excitation of eight low-lying states. But there is not much agreement among these calculations. In summary, there is still a need for a detailed study of the excitation of the electronic states of {$\mathrm{CO_2}$}\ by electron impact. We have taken the empirical cross sections of \emph{Jackman et al.} [1977] for the electronic states of {$\mathrm{CO_2}$}. These cross sections have been obtained using the equation
\begin{equation} \sigma(E)=\frac{(q_0F)}{W^2}\left[1-\left(\frac{W}{E}\right)^\alpha \right]^\beta \left[\frac{W}{E}\right]^\Omega \end{equation}
where $q_0=4\pi a_0^2R^2$ has the value $6.512\times10^{-14}$ eV$^2$ cm$^2$. The fitting parameters are given in Table 4. The parameters for the two states at 12.4 and 13.6 eV, which correspond to the Cameron band of CO [cf. \emph{Sawada et al.}, 1972], have been modified. The peak cross section of their sum is $2.40\times 10^{-16}$ cm$^{2}$ at 80 eV [\emph{Erdman and Zipf}, 1983].

\subsection{Emission}
Electron impact dissociation and ionization of CO$_2$ can result in the production of excited fragments of CO, O, and CO$_2$ in neutral and ionized states, resulting in emissions in the ultraviolet region. These emissions are important for understanding phenomena like aurora and dayglow that occur in the atmospheres of Mars and Venus, and in other {$\mathrm{CO_2}$}-containing atmospheres. The strong band systems observed on Mars are the Fox-Duffendack-Barker bands ($A^2\Pi_u \rightarrow X^2\Pi_g$) and the ultraviolet doublet ($B^2\Sigma_u^+\rightarrow X^2\Pi_g$) of CO$_2^+$, and the Cameron bands ($a^3\Pi \rightarrow X^1\Sigma^{+}$) of CO [\emph{Ajello}, 1971; \emph{Barth et al.}, 1971; \emph{Bertaux et al.}, 2006; \emph{Leblanc et al.}, 2006]. \emph{Ajello} [1971] measured the emission cross sections for the $A^2\Pi_u \rightarrow X^2\Pi_g$ and $B^2\Sigma_u^+\rightarrow X^2\Pi_g$ bands of CO$_2^+$ from threshold to 300 eV. He also measured cross sections for the excitation of the fourth positive system of CO ($A^1\Pi\rightarrow X^1\Sigma^{+}$), the first negative system of CO$^+$ ($B^2\Sigma^{+}\rightarrow X^2\Sigma^{+}$), and several atomic multiplets of carbon and oxygen produced from dissociative excitation of {$\mathrm{CO_2}$}.
\subsubsection{Emission from CO$_2^+$}
\emph{McConkey et al.} [1968], \emph{Ajello} [1971], and \emph{Tsurubuchi and Iwai} [1974] have detected emissions corresponding to the following transitions:
$$ A^2\Pi_u \rightarrow X^2\Pi_g \quad \mathrm{at}\ 293.6 - 438.4\ \mathrm{nm} $$
and
$$ B^2\Sigma_u^+\rightarrow X^2\Pi_g \quad \mathrm{at}\ 218.9 - 226.8\ \mathrm{nm}. $$
The peak values of the cross sections measured by the three groups for the above transitions are in good agreement with each other. These emissions are well known in the Martian upper atmosphere. Both the ground and excited states of CO$_2^+$ are known to be linear [\emph{Herzberg}, 1966]. The cross section of \emph{Ajello} [1971] has too steep an energy dependence near threshold compared to \emph{McConkey et al.} [1968] and \emph{Tsurubuchi and Iwai} [1974]. In his review, \emph{Itikawa} [2002] recommended the cross sections of \emph{Tsurubuchi and Iwai} [1974], for which the peak values are $(8.0\pm2.0)\times10^{-17}$ cm$^2$ at 160 eV for the $A - X$ transition, and $(4.7\pm1.2)\times10^{-17}$ cm$^2$ for the $B - X$ transition. We have taken the cross sections for the $A - X$ and $B - X$ emissions of CO$_2^+$ from \emph{Itikawa} [2002]. These cross sections have been fitted using equation (3). The fitting parameters are given in Table 3, and the fitted cross sections in Figure 3.

\subsubsection{Emission from CO$^+$}
Only \emph{Ajello} [1971] has measured the cross section for the emission of the first negative system ($B^2\Sigma^+ \rightarrow X^2\Sigma^+$) of CO$^+$. The cross section exhibits an appearance potential of 25.11 eV, and the peak value of the cross section is $1.9\times 10^{-18}$ cm$^2$ around 100 eV. The cross section for the excitation of the first negative system of CO$^+$ from electron impact on {$\mathrm{CO_2}$}\ is about a factor of 25 less than for the excitation of the same system from CO [\emph{Ajello}, 1971]. We have adopted the cross section of \emph{Ajello} [1971], which has been fitted analytically using equation (4); the fitting parameters are given in Table 4. Figure 3 shows the fitted cross section along with the experimental cross section.

\subsubsection{Emission from CO}
Cross sections for the production of the Cameron band system ($a^3\Pi \rightarrow X^1\Sigma^+$) and the fourth positive system ($A^1\Pi \rightarrow X^1\Sigma^+$) of CO have been measured by \emph{Ajello} [1971]. The emission cross section for the fourth positive system is very weak, and Ajello could not measure the cross section near threshold (13.48 eV). For the Cameron band system, \emph{Ajello} [1971] reported relative magnitudes of the cross section for the (0, 1) band at 215.8 nm. The upper state ($a^3\Pi$) of the Cameron emission is metastable and has a long radiative lifetime ($\sim$3 ms) [\emph{Gilijamse et al.}, 2007], and the kinetic energies of the CO($a^3\Pi$) fragments are in the range of 0--1.2 eV [\emph{Freund}, 1971]. \emph{Erdman and Zipf} [1983] measured the total cross section for the CO ($a^3\Pi \rightarrow X^1\Sigma^+$) electronic transition. They estimated the absolute magnitude of the total Cameron band emission cross section to be $2.4\times10^{-16}$ cm$^2$ at 80 eV. The Cameron band is the brightest emission feature in the UV dayglows of both Mars and Venus, as well as an important emission in {$\mathrm{CO_2}$}-containing atmospheres, \textit{e.g.}, comets.
\subsubsection{Emission from O and C} Both \textit{Ajello} [1971] and \textit{Mumma et al.} [1972] have reported cross sections for the emission of the O 130.4 nm triplet from electron impact on {$\mathrm{CO_2}$}, but the measurements are not consistent with each other. There are many other atomic emissions produced in e-{$\mathrm{CO_2}$}\ collisions, but they have very small cross sections [cf. \textit{van der Burgt et al.}, 1989]. \emph{Kanik et al.} [1993] have reported the emission cross sections for O, O$^+$, C, C$^+$, CO, and CO$^+$ in the wavelength region 40--125 nm; all of their cross sections are less than $10^{-18}$ cm$^2$. We have adopted the O I and C I production cross sections of \textit{Jackman et al.} [1977]. \section{Monte Carlo Model} The transport of radiation is a naturally stochastic process and is therefore amenable to the Monte Carlo method. In a Monte Carlo simulation, an inherently stochastic system is modeled by artificial random sampling. In the present work we have developed a Monte Carlo model to simulate the local degradation of 1--1000 eV electrons in an atmosphere of CO$_2$ gas. The energy bin size is taken as 1 eV throughout the energy range. In the simulation we consider elastic scattering between electrons and neutral {$\mathrm{CO_2}$}\ molecules, as well as various inelastic processes such as ionization, excitation, attachment, and dissociation; the cross sections for these processes are described in section 2. Figure 5 illustrates how an individual electron is treated in the Monte Carlo simulation. The initial energy $E_0$ of the electron is fixed at the beginning of the simulation, and the direction of motion of the electron ($\theta,\ \phi$) is decided with the help of two random numbers $R_1$ and $R_2$ [random numbers are uniformly distributed in the range (0, 1)] as \begin{equation} \theta=\cos^{-1}(1-2R_1), \end{equation} \begin{equation} \phi=2\pi R_2. \end{equation} The distance to the next collision is calculated from \begin{equation} S = -\log(1-R_3)/n\sigma_T, \end{equation} where $R_3$ is a random number, $n$ is the number density of the neutral target species (taken as $1\times10^{10}$ cm$^{-3}$), and $\sigma_T$ is the total (elastic + inelastic) electron impact collision cross section. After generating a new random number $R_4$, the probability of elastic collision $P_{el}=\sigma_{el}/\sigma_T$ is calculated. If $P_{el} > R_4$, an elastic collision takes place; if $P_{el}\leq R_4$, an inelastic event takes place, and in this case we further test for the type of inelastic event with the help of another random number. For elastic scattering the energy loss is calculated as \begin{equation} \bigtriangleup E=\frac{m^2v^2}{m+M}-\frac{m^2vV_1\cos\delta}{m+M}, \end{equation} $$ V_1=v\left[\frac{m\cos\delta}{m+M}+\frac{[M^2+m^2(\cos^2\delta-1)]^{1/2}} {m+M}\right]. $$ Here $\delta$ is the scattering angle in the laboratory frame, $v$ and $m$ are the velocity and mass, respectively, of the electron, and $M$ is the mass of the target particle. Differential elastic cross sections (discussed in section 2.2.1) are used to obtain the scattering angle $\delta$.
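A minimal sketch (Python) of the collision loop just described, using toy cross sections and a single lumped inelastic loss in place of the full cross-section set of section 2; apart from the number density and the 1 eV cutoff, every numerical input below is a placeholder.

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(42)
N_CO2 = 1e10      # target number density n [cm^-3], as in the text
E_CUT = 1.0       # low-energy cutoff [eV]
M_RATIO = 9.11e-28 / 7.31e-23   # electron/CO2 mass ratio m/M

def sigma_el(E):    # placeholder elastic cross section [cm^2]
    return 1e-15 / np.sqrt(E)

def sigma_inel(E):  # placeholder lumped inelastic cross section [cm^2]
    return 5e-16 if E > 10.0 else 0.0

def degrade_one(E0):
    """Follow one electron until its energy falls below the cutoff."""
    E = E0
    theta = np.arccos(1.0 - 2.0 * rng.random())  # initial polar angle
    phi = 2.0 * np.pi * rng.random()             # initial azimuth
    # (theta, phi would feed the rotation formulas; unused in this
    #  purely local sketch, which does not track position.)
    path, n_coll = 0.0, 0
    while E > E_CUT:
        s_tot = sigma_el(E) + sigma_inel(E)
        path += -np.log(1.0 - rng.random()) / (N_CO2 * s_tot)
        if rng.random() < sigma_el(E) / s_tot:
            # Elastic: mean fractional loss ~ 2m/M, a stand-in for
            # the exact kinematic expression with the sampled angle.
            E -= 2.0 * M_RATIO * E
        else:
            E -= 10.0   # placeholder inelastic energy loss [eV]
        n_coll += 1
    return n_coll, path

print(degrade_one(100.0))
\end{verbatim}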
Differential cross sections are fed numerically into the Monte Carlo model at 28 unequally spaced energy points (1.5, 2, 3, 3.8, 4, 5, 6, 6.5, 7, 8, 9, 10, 15, 20, 30, 40, 50, 60, 70, 80, 90, 100, 200, 300, 400, 500, 800, and 1000 eV) and at 20 scattering angles ($0^\circ$, $5^\circ$, $10^\circ$, $15^\circ$, $20^\circ$, $30^\circ$, $40^\circ$, $50^\circ$, $60^\circ$, $70^\circ$, $80^\circ$, $90^\circ$, $100^\circ$, $110^\circ$, $120^\circ$, $130^\circ$, $135^\circ$, $150^\circ$, $165^\circ$, and $180^\circ$); at intermediate energy and angular points the values are obtained through linear interpolation. The energy $\bigtriangleup E$ is subtracted from the energy of the test particle. After the collision, the deflection angles relative to the direction ($\theta,\phi$) are obtained as $$ \cos\theta^{''}=\cos\theta\cos\theta^{'}-\sin\theta\sin\theta^{'} \cos\phi^{'} , $$ \begin{eqnarray} \cos\phi^{''}=(\cos\theta\cos\phi\sin\theta^{'} \cos\phi^{'}-\sin\phi\sin\theta^{'}\sin\phi^{'} +\sin\theta\cos\phi\cos\theta^{'})/\sin\theta^{''}, \end{eqnarray} \begin{eqnarray*} \sin\phi^{''}=(\cos\theta\sin\phi\sin\theta^{'}\cos\phi^{'} +\cos\phi\sin\theta^{'}\sin\phi^{'} +\sin\theta\sin\phi\cos\theta^{'})/\sin\theta^{''}. \end{eqnarray*} Here $\theta^{'}$ and $\phi^{'}$ are the scattering angles. In the case of an inelastic collision, the next step is to determine whether the event is an ionization or any other type of inelastic collision. If the collision is an ionization event, a secondary electron is produced. The energy of the secondary electron $T$ is calculated with the help of a random number $R$ as [\textit{Bhardwaj and Michael}, 1999a] \begin{equation} T=\frac{\Gamma_S\ E_v}{E_v+\Gamma_B}[\tan(RK_1+(R-1)K_2)]+T_S -\left[\frac{T_A}{E_v+T_B}\right], \end{equation} where $$ K_1 = \tan^{-1}\left\{\left[\frac{(E_v-I)}{2}-T_S +\frac{T_A}{(E_v+T_B)}\right] /\frac{\Gamma_S\ E_v}{(E_v+\Gamma_B)}\right\}, $$ $$ K_2 = \tan^{-1}\left\{\left[T_S -\frac{T_A}{(E_v+T_B)}\right] /\frac{\Gamma_S\ E_v}{(E_v+\Gamma_B)}\right\}. $$ Here $E_v$ is the energy of the incident primary electron before the ionization event; $\Gamma_S$, $\Gamma_B$, $T_A$, $T_B$, and $T_S$ are fitting parameters whose values are given in Table 3, and $I$ is the ionization threshold. If the energy of the secondary electron produced in the ionization event exceeds the lowest cutoff energy (1 eV in our simulation), it is tracked in the same manner as the primary electron (cf. Figure 5). Secondary electrons can themselves cause ionization, producing tertiary electrons, which are treated in the same way; tertiary and subsequent electrons are also followed in the Monte Carlo simulation. The numbers of secondary, tertiary, and subsequent electrons produced during the ionization events are stored in the appropriate energy bins. After the type of collision event has been decided, the appropriate energy is subtracted from the energy of the particle. All collision events are recorded in the energy bins corresponding to the energy of the electron at the time of collision. The history (track) of each particle is traced through every interaction event until the electron energy falls below the assigned cutoff value of 1 eV. The sample size in the present study is $10^6$ particles for each simulation.
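A sketch of the secondary-electron energy sampling of equation (10) (Python). By construction the sampled energy runs from 0 at $R=0$ to $(E_v-I)/2$ at $R=1$; the parameter values in the example call are placeholders, not the Table 3 values.

\begin{verbatim}
import numpy as np

def sample_secondary(Ev, I, Gs, Gb, Ts, Ta, Tb, rng):
    """Draw a secondary-electron energy T from equation (10)
    [Bhardwaj and Michael, 1999a]; Ev is the primary energy before
    the ionizing collision, I the ionization threshold."""
    width = Gs * Ev / (Ev + Gb)          # Lorentzian width
    peak = Ts - Ta / (Ev + Tb)           # distribution peak position
    K1 = np.arctan((0.5 * (Ev - I) - peak) / width)
    K2 = np.arctan(peak / width)
    R = rng.random()
    return width * np.tan(R * K1 + (R - 1.0) * K2) + peak

rng = np.random.default_rng(1)
# Placeholder parameters (not the Table 3 values for CO2):
draws = [sample_secondary(200.0, 13.8, Gs=12.0, Gb=40.0,
                          Ts=6.0, Ta=1000.0, Tb=10.0, rng=rng)
         for _ in range(5)]
print(np.round(draws, 2))
\end{verbatim}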
\section{Yield Spectra} When all the sampled electrons have been degraded, we obtain a two-dimensional yield spectrum, a function of the spectral energy $E$ and the incident primary electron energy $E_0$, defined as [\emph{Green et al.}, 1977]: \begin{equation} U(E,E_0)=\frac{N(E)}{\bigtriangleup E}, \end{equation} where $N(E)$ is the number of inelastic collision events for which the spectral energy of the electron is between $E$ and $E+\bigtriangleup E$, and $\bigtriangleup E$ is the energy bin width (1 eV in our model). This yield spectrum is related to the degradation spectrum or equilibrium flux $f(E,E_0)$ of \emph{Spencer and Fano} [1954] by \begin{equation} U(E,E_0)=\sigma_T(E)f(E,E_0), \end{equation} where $\sigma_T$ is the total inelastic collision cross section. The yield spectrum $U(E,E_0)$ embodies the nonspatial information of the degradation process. It represents the equilibrium number of electrons per unit energy at an energy $E$ resulting from the local energy degradation of an incident electron of energy $E_0$, and can be used to calculate the yield $J_j$ of any state $j$ at energy $E_0$ via \begin{equation} J_j(E_0)=\int_{W_{th}}^{E_0} U(E,E_0)\: P_j(E)\, dE \end{equation} where $P_j(E)=\sigma_j(E)/\sigma_T(E)$ is the probability of occurrence of the $j$th process, whose threshold potential is $W_{th}$. The yield obtained for a particular process from this equation is used in the following sections to calculate the mean energy per ion pair and the efficiencies of the various loss processes. Except at very low energies, the yield spectrum $U(E,E_0)$ and the excitation probability $P_j(E)$ both vary with $E$ in a much simpler manner than do $f(E,E_0)$ and $\sigma_j(E)$. For many applications the yield spectrum obtained from equation (11) is represented in the form \begin{equation} U(E,E_0)=U_a(E,E_0)\ H(E_0-E-E_m)+\delta(E_0-E). \end{equation} Here $H$ is the Heaviside function, $E_m$ is the minimum threshold of the processes considered, and $\delta(E_0-E)$ is the Dirac delta function, which accounts for the contribution of the source itself. In atmospheric and astrophysical applications it is convenient to represent $U_a(E,E_0)$ in an analytical form [\textit{Green et al.}, 1977]: \begin{equation} U_a(E,E_0)=A_1\xi _0^s+A_2(\xi _0^{1-t}/\epsilon^{3/2 +r}) \end{equation} Here $\xi_0=E_0/1000$ and $\epsilon=E/I$ ($I$ is the lowest ionization threshold), and $A_1=0.027,\ A_2=1.20,\ t=0,\ r=0,$ and $s=-0.0536$ are the best-fit parameters. We have also tried two other analytical forms, given by \textit{Singhal et al.} [1980] and \textit{Green et al.} [1985]. The form given by \textit{Singhal et al.} [1980] is: \begin{equation} U_a(E,E_0)=C_0+C_1\ \chi + C_2\ \chi^2 \end{equation} Here $\chi=E_0^\Omega/(E+L)$, where $\Omega=0.585$, $L=1.0$, $E_0$ is in keV, and $C_0=0.0185,\ C_1=5.98,$ and $C_2=210.4$ are the fitted parameters. The analytical form given by \textit{Green et al.} [1985] is: \begin{equation} U_a(E,E_0)=C_0+C_1(E_k+K)/[(E-M)^2+L^2]. \end{equation} Here $E_k=E_0/1000$, and $C_0$, $C_1$, $K$, $M$, and $L$ are fitted parameters independent of the energy; their values are $C_0=0.0299$, $C_1=430$, $K=0.0041$ keV, $M=0.31$ eV, and $L=1.9$ eV. In obtaining our analytical fits we did not include values of the yield spectra very close to $E_0$, because in this regime the yield spectra contain the rapid oscillations known as the ``Lewis effect'' [cf.
\textit{Douthat}, 1975]. These oscillations arise because there are only a finite number of inelastic channels, each with a discrete threshold energy, so that there are only certain energies near $E_0$ which an electron can acquire. Obviously, no electron can acquire an energy between $E_0$ and $E_0-E_m$, which is why the Heaviside function $H$ appears in the first term on the right-hand side of equation (14). The numerical yield spectrum represented analytically using equations (15), (16), and (17) is the two-dimensional analytical yield spectrum (AYS). In our studies we have used the AYS obtained from equation (15), which is presented in Figure 6 along with the numerical yield spectra obtained using equation (14). It is clear from Figure 6 that the analytical spectra represent the numerical yield spectra quite well above the ionization threshold; however, at lower energies (below 15 eV) the AYS departs from the numerical yield spectra. Similar behavior is seen in the AYS of \textit{Green et al.} [1977]. To overcome this deficiency we introduce an additional function to modify the lower-energy part of the AYS: \begin{equation} U_b(E,E_0)=\frac{E_0A_0e^{x}/A_1}{(1+e^{x})^2}. \end{equation} Here $x=(E-A_2)/A_1$, and $A_0$, $A_1$, and $A_2$ are fitting parameters with values $A_0=10.095$, $A_1=5.5$, and $A_2=0.9$. Equation (18) affects only the lower-energy ($\leq$15 eV) part of the fit. The final AYS is the sum of equations (15) and (18), which is shown in Figure 6 at several incident energies, giving a better fit at lower energies ($>$5 eV) as well as at higher energies. Because of its functional simplicity and computational efficiency, the AYS technique has been widely used in different planetary atmospheres for various aeronomical calculations, such as steady state electron fluxes and volume production rates for any ionization or excitation state; the details of the computational technique are described in earlier papers [e.g., \emph{Singhal and Haider}, 1984; \emph{Bhardwaj and Singhal}, 1993; \emph{Singhal and Bhardwaj}, 1991; \emph{Bhardwaj et al.}, 1990, 1996; \emph{Bhardwaj}, 1999, 2003; \emph{Bhardwaj and Michael}, 1999a, b; \emph{Michael and Bhardwaj}, 2000; \textit{Haider and Bhardwaj}, 2005]. \section{Mean Energy per Ion Pair} The mean energy per ion pair, $\mu_j$, is defined as the incident energy $E_0$ divided by the number of ion pairs produced. It can be expressed as \begin{equation} \mu_j(E_0)=E_0/J_j(E_0), \end{equation} where $J_j(E_0)$ is the population of the $j$th ionization process obtained from equation (13). The mean energy per ion pair is known to approach a constant value at higher energies. Figure 7 shows the mean energy per ion pair for the ions CO$_2^+$ (including the ground and excited states), CO$^+$, O$^+$, C$^+$, CO$_2^{++}$, O$^{++}$, and C$^{++}$, along with that for neutral {$\mathrm{CO_2}$}; solid symbols represent the mean energy per ion pair for neutral {$\mathrm{CO_2}$}\ obtained directly from the Monte Carlo simulation at a few energy points. The mean energy for all ions decreases very rapidly above threshold, but beyond $\sim$100 eV $\mu$ declines slowly, becoming almost constant at higher energies. The values of $\mu$ for CO$_2^+$, CO$^+$, O$^+$, and C$^+$ at 200 (1000) eV are 53.6 (51.2), 403 (415), 263.1 (247.8), and 626.7 (576.2) eV, respectively. The mean energy per ion pair for neutral {$\mathrm{CO_2}$}\ gas is 37.5 (35.8) eV at 200 (1000) eV.
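To make the bookkeeping concrete, the sketch below (Python) evaluates the composite AYS of equations (15) and (18), using the fitted constants quoted above, and applies equations (13) and (19). The channel probability $P_j(E)$ in the demonstration is an invented placeholder, so the printed yield and mean energy per ion pair are toy numbers, not model values.

\begin{verbatim}
import numpy as np

I_CO2 = 13.8   # assumed lowest ionization threshold of CO2 [eV]

def ays(E, E0):
    """Composite AYS: eq. (15) plus the low-energy term of eq. (18)."""
    xi0 = E0 / 1000.0
    eps = E / I_CO2
    ua = 0.027 * xi0**(-0.0536) + 1.20 * xi0 / eps**1.5      # eq. (15)
    s = 1.0 / (1.0 + np.exp(-(E - 0.9) / 5.5))   # logistic sigmoid
    ub = E0 * (10.095 / 5.5) * s * (1.0 - s)     # eq. (18), stable form
    return ua + ub

def yield_j(E0, W_th, P_j, n=4000):
    """Eq. (13): J_j(E0) = integral over E of U(E,E0) * P_j(E)."""
    E = np.linspace(W_th, E0, n)
    return np.sum(ays(E, E0) * P_j(E)) * (E[1] - E[0])

# Placeholder channel probability (not a real cross-section ratio):
P_ion = lambda E: 0.4 * (1.0 - I_CO2 / E)
J = yield_j(1000.0, I_CO2, P_ion)
print("toy yield J_j =", round(J, 1),
      "; toy mean energy per ion pair =", round(1000.0 / J, 1), "eV")
\end{verbatim}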
\emph{Fox and Dalgarno} [1979] reported a value of 33.1 eV for $\mu$ at 200 eV, while \emph{Green et al.} [1977] obtained a value of 34.7 eV at 200 eV from their MDEB method. The measured mean energy per ion pair in neutral {$\mathrm{CO_2}$}\ is 32.7 eV at high energies [\textit{Klots}, 1968]. The mean energies per ion pair for the X$^2\Pi_g$, A$^2\Pi_u$, B$^2\Sigma_u^+$, and C$^2\Sigma_g^+$ states of CO$_2^+$ at 200 (1000) eV are 112.3 (118.4), 180.3 (156), 301.5 (266.4), and 1999 (1222) eV, respectively. \section{Secondary Electron Distribution} During the degradation process, every time an electron undergoes an ionizing collision a secondary electron is produced, whose energy is calculated using equation (10). The maximum energy of the secondary electron is $(E-I)/2$, where $E$ is the energy of the colliding electron and $I$ is the ionization potential. As mentioned before, secondary and tertiary electrons are treated in the same manner as the primary electrons in the Monte Carlo model. The energy distribution of secondary electrons is presented in Figure 8 at several incident energies, showing the number of secondary electrons produced per incident primary electron. The energy distributions of tertiary and quaternary electrons, which are presented only at E$_0=1000$ eV, are much steeper than that of the secondary electrons. Each incident electron of E$_0 = 1000$ eV produces, at some point in its energy degradation, at least one secondary, tertiary, or quaternary electron whose energy is $<$7 eV. \section{Efficiency} As the electrons collide with atmospheric particles, they lose their energy and finally become thermalized. The energy of the colliding electron is divided among the various inelastic loss processes. The efficiency is the fraction of the incident electron energy that is eventually deposited in a particular loss channel after the completion of the entire degradation process. The efficiency, $\eta_j(E_0)$, of the $j$th process at incident energy $E_0$ can be obtained as \begin{equation} \eta_j(E_0)=\frac{W_{th}}{E_0}\; J_j(E_0) \end{equation} We have calculated the efficiencies for all inelastic collisions using the numerical yield spectra obtained from equation (14) and the AYS [sum of equations (15) and (18)]. Figure 9 presents the efficiencies of the various single ionization events producing CO$_2^+$, CO$^+$, O$^+$, and C$^+$. CO$_2^+$ has the maximum efficiency throughout the energy region due to its higher ionization cross section. At 1000 eV, $\sim$31\% of the incident electron energy goes into CO$_2^+$ formation, while 5.9\%, 9.8\%, and 5.0\% goes into the production of CO$^+$, O$^+$, and C$^+$, respectively. At higher energies ($>$100 eV) the increase in the efficiencies of all ions is small, while near threshold they fall very rapidly. At threshold, the efficiencies for CO$_2^+$, CO$^+$, O$^+$, and C$^+$ are 5.1\%, 1.1\%, 0.16\%, and 0.19\%, respectively, while at 200 eV they are 29\%, 6.0\%, 9.2\%, and 4.6\%, respectively. Efficiencies for CO$_2^+$(A-X), CO$_2^+$(B-X), and the first negative band of CO$^+$(B-X) are also shown in Figure 9. At 200 (1000) eV, 12.2 (11.6)\% of the incident electron energy goes into the CO$_2^+$(A-X) emission, while 9.8 (11.4)\% and 3.0 (3.3)\% go into the CO$_2^+$(B-X) and CO$^+$(B-X) emissions, respectively. Figure 10 shows the efficiencies for double ionization of {$\mathrm{CO_2}$}.
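Equation (20) converts a yield into an efficiency. A self-contained toy evaluation (Python) follows; the threshold and yield values are back-of-envelope numbers chosen to be consistent with the $\sim$31\% CO$_2^+$ efficiency at 1000 eV quoted above, not outputs of the model.

\begin{verbatim}
def efficiency(E0, W_th, J_j):
    """Eq. (20): eta_j = (W_th / E0) * J_j, the fraction of the
    incident energy E0 deposited in loss channel j."""
    return (W_th / E0) * J_j

# Back-of-envelope check: a CO2+ yield of ~22.5 ion pairs at
# E0 = 1000 eV with a 13.8 eV threshold reproduces ~31%.
print(f"eta(CO2+) ~ {100 * efficiency(1000.0, 13.8, 22.5):.1f} %")
\end{verbatim}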
At 200 (1000) eV, the efficiencies for CO$_2^{++}$, O$^{++}$, and C$^{++}$ are 0.56 (0.67)\%, 0.052 (0.12)\%, and 0.092 (0.14)\%, respectively. We have also calculated the efficiencies for the (CO$^+$,O$^+$), (C$^+$,O$^+$), and (O$^+$,O$^+$) ion pairs, based on the cross sections of \emph{Tian and Vidal} [1998], with values of 2.7 (3.1)\%, 1.8 (2.4)\%, and 0.96 (1.1)\% at 200 (1000) eV. It is clear from Figures 9 and 10 that the efficiencies calculated from the model and those obtained using the AYS are in good agreement. Efficiencies for various excitation processes are presented in Figure 11. The 13.6, 12.4, and 11.1 eV states dominate the excitation events, with efficiencies of 16 (15)\%, 12 (13)\%, and 4.7 (4.2)\% at 200 (1000) eV, respectively. Efficiencies of various line emissions of atomic oxygen and carbon are shown in Figure 12. The efficiencies for O I (1304), O I (1356), C I (1279), C I (1329), C I (1561), and C I (1657) are 0.12 (0.13)\%, 0.27 (0.28)\%, 0.084 (0.089)\%, 0.035 (0.030)\%, 0.10 (0.093)\%, and 0.19 (0.18)\%, respectively, at 200 (1000) eV. Overall, the efficiencies calculated from the numerical yield spectra and from the AYS for the various emission and excitation events are in good agreement. In Figure 13 we present a summary picture of the electron energy distribution in {$\mathrm{CO_2}$}\ for all the loss processes, grouped into important loss channels. At higher ($>$50 eV) energies ionization is the dominant loss process, consuming $\sim$50\% of the energy. At lower energies ($<$15 eV) the 11.1, 12.4, 8.6, and 9.3 eV loss channels are more important, and at energies below 10 eV vibrational excitation becomes the main loss channel. We have also shown the efficiency of the total attachment process, which produces the negative ion O$^-$. The efficiency for O$^-$ production peaks around 8 eV with a value of 0.8\%, while it is 0.15 (0.13)\% at 200 (1000) eV. The total efficiency for double ionization, which produces the CO$_2^{++}$, O$^{++}$, and C$^{++}$ ions, is also depicted in the figure. The double ionization efficiency rises sharply above 40 eV, with values of 0.4 (0.7)\% at 100 (200) eV. Around 1000 eV the double ionization efficiency is 0.9\%, which is higher than that of the 8.6 and 9.3 eV excitation states. On the other hand, at energies $>$100 eV the efficiency for dissociative ionization is higher than that of the 13.6 and 12.4 eV states. \section{Summary} In this paper we have presented a Monte Carlo model for the degradation of $\le$1000 eV electrons in {$\mathrm{CO_2}$}\ gas. All the e-{$\mathrm{CO_2}$}\ collision cross sections are compiled and fitted analytically. The analytical cross sections are presented in figures along with the laboratory-measured cross sections for direct comparison, and the fitting parameters are provided in tables. The output of the Monte Carlo model is used to calculate the numerical ``yield spectrum'', which is then represented in an analytical form. This analytical yield spectrum (AYS) can be used in planetary atmospheres to determine various aeronomical quantities. We have modified and improved the AYS of \textit{Green et al.} [1977] and \textit{Singhal et al.} [1980] by adding a term that provides a better analytical representation of the yield spectra at lower ($<$15 eV) energies. The yield spectrum is employed to compute the mean energy per ion pair and the efficiencies of various inelastic processes. The mean energy per ion pair for {$\mathrm{CO_2}$}\ is found to be 37.5 (35.8) eV at 200 (1000) eV.
The energy distribution of secondary electrons produced per incident electron is presented at a few incident energies. Efficiency is an effective measure of the fraction of the incoming particle's energy that goes into a particular loss channel. We have presented efficiencies for various inelastic events calculated both with the AYS and with the numerical yield spectra obtained from the Monte Carlo model; the efficiencies obtained by the two methods are in good agreement. In addition to the major inelastic processes, efficiencies are presented for the formation of negative ions, for double and dissociative double ionization of {$\mathrm{CO_2}$}, and for total vibrational excitation in the (100), (010), and (001) states. Since the AYS does not represent the numerical yield spectra well at very low ($<$5 eV) energies, the yields for vibrational excitation and attachment processes calculated with the AYS are approximate. Ionization is the dominant loss process at higher energies: above 100 eV, $\sim$50\% of the energy goes into ionization. At energies around and below the ionization threshold, excitation processes become important, and at energies below 10 eV vibrational excitation is the dominant loss channel, consuming more than 70\% of the energy. The 13.6 and 12.4 eV loss channels are also important; at 1000 eV, around 28\% of the incident particle energy goes into these states. A part of these states produces the Cameron band emission, which is important in the atmospheres of Mars and Venus as well as in comets (\textit{Bhardwaj and Raghuram}, 2009, in preparation). The efficiencies presented in this paper can be applied to planetary atmospheres by folding them with the electron production rate and integrating over energy. These results will be useful in modeling aeronomical processes in the atmospheres of Mars, Venus, and other {$\mathrm{CO_2}$}-containing bodies.
\section{Introduction} Observations of the emergent spectra of transiting extrasolar planets with the \emph{Spitzer Space Telescope} have enabled us to probe the atmospheres of a class of giant extrasolar planets known as ``hot Jupiters''. These planets have masses and radii similar to the gas giants in our solar system, but orbit very close to their parent stars, with equilibrium temperatures ranging from 1000 to 2500 K. By measuring the wavelength-dependent decrease in light when the planet moves behind the star, in an event known as a secondary eclipse, we can construct a dayside emission spectrum for the planet \citep{deming05, char05}. During its cryogenic mission, \emph{Spitzer} obtained multi-wavelength observations of fifteen extrasolar planets during secondary eclipse. The results of these studies indicate that hot Jupiter atmospheres can be distinguished by the presence or absence of a strong temperature inversion in the upper atmosphere \citep[e.g.,][]{burrows07b, burrows08, fortney08, barman08, madhu09}. The \emph{Spitzer Space Telescope} is continuing to survey hot Jupiter emission spectra during its post-cryogenic mission. Since its cryogen was exhausted in May 2009, only the 3.6 and 4.5 $\mu$m channels of the Infrared Array Camera (IRAC; Fazio et al. 2004) remain operational. Fortunately, these two wavebands are well placed to constrain the range of possible models for these atmospheres. Planets without a strong inversion, which include HD 189733b \citep[e.g.][]{deming06, grillmair07, grillmair08, char08, barman08, swain09}, TrES-1 \citep{char05}, and TrES-3 \citep{fressin10}, are best described by models that exhibit H$_2$O and CO absorption features, which decrease the eclipse depth at 4.5 $\mu$m relative to 3.6 $\micron$. A strong thermal inversion changes these features from absorption to emission, thereby increasing the flux at wavelengths greater than 4 $\micron$ in the atmospheres of planets such as HD 209458b \citep{deming05, richardson07, knutson08, swain09}. Of the systems already observed with \emph{Spitzer}, eleven have been found to possess strong temperature inversions (see Knutson et al. 2010 for a review). \begin{figure}\epsscale{1.0} \plotone{f1.eps} \caption{Background estimate \emph{vs.} time for the 3.6 $\mu$m images. We estimate the background by fitting a Gaussian to the central region of the histogram of counts in the entire array. The background estimates exhibit a ramp-like behavior, while also varying between three distinct levels.} \end{figure} In this paper, we present measurements spanning two secondary eclipses of the transiting extrasolar planet WASP-4b. WASP-4b is a 1.24 M$_{Jup}$ planet orbiting at 0.023 AU from a G7V star \citep{wilson08, gillon09, winn09}. If we assume that the planet absorbs all incident flux and re-emits that flux as a blackbody from the dayside alone, we calculate a maximum dayside effective temperature of about 2000 K. This highly irradiated planet provides an excellent test case for the correlation between temperature inversions and stellar irradiation (for a recent review see Wheatley et al. 2010). It has been hypothesized that absorbers such as gas-phase TiO in the upper atmosphere trap stellar irradiation, creating a thermal inversion \citep{hubeny03}.
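As an aside, the $\sim$2000 K estimate quoted above follows from the simple energy-balance relation $T_{day} = T_* (R_*/a)^{1/2} f^{1/4}$, with $f=1/2$ for dayside-only reemission. A minimal sketch (Python), in which the stellar radius, not quoted in this excerpt, is assumed to be roughly 0.91 $R_\odot$ following Winn et al. (2009):

\begin{verbatim}
import numpy as np

R_SUN = 6.957e8           # m
AU = 1.496e11             # m

T_star = 5500.0           # K (stellar effective temperature)
R_star = 0.91 * R_SUN     # assumed stellar radius (Winn et al. 2009)
a = 0.023 * AU            # orbital semimajor axis

# Zero albedo; f = 1/2 for dayside-only reemission, 1/4 for uniform.
for f, label in ((0.5, "dayside-only"), (0.25, "uniform")):
    T_eq = T_star * np.sqrt(R_star / a) * f**0.25
    print(f"{label:>12s}: {T_eq:.0f} K")   # ~1990 K and ~1670 K
\end{verbatim}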
However, both because TiO is a heavy molecule and because titanium can condense into solid grains in night-side and day-side cold traps, significant macroscopic mixing would be required to maintain it in the upper atmosphere; it is not clear whether such vigorous mixing should be expected in a stably stratified atmosphere \citep{spiegel09}. This theory also fails to explain the presence of a temperature inversion in XO-1b's atmosphere, as this planet has a dayside temperature well below the condensation point of TiO. An alternative theory suggests that temperature inversions could instead be explained by absorption of UV and short-wavelength visible light by sulfur-containing species \citep{zahnle09}. WASP-4b has a radius of 1.365$\pm$0.021 $R_{Jup}$ \citep{winn09}, which is larger than predicted by models of irradiated planets \citep{burrows07a, fortney07, guillot08}, placing it among the subset of ``bloated'' planets. One possible explanation is that the inflated radius is caused by tidal heating due to ongoing orbital circularization. Using formulae from Liu et al. (2008), Winn et al. (2009) find that an orbital eccentricity between 0.002 and 0.02 would produce enough heat to inflate the planet to its observed size. Using radial velocity measurements, Madhusudhan \& Winn (2009) find a 95.4$\%$ confidence upper limit on $e$ of 0.096. By measuring the time of secondary eclipse, we can place a much tighter upper limit on the parameter $e\cos\omega$, which will help determine whether tidal heating is a viable explanation. In Sec. 2 we describe the observations and outline our fits to the data. In Sec. 3 we compare our results to the predictions of atmospheric models. Finally, in Sec. 4, we present our conclusions. \begin{figure}\epsscale{1.0} \plotone{f2.eps} \caption{Photometry at 3.6 and 4.5 $\mu$m \emph{vs.} time from the center of secondary eclipse. The decorrelation functions used to correct for intrapixel sensitivity are overplotted. We use a linear function of $x$ and $y$ position to correct the 4.5 $\mu$m photometry and a linear function of $x$, $y$, and time for the 3.6 $\mu$m observations.} \end{figure} \section{Observations and Methods} We observed a secondary eclipse of WASP-4b in the 4.5 $\mu$m band on UT 2009 December 6 using IRAC on board the \emph{Spitzer Space Telescope}. We observed in full array mode with a 10.4 s integration time, yielding a total of 2115 images over a period of 7.7 hr. We observed a second secondary eclipse in the 3.6 $\mu$m band on UT 2009 December 9 using the same 10.4 s integration time, again acquiring 2115 images in 7.7 hr. We perform photometry on the Basic Calibrated Data (BCD) files produced by version S18.13.0 of the Spitzer pipeline. These data files are dark-subtracted, linearized, flat-fielded, and flux-calibrated. The cBCD images have been further corrected for artifacts due to bright sources, such as column pulldown, but these corrections have an unknown effect on time series photometry, and we therefore elect to use the standard BCD images in our analysis. We extract the UTC-based Julian date for each image from the FITS header (keyword DATE$\_$OBS) and correct to mid-exposure. We convert to UTC-based BJD using the JPL Horizons ephemeris to estimate \emph{Spitzer}'s position during the observations. We correct for transient ``hot pixels'' in a 20$\times$20 pixel box around the star by comparing each pixel's intensity to the median of the 10 preceding and 10 following frames at that position.
If a pixel in an individual frame has an intensity $>$ 3$\sigma$ from the median value, its value is replaced by the median. We corrected 0.32$\%$ and 0.35$\%$ of the pixels in the box in the 3.6 $\mu$m and 4.5 $\mu$m band images, respectively. We estimate the background by fitting a Gaussian to the central region of the histogram of counts in the entire array. We find that the background varies significantly from frame to frame in both channels. The background values, which are plotted in Figure 1 for the 3.6 $\mu$m band images, display a ramp-like behavior, while also varying between three distinct levels. We find a similar pattern in channel 2. This behavior is likely a ubiquitous feature of warm \emph{Spitzer} data, as it is also observed in the warm \emph{Spitzer} analysis of CoRoT-1 and CoRoT-2 \citep{deming10}. We use three methods to measure the position of the star on the array: we calculate the flux-weighted centroid within 5.0 pixels of the approximate center of the star, fit a 2D Gaussian with a fixed width to a 7$\times$7 pixel subarray centered on the brightest pixel of the star \citep[e.g.,][]{agol10, stevenson10}, and fit Gaussians to the marginal $x$ and $y$ sums using GCNTRD, which is part of the standard IDL astronomy library. Each method yields eclipse depths consistent to within 1$\sigma$. We find that using the GCNTRD estimates for channel 1 and the 2D Gaussian estimates for channel 2 produces the smallest reduced chi-squared for the fits, and we therefore adopt these position estimates. (2D Gaussian fits produced $\chi^2$=2535 and $\chi^2$=2239 for channels 1 and 2, respectively, whereas GCNTRD fits produced $\chi^2$=1973 and $\chi^2$=2273.) The difference in $\chi^2$ between the two position estimates is very large in channel 1. While we trim approximately the same number of points using both methods, we find that the GCNTRD positions result in a lower level of correlated noise in the final light curve. The rms differences between the 2D Gaussian and GCNTRD positions are 0.075 pixels in $x$ and 0.229 pixels in $y$ for channel 1, and 0.059 pixels in $x$ and 0.145 pixels in $y$ for channel 2. These differences are primarily in the form of a constant offset; the relative changes in position calculated using the two methods are quite similar. We perform aperture photometry with DAOPHOT using apertures ranging from 3.0 to 5.0 pixels in half-pixel intervals. We carried out our fits for each of these apertures and found that the eclipse depths and times remain consistent for apertures between 3.0 and 5.0 pixels. We choose an aperture size of 3.5 pixels for our analysis because it minimizes both the probability of hot pixels falling within the aperture and the root mean square (rms) scatter in the data. \begin{figure}\epsscale{1.0} \plotone{f3.eps} \caption{Photometry in both wavebands after decorrelation \emph{vs.} time from the center of secondary eclipse. The data are binned in 6.6 minute intervals. The error bars are based on the scatter of the individual points in each bin.
The best-fit eclipse curve is overplotted.} \end{figure} \begin{deluxetable*}{ccccc} \tablecaption{Summary of Secondary Eclipse Results \label{summary}} \tablewidth{0pt} \tablenum{1} \tablehead{ \colhead{Wavelength ($\mu$m)} & \colhead{Center of Eclipse (BJD)} & \colhead{Depth ($\%$)} & \colhead{Eclipse Offset (min)} & \colhead{T$_{bright}$(K) \tablenotemark{a}} } \startdata 3.6 & 2455174.87731 $\pm$ 0.00087 & 0.319 $\pm$ 0.031 & 0.5 $\pm$ 1.3 & 1832 $\pm$ 71\\ 4.5 & 2455172.2011 $\pm$ 0.0013 & 0.343 $\pm$ 0.027 & 0.1 $\pm$ 1.9 & 1632 $\pm$ 56\\ \enddata \tablenotetext{a}{We calculate the brightness temperature of the planet by finding the flux-weighted average of the planet-star flux ratio over each \emph{Spitzer} bandpass. We use a 5500 K PHOENIX NextGen model \citep{haus99} for the stellar spectrum and set the planet's emission spectrum equal to a blackbody, then solve for the temperature at which the planet-star flux ratio equals the observed eclipse depth.} \end{deluxetable*} The position of the star varies by 0.50 pixels in $x$ and 0.46 pixels in $y$ in the 3.6 $\mu$m images, and by 0.21 pixels in $x$ and 0.26 pixels in $y$ in the 4.5 $\mu$m images. We discard any images where the measured flux, $x$ position, or $y$ position is $>$ 3$\sigma$ from the median of the twenty frames surrounding the image in the time series. We removed a total of 10 images (0.47$\%$) and 15 images (0.71$\%$) from the 3.6 $\mu$m and 4.5 $\mu$m observations, respectively. The measured flux from the star varies significantly with its position on the pixel \citep[e.g.][]{char05,char08}. In order to correct for this intrapixel sensitivity, we fit the data with linear functions of the $x$ and $y$ positions. We fit the 4.5 $\mu$m data with a linear function of the form \begin{equation} f = f_0(c_1(x-x_0)+c_2(y-y_0)+c_3) \end{equation} \noindent where $f$ is the flux measured on the array, $f_0$ is the original flux of the star, $x$ and $y$ are the positions of the star on the array, $x_0$ and $y_0$ are the median values of $x$ and $y$ over the time series, and the constants $c_1$--$c_3$ are free parameters. As a check we also try fits to the 4.5 $\micron$ data using a linear function of time instead of the $x$ and $y$ variables described above, but this results in noticeably poorer fits ($\chi^2$=2429 for the linear function of time, with four degrees of freedom (d.o.f.), versus $\chi^2$=2239 for the function of $x$ and $y$, with five d.o.f.). We also try a linear function of $x$, $y$, and time, but find that the additional time term produces only a negligible improvement in the fit ($\chi^2$=2235, six d.o.f.). We fit the 3.6 $\mu$m data with a linear function of $x$, $y$, and time. Here we find that the linear term in time produces a clear improvement in both the chi-squared value ($\chi^2$=1973, six d.o.f., versus $\chi^2$=2007, five d.o.f., with and without the linear time term, respectively) and the amount of correlated noise. We also try adding quadratic terms in $x$ and $y$, which are usually required when the star falls on the peak of the intrapixel curve (the center of the pixel). However, we find that these additional degrees of freedom in $x$ and $y$ have a negligible effect on the final time series, eclipse values, and chi-squared ($\chi^2$=1968, eight d.o.f.), and we therefore elect to use the linear fit. Figure 2 shows the photometry with the decorrelation functions overplotted for each waveband.
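The decorrelation of equation (1) reduces to ordinary least squares once $f_0$ is absorbed into the fitted coefficients. A minimal sketch with synthetic data (Python; the coefficients and noise level are invented, not values fitted in this work):

\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
n = 2000
dx = 0.2 * rng.standard_normal(n)   # x - x0 [pixels]
dy = 0.2 * rng.standard_normal(n)   # y - y0 [pixels]
flux = (0.01 * dx - 0.02 * dy + 1.0) + 5e-4 * rng.standard_normal(n)

# Design matrix for f = f0*(c1*(x-x0) + c2*(y-y0) + c3)   [eq. (1)]
A = np.column_stack([dx, dy, np.ones(n)])
coeffs, *_ = np.linalg.lstsq(A, flux, rcond=None)
corrected = flux / (A @ coeffs)     # divide out the intrapixel trend
print("recovered f0*c1, f0*c2, f0*c3:", np.round(coeffs, 4))
print("rms after decorrelation:", corrected.std())
\end{verbatim}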
We use a Markov Chain Monte Carlo method \citep{ford05, winn07} with $10^6$ steps to simultaneously determine the eclipse depth, the timing of the eclipse, and the corrections for intrapixel sensitivity. We use five free parameters for the 4.5 $\mu$m data and six free parameters for the 3.6 $\mu$m data, including the linear term in time. We set the system parameters (planetary and stellar radii, orbital period, and orbital inclination) to the values given in Winn et al. (2009), and we calculate the eclipse curve using the equations of Mandel $\&$ Agol (2002). The uncertainty on each point is set equal to the rms deviation of the out-of-eclipse data after removing the intrapixel effect. We also trim the first half hour of data from both the 3.6 and 4.5 $\mu$m time series because it exhibits larger deviations in position, perhaps due to settling of the telescope at a new pointing. We take the median value of the distribution for each parameter as our best-fit solution. We calculate symmetric error bars about the median by finding the range over which the probability distribution contains 68$\%$ of the points above and below the median. The distributions for all parameters are nearly Gaussian, and there are no strong correlations between parameters. Best-fit eclipse depths and times are given in Table 1. As a check, we ran a second independent Markov chain for each channel and obtained identical results. Figure 3 shows the photometry after it has been corrected with the best-fit intrapixel correlation function, with the best-fit eclipse curves overplotted. We also calculate error bars using the ``prayer-bead'' method \citep{gillon09}: we divide our time series by the best-fit solution from the Markov chain, shift the residual time series in one-point increments, multiply the best-fit solution back in, and calculate the eclipse depth and time for each new data set (a schematic implementation is sketched below). The prayer-bead distributions gave error bars that were consistent with the Markov chain errors, and we elect to use the larger of the two errors in each case. For channel 1, we use the prayer-bead error for the eclipse depth (0.031\%) instead of the Markov error (0.019\%), whereas we use the Markov error for the time (1.3 min) instead of the prayer-bead error (0.72 min). For channel 2, we use the prayer-bead errors for both the eclipse depth (0.027\%) and the time (1.9 min); the corresponding Markov errors are 0.023\% and 1.4 min. We find that the rms variation in our light curves after correcting for intrapixel sensitivity is 1.1 and 1.2 times the predicted photon noise from the star at 3.6 and 4.5 $\mu$m, respectively. The reduced chi-squared values for our fits are 1.01 ($\chi^2$=1973, 1945 points, six d.o.f.) and 1.16 ($\chi^2$=2239, 1941 points, five d.o.f.) for the 3.6 and 4.5 $\mu$m light curves, respectively. The error used in the fits is based on the rms deviation of the out-of-eclipse light curve rather than the predicted photon noise; since we use the rms error estimates, the reduced $\chi^2$ should theoretically equal 1.0 if the noise were purely Gaussian. The fact that we find reduced $\chi^2$ values exceeding 1.0 reflects the correlated noise present in our light curves, which we take into account with the prayer-bead analysis. \section{Discussion} \subsection{Orbital Eccentricity} The timing of the secondary eclipse is very sensitive to the planet's orbital eccentricity.
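A minimal sketch of the prayer-bead resampling described above (Python); the box-shaped eclipse model and the depth-only refit are trivial placeholders standing in for the full MCMC light-curve fit:

\begin{verbatim}
import numpy as np

def prayer_bead_sigma(flux, model, refit):
    """Cyclically shift the residuals, reattach them to the best-fit
    model, refit each synthetic series, and return the scatter of
    the refitted depths."""
    resid = flux / model
    depths = [refit(model * np.roll(resid, s)) for s in range(len(flux))]
    return np.std(depths)

# Toy demonstration: box eclipse plus white noise, depth-only "refit".
t = np.linspace(-0.1, 0.1, 500)
in_ecl = np.abs(t) < 0.04
model = 1.0 - 0.003 * in_ecl
flux = model + 2e-4 * np.random.default_rng(3).standard_normal(t.size)

refit = lambda f: f[~in_ecl].mean() - f[in_ecl].mean()
print("prayer-bead depth error:", prayer_bead_sigma(flux, model, refit))
\end{verbatim}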
Assuming a circular orbit and accounting for the 23.4 seconds that light takes to travel across the orbit \citep{loeb05}, we would expect to see the secondary eclipse occur at a phase of 0.5002. In the event that there is significant advection of energy to the planet's night side, we would expect an additional delay due to an offset hot spot on the planet's dayside changing the shape of ingress and egress. We estimate the maximum value of this delay to be 41 seconds, based on a model in which the longitudinal advection time is 60\% of the radiative time, corresponding to a hot region shifted 30 degrees east of the substellar point \citep{williams06, cowen10}. We can use the difference between the predicted and observed orbital phases of secondary eclipse, including the light travel time but neglecting the unknown delay from a nonuniform surface brightness, to constrain $e\cos\omega$, where $e$ is the orbital eccentricity and $\omega$ is the argument of pericenter \citep[e.g.,][]{char05}. \begin{figure*}\epsscale{0.9} \plotone{f4.eps} \caption{Dayside planet/star flux ratio \emph{vs.} wavelength for three model atmospheres \citep{fortney08}, with the band-averaged flux ratios for each model superposed (squares). The measured contrast ratios are overplotted (black circles). One model (\emph{green}) represents an atmosphere containing TiO in the upper atmosphere at equilibrium abundance; the other two models (\emph{orange} and \emph{magenta}) contain no TiO. The parameter $f$ represents the redistribution of energy over the planet's surface, where $f$=0.50 corresponds to dayside-only redistribution and $f$=0.25 corresponds to uniform redistribution over the entire planet. Our measurements are best fit by a model with no TiO and little redistribution (\emph{orange}). } \end{figure*} \begin{figure}\epsscale{1.0} \plotone{f5.eps} \caption{ Dayside pressure-temperature profiles for the three model atmospheres in Figure 4 \citep{fortney08}. The green model contains TiO in the upper atmosphere and exhibits a strong temperature inversion at pressures below 0.01 bars. The orange and magenta profiles represent atmospheres with no TiO but different values of the redistribution parameter $f$. The $f$=0.25 model has full redistribution of energy to the nightside, resulting in a cooler dayside profile; the hotter $f$=0.60 model provides the best fit to our measurements. We also indicate the approximate locations of the 3.6 and 4.5 $\micron$ photospheres (solid squares) for each model, estimated here as the pressure at which the model temperature matches the measured brightness temperature in each bandpass.} \end{figure} We find that the eclipse is offset from the time predicted by the ephemeris of Winn et al. (2009) by 0.5$\pm$1.3 and 0.1$\pm$1.9 minutes in the 3.6 and 4.5 $\mu$m bands, respectively. Taking the average of these two values weighted by the inverse of the variance gives a mean offset of 0.4$\pm$1.0 min, corresponding to $e\cos\omega = 0.00030 \pm 0.00086$. We place a 2$\sigma$ upper limit on $|e\cos\omega|$ of 0.0024, calculated by integrating over the histograms of the eclipse time; we use the Markov chain distribution for channel 1 and the prayer-bead distribution for channel 2. This upper limit implies that, unless our line of sight happens to align very closely with the planet's major axis (i.e., the argument of pericenter $\omega$ is close to $\pi /2$ or $3\pi /2$), the orbit is nearly circular.
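The conversion from timing offset to $e\cos\omega$ uses the small-eccentricity relation $\Delta t \approx (2P/\pi)\, e\cos\omega$. A sketch (Python), with the orbital period, not quoted in this excerpt, assumed to be $\approx$1.338 days from Winn et al. (2009):

\begin{verbatim}
import numpy as np

P_min = 1.338 * 24 * 60   # assumed orbital period [min] (Winn et al. 2009)
dt, sigma_dt = 0.4, 1.0   # weighted-mean eclipse offset and error [min]

ecosw = np.pi * dt / (2.0 * P_min)
sigma = np.pi * sigma_dt / (2.0 * P_min)
print(f"e*cos(omega) = {ecosw:.5f} +/- {sigma:.5f}")
# ~0.00033 +/- 0.00082, consistent with the quoted 0.00030 +/- 0.00086
# (the small difference reflects the exact period and error treatment).
\end{verbatim}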
Ibgui, Burrows, \& Spiegel (2010) investigate the extra core power that would be needed to explain the otherwise anomalously large radius of WASP-4b. They find that approximately $7.8\times 10^{-8}$ L$_{\odot}$ of heating would be necessary for solar-metallicity-opacity atmospheres, decreasing to $10^{-8}$ L$_{\odot}$ for 10$\times$ solar opacity atmospheres; less power is necessary if the atmosphere helps retain more heat, as in the 10$\times$ solar case. If this heating is due to tides, and the eccentricity is on the order of 0.001 and is maintained by an external planetary perturber in the system (as yet unidentified; Mardling 2007), then the Q$^{\prime}$ tidal dissipation parameter would be roughly between $3\times 10^4$ and $2\times 10^5$. Ibgui, Burrows, \& Spiegel (2010) assume a value of 0.096 for the eccentricity, which leads them to derive a ``best-estimate'' range for Q$^{\prime}$ between $3\times 10^8$ and $2\times 10^9$. With our new constraint on the eccentricity of WASP-4b's orbit, and using the calculations of Ibgui, Burrows, \& Spiegel (2010), we now obtain a range for Q$^{\prime}$ that is more in line with the value of $10^5-10^6$ measured for Jupiter \citep{gold66,yoder81}. \subsection{Atmospheric Temperature Structure} In this paper we examine two distinct classes of hot Jupiter models. Figure 4 shows our planet/star contrast ratios and three models for the planetary atmosphere derived from one-dimensional, plane-parallel atmosphere codes following Fortney et al.~(2008). One model assumes the presence of the absorber TiO in the upper atmosphere at equilibrium abundances, whereas the two remaining model atmospheres contain no TiO. Fortney et al.~(2008) parameterize the unknown redistribution of energy to the planet's nightside by varying the stellar flux incident at the top of the planetary atmosphere by a geometric factor to account for dayside-average ($f$=0.5) or planet-wide average ($f$=0.25) conditions. The slope between the 3.6 and 4.5 $\micron$ points of the model with TiO (\emph{green}) is much too steep to fit both measurements simultaneously. We find that WASP-4b is best fit by the orange model with no TiO (no inversion) and geometric factor $f$=0.60, corresponding to a very hot dayside. This is a reasonable choice, as the projected area of the substellar point ($f$=1) is maximized during secondary eclipse while contributions from the cooler regions near the day-night terminator are correspondingly reduced, giving an average value of $f$=2/3 at opposition \citep{burrows08}. The dayside pressure-temperature profiles for these three models are displayed in Figure 5. \begin{figure*}\epsscale{0.9} \plotone{f6.eps} \caption{Dayside planet/star flux ratio \emph{vs.} wavelength for three model atmospheres \citep{burrows08} with the band-averaged flux ratios for each model superposed (squares) to account for the widths of the \emph{Spitzer} bandpasses. The measured contrast ratios are overplotted (black circles). The blue model represents a non-inverted atmosphere ($\kappa_e$=0.0) with redistribution parameter $P_n$=0.1. An inverted atmosphere model is shown in red ($\kappa_e$=0.1), which exhibits water features in emission instead of absorption. The green model represents an atmosphere with a small amount of upper-atmosphere absorber, with optical opacity $\kappa_e$ equal to 0.03.
The \emph{Spitzer} measurements are best matched by this model, which suggests that the atmosphere of WASP-4b has a moderate thermal inversion in its upper atmosphere.} \end{figure*} Figure 6 shows three models for the planetary atmosphere with greater degrees of freedom, following Burrows et al. (2008). While the Fortney et al. models contain TiO at either zero or equilibrium abundance, the Burrows et al. models contain an unknown absorber at various optical opacities, parameterized by $\kappa_{e}$ in cm$^2/$g. Burrows et al. (2008) add a heat sink over the pressure range 0.01 to 0.1 bars to model energy redistribution from the dayside to the nightside. As energy is most likely redistributed deep in the planetary atmosphere, this method for modeling heat transfer is physically motivated, but it contains more degrees of freedom than the Fortney et al. models. General circulation models for these planets indicate that redistribution occurs continuously over a range of pressures \citep[e.g.,][]{showman08}, and we note that the range of pressures selected for our parameterized redistribution model can have a modest effect on the resulting pressure-temperature profiles, although it does not affect our main conclusions in this paper. The dimensionless parameter $P_n$ is a measure of the day-to-nightside energy redistribution, where $P_n$=0.0 represents no redistribution and $P_n$=0.5 represents full redistribution to the nightside. Burrows et al. use a 5500 K Kurucz atmosphere model for the stellar spectrum \citep{kurucz79,kurucz94,kurucz05}, whereas Fortney et al. use a 5500 K PHOENIX NextGen model \citep{haus99}. As a check, we calculate the Burrows et al. planet-star flux ratio models using a PHOENIX NextGen stellar spectrum instead of the Kurucz spectrum and find that the differences are minimal and comparable to those caused by the uncertainty in the star's effective temperature: the differences in the band-integrated flux ratios between the two stellar models vary between 0.003 and 0.005$\%$ and are therefore negligible compared to our measurement errors. We show an inverted atmosphere model (\emph{red}), with $\kappa_e$ and $P_n$ set equal to the best-fit values for the archetypal inverted atmosphere HD 209458b \citep{burrows07b, burrows08}. This inverted model is a poor fit to our measured contrast ratio at 4.5 $\mu$m. We find the best match to be the model with a small amount of stratospheric absorber, $\kappa_e$=0.03 cm$^2/$g, and relatively efficient day-night circulation, $P_n$=0.3. The band-integrated flux ratios for this model (\emph{green}) fall within 1$\sigma$ of the measured ratios in both bands. The pressure-temperature profiles in Figure 7 show that this best-fit model exhibits a modest temperature inversion at pressures below 0.01 bars, much weaker than that of the archetypal inverted atmosphere HD 209458b. The blue non-inverted atmosphere model, with parameters $\kappa_e$=0.0 and $P_n$=0.1, fails to fit our measurements in both wavebands. We note that while the best-fit Burrows et al. model indicates moderately efficient ($P_n=0.3$) day-night circulation, the best-fit Fortney et al. model with $f=0.60$ requires minimal day-night circulation. It is perhaps not surprising that these relatively simple models disagree, given the differences in their treatment of the incident flux, optical opacities, and energy loss (if any) to the night side.
We find some tentative evidence that this disagreement may be systematic, as published results for HD 189733b \citep{knutson09a} and HD 209458b \citep{fortney10} from Fortney et al. favor a hot dayside, whereas Burrows et al. models predict greater energy redistribution ($P_n=0.30$ and 0.15 for HD 209458b and HD 189733b, respectively; Burrows et al. 2007b, 2008). Multi-wavelength phase curve observations allow us to test these predictions, at least for the brightest systems (e.g., Knutson et al. 2007, 2009a). \begin{figure}\epsscale{1.0} \plotone{f7.eps} \caption{ Dayside pressure-temperature profiles for three model atmospheres with various values of the parameters $P_n$ and $\kappa_{e}$ \citep{burrows08}. The blue model represents an atmosphere with no inversion. The red model corresponds to an atmosphere with an additional absorber with optical opacity $\kappa_e$=0.1 cm$^2/$g. The absorber, which is added high in the atmosphere where the pressure is below 0.03 bars, traps stellar irradiation and creates a temperature inversion. The green model with $\kappa_e$=0.03 and $P_n$=0.3 provides the best fit to our measurements of WASP-4b; this model exhibits a slight temperature inversion at pressures less than 0.01 bars. Burrows et al. (2008) add a heat sink over the pressure range 0.01 to 0.1 bars to model energy redistribution from the dayside to the nightside, which contributes to the decrease in dayside temperatures between 0.05 and 1.0 bars for the $P_n$=0.3 models. We also indicate the approximate locations of the 3.6 and 4.5 $\micron$ photospheres (solid squares) for each model, estimated here as the median pressure of the $\tau=$2/3 surface over the range of wavelengths spanned by each bandpass. We find the same approximate photosphere locations by solving for the pressure at which the temperature of the model matches the measured brightness temperature in each band. Due to the width of the \emph{Spitzer} bandpasses, we actually see flux from a wide range of pressures; typical ranges are 7$\times$10$^{-3}-2\times$10$^{-1}$ bars and 2$\times$10$^{-4}-1\times$10$^{-1}$ bars at 3.6 and 4.5 $\micron$, respectively.} \end{figure} \emph{Spitzer} infrared observations indicate that the atmospheres of extrasolar giant planets tend to exhibit properties ranging between two differing types, exemplified by HD 189733b, whose emission spectrum features water and other molecules in absorption, and HD 209458b, which exhibits these features in emission. Table 2 shows the published values of $\kappa_e$ and $P_n$ for a range of planets with \emph{Spitzer} observations; WASP-4b is similar to HD 189733b and TrES-3 in that it requires a relatively small amount of absorber compared to HD 209458b. Assuming WASP-4b absorbs with zero albedo and re-emits on the dayside only, the planet's predicted dayside effective temperature is approximately 2000 K; if the planet emits uniformly over both hemispheres, we would instead expect an effective temperature of about 1650 K. We fit both measured eclipse depths simultaneously using a 5500 K PHOENIX NextGen model \citep{haus99} for the stellar spectrum and a blackbody for the planet's spectrum, and find that WASP-4b has a best-fit blackbody temperature of 1700 K. Given such high irradiation, it is somewhat surprising that WASP-4b exhibits at most a relatively weak thermal inversion. WASP-4b is therefore an exception to the general trend that highly irradiated planets are more likely to have strong thermal inversions. In Knutson et al.
(2010) we propose that there exists a correlation between temperature inversions and the activity levels of the host stars, where the increased UV flux from active host stars destroys the compounds responsible for producing temperature inversions. We use \ion{Ca}{2} H \& K line strengths as indicators of stellar activity levels. In Knutson et al. (2010) we obtain Keck HIRES spectra of WASP-4 and find \ion{Ca}{2} H \& K line strength estimates of $S_{HK}$=0.194 and $\log R^{\prime}_{HK}$=$-$4.865, assuming a model $B-V$ color of 0.74 for a 5500 K star. These line strengths indicate that WASP-4 is a moderately active star, with a $\log R^{\prime}_{HK}$ value that falls near the division between the two classes. However, WASP-4b's smaller orbital distance relative to HD 189733b means that it intercepts proportionally more of its star's flux, and as a result we estimate that the UV flux per unit area incident at the surface of WASP-4b is approximately half that received by HD 189733b and twice that received by WASP-2b (see discussion in Knutson et al. 2010). We also calculate a value for the empirical index defined in Knutson et al. (2010) as the difference between the slope across the measured 3.6 and 4.5 $\micron$ eclipse depths and the slope of the best-fit blackbody function for the planet, which provides an observational means of distinguishing between the two hot Jupiter atmosphere types. We find a value of $-$0.09$\pm$0.04 for this index for WASP-4b, which suggests that this planet is best classified in the same type as HD 189733b (index of $-$0.15$\pm$0.02) and TrES-3b (index of $-$0.10$\pm$0.05). Planets with strong inversions typically have positive values of this index; this result is therefore consistent with our earlier conclusion that WASP-4b displays at most a relatively weak temperature inversion. \begin{deluxetable}{ccccc} \tablecaption{Effective Temperature and Burrows et al. Model Parameters for Extrasolar Giant Planets } \tablewidth{0pt} \tablenum{2} \label{model_parameters} \tablehead{ \colhead{Name} & \colhead{$T_{eff}$ (K) \tablenotemark{a}} & \colhead{$\kappa_e$} & \colhead{$P_n$} & \colhead{Reference} } \startdata TrES-3 & 2000 & 0.01 & 0.3 & Fressin et al. 2010\\ WASP-4b & 2000 & 0.03 & 0.3 & this paper\\ HD 189733b & 1400 & 0.035 & 0.15 & Grillmair et al. 2008\\ HD 209458b & 1700 & 0.1 & 0.3 & Burrows et al. 2007b\\ TrES-4 & 2100 & 0.1 & 0.3 & Knutson et al. 2009b \\ XO-1b & 1400 & 0.1 & 0.3 & Machalek et al. 2008\\ XO-2b & 1600 & 0.1 & 0.3 & Machalek et al. 2009\\ TrES-2 & 1800 & 0.3 & 0.3 & Spiegel \& Burrows 2010\\ HAT-P-7b & 2500 & 1.1 & 0.0 & Spiegel \& Burrows 2010 \\ \enddata \tablenotetext{a}{Predicted blackbody temperature for the planet assuming an albedo of zero and no nightside redistribution of energy.} \end{deluxetable} \section{Conclusions} We observed two secondary eclipses of the extrasolar planet WASP-4b, at 3.6 and 4.5 $\mu$m, as part of \emph{Spitzer}'s extended warm mission. By measuring the times of the eclipses, we estimate a 2$\sigma$ upper limit on the parameter $|e\cos\omega|$ of 0.0024. This limit implies that, unless our line of sight happens to align closely with the planet's major axis, the planet's orbit must be nearly circular. Although this upper limit does not rule out tidal heating, it constrains the range of tidal heating models that could explain this planet's inflated radius.
We find secondary eclipse depths of 0.319$\% \pm$0.031$\%$ and 0.343$\% \pm$0.027$\%$ in the 3.6 and 4.5 $\mu$m bands, respectively. These results are consistent with a spectrum exhibiting water and CO in absorption. We find that the atmosphere is well characterized by models with a modest thermal inversion or none at all; measurements at other wavelengths would help to distinguish between these two models. The absence of a strong thermal inversion makes WASP-4b an exception to the rule that inversions are found on planets receiving higher stellar irradiation. Other exceptions include the highly irradiated extrasolar planet TrES-3 \citep{fressin10}, which does not have a temperature inversion, and XO-1b \citep{mach08}, which possesses a temperature inversion despite being relatively cool. These planets indicate that there must exist additional stellar or planetary parameters, other than the equilibrium temperature, that determine the relative strengths of thermal inversions in hot Jupiter atmospheres. This work demonstrates that warm \emph{Spitzer}, which operates with the 3.6 and 4.5 $\mu$m channels only, can be used successfully to characterize the properties of hot Jupiter atmospheres. The increasing availability of ground-based eclipse detections in the near-IR \citep[e.g.,][]{gillon09, croll10a, croll10b, gibson10, lopez10} will also help to resolve ambiguities in the interpretation of the \emph{Spitzer} data for many of these planets. Indeed, our models predict that WASP-4b should have an eclipse depth of 0.1$-$0.2\% in the $K_s$ band (2.15 $\micron$). Croll et al. (2010b) measured a secondary eclipse depth of $0.133^{+0.018}_{-0.016}$\% in this same bandpass for TrES-3, which has an apparent brightness and other properties similar to those of WASP-4. The TrES-3 $K_s$-band detection augurs well for a similar WASP-4b measurement, which would provide a further point of comparison for the atmospheric models that we present. By the end of its post-cryogenic mission, \emph{Spitzer} will have observed more than twenty systems during secondary eclipse. When combined with the nineteen systems observed during the cryogenic mission, as well as any available ground-based detections, these results will allow us to search for correlations with other system parameters that could provide valuable clues to the origin of temperature inversions in hot Jupiter atmospheres. \acknowledgments This work is based on observations made with the \emph{Spitzer Space Telescope}, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with NASA. Support for this work was provided by NASA. Heather A. Knutson is supported by a fellowship from the Miller Institute for Basic Research in Science. Eric Agol acknowledges the support of NSF CAREER Grant No. 0645416. \clearpage
\section{Origins of probability theory} \label{Origins} Formal and concrete concepts of likelihood were first developed in the context of gambling -- notable are the works by \cite{dePacioli1494}, by Cardano in the mid-16$^{\text{th}}$ century \citep{Ore1953}, and by Pascal and Fermat in the summer of 1654. A prominent question treated by \cite{dePacioli1494} as well as Pascal and Fermat (1654) is the following ``problem of the points''\footnote{See \citep{Devlin2008} for a detailed historical account.}: imagine that a game of dice has to be abandoned before it can be concluded. For instance, players may be betting money on the highest score in rolling a die three times but have to leave after two throws. In this situation, how is the ``pot'', the total wager, to be distributed among the players in a fair manner? The first observation is that this is a moral question. Mathematics may aid in answering it, but cannot resolve it without appealing to external information, as any answer must depend on the concept of fairness. It could be perceived as fair that the player with the most points is entitled to the total wager. Another concept of fairness would be to call the game inconclusive and return to each player his or her individual wager, or the pot may be split equally between the participants. Apparently, at least until the $17^{\text{th}}$ century, there was no universal agreement on the relevant concept of fairness. \cite{dePacioli1494}, for instance, argued that the fair solution is to divide the pot in proportion to the points that each player has accrued when the game is interrupted, see \citep{Devlin2008}, p.~15. A century and a half later Pascal was approached by the Chevalier de M\'er\'e to produce a conclusive argument based on mathematics that would settle the issue. Pascal and Fermat corresponded on the subject and agreed that the fair solution is to give to each player the expectation value of his winnings. The expectation value they computed is an ensemble average, where all possible outcomes of the game are enumerated, and the products of winnings and probabilities associated with each outcome for each player are added up. This procedure uses the then-revolutionary idea of parallel universes. Instead of considering only the state of the universe as it is, or will be, an infinity of additional equally likely universes is imagined. Any of these additional universes, for all we know, could be reality ({\it i.e.}~ the world as it will be). The proportion of those universes where some event occurs is the probability of that event. We will see that in the $18^{\text{th}}$ century Bernoulli noticed undesired properties of the ensemble average, and in the $19^{\text{th}}$ century Boltzmann began to specify conditions for its applicability. The 1654 investigation, which is generally considered the beginning of probability theory, was concerned with a specific problem. It did not attempt to make any predictions, for instance involving repetitions of the game, but solely gave quantitative guidelines where individuals had incompatible moral intuitions. Moral considerations were certainly at the heart of the early debate. Pascal famously used expectation values to argue in favor of his religious beliefs, and much of Cardano's work on gambling is concerned with morals.
He came very close to defining a fair game as one where no player has an expected advantage over others: ``To the extent to which you depart from that equality, if it is in your opponent's favor, you are a fool, if in your own, you are unjust'' \citep{Ore1953} p.~189. Following Pascal's and Fermat's work, however, it did not take long for others to recognize the potential of their investigation for making predictions. \cite{Halley1693}, writing in these pages 318 years ago, built on earlier work by \cite{Graunt1662} and devised a method for pricing life annuities. The idea of embedding reality in infinitely many possible alternatives was revolutionary in 1654; it was essential in the development of statistical mechanics in the 19$^{\text{th}}$ century \citep{Ehrenfest1912,Cohen1996}, and it continues to be a fruitful means of conceptualizing complex and stochastic systems \citep{Gell-MannLloyd2004}. Nonetheless the idea itself is a dubious philosophical construct, justified empirically by the success that, under appropriate conditions, comes with the mathematical rigor it allows. Historically, it seems that the philosophical weakness was initially ignored in applications. In Sec.~\ref{Ergodicity} we will review an alternative conceptualization of randomness. \cite{Huygens1657} is credited with making the concept of expectation values explicit and with first proposing an axiomatic form of probability theory. This was helpful in developing the field mathematically, as results could now be proven to be correct. On the other hand, by introducing an axiomatic system, correctness becomes restricted to the context of the axioms themselves. A proven result in probability theory follows from the axioms of probability theory, now usually those of \cite{Kolmogorov1933}. It is related to reality only insofar as the relevant real conditions are reflected by the axioms. \cite{Kolmogorov1933} wrote ``The theory of probability [..] should be developed from axioms in exactly the same way as Geometry and Algebra. This means that after we have defined the elements to be studied and their basic relations, and have stated the axioms by which these relations are to be governed, all further exposition must be based exclusively on these axioms, independent of the usual concrete meaning of these elements and their relations.'' He wrote that it would be a different ``aim [..] to tie up as closely as possible the mathematical theory with the empirical development of the theory of probability.'' To summarize: the first systematic investigation into stochastic systems was concerned with moral advice. The second established an axiomatic system. \section{The lottery} \label{lottery} The St. Petersburg paradox was first put forward by Nicolaus Bernoulli in 1713 \citep{Montmort1713} p.~402. He considered lotteries of the following type: A fair coin is tossed. 1) On heads, the lottery pays \$1, and the game ends. On tails, the coin is tossed again. 2) On heads, the lottery pays \$2, and the game ends. On tails, the coin is tossed again. $\cdots$ n) On heads, the lottery pays \$$2^{n-1}$, and the game ends. On tails, the coin is tossed again. $\cdots$ In other words, the random number of coin tosses, $n$, follows a geometric distribution with parameter $1/2$, and the payouts increase exponentially with $n$. We may call $n$ a ``waiting time'', although in this study it is assumed that the lottery is performed instantaneously, {\it i.e.}~ a geometric random variable is drawn and no significant physical time elapses.
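As a concrete illustration (an added sketch, not part of the original argument), the lottery is straightforward to simulate: the geometric waiting time $n$ is drawn by tossing a fair coin until heads, and the payout is $\$2^{n-1}$. Note how the sample mean of the payouts fails to settle down as the number of rounds grows -- a symptom of the divergence computed below.
\begin{verbatim}
# Hedged sketch: simulate rounds of the St. Petersburg lottery.
import random

def play_round():
    n = 1                            # number of tosses so far
    while random.random() < 0.5:     # tails: toss again
        n += 1
    return 2 ** (n - 1)              # heads on toss n pays 2^(n-1)

# The sample mean of the payouts typically keeps growing with the
# sample size instead of converging.
for N in (10 ** 2, 10 ** 4, 10 ** 6):
    print(N, sum(play_round() for _ in range(N)) / N)
\end{verbatim}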
The expected payout from this game is \begin{equation} \$\sum_{n=1}^{\infty} \left(\frac{1}{2}\right)^n 2^{n-1} =\$\left( \frac{1}{2}+ \frac{1}{2}+\cdots\right), \elabel{ensemble} \end{equation} which is a diverging sum. A rational person, N. Bernoulli argued, should therefore be willing to pay any price for a ticket in this lottery. In reality, however, people are rarely willing to pay more than \$10, which constitutes the paradox. Reactions to the paradox include the following: Even though the expected payout is infinite, there is not an infinite amount of money or goods in the world to pay up. So the lottery is not realistic \citep{Cramer1728}. If the payouts are limited to some realistic value, then the lottery's expected payout is drastically reduced. For example, the 31$^{\text{st}}$ term in the sum \eref{ensemble} comes from a payout of about \$$10^9$, so limiting payouts to \$$10^9$ reduces the expected payout from \$$\infty$ to \$15. Similarly, one could argue that it is only too sensible to ignore events with a probability of the order of $10^{-9}$ \citep{Menger1934}. Another argument is that no one would offer such a lottery because it carries an infinite expected loss for the lottery-seller, which makes it irrelevant \citep{Samuelson1960}. \section{Bernoulli's resolution} \label{Bernoulli's} The quantity calculated in \eref{ensemble} is usually called an ``expected'' payout. But since it fails to capture the reality of the situation, its conceptual validity must be questioned. \cite{Bernoulli1738} noted\\ ``\S1 Ever since mathematicians first began to study the measurement of risk there has been general agreement on the following proposition: Expected values are computed by multiplying each possible gain by the number of possible cases where, in this theory, the consideration of cases which are all of the same probability is insisted upon.'' Indeed, \cite{Huygens1657} had postulated: ``if any one should put 3 shillings in one hand without telling me which, and 7 in the other, and give me choice of either of them; I say, it is the same thing as if he should give me 5 shillings...'' This concept of expectation is agnostic regarding fluctuations, which is harmless only if the consequences of the fluctuations, such as associated risks, are negligible. This is usually the case in small-stakes recreational gambling as considered in the earliest studies of chance by \cite{dePacioli1494}, Cardano \citep{Ore1953}, and \cite{FermatPascal1654}, mentioned in Sec.~\ref{Origins}, but it is not the case in the St. Petersburg paradox. Noticing that the ability to bear risk depends not only on the risk but also on the risk-bearer's resources, \cite{Bernoulli1738} wrote under \S3:\\ ``If I am not wrong then it seems clear that all men cannot use the same rule to evaluate the gamble. The rule established in \S1 must, therefore, be discarded.'' Bernoulli, and shortly before him \cite{Cramer1728}, drew attention to psychological and behavioral issues involved in the evaluation of the proposed lottery. The desirability or ``utility'' associated with a financial gain, they argued, depends not only on the gain itself but also on the wealth of the person who is making this gain. Instead of computing the expectation value of the monetary winnings, they proposed computing the expectation value of the gain in utility. To this end the utility function $u(w)$ was introduced, which specifies the utility of a wealth of \$$w$.
Since an extra dollar is generally worth less to a rich person than to a poor person, $u(w)$ is assumed to be concave, such that $\frac{du(w)}{dw}$ is monotonically decreasing. While exceptional circumstances can render this assumption invalid (Bernoulli cites an imprisoned rich man who only needs another 2,000 ducats to buy his freedom), it is well confirmed behaviorally. Otherwise $u(w)$ is only loosely constrained. \cite{Bernoulli1738} suggested the logarithmic function $u_B(w)=\ln(w)$, while \cite{Cramer1728} had proposed using $u_C(w)=\sqrt{w}$ instead. Bernoulli's proposition of the logarithm was based on the intuition that the increase in wealth should correspond to an increase in utility that is inversely proportional to the wealth a person already has, $\frac{du}{dw}=\frac{1}{w}$, whose solution is the logarithm. \cite{Bernoulli1738} thus ``discarded the rule'' (for calculating expected gains in wealth) by replacing the object whose expectation value was to be calculated. Instead of gains in wealth, he decided to focus on the expectation of gains in some function of wealth. In Sec.~\ref{Resolution} we will also discard the rule established in \S1 of \citep{Bernoulli1738}, but not by replacing the object whose average is to be calculated, {\it i.e.}~ not by replacing plain monetary gains by a function of those gains. Instead we will replace the type of average, using the time average instead of the ensemble average. This is necessary because the system under investigation (the dynamics of monetary wealth) is not ergodic, as will be shown in Sec.~\ref{Resolution}. In doing so we will critique the implicit consideration of multiple imagined systems, or parallel universes. But first, applying Bernoulli's reasoning, we compute the expected change in logarithmic utility, $\ave{\Delta u_B}$, due to playing the lottery, given the initial wealth $\$w$ and the cost of a ticket in the lottery $\$c$, \begin{equation} \ave{\Delta u_B}=\sum_{n=1}^\infty \left(\frac{1}{2}\right)^{n} \left(\overbrace{\ln(w-c+2^{n-1})}^{\text{Utility after the game}}- \underbrace{\ln(w)}_{\text{Utility before the game}}\right). \elabel{utility_change} \end{equation} This sum converges (as long as each individual term is finite), as is readily shown using the ratio test. Depending on $w$ and $c$, the quantity can be positive or negative, reflecting an expected gain or loss of utility. If potential lottery players base their decisions not on the expected monetary gain but instead on the expected gain in usefulness, and if that usefulness is appropriately represented by $u_B$, the paradox is thus resolved. It is dissatisfying that this resolution of the paradox relies on a function $u(w)$ that is postulated and, in the framework of Cramer and Bernoulli, cannot be derived from more fundamental considerations. Disagreements on whether the assumptions (the characteristics of diminishing marginal utility of wealth) are realistic are difficult to settle. Anticipating this objection, \cite{Bernoulli1738} -- Daniel being perhaps less mathematician than scientist -- appealed to observations: ``Since all our propositions harmonize perfectly with experience it would be wrong to neglect them as abstractions resting upon precarious hypotheses.'' The responses to the paradox mentioned at the end of Sec.~\ref{lottery} are similarly dissatisfying -- they address the relevance of the problem and argue that it would never really arise, but they do not resolve it.
Since the paradoxical aspect is the behavior of real people, however, these arguments are valid, and all means of disposing of the paradox could be similar in character. While Bernoulli's observations of human risk aversion and even the functional form he proposed for modeling these are ``correct'' in a specific sense elaborated in Sec.~\ref{Resolution}, these behavioral regularities have a physical reason that Bernoulli failed to point out. In fact, it appears that he was not aware of this physical reason, which justifies only $u_B(w)=\ln(w)$. \cite{Bernoulli1738} did not consider the logarithmic form of utility essential and wrote of Cramer's work, which uses $u_C(w)=\sqrt{w}$: ``Indeed I have found his theory so similar to mine that it seems miraculous that we independently reached such close agreement on this sort of subject.'' \section{Ergodicity} \label{Ergodicity} The question of ergodicity in stochastic systems is concerned with a conceptual choice in giving meaning to quantitative probabilities. It can be argued that it is meaningless to assign a probability to a single event, and that any decision regarding a single event must resort to intuition or morals. For mathematical guidance the event has to be embedded within other similar events. \cite{FermatPascal1654} chose to embed within parallel universes, but alternatively -- and often more meaningfully -- we can embed within time. The concept of a decision regarding a single isolated event, whether probabilistic or not, seems dubious: how do we interpret the premise of isolation? Surely the event is part of a history. Does the individual making the decision die immediately after the event? In general the consequences of the decision will unfold over time. The origins of ergodic theory lie in the mechanics of gases \citep{Uffink2004}. One is interested in large-scale effects of the molecular dynamics, {\it i.e.}~ in the thermodynamic variables. For instance, the macroscopic pressure of a gas is a rate per area of molecular momentum transfer to a container wall, averaged over an area that is large compared to the typical distance between molecules and over a time that is long compared to the typical interval between molecular impacts in the area. Since the number of particles is large and collisions are possible, however, it is practically impossible to solve the microscopic equations of motion explicitly. Full information about the state $\x$ (positions and momenta of all molecules) is not available, and the time average, for instance of momentum transfer to a container wall, cannot be computed directly. \cite{Boltzmann1871b} and \cite{Maxwell1879} independently replaced the physically required time average by the average over an ensemble of appropriately weighted states $\x$, making use of Huygens' expectation value. The weight of the different states $\x$ in the ensemble was postulated and subsequently justified empirically by comparing predictions to observations. The key rationale behind this dramatic step is that the systems considered are in equilibrium: the macroscopic variables of interest do not change in time, and microscopic fluctuations obey detailed balance, see {\it e.g.}~ \citep{vanKampen1992}. Under these strict conditions, time has little tangible effect, and we may get away with disregarding it completely. Nonetheless, both Boltzmann and Maxwell were concerned that for mathematical convenience they were using the {\it a priori} irrelevant ensemble average.
Specifically, when \cite{Boltzmann1871b} suggested treating a gas as a collection of many systems, namely sub-volumes which can be thought of as a probabilistic ensemble, he warned that using this ``trick'' means ``to assume that between [...] the various [...] systems \underline{no interaction ever occurs}.'' The requirement of absolutely no interaction between a collection of systems is equivalent, in practical terms, to the non-existence of all these systems from each other's perspectives -- if systems A and B cannot ever interact in any way, then to system A, for all practical purposes, system B does not exist, and vice versa. Another way of putting this is that systems A and B are parallel universes. Assuming the validity of this procedure is known as the ergodic hypothesis. It is permissible under strict conditions of stationarity, see {\it e.g.}~ \cite{GrimmetStirzaker2001}, Ch.~9.5. These conditions were understood long after the St. Petersburg paradox had been introduced \citep{Birkhoff1931,Birkhoff1931b,vonNeumann1932A,vonNeumann1932B}. Much of the literature on ergodic systems is concerned with deterministic dynamics, but the basic question of whether time averages may be replaced by ensemble averages is equally applicable to stochastic systems, such as Langevin equations or lotteries. The essence of ergodicity is the question whether the system, when observed for a sufficiently long time $t$, samples all states in its sample space in such a way that the relative frequencies $f(\x,t)d\x$ with which they are observed approach a unique (independent of initial conditions) probability, $P(\x)d\x$, \begin{equation} \lim_{t \to \infty} f(\x, t) = P(\x). \elabel{ergodic} \end{equation} If this distribution does not exist or is not unique, the time average, $\bar{A}=\lim_{T\to\infty}\frac{1}{T}\int_0^TA(\x,t) dt$, of an observable $A$ cannot be computed as an ensemble average in Huygens' sense, $\ave{A}=\int_\x A(\x,t) P(\x) d\x$. The generic variable $A$ may depend on time only through its state dependence, or it may have explicit time dependence. If $P(\x)$ is not unique, then the time average of $A$ generally depends on initial conditions. If $P(\x)$ does not exist, there may still be a unique time average. A unique ensemble average may also still exist -- although we cannot find $P(\x)$ from \eref{ergodic}, we may be able to determine $\tilde{P}(\x, t)$, the proportion of systems in an ensemble that are in state $\x$ at time $t$, and compute the ensemble average as $\ave{A}(t)=\int_\x A(\x,t) \tilde{P}(\x,t) d\x$. In special cases the time dependencies of $A(\x,t)$ and $\tilde{P}(\x,t)$ can be such that $\ave{A}(t)$ does not actually depend on time. However, there is no guarantee in these cases that the time average and ensemble average will be identical. Growth factors in the St. Petersburg lottery are such a special case. In Sec.~\ref{Resolution} it will be shown that while the ({\it a priori} irrelevant) ensemble-average winnings from the lottery diverge, the time-average winnings do not. Mathematically the end result is identical to the result obtained by Bernoulli (although see Sec.~\ref{Relation}(\ref{Menger})). Conceptually, however, the arbitrary utility (arbitrary in the sense that it depends on personal characteristics) is replaced by an argument based on the physical reality of the passing of time and the fact that no communication or transfer of resources is possible between the parallel universes introduced by Fermat.
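As a toy illustration of this point (an example added here, not a system discussed in the text), consider a multiplicative process in which wealth is multiplied each time step by $1.5$ or $0.6$ with equal probability. The ensemble-average factor per step is $1.05$, yet the time-average factor is $\sqrt{1.5\times 0.6}\approx 0.95$, so the ensemble average grows while almost every individual trajectory decays.
\begin{verbatim}
# Hedged toy example of a non-ergodic, multiplicative process:
# each step multiplies wealth by 1.5 or 0.6 with probability 1/2.
import random

steps = 50
# Ensemble-average factor per step: 0.5*1.5 + 0.5*0.6 = 1.05
print("ensemble prediction:", 1.05 ** steps)                # ~ 11.5
# Time-average factor per step: sqrt(1.5*0.6) ~ 0.9487
print("typical trajectory :", (1.5 * 0.6) ** (steps / 2))   # ~ 0.07

w = 1.0
for _ in range(steps):
    w *= 1.5 if random.random() < 0.5 else 0.6
print("one simulated run  :", w)
# most runs land near the typical value, not the ensemble mean
\end{verbatim}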
\subsection{The economic context} \label{economic} To repeat, the quantity in \eref{ensemble} is accurately interpreted as follows: imagine a world of parallel universes defined such that every chance event splits our current universe into an ensemble containing member-universes for every possible outcome of the chance event. We further require that the proportion of members of the ensemble corresponding to a particular outcome is the probability of that outcome. In this case, if we give the same weight to every member-universe, \eref{ensemble} is the ensemble average over all possible future states of the universe ({\it i.e.}~ states after the game). Of course, we are not {\it a priori}~ interested in such an average because we cannot realize the average payout over all possible states of the universe. Following the arguments of Boltzmann and Maxwell, this quantity is meaningful only in two cases. \begin{enumerate} \item The physical situation could be close to an ensemble of non-interacting systems which eventually share their resources. This would be the case if many participants took part in independent rounds of the lottery, with an agreement to share their payouts; this would be a problem in portfolio construction, and different from Bernoulli's setup\footnote{This situation is equivalent to a single person buying tickets for many parallel rounds of the lottery. In the limit of an infinitely rich person and a finite ticket price it can be shown that it is advisable to invest all funds in such independent lotteries.}. \item The ensemble average could reflect the time-average performance of a single participant in the lottery. Whereas time averages in statistical mechanics are often difficult to compute (hence the ergodic hypothesis), the simplicity of the St. Petersburg lottery makes it easy to compute them directly and see that they differ from the ensemble averages (Sec.~\ref{Resolution}). \end{enumerate} Thus neither case applies to the St. Petersburg lottery, and the ensemble average is irrelevant to the decision whether to buy a ticket. In general, to realize an average over the ensemble, ensemble members must exchange resources, but this is often impossible, so we must be extremely careful when interpreting ensemble averages of the type of \eref{ensemble}. \section{Resolution using non-ergodicity} \label{Resolution} The resolution of the St. Petersburg paradox presented in this section builds on the following alternative conceptualization: \begin{itemize} \item \underline{Rejection of parallel universes:} To the individual who decides whether to purchase a ticket in the lottery, it is irrelevant how he may fare in a parallel universe. Huygens' (or Fermat's) ensemble average is thus not immediately relevant to the problem. \item \underline{Acceptance of continuation of time:} The individual regularly encounters situations similar to the St. Petersburg lottery. What matters to his financial well-being is whether he makes decisions under uncertain conditions in such a way as to accumulate wealth {\it over time}. \end{itemize} Similarly, in statistical mechanics Boltzmann and Maxwell were interested in momentum accumulated {\it over time}. Because they considered equilibrium systems, where time is largely irrelevant, they hypothesized that time averages could be replaced by ensemble averages. However, a person's wealth is not usually in equilibrium, nor even stationary: on the time scales of interest, it generally grows or shrinks instead of fluctuating about a long-time average value.
Therefore the ergodic hypothesis does not apply \citep{Peters2010}. Consequently, there is no reason to believe that the expected (ensemble-average) gain from the lottery coincides with the time-average gain. That they are indeed different will be shown in this section by explicitly calculating both. The accumulation of wealth over time is well characterized by an exponential growth rate. To compute this, we consider the factor $r_i$ by which a player's wealth changes in one round of the lottery\footnote{One ``round'' of the lottery is used here to mean one sequence of coin tosses until a heads-event occurs. Throughout, an index $i$ refers to such rounds, whereas $n$ indicates waiting times -- the number of times a coin is tossed in a given round.}, \begin{equation} r_i=\frac{w-c+m_i}{w}, \elabel{ri} \end{equation} where, as in \eref{utility_change}, \$$w$ is the player's wealth before the round of the lottery, \$$c$ is the cost of a lottery ticket, and \$$m_i$ is the payout from that round of the lottery. To convert this factor into an exponential growth rate $g$ (so that $\exp(gt)$ is the factor by which wealth changes in $t$ rounds of the lottery), we take the logarithm, $g_i=\ln(r_i)$.\footnote{The logarithm is taken to facilitate comparison with Bernoulli's analysis. As long as it acts on the average, as opposed to being averaged over, it does not change the convergence properties.} \subsection{Ensemble average} \label{Average} \begin{theorem} The ensemble-average exponential growth rate in the St. Petersburg lottery is $\ave{g}=\ln \left(\sum_{n=1}^{\infty}\left(\frac{1}{2}\right)^n\frac{w-c+2^{n-1}}{w}\right).$ \end{theorem} \begin{proof} First, we consider the ensemble-average growth factor, and begin by averaging over a finite sample of $N$ players, playing the lottery in parallel universes; in general the players will experience different sequences of coin tosses, \begin{equation} \ave{r}_N=\frac{1}{N}\sum_{i=1}^N r_i, \elabel{ensemble_growth_1} \end{equation} which defines the finite-sample average $\ave{\cdot}_N$. We change the summation in \eref{ensemble_growth_1} to run over the geometrically distributed number of coin tosses in one round, $n$, \begin{equation} \ave{r}_N=\frac{1}{N}\sum_{n=1}^{n_N^{\text{max}}} k_n r_n, \elabel{ensemble_growth_2} \end{equation} where $k_n$ is the frequency with which a given $n$, {\it i.e.}~ the first heads-event on the $n^{\text{th}}$ toss, occurs in the sample of $N$ parallel universes, and $n_N^{\text{max}}$ is the highest $n$ observed in the sample. Letting $N$ grow, $k_n/N$ approaches the probability of $n$, and we obtain a simple number, the ensemble-average growth factor $\ave{r}$, rather than a stochastic variable $\ave{r}_N$, \begin{align} \ave{r}=\lim_{N\to\infty}\ave{r}_N&=\lim_{N\to\infty}\sum_{n=1}^{n_N^{\text{max}}} \frac{k_n}{N} r_n \elabel{ensemble_growth_3}\\ &=\sum_{n=1}^\infty p_n r_n.\nonumber \end{align} The logarithm of $\ave{r}$ expresses this as the ensemble-average exponential growth rate. Using \eref{ri} and writing the probabilities explicitly, we obtain \begin{equation} \ave{g}=\ln \left(\sum_{n=1}^{\infty}\left(\frac{1}{2}\right)^n\frac{w-c+2^{n-1}}{w}\right). \elabel{ensemble_growth} \end{equation} \end{proof} Since the ensemble-average payout from one round in the lottery diverges, \eref{ensemble}, so does the corresponding ensemble-average exponential growth rate \eref{ensemble_growth}. \subsection{Time average} \begin{theorem} \label{time_theorem} The time-average exponential growth rate in the St.
Petersburg lottery is $\bar{g}=\sum_{n=1}^\infty \left(\frac{1}{2}\right)^{n} \left(\ln(w-c+2^{n-1})-\ln(w)\right)$. \end{theorem} \begin{proof} The time average is computed in close analogy to the way the ensemble average was computed. After a finite number $T$ of rounds of the game a player's wealth reaches\footnote{A possible objection to this statement is discussed below, starting on p.~\pageref{product_objection}.} \begin{equation} w(T)=w \prod_{i=1}^T r_i. \end{equation} The $T^{\text{th}}$ root of the total growth factor, \begin{equation} \bar{r}_T=\left(\prod_{i=1}^T r_i\right)^{1/T}, \end{equation} which defines the finite-time average $\bar{r}_T$, is the factor by which wealth has grown on average in one round of the lottery over the time span $T$. We change the product to run over $n$, \begin{equation} \bar{r}_T=\left(\prod_{n=1}^{n^{\text{max}}_T} r_n^{k_n}\right)^{1/T}, \elabel{time_factor_1} \end{equation} where $k_n$ is the frequency with which a given $n$ occurred in the sequence of $T$ rounds, and $n^{\text{max}}_T$ is the highest $n$ observed in the sequence. Letting $T$ grow, $k_n/T$ approaches the probability of $n$, and we obtain a simple number, the time-average growth factor $\bar{r}$, rather than a stochastic variable $\bar{r}_T$, \begin{align} \bar{r}&=\lim_{T\to \infty}\bar{r}_T=\lim_{T\to \infty}\prod_{n=1}^{n^{\text{max}}_T} r_n^{k_n/T}\elabel{time_factor}\\ &=\prod_{n=1}^\infty r_n^{p_n}.\nonumber \end{align} The logarithm of $\bar{r}$ expresses this as the time-average exponential growth rate. Using \eref{ri} and writing the probabilities explicitly, we obtain \begin{align} \bar{g}&=\ln\left(\prod_{n=1}^\infty r_n^{p_n}\right) \elabel{time_growth}\\ &=\sum_{n=1}^\infty p_n \ln r_n \nonumber\\ &=\sum_{n=1}^\infty \left(\frac{1}{2}\right)^{n} \left(\ln(w-c+2^{n-1})-\ln(w)\right). \nonumber \end{align} \end{proof} The final line of \eref{time_growth} is identical to the right-hand side of \eref{utility_change}. Again the quantity can be positive or negative, but instead of the ensemble average of the change in utility we have calculated the time-average exponential growth rate of a player's wealth, without any assumptions about risk preferences or personal characteristics. If the player can expect his wealth to grow over time, and he has no other constraints, he should play the game; if he expects to lose money over time, he should not play. The loci of the transition between growth and decay, where $\bar{g}=0$, define a line in the $c$ {\it vs.} $w$ plane, which is shown in \fref{g_bar}. \Eref{time_growth} depends on the player's wealth -- keeping $c>1$ fixed, $\bar{g}$ initially increases with $w$, see the inset of \fref{g_bar} for the example of $c=2$. This is because the wealthy player keeps most of his money safe, and a loss does not seriously affect his future ability to invest. For the very wealthy player, neither a win nor a loss is significant, and the time-average exponential growth rate asymptotes to zero as $w\to\infty$. At the other extreme, a player whose wealth satisfies $w\leq c-1$ risks bankruptcy, which in a sense means the end of his economic life, and $\lim_{w\to (c-1)^+}\bar{g}=-\infty$. \begin{figure} \includegraphics[width=.9\textwidth]{20101117_g_bar.eps} \caption{\Eref{time_growth} (or \eref{utility_change}) defines a relationship between $w$ and $c$, where $\bar{g}(w,c)=0$, {\it i.e.}~ the player breaks even over time (or his ensemble-average logarithmic utility change is zero) if he pays $\$c$ given his wealth $\$w$.
\underline{Inset:} Time-average exponential growth rate (or ensemble-average logarithmic utility change), $\bar{g}(w, c=2)=\ave{\Delta u_B}(w, c=2)$, for the St. Petersburg lottery as a function of wealth, $\$w$, with a ticket price of $\$c=\$2$. If the player risks bankruptcy by purchasing a ticket, $\bar{g}(w,c) \to -\infty$. To the infinitely wealthy player a gain or loss is irrelevant, and $\bar{g}(w,c) \to 0$.} \flabel{g_bar} \end{figure} So \eref{time_growth} can also be considered a criterion for how much risk a person should take. The cost of the ticket, $c$, is the exposure to the lottery. For fixed (positive) $w$, ``buying'' a ticket is always advisable if $c=0$, see Sec.~\ref{Relation}(\ref{Menger}). As $c$ increases, $\bar{g}$ will eventually become negative as the exposure, or risk, becomes too large. The time resolution discourages entering into any gamble where bankruptcy, {\it i.e.}~ zero or negative wealth after the game, occurs with non-zero probability. In these cases individual terms in the sum in \eref{time_growth} are undefined. \label{product_objection} \Eref{time_growth} may seem an unnatural criterion for the following reason: the lottery is played only once with wealth $w$. By the next round, wealth has changed by $m_i-c$, and the situation has to be re-evaluated. So in reality, although we may buy tickets at the same price repeatedly, a combination of different $\bar{g}$ (resulting from different ``initial'' wealths $w$) will be realized. However, over a sufficiently long time, we must assume that we will face equivalent decisions again, and thus play equivalent lotteries again. Let us consider the result of playing many rounds in different lotteries, $\prod_{i=1}^T r_i$. Because of commutativity we can rearrange the factors in the product so that the first $T'$ factors correspond to the rounds in which we face equivalent lotteries (for instance we have the same wealth, and the tickets are offered at the same price), and the remaining $T-T'$ factors refer to different situations, \begin{align} \prod_{i=1}^{T}r_i &=\prod_{j=1}^{T'} r_j \prod_{m=T'+1}^Tr_m. \end{align} Whatever $\prod_{m=T'+1}^Tr_m$ may be, the steps in \eref{time_factor} apply to the first product, and the sign of the quantity in \eref{time_growth}, which determines whether the first product is greater or smaller than one, is a good criterion for deciding whether to buy a ticket. It is instructive to calculate the time-average exponential growth rate in another way. Line 2 of \eref{time_growth} looks like an ensemble-average exponential growth rate, obtained by computing exponential growth rates for individual systems and then averaging those over the ensemble, \begin{equation} \lim_{N\to\infty}\frac{1}{N}\sum_{i=1}^N \ln(r_i). \elabel{ensemble_wrong} \end{equation} The reason why this quantity is not the ensemble average but the time average is subtle. There is no limit $T \to \infty$, so how can this be a time average? Averages extract deterministic parameters from stochastic processes. The ensemble average does this by considering an infinity of parallel universes, and the time average does it by considering infinitely many time steps. But to find the time average it is not necessary for time itself to approach infinity. Instead, the unit of time can be rescaled. It is only necessary that all possible scenarios occur exactly with the appropriate frequencies during the sampling time.
As long as the effect of time -- of events happening sequentially -- is accounted for, this will lead to the time average. We have used one round of the lottery as one time unit. Thus the return from one time unit will be one of the possible returns $r_n$. If we observe only one time unit, the best estimate for the time-average return would be the return that happened to be realized in that time step. An approximate estimate for the time-average exponential growth rate is thus $\bar{g}^{\text{est}}_1\approx r_1-1$\footnote{The approximation $\ln(r_1)\approx r_1-1$ is permissible here because we will consider infinitesimal time steps such that $\exp(\bar{g}dt)=1+\bar{g}dt$ will be exact.}. To improve this estimate, we pick $q$ returns $r_j$ at random and in agreement with the probabilities $p_n$, and let each return act for $1/q$ time units\footnote{The subscript $j$ is used here instead of $i$ because it refers not to one round but to sub-intervals of one round.}. The total time that passes during the experiment is kept fixed, but we separate it into $q$ sub-intervals of time. The result will be \begin{equation} \bar{g}^{\text{est}}_q=\sum_{j=1}^q(r_j^{1/q}-1). \end{equation} The proportion of sub-intervals during which return $r_n$ is realized will approach $p_n$ as $q\to\infty$. In this limit we can therefore replace the sum over time steps by a sum over $n$ as follows \begin{align} \bar{g}^{\text{est}}_\infty&=\lim_{q\to\infty}\sum_{j=1}^q(r_j^{1/q}-1)\\ &=\sum_{n=1}^{\infty} \lim_{q \to\infty} k_n (r_n^{1/q}-1)\\ &=\sum_{n=1}^{\infty} p_n \lim_{q \to\infty}q (r_n^{1/q}-1), \end{align} where $k_n$ once again is the frequency with which a given $n$ occurs, now in the sample of $q$ sub-intervals. Using the definition of the logarithm, $\ln(r_n)\equiv\lim_{q \to\infty} q (r_n^{1/q}-1)$, yields \begin{equation} \lim_{q\to\infty}\sum_{j=1}^q(r_j^{1/q}-1)=\sum_{n=1}^{\infty} p_n \ln r_n, \end{equation} meaning that the time-average exponential growth rate, derived by splitting a time unit into infinitely many sub-intervals and playing through all possible scenarios in these sub-intervals, can be written as the expectation value of the logarithm of returns. A limit which is equivalent to the apparently missing limit $T\to\infty$ is implied by the logarithm and evaluated before the explicit limit in \eref{ensemble_wrong}. Thus \eref{ensemble_wrong} is an ensemble average of a time average, which is nothing but a time average. \section{Relation to Bernoulli's resolution} \label{Relation} \Eref{time_growth} is mathematically equivalent to Bernoulli's use of logarithmic utility. Bernoulli argued behaviorally that instead of the expectation value of monetary gain, the expectation value of the gain in a loosely constrained function (the utility) of wealth should be considered. One of the allowed functions is the logarithm, which has the special property of encoding the multiplicative nature common to gambling and investing in a linear additive object, the expectation value \begin{equation} \sum_{n=1}^{\infty} p_n\ln r_n=\ln\left(\lim_{T\to \infty}\left(\prod_{i=1}^T r_i \right)^{1/T}\right). \elabel{relation} \end{equation} Inadvertently, by postulating logarithmic utility (left-hand side of \eref{relation}), Bernoulli replaced the ensemble-average winnings with the time-average exponential growth rate in a multiplicative non-ergodic stochastic process (right-hand side of \eref{relation}).
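The identity \eref{relation} can be checked numerically. The sketch below (a sanity check with arbitrarily chosen values of $w$ and $c$, not a calculation from the original text) draws a long sequence of St. Petersburg growth factors, computes the realized per-round rate $\frac{1}{T}\sum_i\ln r_i=\ln\left(\prod_i r_i\right)^{1/T}$, and compares it with $\sum_n p_n\ln r_n$ from \eref{time_growth}, which also equals $\ave{\Delta u_B}$ of \eref{utility_change}.
\begin{verbatim}
# Hedged sketch: check Eq. (relation) by simulation.  Wealth w and
# ticket price c are arbitrary illustrative values; each round is
# one sequence of coin tosses until heads, paying 2^(n-1).
import math, random

w, c = 10.0, 2.0

def growth_factor():
    n = 1
    while random.random() < 0.5:   # tails: toss again
        n += 1
    return (w - c + 2.0 ** (n - 1)) / w

T = 10 ** 6
time_avg = sum(math.log(growth_factor()) for _ in range(T)) / T

theory = sum(0.5 ** n * math.log((w - c + 2.0 ** (n - 1)) / w)
             for n in range(1, 60))
print(time_avg, theory)   # should agree to within sampling error
\end{verbatim}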
Bernoulli did not make the time argument, as is evident from his acceptance of Cramer's square-root utility, which does not have the property of \eref{relation}: $\sum_{n=1}^{\infty} p_n \sqrt{r_n}$ cannot be written as a similar product. This is problematic because the arbitrariness of utility can be abused to justify reckless behavior, and it ignores the fundamental physical limits, given by time irreversibility, to what can be considered reasonable. But because Bernoulli's early work postulated the logarithm, many consequences of \eref{time_growth} have already been discussed in the literature under the heading of ``logarithmic utility''. A different heading for these results is ``Kelly criterion'' \citep{Kelly1956,CoverThomas1991,Thorp2006}. In contrast to ensemble-average exponential growth rates, which often diverge (for instance with leverage), time-average exponential growth rates can be optimized \citep{Peters2009,Peters2010}. \cite{Kelly1956} used this fact to optimize wager sizes in a hypothetical horse race using private information. While he refrained from using utilities because he deemed them ``too general to shed any light on the specific problems'' he considered, he did not point out the fundamental difference in perspective his treatment implies: in essence, arbitrary utility functions are replaced by the physical truth that time cannot be reversed. My aim here is to emphasize this difference in perspective. It is crucial that, from this point of view, logarithmic utility is not a utility at all. Rather, the logarithm accounts for the multiplicative nature of the process: the ensemble average of the logarithm of growth factors equals the logarithm of the time average of growth factors. Comparing \eref{time_growth} and \eref{utility_change}, it is tempting to say that the time average justifies logarithmic utility. I advise against this interpretation because it conflates physical concepts of time with behavioral concepts of usefulness. Any utility function other than the logarithm leads to recommendations that do not agree with the time perspective. \subsection{Menger's objection to unbounded utility} \label{Menger} \cite{Bernoulli1738} did not actually write down \eref{utility_change}, although it is often assumed that that was his intention. Instead of using the criterion \eref{utility_change} he argued in two steps ``how large a stake an individual should be willing to venture'', pp.~26--27. \begin{enumerate} \item The expected gain in utility is calculated without explicitly taking the ticket price into account \begin{equation} \elabel{mistake_1} \sum_{n=1}^\infty \left(\frac{1}{2}\right)^n \ln\left(w+2^{n-1}\right) - \ln(w). \end{equation} \item This is followed by the statement that ``the stake more than which persons [...] should not venture'' is that ticket price, $\$c$, which satisfies \begin{equation} \elabel{mistake_2} \overbrace{\sum_{n=1}^\infty \left(\frac{1}{2}\right)^n \left(\ln(w+2^{n-1})-\ln(w)\right)}^{\text{expected utility gain with $c=0$}} -\underbrace{\left[\ln(w)-\ln(w-c)\right]}_{\text{utility loss at purchase}}=0. \end{equation} \end{enumerate} This is not the condition that defines the line in the main panel of \fref{g_bar}, as it does not imply that the expected gain in utility, \eref{utility_change}, is zero. In this sense \eref{utility_change} is an inaccurate, although generally accepted and sensible, representation of Bernoulli's work.
The difference between \eref{utility_change} and \eref{mistake_2} and the conflicting interpretations of these equations are consequences of the aforementioned arbitrariness of the utility framework. \cite{Menger1934} claimed that using logarithmic utility as Bernoulli did, \eref{mistake_1} and \eref{mistake_2}, does not resolve modified versions of the paradox where the payouts $\$f(n)$ as a function of waiting time $n$ increase faster than according to Bernoulli's original $f_B(n)\equiv 2^{n-1}$. Specifically, Menger considered $f_M(n) \equiv w \exp(2^{n})-w$. Note that this function is curiously defined in terms of the initial wealth. His first step closely mirrors Bernoulli's first step, but then he jumps to an untenable conclusion: \\ \begin{enumerate} \item \cite{Menger1934} pointed out that if $2^{n-1}$ in \eref{mistake_1} is replaced by $f_M(n)$, the expected gain in logarithmic utility at zero ticket price diverges. \item He argued that the paradox remains unresolved because ``it is clear that also in the modified St. Petersburg game no normal person will risk a large amount or even his fortune as a wager'', p.~468, my translation, and generalized to conclude that this formally prohibits the use of any unbounded utility function. \end{enumerate} The meaning of the second statement is unclear. A player who pays ``his fortune'' for a ticket in Menger's lottery and then experiences a heads-event on the first coin toss, {\it i.e.}~ the worst possible outcome, will still gain since the worst-case payout, $\$f_M(1)=\$w\exp(2)-\$w$, is more than the initial wealth, $\$w$. For a person to risk losing anything at all, the ticket price has to be $\$c\geq \$ f_M(1)$, far greater than the person's wealth. For a person to risk losing his entire wealth, the ticket price has to be greater still, $\$c\geq \$ f_M(1)+\$w=\$w\exp(2)$. But at such prices \eref{mistake_2} is undefined. Perhaps Menger meant that Bernoulli's condition \eref{mistake_2} cannot be satisfied, and the price one should be willing to pay is infinite. In that case the second part of Menger's argument implicitly assumes that the positive divergence in the first part cannot be offset by anything else. But as the ticket price approaches the player's wealth, $c\to w$, the utility loss at purchase in \eref{mistake_2} represents another divergence, implying that this argument is inconclusive. It will now be shown to be invalid. Menger's objection to Bernoulli could hold in the following sense: the left-hand side of \eref{mistake_2}, using Menger's payout function, diverges positively if $c<w$ and is undefined otherwise. But it is never zero (or negative) -- when does it warn against the gamble? To understand what the undefined regime signifies, one has to study the process of divergence and compare the two infinities as they unfold. For any finite $n_{\text{max}}$, a finite value $c<w$ does exist which renders the corresponding partial sum zero, \begin{equation} \elabel{mistake_3} \forall n_{\text{max}}<\infty \hspace{.2cm} \exists \hspace{.2cm} c<w :\sum_{n=1}^{n_{\text{max}}} \left(\frac{1}{2}\right)^n 2^n -\left[\ln(w)-\ln(w-c)\right]=0. \end{equation} To ensure positivity up to exactly $c=w$, where the expression becomes undefined, events of zero probability have to be taken into account. For any non-zero lower bound on probability Bernoulli's condition can be satisfied.
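Explicitly, since $\ln\left(w+f_M(n)\right)-\ln(w)=2^n$, each term of the partial sum in \eref{mistake_3} equals $\left(\frac{1}{2}\right)^n 2^n=1$, so the break-even ticket price at finite $n_{\text{max}}$ satisfies
\[
n_{\text{max}}=\ln\left(\frac{w}{w-c}\right), \qquad \text{i.e.} \qquad c=w\left(1-e^{-n_{\text{max}}}\right)<w,
\]
which approaches $w$ from below as $n_{\text{max}}\to\infty$ without ever reaching it.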
In this sense, in the literal original Bernoulli-setup, values of $c\geq w$, where \eref{mistake_3} is undefined, correspond to the recommendation not to buy a ticket, and the paradox is resolved. Menger's conclusion is incorrect. Bernoulli's logarithmic utility recommends purchasing tickets as long as they cost less than the player's wealth, implying a significant minimum gain -- a course many a ``normal person'' may wish to pursue. The criterion could be criticized for the opposite reason: it rejects prices at which the player is guaranteed a net gain even in the worst case. The time resolution produces the criterion in Theorem~\ref{time_theorem}, which is equivalent to \eref{utility_change} and not to Bernoulli's literal original criterion \eref{mistake_2}. Consequently, it yields a different recommendation, which may at first appear surprising but turns out also to correspond to reasonable behavior given the assumptions on which it is based: it recommends purchasing a ticket at any price that cannot lead to bankruptcy. The player could be left with an arbitrarily small positive wealth after one round. The recommendation may be followed by a ``normal person'' because of the assumption that equivalent lotteries can be played in sequence as often as desired. Under these conditions, irrespective of how close a player gets to bankruptcy, losses will be recovered over time. Of course, if these conditions are violated, the time resolution does not apply. This last statement is another warning against the na\"{i}ve use of mathematics, whose truth is always restricted to the context of axioms or assumptions. Applicability reflects the degree to which assumptions are representative of real conditions in a given situation. While ensemble averages are meaningless in the absence of samples (here parallel rounds), time averages are meaningless in the absence of time (here sequences of equivalent rounds). \section{Discussion} \label{Discussion} Excessive risk is to be avoided primarily because we cannot go back in time. Behavioral aspects and personal circumstances are relevant on a different level -- they can change and do not immediately follow from the laws of physics. The perspective described here has consequences far beyond the St. Petersburg paradox, including investment decisions \citep{Peters2009,Peters2010} as well as macro-economic processes. For example, it is sensible for a nation striving for growth to encourage risks that lead to occasional bankruptcies of companies and individuals. How large a risk is macroeconomically sensible? What are the moral implications? Does gross domestic product -- a linear sum, similar to a sample average -- measure what one should be interested in? While the St. Petersburg lottery is an extreme case, \eref{time_growth} and \fref{g_bar} carry a more general message: if net losses are possible, the negative time-average exponential growth rate for small wealth, $w$, turns positive as $w$ increases, implying higher exponential growth rates for larger entities. In a collection of such entities inequality has a tendency to increase, allowing large entities to dominate and monopolies to arise. This can cause markets to cease functioning, as competition is compromised or corporations become ``too big to fail''. There is anecdotal evidence that assets in stock markets have become more correlated in recent decades, and effective diversification (which mimics ensembles) harder to achieve.
This would make the time perspective even more important, and the consequences of ignoring it more dramatic. Utility functions are externally provided to represent risk preferences but are unable by construction to recommend appropriate levels of risk. The framework is self-referential in that it can only translate a given utility function into actions that are optimal with respect to that same utility function. This can have unwanted consequences. For example, leverage or credit represents a level of risk which needs to be optimized, but current incentive structures in the financial industry can encourage exceeding the optimal risk. Adam \cite{Smith1776}, cited in \citep{Foley2006}, warned that excessive lending -- in his case based on bills of exchange for goods in transit -- can lead to a collapse of the credit system, followed by bankruptcies and unemployment; insufficient lending, on the other hand, can lead to economic stagnation. The two stages often follow one another in a boom-bust cycle. To avoid both, good criteria for appropriate levels of risk are needed, which the utility framework cannot deliver. The time arguments presented here provide an objective null-hypothesis concept of optimality and can be used to optimize leverage under a given set of conditions \citep{Peters2010}. In the present case, optimality based on such considerations is a good approximation to practically optimal behavior. This is evident from Bernoulli's work, whose justification of the almost identical result was practical validity. The proposed conceptual re-orientation may help reconcile economic theory with empirical regularities, such as risk aversion, known from behavioral economics. It is easy to construct examples where less risk should be taken than recommended by the criterion in \eref{time_growth}. For example, some fraction of \$$w$ may already be earmarked for other vital use. It is very difficult, however, to think of an example where greater risk is beneficial. For this reason, the time perspective is a useful tool for finding upper limits on risk to be implemented in regulation, for instance as margin requirements, minimum capital requirements, or maximum loan-to-value ratios. Of course, such regulation must take into account further arguments, but its least complex form can be based on the time perspective. The epistemological situation is typical of the process of concept formation \citep{Lakatos1976}. As the conceptual context changed, in this case from moral to predictive, the original definition of the term ``expectation'' began to lead to paradoxical conclusions. From today's conceptually later perspective it appears that N. Bernoulli made a hidden assumption, namely the assumption, explicitly stated by \cite{Huygens1657}, that ``it is the same thing'' to receive 5 shillings as it is to have an equal chance of receiving either 3 or 7 shillings. \cite{Lakatos1976} points out that it can be hard to imagine in retrospect that an eminent mathematician made a hidden assumption, which is often perceived as an error. He writes on p.~46, ``while they [the hidden assumptions] were in your {\it subconscious}, they were listed as {\it trivially true} -- the ...[paradox] however made them summersault into your conscious list as {\it trivially false}.'' Similarly, it seems trivially true at first that the expected gain from playing the lottery should be the criterion for participating -- taking time into account makes this assumption summersault into being trivially false. Thus the St.
Petersburg paradox relies for its existence on the assumption that the expected gain (or growth factor or exponential growth rate) is the relevant quantity for an individual deciding whether to take part in the lottery. This assumption can be shown to be implausible by carefully analyzing the physical meaning of the ensemble average. A quantity that is more directly relevant to the financial well-being of an individual is the growth of an investment over time. Utility, which can obscure risks, is not necessary to evaluate the situation and resolve the paradox. It is the actual wealth, in \$, of a player, not the utility, that grows with $\bar{g}$, \eref{time_growth}. It is manifestly not true that the commonly used ensemble-average performance of the lottery equals the time-average performance. In this sense the system is not ergodic, and statements based on anything other than measures of the actual time-average performance must be interpreted carefully. \section*{Acknowledgments} For discussions and thoughtful comments and suggestions I would like to thank A. Adamou, M. Gell-Mann, R. Hersh, D. Holm, O. Jenkinson, W. Klein, J. Lamb, J. Lawrence, C. McCarthy and J.-P. Onstwedder. This work was supported by ZONlab Ltd.
\section{Introduction} A complete understanding of high energy scattering processes is impossible without calculating the contributions of the different unitarization corrections to the amplitude. In the QCD Pomeron framework, \cite{bfkl,bfklsum,nlbfkl}, these corrections arise from Pomeron self-interactions via the triple Pomeron interaction vertices \cite{vert1,vert2}. These self-interactions lead to a very complicated picture of the amplitude's evolution with rapidity, see for example \cite{eglla1,eglla2,eglla3,eglla4,jimwalk}. There are a few main approaches which claim to describe the Pomeron self-interactions properly. The first is formulated as a chain of evolution equations, with the Balitsky-Kovchegov (BK) equation as the main field equation of the hierarchy, \cite{BK}. The BK equation may be formulated in the framework of the Color Glass Condensate approach (CGC), \cite{jimwalk}, and may be properly analyzed in terms of s-channel interacting color dipoles, \cite{dipmod}. This equation correctly describes the interaction of two non-identical objects, for example DIS on nuclei, see \cite{bksols,bdepkov,jimsol,bksemi1,bksemi2,bk-pheno,kks,iim}. Another approach is QCD Reggeon Field Theory (RFT-QCD), which is formulated in momentum space and based on the standard diagrammatic calculus, \cite{eglla1,eglla2,eglla3,eglla4,braun1,braun2,braun3}. In this approach the BK equation describes a resummation of the ``fan'' diagrams of the type depicted in Fig.1a, with only Pomeron merging vertices considered. The usual BK equation, describing ``fan'' diagrams, does not include the Pomeron splitting vertex. Therefore, the next natural step toward the unitarization of the amplitude is a symmetrical treatment of the scattering process with both vertices included. In this case the splitting vertex may be accounted for differently in two different approximations. In the first approach a semi-classical problem is considered, in which the sum of the diagrams depicted in Fig.1b is calculated, neglecting the Pomeron loop contributions to the amplitude. The second approach concentrates on the solution of the full quantum problem on the basis of effective high-energy QCD-inspired models, accounting for diagrams of the type in Fig.1c, see for example \cite{Lipatov:1995pn}. So far several calculations have been attempted in both directions, on the basis of QCD-RFT \cite{braun1,braun2,braun3} and in the framework of the CGC and the dipole model, \cite{selfdual}. The NLO and one-loop contributions to the amplitude have also been calculated, see for example \cite{NLO,OneLoop}, but a full solution still seems very far from completeness. In this situation it is very natural to consider the much simpler zero-transverse-dimension model, in the hope that some important properties of the solutions of this model will be encountered in QCD as well. The RFT-0 (Reggeon Field Theory in zero transverse dimensions) model has attracted much interest during the last few years, see \cite{0dimsol}. The approach was formulated and studied a long time ago, even before the QCD era, see \cite{0dimc,0dimq,0dimi}. The interest in the approach is due to the very attractive properties of the model. First of all, RFT-0 is solvable for different values of the model parameters.
This possibility to find the full solution for different regimes of the theory is an important feature of the model, since we hope that we will understand more about RFT-QCD if we have both the full and the semi-classical solutions for RFT-0, see examples in \cite{BM,BondBr,Braun4}. Therefore, the second important fact about RFT-0 is that in this theory we can find both classical and quantum solutions for the amplitude, which provides us with information about the relative contribution of the loops to the amplitude. The possibility to consider different types of Pomeron vertices in RFT-0 is another property of the model which is very useful and which could clarify the situation in QCD. In this paper we consider two RFT-0 models; in the second one, together with the usual triple Pomeron vertex, we also include the quaternary Pomeron vertex. The paper is organized as follows. In the second section we consider the RFT-0 model with only the triple Pomeron vertex. In the first two subsections of the section we introduce the whole machinery of the problem for both the quantum and classical solutions. In the following subsections we present the results of our calculations for the amplitude at different parameters of the model, as well as the results for the two-point Green's function (the ''effective'' Pomeron propagator) of the theory. In Sec.3 we solve the same problems for the RFT-0 theory with the quaternary Pomeron vertex included. The next section, Sec.4, is dedicated to a possible application of the solution to the problem of diffractive dissociation. Sec.5 presents a summary of the results of the calculations, and the last section, Sec.6, is the conclusion of our paper. \section{Solution of RFT in zero transverse dimension with only triple Pomeron coupling} In this paper we investigate the one-dimensional problem of interacting Pomerons described by the Lagrangian which, in terms of Gribov fields, has the following form: \begin{equation}\label{Lag1} L\,=-\,\psi^{+}\,\dot{\psi}\,-\,\mu\,\psi^{+}\,\psi\,+\,i\, \lambda\,\psi^{+}\,(\psi^{+}\,+\,\psi)\,\psi\,\,\,, \end{equation} where $\,\mu\,$ is the intercept of the bare Gribov field, $\lambda$ is the triple field interaction vertex, and the dot denotes differentiation with respect to the rapidity variable $y$, which is the only variable of the problem. Introducing the Pomeron fields $q\,=\,i\,\psi^{+}\,$ and $p\,=\,i\,\psi\,$, we rewrite the Lagrangian in a form with real coupling $\lambda\,$: \begin{equation}\label{Lag2} L\,=\,q\,\dot{p}\,+\,\mu\,q\,p\,-\,\lambda\,q\,(q\,+\,p)\,p\,\,. \end{equation} The fields q and p may now be understood in terms of the Pomeron creation and annihilation operators $a^{+}$ and $a$: \begin{equation}\label{Oper} \,q\,\rightarrow\,a^{+}\,=\,q\,\,\,\,, \,p\,\rightarrow\,a\,=\,-\,\frac{d}{d\,q}\,. \end{equation} The Hamiltonian of the problem in terms of the operators q and p has the following form \begin{equation}\label{Ham1} H=-\,\mu\,q\,p\,+\,\lambda\,q\,(q\,+\,p)\,p\,\,\,, \end{equation} being the second order differential operator: \begin{equation}\label{Ham2} H=\,(\mu\,q\,-\,\lambda\,q^{2})\,\frac{d}{dq}\,+ \,\lambda\,q\,\frac{d^2}{dq^2}\,\,.
\end{equation} The full quantum solution of the theory will therefore coincide with the solution of the quantum mechanics problem with the Hamiltonian given by Eq.\ref{Ham2}: \begin{equation}\label{Ham3} H\,\Psi\,=\,\frac{d\,\Psi}{d\,y}\,\,, \end{equation} where the function \begin{equation}\label{Ham33} \Psi(y,q)\,=\,\sum_{n=0}^{n=\infty}\,\lambda_{n}\, e^{\,-\,E_{n}\,y}\,\phi_{n}(q)\,\, \end{equation} is the full quantum amplitude of the process of the scattering of two particles. Here $E_{n}$ and $\phi_{n}(q)$ denote the eigenvalues and eigenfunctions of the operator Eq.~\ref{Ham2}, and $\lambda_{n}$ are the normalized projection coefficients of the eigenfunctions on the value of $\,\Psi\,$ at $\,y=0\,$. The classical solution of the theory, i.e. the solution with only ''tree'' diagrams and without loops, may also be obtained in the given framework. It is simply the solution of the equations of motion for the Lagrangian given by Eq.~\ref{Lag2}. \begin{figure}[t] \begin{center} \psfig{file=diag1.eps,width=180mm} \end{center} \caption{\it Examples of diagrams of the effective theory of interacting Pomerons with triple Pomeron vertices: a) a fan diagram; b) a tree diagram defining the classical limit; c) a diagram with quantum loops.} \label{diag1} \end{figure} \subsection{The quantum solution of the model} An approximate solution of Eq.\ref{Ham3} was found many years ago, see \cite{0dimc,0dimq,0dimi}. Analytical solutions of Eq.\ref{Ham3} at large values of the ratio $\,\mu\,/\,\lambda\,\equiv\,\varrho\,$ for the ground state and the first excited state were considered at the same time as well. Nevertheless, the spectrum of the theory and the eigenfunctions of the higher excited states at large values of $\,\varrho\,$, as well as the solutions for arbitrary values of $\,\varrho\,$, were not obtained. Therefore, we want to define a procedure for the numerical calculation of the spectrum of the theory and the corresponding eigenfunctions both for the case of strong triple Pomeron coupling (small $\,\varrho\,$) and for weak coupling (large $\,\varrho\,$). In the following we will be more interested in the case of small triple coupling, as the case which mostly corresponds to the real perturbative QCD situation. So, we need to solve the second order differential equation (see more details in \cite{0dimq}): \begin{equation}\label{Quant1} (\varrho\,-\,q)\,\frac{d\,\Psi(y,q)}{dq}\,+ \,\frac{d^2\,\Psi(y,q)}{dq^2}\,=\frac{1}{\lambda\,q}\, \frac{d\,\Psi(y,q)}{d\,y}\,\,, \end{equation} with the initial and boundary conditions on the function $\Psi(y,q)$: \begin{eqnarray}\label{Quant2} \,&\,&\,\Psi(y=0,q)\,=\,\sum_{n=0}^{n=\infty}\,\lambda_{n}\, \,\phi_{n}(q)\,=\,I(q)\,\\ \,&\,&\,\Psi(y,q\,\rightarrow\,0)\,\propto\,q\,\\ \,&\,&\,\Psi(y,q\,\rightarrow\,\infty)\,\propto \,const.\,\,, \end{eqnarray} where the form of $I(q)$ depends on the particular physical problem considered. Now it is instructive to make a transformation of the eigenfunctions: \begin{equation}\label{Quant3} \phi_{n}(q)\,=\,e^{(q-\varrho)^2\,/4}\,f_{n}(q)\,\,. \end{equation} In this case the term which is proportional to the first derivative over q is eliminated, leading to a Hermitian Hamiltonian for the problem.
After this transformation we obtain a Schr\"odinger-type eigenvalue equation: \begin{equation}\label{Quant4} \frac{d^2\,f_{n}(q)}{dq^2}\,+\,\frac{f_{n}(q)}{2}\,-\, \frac{1}{4}\,(q\,-\,\varrho)^2\,f_{n}(q)\,=\,-\,\frac{E_n}{\lambda\,q}\, f_{n}(q)\, \end{equation} with the following boundary conditions on the function $f_{n}(q)$: \begin{eqnarray}\label{Quant5} \,&\,&\,f_n(q\,\rightarrow\,0)\,\propto\,q\,\\ \,&\,&\,f_n(q\,\rightarrow\,\infty)\,\propto \, e^{-q^2\,/\,4\,+\,q\varrho\,/\,2}\,\,. \end{eqnarray} Our problem of interest is the scattering of two particles; therefore we define the initial condition for $\Psi(y=0,q)$ in the eikonal form (see more in \cite{0dimq}): \begin{equation}\label{Quant6} \Psi(y=0,q)\,=\,I(q)\,=\,I(q\,,q_{ext})\,=\,1-e^{-q\,q_{ext}}\,, \end{equation} where $q_{ext}$ is the value of the source for the Pomeron field at zero rapidity. The function $\Psi(y,q)$ describes the evolution of the interacting Pomeron fields from the Pomeron-external particle vertex equal to $q_{ext}$ at zero rapidity up to the value of the single Pomeron field equal to $q$ at rapidity y. The last ingredients of the theory are the projection coefficients $\,\lambda_n\,$ of the eigenfunctions of the solution on the initial state I(q). Keeping in mind that our eigenfunctions are orthogonal on the interval of q from $0$ to $\,\infty\,$ with the weight function $\,F_{W}(q,\varrho)\,$: \begin{equation}\label{Quant7} \int_{0}^{\infty}\,f_{n}(q)\,f_{m}(q)\,F_{W}(q,\varrho)\,dq\,=\,\delta_{n\,m}\,\,, \end{equation} we obtain for $\,\lambda_n\,$: \begin{equation}\label{Quant8} \lambda_n\,(q_{ext})=\,\frac{\int_{0}^{\infty}\,f_{n}(q)\,I(q\,,q_{ext})\, F_{W}(q,\varrho)\,dq\, }{\int_{0}^{\infty}\,f_{n}^{2}(q)\,F_{W}(q,\varrho)\,dq\,}\,\,, \end{equation} where the weight function $\,F_{W}(q,\varrho)\,$ has the following form (see \cite{0dimq} and also \cite{Heun}): \begin{equation}\label{Quant9} F_{W}(q,\varrho)\,=\frac{\,e^{-(q\,-\,\varrho)^2\,/\,2}}{\,q\,}\,. \end{equation} The numerical solution of Eq.\ref{Quant4} with the boundary conditions given by Eq.\ref{Quant5} amounts to finding the eigenvalues and eigenfunctions of a second order differential equation with two boundary value conditions. These solutions for the eigenfunctions and eigenvalues may be found for any values of $\mu$ and $\lambda$, i.e. for different values of the parameter $\,\varrho\,$. The obtained eigenvalues $E_n$ and eigenfunctions $f_n(q)$ allow us to complete the calculation of the full quantum scattering amplitude and, therefore, to solve our problem. So, considering the scattering of two particles, where the vertex of interaction of the first particle with the Pomeron equals $q_1$ and the vertex of interaction of the second particle with the Pomeron equals $q_2$, we define the full quantum solution for the scattering amplitude at given rapidity y as \begin{equation}\label{Quant13} \Psi(y,q=q_2)\,=\,\sum_{n=0}^{n=\infty}\,\lambda_{n}(q_1)\, e^{\,-\,E_{n}\,y}\,e^{(q_2-\varrho)^2\,/4}\,f_{n}(q_2)\,\,, \end{equation} with the $\,\lambda_{n}(q_1)$ given by Eq.~\ref{Quant8} for the known values of the eigenfunctions, see Fig.1c. The solution for the amplitude, Eq.~\ref{Quant13}, does not depend on which value of the Pomeron field, $q_1$ or $q_2$, the evolution begins from. If we consider the evolution of the Pomeron field from $q_2$ to $q_1$ we will obtain the same answer for the amplitude as for the case of evolution from $q_1$ to $q_2$.
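In practice, the eigenvalue problem Eq.\ref{Quant4} may be discretized on a finite grid and solved as a generalized matrix eigenvalue problem, $A\,f\,=\,E\,B\,f$ with $A\,=\,-\,d^2/dq^2\,-\,1/2\,+\,(q\,-\,\varrho)^2/4$ and $B\,=\,1/(\lambda\,q)$. The following Python sketch illustrates one possible implementation (it is not the code used for the calculations of this paper; the grid size and the cutoff are arbitrary numerical choices):
\begin{verbatim}
import numpy as np
from scipy.linalg import eigh

mu, lam = 0.2, 0.04                  # parameters of Eq.(Num1), rho = 5
rho = mu / lam
N, qmax = 2000, 25.0                 # grid size and cutoff (numerical choices)
q = np.linspace(qmax / N, qmax, N)   # open grid; f = 0 at both ends
h = q[1] - q[0]

# A = -d^2/dq^2 - 1/2 + (q - rho)^2/4, central finite differences
main = 2.0 / h**2 - 0.5 + 0.25 * (q - rho)**2
off = -np.ones(N - 1) / h**2
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
B = np.diag(1.0 / (lam * q))         # positive-definite weight 1/(lambda q)

E, f = eigh(A, B)                    # generalized problem A f = E B f
print(E[:5])                         # lowest eigenvalues E_0 ... E_4
\end{verbatim}
The lowest eigenvalues obtained this way may be compared with Table \ref{EFUN} as a consistency check of the discretization.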
\subsection{The classical solution of the model} In order to see the role of the Pomeron loops in the scattering amplitude it is important to calculate the classical solution of our problem, given by the ''net'' diagrams of Fig.1b. The comparison between the two solutions will show the relative weight of the loops in the scattering amplitude and, therefore, the applicability of the classical solution at given values of the theory parameters. We obtain the required classical solution by solving the equations of motion for the Lagrangian given by Eq.\ref{Lag2} with the sources of the Pomeron fields at zero rapidity and at the final rapidity Y of the process: \begin{equation}\label{Class1} L\,=\,\frac{1}{2}\,q\,\dot{p}\,-\frac{1}{2}\,\dot{q}\,p\,+ \,\mu\,q\,p\,-\,\lambda\,q\,(q\,+\,p)\,p\,+\, \,q(y)\,p_0(y)\,+\,q_0(y)\,p(y)\,\,, \end{equation} where $\,p_0(y)\,$ and $\,q_0(y)\,$ are the sources of the Pomeron fields. The equations of motion for the fields p and q are the following: \begin{eqnarray}\label{Class2} \,&\,&\,\dot{q}\,=\,\mu\,q\,-\,\lambda\,q^2\,-\,2\,\lambda\,q\,p\,\\ \,&\,&\,\dot{p}\,=\,-\,\mu\,p\,+\,\lambda\,p^2\,+\,2\,\lambda\,q\,p\,\\ \,&\,&\,q_0(y)\,=\,q_1\,\delta(y)\,\\ \,&\,&\,p_0(y)\,=\,q_2\,\delta(y-Y)\,\,. \end{eqnarray} This system of equations is another example of a two-point boundary value problem, here for a system of first order differential equations, and it may also be solved for different values of the parameters $\mu,\,\,\lambda,\,\,q_1,\,\,q_2\,$. The solutions of the system Eq.\ref{Class2} at a given value of the final rapidity Y and given values of the parameters $\mu,\,\,\lambda,\,\,q_1,\,\,q_2\,$ we denote by $\{q_{c}(y),p_{c}(y)\}$. With the solution $\{q_{c}(y),p_{c}(y)\}$ the ''net'', classical amplitude, represented by the diagrams of Fig.1b, is defined in the standard way as: \begin{equation}\label{Class3} \Psi_{c}(Y,\,q_{c},\,p_{c})\,=\, 1\,-\,e^{-S(Y,\,q_{c},\,p_{c})}\,\,, \end{equation} where \begin{equation}\label{Class4} S(Y,q_{c},p_{c})=\int_{0}^{Y} L(Y,q_{c}(y),p_{c}(y)) dy = \frac{1}{2}(q_1 p_{c}(0)+q_{c}(Y)q_2)+ \frac{\lambda}{2}\int_{0}^{Y} (q^2_{c}(y)p_{c}(y)+ q_{c}(y)\,p^2_{c}(y)) dy\,. \end{equation} The amplitude Eq.\ref{Class3} describes the eikonalized interactions of ''net'' diagrams, and for symmetric boundary conditions, $q_1=q_2$, the amplitude is invariant under the duality transformations: \begin{equation}\label{Class5} p\,\leftrightarrow\,q\,\,\,\,and\,\,\,\,y\leftrightarrow\,Y-y\,. \end{equation} An interesting feature of the classical solution of the system defined by the Lagrangian Eq.\ref{Class1} is that, starting from some critical rapidity $Y_c$, the solution of the equations of motion is not unique for $q_1=q_2\,<\,\varrho$. From the rapidity $Y_c$ there are at least three classical trajectories $\{q^{i}_{c}(y),p^{i}_{c}(y)\}$ which locally minimize the action, and with increasing rapidity the number of trajectories grows. Each of them provides a local minimum of the action, and the amplitude of the theory, therefore, must be rewritten in the following form: \begin{equation}\label{Class6} \Psi_{c}(Y)\,=\,1\,-\, \sum_{i}\,\Delta_{i}\,\exp\{-S(Y,\,q^{i}_{c}(y)\,,p^{i}_{c}(y))\}\,, \end{equation} where $\,\Delta_{i}\,$ is the quantum weight of the corresponding classical trajectory. In the following consideration we take these weights equal to 1 or -1, depending on the type of trajectory; see a more detailed consideration of this problem in \cite{0dimi}.
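Numerically, Eq.\ref{Class2} is straightforward to treat with a collocation solver. The following Python sketch is a minimal illustration (not the code of this paper), under the standard assumption that the delta-function sources of Eq.\ref{Class2} are equivalent to the boundary conditions $q(0)=q_1$ and $p(Y)=q_2$; varying the initial guess selects the different classical trajectories:
\begin{verbatim}
import numpy as np
from scipy.integrate import solve_bvp

mu, lam = 0.2, 0.04                 # rho = mu/lam = 5
q1, q2, Y = 0.7, 0.7, 25.0          # Y = 25 corresponds to mu*Y = 5 (scaled)

def rhs(y, z):
    q, p = z
    return np.vstack((mu*q - lam*q**2 - 2*lam*q*p,
                      -mu*p + lam*p**2 + 2*lam*q*p))

def bc(z0, zY):                     # q(0) = q1 and p(Y) = q2
    return np.array([z0[0] - q1, zY[1] - q2])

y = np.linspace(0.0, Y, 400)
guess = np.vstack((np.full_like(y, q1), np.full_like(y, q2)))
sol = solve_bvp(rhs, bc, y, guess)  # vary `guess` to reach other branches
print(sol.status, sol.y[0, -1], sol.y[1, 0])   # q_c(Y), p_c(0)
\end{verbatim}
The action Eq.\ref{Class4} is then evaluated on each converged trajectory by numerical quadrature.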
In our calculations of the classical amplitude for the symmetrical case $q_1=q_2$ we will always use three solutions of the equations of motion. The first one, $\{q^{1}_{c}(y),p^{1}_{c}(y)\}$, is symmetrical in the sense of Eq.\ref{Class5}; its trajectory is similar to trajectory number 1 in Fig.\ref{Traj}. The other two dominant solutions, which we use in the calculations and which arise starting from some rapidity $Y_c$, are the solutions with trajectories similar to trajectories 2 and 3 in Fig.\ref{Traj}. Taken separately, each of them is not symmetrical in the sense of the symmetry transformation given by Eq.\ref{Class5}; instead, under the duality transformation of rapidity given by Eq.\ref{Class5}, there is a pair symmetry of these two solutions \begin{equation} q^{2}_{c}(y)\,=\,p^{3}_{c}(Y-y)\,\,\,\,and\,\,\,\,p^{2}_{c}(y)\,=\,q^{3}_{c}(Y-y). \end{equation} It is interesting to note that the asymptotic behavior of these two classical solutions is well known: each of them may be described by the ''fan'' diagram amplitude of Fig.1a, see \cite{0dimq}, and, being the dominant contributions, these solutions lead to the ''fan'' dominance effect, see ~\cite{BM}. The final solution of our problem in the classical approximation can be written with the help of these three particular solutions as \begin{equation}\label{Class7} \Psi_{c}(Y)=1+exp\{-S(Y,\,q^{1}_{c}(y)\,,p^{1}_{c}(y))\}- exp\{-S(Y,\,q^{2}_{c}(y)\,,p^{2}_{c}(y))\}- exp\{-S(Y,\,q^{3}_{c}(y)\,,p^{3}_{c}(y))\}\,, \end{equation} which is symmetrical with respect to the duality transformation Eq.\ref{Class5}. In our plots for the symmetrical interactions, $q_1=q_2$, the classical solution will always be represented by the amplitude of Eq.\ref{Class7}, which contains three parts beginning from the critical rapidity $Y_c$, and only one part, the symmetrical solution, at rapidities smaller than $Y_c$. In the asymmetrical case the symmetry is broken initially and, therefore, only one ''fan'' configuration survives in the classical solution. \begin{figure}[t] \begin{center} \psfig{file=solution7.eps,width=100mm} \end{center} \caption{\it Classical solutions of the RFT-0 with only the triple Pomeron vertex: the $\{ q,p\}$ trajectories obtained for $Y=5\,>\,Y_c\,$, $q=p=0.7$, $\varrho\,=\,5$.} \label{Traj} \end{figure} It is important to underline that this picture for the classical solution holds for each of the considered models, i.e. for the RFT-0 model with the triple Pomeron vertex only and for the RFT-0 model with both triple and quaternary vertices. \subsection{Parameters of the model} First of all, we define the range and values of the parameters for which the calculations were done in both cases, quantum and classical. We calculated the quantum and classical amplitudes, $\,\Psi(Y,\,q)\,$ and $\,\Psi_{c}(Y,\,q_{c},\,p_{c})\,$, for the following value of $\varrho\,=\,\mu\,/\,\lambda\,$: \begin{eqnarray}\label{Num1} \,&\,&\,\varrho\,=\,5\,\,\,\,(\mu\,=\,0.2\,,\,\lambda\,=\,0.04\,). \end{eqnarray} The reason for the main attention to this value of $\varrho\,$ is very simple. The coupling constant in RFT-0 may be defined as $\,\alpha_s\,=\,1\,/\,\varrho$, and at $\varrho\,=\,5\,$ we therefore have $\,\alpha_s\,=\,0.2\,$. This value of the coupling constant is also reasonable in QCD, which provides the possibility of drawing some analogies between the two theories.
The values of the external sources we take as follows: \begin{itemize}\label{Num2} \item symmetrical case: \begin{equation} \,q_1\,=\,q_2\,=0.1\,-\,1\,\,; \end{equation} \item non-symmetrical case: \begin{equation} \,q_1\,=\,0.2\,;\,\,\,q_2\,=0.3\,-\,1\,. \end{equation} \end{itemize} As an example of the case of strong coupling, i.e. a large value of the triple vertex, we will also present the quantum solution for the symmetrical case of interaction at \begin{eqnarray}\label{Num11} \,&\,&\,\varrho\,=\,1\,\,\,\,(\mu\,=\,0.1\,,\,\lambda\,=\,0.1\,). \end{eqnarray} Now it is instructive to consider the spectrum of the RFT-0 theory at different $\varrho$. Table \ref{EFUN} presents the eigenvalues found for $\,\varrho\,=\,5\,$ as well as for $\,\varrho\,=\,1\,$ and $\,\varrho\,=\,3\,$. \begin{table}[t] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \, & \,& \,& \, &\, &\, &\, &\, &\, &\, &\,\\ $\varrho\,$ & $E_0$ & $E_1$ & $E_2$ & $E_3$ & $E_4$ & $E_5$ & $E_6$ & $E_7$ & $E_8$ & $E_9$ \\ \, & \,& \,& \, &\, &\, &\, &\, &\, &\, & \\ \hline \, & \,& \,& \, &\, &\, &\, &\, &\, &\, &\,\\ $\varrho\,=\,1\,$ & 0.0546 & 0.292 & 0.609 & 0.997 & 1.447 & 1.95 & 2.503 & 3.103 & 3.745 & 4.427 \\ \, & \,& \,& \, &\, &\, &\, &\, &\, &\, &\,\\ \hline \, & \,& \,& \, &\, &\, &\, &\, &\, &\, &\, \\ $\varrho\,=\,3\,$ & 0.00217 & 0.142 & 0.297 & 0.507 & 0.759 & 1.047 & 1.37 & 1.724 & 2.108 & 2.579\\ \, & \,& \,& \, &\, &\, &\, &\, &\, &\, &\,\\ \hline \, & \,& \,& \, &\, &\, &\, &\, &\, &\, &\, \\ $\varrho\,=\,5\,$ & $ 1.36\cdot\,10^{-6}$ & 0.144 & 0.178 & 0.276 & 0.386 & 0.519 & 0.671 & 0.841 & 1.03 & 1.228 \\ \, & \,& \,& \, &\, &\, &\, &\, &\, &\, &\,\\ \hline \end{tabular} \caption{\it The eigenvalues of Eq.\ref{Quant4} for different values of the parameter $\varrho$.} \label{EFUN} \end{center} \end{table} The asymptotic behavior of the amplitude is clearly seen from Fig.\ref{EFUN77}, which represents the behavior of the ground state as a function of $\,\varrho\,$. \begin{figure}[hptb] \begin{center} \psfig{file=eigenval1.eps,width=100mm} \end{center} \caption{\it\large The ground state of RFT as a function of $\varrho$.} \label{EFUN77} \end{figure} While important at small rapidity, the contributions of all excited states, i.e. states with eigenvalues $E_i,\,\,\,i=1..\infty\,$, decrease very rapidly to zero at larger rapidities. Therefore, at asymptotically large rapidity only the ground state survives, whose nonzero value originates from the tunneling effect, of order $e^{-\varrho^2\,/\,2}$, which arises between the Coulomb and harmonic wells of the potential of Eq.\ref{Quant4}, see \cite{0dimc,0dimq,0dimi}. At large enough rapidity the amplitude is zero for any value of $\varrho$: for small $\varrho$ this happens already at small rapidities, while at large $\varrho$ a huge rapidity is needed. As we will see further, this is a particular property of the model with only the triple Pomeron vertex; the theory with the additional quaternary vertex has a precisely zero ground state. \subsection{Numerical results: loops versus ''net'' diagrams} The results of the calculations of the scattering amplitude $iT$ for the quantum and classical cases are presented in Fig.\ref{EFUN3}-Fig.\ref{EFUN6} for symmetrical values of the external sources. In Fig.\ref{EFUN66}-Fig.\ref{EFUN666} the results for the non-symmetrical case are represented. These results are presented in the form of 3D plots as well as in the form of contour plots.
The white color of the contour plots corresponds to the maximum value of the z-axis function of the 3D plot, and the black color, correspondingly, to the minimum value of the same function. The amplitude is given as a function of two variables, the rapidity and the value of the sources. In the symmetrical case the sources are the same; in the non-symmetrical case we fix one source, $q_1\,=\,0.2\,$, and the second variable is the value of the second, external source $q_2\,=\,0.3\,-\,1\,$. We also present the absolute ratio of the difference between the quantum and classical amplitudes to the quantum amplitude, $|\Psi(Y,\,q)\,-\,\Psi_{c}(Y,\,q_{c},\,p_{c})|\,/\,\Psi(Y,\,q)$, in order to illustrate the relative contribution of the loops to the full solution in comparison with the classical solution. The rapidity variable Y of the plots is scaled, i.e. in the plots the variable $\,\mu\,y\,$ was used instead of the usual rapidity $y$. Therefore, in order to obtain the value of the amplitude at some physical rapidity one needs to divide the Y variable of the plot by the intercept $\mu$: $Y_{phys}\,=\,Y_{plot}\,/\,\mu\,$. For any value of the intercept $\mu$ we thus have the plot of the amplitude for rapidities $Y_{phys}\,=\,Y_{plot}\,/\,\mu\,$ at the triple Pomeron vertex $\lambda\,=\,\mu\,/\,\varrho\,\,$. We see that for the maximum value $Y_{plot}\,=\,8\,$ at an intercept of, for example, $\,\mu\,=\,0.3\,$, the physical rapidity is large, $\,Y_{phys}\,\sim\,27\,$, and covers the rapidity range which may be of interest in high energy physics. \begin{figure}[hptb] \begin{tabular}{ c c} \psfig{file=QS3VSym.eps,width=115mm} & \psfig{file=QS3VSymContour_1.eps ,width=60mm}\\ & \\ \fig{EFUN3}-a & \fig{EFUN3}-b \\ & \\ \end{tabular} \caption{\it The quantum amplitude $\Psi(Y,\,q)\,$ of the 3P vertex model in the form of 3D and contour plots at $\varrho\,=\,5\,$ as functions of the scaled rapidity Y and symmetrical values of the external sources $q_1=q_2$. } \label{EFUN3} \end{figure} \begin{figure}[hptb] \begin{tabular}{ c c} \psfig{file=SemS3VSym_1.eps,width=115mm} & \psfig{file=SemS3VSymContour_1.eps,width=60mm}\\ & \\ \fig{EFUN4}-a & \fig{EFUN4}-b \\ & \\ \end{tabular} \caption{\it The classical amplitude $\,\Psi_{c}(Y,\,q_{c},\,p_{c})\,$ of the 3P vertex model in the form of 3D and contour plots at $\varrho\,=\,5\,$, presented as functions of the scaled rapidity Y and symmetrical values of the external sources $q_1=q_2$.} \label{EFUN4} \end{figure} \begin{figure}[hptb] \begin{tabular}{ c c} \psfig{file=Ratio3V_1.eps,width=115mm} & \psfig{file=Ratio3V_1Contour.eps ,width=60mm}\\ & \\ \fig{EFUN5}-a & \fig{EFUN5}-b \\ & \\ \end{tabular} \caption{\it The ratio $|\Psi(Y,\,q)\,-\,\Psi_{c}(Y,\,q_{c},\,p_{c})|\,/\,\Psi(Y,\,q)$ in the 3P vertex model, represented by a 3D plot and a contour plot. The plots are presented as functions of the scaled rapidity Y at $\varrho\,=\,5\,$ and at $q_1=q_2=0.2-1$.} \label{EFUN5} \end{figure} \begin{figure}[hptb] \begin{tabular}{ c c} \psfig{file=Ratio3V_2.eps,width=90mm} & \psfig{file=Ratio3V_3.eps,width=90mm} \\ & \\ \fig{EFUN6}-a & \fig{EFUN6}-b \\ & \\ \end{tabular} \caption{\it The same ratio $|\Psi(Y,\,q)\,-\,\Psi_{c}(Y,\,q_{c},\,p_{c})|\,/\,\Psi(Y,\,q)$ in the 3P vertex model, represented by 3D plots, now at the values of the sources $q_1=q_2=0.1-0.4$ in Fig.7-a and at the values of the sources $q_1=q_2=0.4-1$ in Fig.7-b correspondingly. The plots are presented as functions of the scaled rapidity Y at $\varrho\,=\,5\,$.
} \label{EFUN6} \end{figure} \begin{figure}[hptb] \begin{tabular}{ c c} \psfig{file=QS3VAntisym_1.eps,width=115mm} & \psfig{file=QS3VAntisymContour_1.eps,width=60mm}\\ & \\ \fig{EFUN66}-a & \fig{EFUN66}-b \\ & \\ \end{tabular} \caption{\it The quantum amplitude $\Psi(Y,\,q)\,$ of the 3P vertex model for the non-symmetrical case, represented by a 3D plot and a contour plot. The plots are presented as functions of the scaled rapidity Y at $\varrho\,=\,5\,$ and at fixed $q_1=0.2$ with $q_2=0.3-1$.} \label{EFUN66} \end{figure} \begin{figure}[hptb] \begin{tabular}{ c c} \psfig{file=Ratio3VAntiSym_1.eps,width=115mm} & \psfig{file=Ratio3VAntiSymContour_1.eps,width=60mm} \\ & \\ \fig{EFUN666}-a & \fig{EFUN666}-b \\ & \\ \end{tabular} \caption{\it The ratio $|\Psi(Y,\,q)\,-\,\Psi_{c}(Y,\,q_{c},\,p_{c})|\,/\,\Psi(Y,\,q)$ in the 3P vertex model, represented by a 3D plot and a contour plot at non-symmetrical values of the sources: $q_1=0.2\,$ is fixed and $q_2=0.3-1$ is a variable. The plots are presented as functions of the scaled rapidity Y at $\varrho\,=\,5\,$.} \label{EFUN666} \end{figure} We also present the quantum amplitude calculated for the small value $\varrho\,=\,1$. This value of $\varrho\,$ means a large value of the triple Pomeron vertex and a relatively large value of the ground state energy, see Table \ref{EFUN}. From the plot in Fig.\ref{EFUN7} it is clearly seen that at large enough rapidities the amplitude approaches zero, as it must for such a value of $\varrho\,$. \begin{figure}[hptb] \begin{tabular}{ c c} \psfig{file=QS3VSymRo1.eps,width=120mm} & \psfig{file=QS3VSymContourRo1.eps ,width=55mm}\\ & \\ \fig{EFUN7}-a & \fig{EFUN7}-b \\ & \\ \end{tabular} \caption{\it The quantum amplitude $\Psi(Y,\,q)\,$ of the 3P vertex model in the form of 3D and contour plots at $\varrho\,=\,1\,$ as functions of the scaled rapidity Y and symmetrical values of the external sources $q_1=q_2$.} \label{EFUN7} \end{figure} \subsection{''Effective'' Pomeron propagator} With the knowledge of the spectrum of RFT, the calculation of the ''effective'' Pomeron propagator $P_{eff}(y,q)$ is an easy task. First of all, we change the initial condition for our equations; instead of Eq.\ref{Quant6} we now have \begin{equation}\label{Prop1} \Psi(y=0,q)\,=\,I(q)\,=\,I(q\,,q_{ext})\,=\,q\,q_{ext}\,. \end{equation} The scattering amplitude $\Psi(y,q)$ now describes the Green's function for the transition from one Pomeron, created from the source $q$ at rapidity zero, to any number N of Pomerons interacting with the sources $q_{ext}$ at final rapidity. \begin{figure}[hptb] \begin{center} \begin{tabular}{ c c} \psfig{file=green.eps,width=80mm} & \psfig{file=greenro.eps,width=80mm}\\ & \\ \fig{EFUN8}-a & \fig{EFUN8}-b \\ & \\ \end{tabular} \end{center} \caption{\it The $P_{amp}(y,q)$ and $P_{eff}(y,q)$ functions of the 3P model at $\varrho\,=\,5\,$ and $q_1\,=\,q_{2}\,=1/ \varrho\,=0.2\,$ as functions of the scaled rapidity Y.} \label{EFUN8} \end{figure} Therefore, in order to obtain the requested ''effective'' Pomeron propagator, with only one Pomeron interacting with one source at final rapidity, we also need to take the derivative of $\Psi(y,q)$ with respect to the Pomeron field $q$ at $\,q\,=\,0\,$ and multiply the obtained function by the Pomeron field $q$: \begin{equation}\label{Prop2} P_{eff}(y,q)\,=\,\left(\frac{d\,\Psi(y,q)}{d\,q}\right)_{q=0}\,\cdot\,q\,\,. \end{equation} The $P_{eff}(y,q)$ function is simply the second term of the Taylor expansion of $\Psi(y,q)$ around $q=0$.
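Numerically, Eq.\ref{Prop2} reduces to a one-sided finite difference, since $\Psi(y,0)\,=\,0$ by the boundary condition of Eq.\ref{Quant2}. A minimal sketch, assuming a hypothetical helper \texttt{psi(y, q)} which evaluates the truncated spectral sum Eq.\ref{Quant13} with the initial condition Eq.\ref{Prop1}:
\begin{verbatim}
def p_eff(psi, y, q, eps=1e-4):
    # dPsi/dq at q = 0 by a forward difference, using Psi(y, 0) = 0,
    # multiplied by the Pomeron field q, cf. Eq.(Prop2)
    return (psi(y, eps) / eps) * q
\end{verbatim}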
Dividing the obtained propagator $P_{eff}(y,q)$ by the sources $q\,q_{ext}\,$, we obtain the Green's function of the theory, which does not depend on the values of the sources: \begin{equation}\label{Prop22} P_{amp}(y,q)\,=\,\frac{P_{eff}(y,q)}{q\,q_{ext}\,}. \end{equation} $P_{amp}(y,q)$ does not depend on the sources of the Pomeron fields, and in order to obtain the requested propagator with given sources we simply need to multiply $P_{amp}(y,q)$ by the values of these sources $q\,q_{ext}\,$. Fig.\ref{EFUN8} presents a plot of the $P_{amp}(y,q)$ function and a plot of the $P_{eff}(y,q)$ function at $q\,=\,q_{ext}\,=1/ \varrho\,=0.2\,$. An interesting question which we can address in these calculations is the question of the importance of $P_{eff}(y,q)$ in RFT-0. Namely, let us define with the help of $P_{eff}(y,q)$ the eikonalized ''effective'' Pomeron amplitude \begin{equation} \Psi_{eff}^{eik}(y,q)\,=\,1-e^{-P_{eff}(y,q)}\, \label{EikProp} \end{equation} which accounts for the loops in the ''effective'' propagator but does not account for the interactions between different propagators and between propagators and external sources. This amplitude could clarify the precision of this approximation in comparison with the full solution. The required ratio is presented in Fig.\ref{EFUN10}. \begin{figure}[hptb] \begin{tabular}{ c c} \psfig{file=EikRatio.eps,width=115mm} & \psfig{file=ContEikRatio.eps,width=60mm}\\ & \\ \fig{EFUN10}-a & \fig{EFUN10}-b \\ & \\ \end{tabular} \caption{\it The ratio $|\Psi(Y,\,q)\,-\,\Psi_{eff}^{eik}(y,q)\,|\,/\,\Psi(Y,\,q)$ in the 3P model in the form of 3D and contour plots at $\varrho\,=\,5\,$ as functions of the scaled rapidity Y and symmetrical values of the external sources $q_1=q_2=0.1-0.2$.} \label{EFUN10} \end{figure} \section{Solution of RFT in zero transverse dimension with triple and quaternary Pomeron vertices} In this section we discuss the RFT-0 model given by the following Lagrangian: \begin{equation}\label{FV1} L\,=\,q\,\dot{p}\,+\,\mu\,q\,p\,-\,\lambda\,q\,(q\,+\,p)\,p\,+ \,\lambda^{'}\,q^2\,p^2\,, \end{equation} where the new quaternary Pomeron vertex $\,\lambda^{'}\,$ is introduced in comparison with Eq.\ref{Lag2}. In terms of the $q,\,p\,$ operators, see Eq.\ref{Oper}, the Hamiltonian of the problem has the form \begin{equation}\label{FV2} H=\,(\mu\,q\,-\,\lambda\,q^{2})\,\frac{d}{dq}\,+ \,(\lambda\,q\,-\,\lambda^{'}\,q^{2})\,\frac{d^2}{dq^2}\,\,, \end{equation} and the full quantum solution of the theory will coincide with the solution of the quantum mechanical problem determined by the new Hamiltonian Eq.\ref{FV2}: \begin{equation}\label{FV3} H\,\Psi\,=\,\frac{d\,\Psi}{d\,y}\,\,, \end{equation} where the function \begin{equation} \Psi(y,q)\,=\,\sum_{n=0}^{n=\infty}\,\lambda_{n}\, e^{\,-\,E_{n}\,y}\,\phi_{n}(q)\,\, \end{equation} is the full quantum amplitude of two scattering particles. As before, $E_{n}$ and $\phi_{n}(q)$ denote the eigenvalues and eigenfunctions of the Hamiltonian given by Eq.\ref{FV2}, and the coefficients $\lambda_{n}$ are the normalized projection coefficients of the eigenfunctions on the value of $\,\Psi\,$ at $\,y=0\,$. The value of the quaternary vertex $\,\lambda^{'}\,$ is important for our following calculations. If the vertex is very small, $\,\lambda^{'}\,<\,\lambda\,/\,\varrho\,$, then all the results of Section 2 stay the same with only very small corrections to the amplitude, as was known many years ago, see \cite{0dimq}. Our calculations support this conclusion.
Therefore, for our calculations it is more interesting to take another value of the vertex, the so-called ''magic'' value of $\,\lambda^{'}\,$, which is given by the following expression \begin{equation}\label{FV33} \lambda^{'}\,=\,\frac{\lambda}{\varrho}\,. \end{equation} The important property of the theory with this ''magic'' value of $\,\lambda^{'}\,$ is that in this theory, unlike the previous cases, the ground state is precisely zero, as we will see later. Another important feature of the Hamiltonian Eq.\ref{FV2} with the ''magic'' value of the vertex is that this model has a precise correspondence with the s-channel reaction-diffusion models, see \cite{BMMSX}, and with the conformal approach for the QCD Pomeron, see \cite{Korch,BondLang}. As in the previous model, in this section we will also consider only the calculations with the value of the parameter $\varrho\,=\,5$. Nevertheless, the qualitative behavior of the amplitude at smaller values of $\varrho$ will also be clear. \subsection{The quantum solution of the second model} The quantum solution of the Hamiltonian Eq.\ref{FV2} is a solution of the following second order differential equation \begin{equation}\label{QFV1} \,\lambda^{'}\,q\,(\,\lambda\,/\,\lambda^{'}\,-\,q\,)\, \,\frac{d^2\,\Psi(y,q)}{dq^2}\,+\,\,\lambda\,q\,( \varrho\,-\,q)\,\frac{d\,\Psi(y,q)}{dq}\,=\,\frac{d\,\Psi(y,q)}{d\,y}\,\, \end{equation} with the initial and boundary conditions on the function $\Psi(y,q)$: \begin{eqnarray}\label{QFV2} \,&\,&\,\Psi(y=0,q)\,=\,\sum_{n=0}^{n=\infty}\,\lambda_{n}\, \,\phi_{n}(q)\,=\,I(q)\,\\ \,&\,&\,\Psi(y,q\,\rightarrow\,0)\,\propto\,q\,\\ \,&\,&\,\Psi(y,q\,\rightarrow\,\varrho)\,\propto \,const.\,\,, \end{eqnarray} where, as before, the form of the function $I(q)$ depends on the particular physical problem. A very important property of Eq.\ref{QFV1} is the existence of a ground state $\phi_{0}(q)$ with zero energy, $E_0\,=\,0\,$. In this case the solution of Eq.\ref{QFV1} is trivial: \begin{equation}\label{QFV3} \phi_{0}(q)\,=\,1\,-\,e^{-\varrho\,q}\,. \end{equation} This ground state is not orthogonal to the other eigenfunctions of the Hamiltonian, $\phi_{i},\,\,i=\,1\,..\,\infty\,$, which are orthogonal to each other: \begin{equation}\label{QFV4} \int_{0}^{\varrho}\,\phi_{i}(q)\,\phi_{j}(q)\,F_{W}(q,\varrho)\,dq\,= \,Const\,\delta_{i\,j}\, \end{equation} with the weight function \begin{equation} F_{W}(q,\varrho)\,=\,\frac{e^{\varrho\,q}}{q\,(\varrho\,-\,q)}\,, \end{equation} where \begin{eqnarray}\label{QFV55} \,&\,&\,\phi_{i}(q=0)\,=\,0,\,\,\,\,i=\,1\,..\,\infty\,;\\ \,&\,&\,\phi_{i}(q=\varrho)\,=\,0,\,\,\,\,i=\,1\,..\,\infty\,. \end{eqnarray} Using these properties of the eigenfunctions we obtain for the $\,\lambda_0$ projection coefficient: \begin{equation}\label{QFV5} \lambda_0\,=\,\frac{I(\varrho)}{\phi_{0}(\varrho)}\,=\,\frac{I(\varrho\,,q_{ext})} {\,1\,-\,e^{-\varrho^2}\,} \end{equation} All other projection coefficients, $\lambda_{i},\,\,i=\,1\,..\,\infty\,$, may be calculated using the following expression: \begin{equation}\label{QFV6} \lambda_{i}(q_{ext})\,=\,\frac{\int_{0}^{\varrho}\,\phi_{i}(q)\,I(q\,,q_{ext})\, F_{W}(q,\varrho)\,dq\, }{\int_{0}^{\varrho}\,\phi_{i}^{2}(q)\,F_{W}(q,\varrho)\,dq\,}\,-\, \lambda_0\,\frac{\int_{0}^{\varrho}\,\phi_{i}(q)\,\phi_{0}(q)\, F_{W}(q,\varrho)\,dq\, }{\int_{0}^{\varrho}\,\phi_{i}^{2}(q)\,F_{W}(q,\varrho)\,dq\,}\,.
\end{equation} For the functions $\phi_{i},\,\,i=\,1\,..\,\infty\,$, the equation Eq.\ref{QFV1} takes the form \begin{equation}\label{QFV7} \,\frac{d^2\,\phi_i(q)}{dq^2}\,+\,\varrho\, \frac{d\,\phi_i(q)}{dq}\,=\,-\,\frac{E_i}{\lambda^{'}\,q\,(\varrho-q)}\,\phi_i(q)\,, \end{equation} and with the help of the transformation \begin{equation}\label{QFV8} \phi_i(q)\,=\,f_i(q)\,e^{-\varrho\,q\,/\,2} \end{equation} this equation acquires the following standard Schr\"odinger form: \begin{equation}\label{QFV9} \,\frac{d^2\,f_i(q)}{dq^2}\,-\,\frac{\varrho^2}{4}\,f_i(q)\,=\,- \,\frac{E_i}{\lambda^{'}\,q\,(\varrho-q)}\,f_i(q)\, \end{equation} for the $\,f_i(q)\,$ functions. The functions $f_i(q)$ vanish at the edges of the interval, \begin{eqnarray}\label{QFV10} \,&\,&\,f_{i}(q=0)\,=\,0,\,\,\,\,i=\,1\,..\,\infty\,;\\ \,&\,&\,f_{i}(q=\varrho)\,=\,0,\,\,\,\,i=\,1\,..\,\infty\,, \end{eqnarray} and they are orthogonal to each other on the interval from $\,0\,$ to $\,\varrho\,$ with the new weight function $F_{W}^{f}(q,\varrho)$ \begin{equation} F_{W}^{f}(q,\varrho)\,=\,\frac{1}{q\,(\varrho\,-\,q)}\,. \end{equation} We can also solve our problem using the Hermitian Hamiltonian defined by Eq.\ref{QFV9} with the projection coefficients \begin{equation}\label{QFV66} \lambda_{i}(q_{ext})\,=\,\frac{\int_{0}^{\varrho}\,f_{i}(q)\,I(q\,,q_{ext})\, \,e^{\varrho\,q\,/\,2}\,F_{W}^{f}(q,\varrho)\,dq\, }{\int_{0}^{\varrho}\,f_{i}^{2}(q)\,F_{W}^{f}(q,\varrho)\,dq\,}\,-\, \lambda_0\,\frac{\int_{0}^{\varrho}\,f_{i}(q)\,\phi_{0}(q)\, \,e^{\varrho\,q\,/\,2}\,F_{W}^{f}(q,\varrho)\,dq\, }{\int_{0}^{\varrho}\,f_{i}^{2}(q)\,F_{W}^{f}(q,\varrho)\,dq\,}\,. \end{equation} As was done in the previous section, solving the two-point boundary value problem for the differential equation Eq.\ref{QFV7} or Eq.\ref{QFV9}, we obtain the spectrum and eigenfunctions of the model and find the amplitude for different values of the sources $q_1,\,q_2\,$ and the parameter $\varrho$: \begin{equation}\label{QFV11} \Psi(y,q=q_2)\,=\,\frac{I(\varrho\,,q_{ext})\,(1\,-\,e^{-\varrho\,q}\,)\,} {\,1\,-\,e^{-\varrho^2}\,}\,+\, \,\sum_{n=1}^{n=\infty}\,\lambda_{n}(q_1)\, e^{-E_{n}\,y}\,\phi_{n}(q_2)\,\,. \end{equation} \subsection{The classical solution for the second model} The comparison between the contributions to the amplitude of the diagrams with and without loops may be performed again if we additionally calculate the classical solutions for the Lagrangian Eq.\ref{FV1}: \begin{equation}\label{CFV1} L\,=\,\frac{1}{2}\,q\,\dot{p}\,-\frac{1}{2}\,\dot{q}\,p\,+ \,\mu\,q\,p\,-\,\lambda\,q\,(q\,+\,p)\,p\,+\, \,\lambda^{'}\,q^2\,p^2\,+\, \,q(y)\,p_0(y)\,+\,q_0(y)\,p(y)\,\,. \end{equation} The equations of motion for the fields p and q in this case are the following: \begin{eqnarray}\label{CVF2} \,&\,&\,\dot{q}\,=\,\mu\,q\,-\,\lambda\,q^2\,-\,2\,\lambda\,q\,p\,+\,2\, \lambda^{'}\,q\,p^2\,\\ \,&\,&\,\dot{p}\,=\,-\,\mu\,p\,+\,\lambda\,p^2\,+\,2\,\lambda\,q\,p\,-\,2\, \lambda^{'}\,q^2\,p\,\\ \,&\,&\,q_0(y)\,=\,q_1\,\delta(y)\,\\ \,&\,&\,p_0(y)\,=\,q_2\,\delta(y-Y)\,\,.
\end{eqnarray} The solutions of these equations are, of course, different from the solutions of the equations of motion for the Lagrangian Eq.\ref{Class1}, but the representation of the amplitude $\Psi_{c}(Y)$ in terms of the three classical solutions $\{q^{i}_{c},p^{i}_{c}\}$ of the system Eq.\ref{CVF2} is the same as in Subsection 2.2 (see also Fig.\ref{Traj4}): \begin{equation}\label{CVF3} \Psi_{c}(Y)=1+exp\{-S(Y,\,q^{1}_{c}(y)\,,p^{1}_{c}(y))\}- exp\{-S(Y,\,q^{2}_{c}(y)\,,p^{2}_{c}(y))\}- exp\{-S(Y,\,q^{3}_{c}(y)\,,p^{3}_{c}(y))\}\,. \end{equation} Therefore, here we will not repeat all the steps of Subsection 2.2. The picture for the classical solution derived for the model with only the triple Pomeron vertex is the same for the model with both triple and quaternary vertices. The only difference between the two models is the value of the critical rapidity $Y_c$ from which the additional solutions arise, breaking the target-projectile symmetry of the initial symmetrical solution, but for our consideration this is not important. \begin{figure}[t] \begin{center} \psfig{file=solution8.eps,width=100mm} \end{center} \caption{\it Classical solutions of the RFT-0 with triple and quaternary Pomeron vertices: the $\{ q,p\}$ trajectories obtained for $Y=5\,>\,Y_c\,$, $q=p=0.7$, $\varrho\,=\,5$.} \label{Traj4} \end{figure} \subsection{Parameters and asymptotic behavior of the amplitude in the second model} As in the previous case, for the second model we will consider the quantum amplitude $\,\Psi(Y,\,q)\,$ for the following value of $\varrho\,=\,\mu\,/\,\lambda\,$: \begin{eqnarray}\label{CVF4} \,&\,&\,\varrho\,=\,5\,\,\,\,(\mu\,=\,0.2\,,\,\lambda\,=\,0.04\,); \end{eqnarray} and for the following values of the external sources (only the symmetrical case): \begin{equation}\label{CVF5} \,q_1\,=\,q_2\,=\,0.1\,-\,1\,. \end{equation} The ground state of this model has a precisely zero eigenvalue, as we showed before, see Eq.\ref{QFV3}; for the other eigenvalues of the model see Table \ref{CVF6}. \begin{table}[t] \begin{center} \begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline \, & \,& \,& \, &\, &\, &\, &\, &\, &\, &\,\\ $\varrho\,$ & $E_0$ & $E_1$ & $E_2$ & $E_3$ & $E_4$ & $E_5$ & $E_6$ & $E_7$ & $E_8$ & $E_9$ \\ \, & \,& \,& \, &\, &\, &\, &\, &\, &\, & \\ \hline \, & \,& \,& \, &\, &\, &\, &\, &\, &\, &\, \\ $\varrho\,=\,5\,$ & 0 & 0.182 & 0.189 & 0.307 & 0.338 & 0.42 & 0.513 & 0.626 & 0.756 & 0.905 \\ \, & \,& \,& \, &\, &\, &\, &\, &\, &\, &\,\\ \hline \end{tabular} \caption{\it The eigenvalues of Eq.\ref{QFV7} for the value of the parameter $\varrho$ equal to 5.} \label{CVF6} \end{center} \end{table} Indeed, now, at asymptotically large rapidity, the amplitude is fully defined by the ground state: \begin{equation}\label{CVF7} \Psi_{asymp.}(q)\,=\,\lambda_0\,\phi_{0}(q)\,= \,\frac{I(\varrho\,,q_{ext})\,(\,1\,-\,e^{-\varrho\,q}\,)} {\,1\,-\,e^{-\varrho^2}\,}\,. \end{equation} In the case of the eikonal type initial conditions, the initial condition $I(q\,,q_{ext})$ at $q\,=\,\varrho$ has the following form \begin{equation}\label{CVF8} I(\varrho\,,q_{ext})\,=\,1\,-\,e^{-\varrho\,q_{ext}}\,, \end{equation} and, therefore, we obtain the following asymptotic behavior of our quantum amplitude: \begin{equation}\label{CVF9} \Psi_{asymp.}(q)\,= \,\frac{(1\,-\,e^{-\varrho\,q_{ext}})\,(1\,-\,e^{-\varrho\,q})} {\,1\,-\,e^{-\varrho^2}\,}\,.
\end{equation} For the ''effective'' Pomeron propagator $P_{eff}(y,q)$ the initial condition is different: \begin{equation}\label{CVF10} I(\varrho\,,q_{ext})\,=\,\varrho\,q_{ext}\,. \end{equation} Using the definition of the propagator given by Eq.\ref{Prop2}, at asymptotically large rapidity we obtain: \begin{equation}\label{CVF11} P_{eff}^{asymp.}(q)\,=\, \,\frac{(\varrho\,q_{ext})\,(\varrho\,q)} {\,1\,-\,e^{-\varrho^2}\,}\,. \end{equation} As we see, for any particular values of the sources $q_1,q_2$ and the parameter $\varrho$, neither the quantum amplitude nor the full Pomeron propagator decreases to zero at asymptotically large rapidity; instead they approach the constant values defined by Eq.\ref{CVF9} and Eq.\ref{CVF11} correspondingly. \subsection{Numerical results for the second model} The results of our calculations of the scattering amplitude $iT$ for the quantum case are presented in the same form as for the first model. Due to the similarities between these two models, in this section we present only the quantum solution for the amplitude for the symmetrical case of interactions, see Fig.\ref{NFV1}. We also present the ratio of the difference between the quantum amplitude of the triple Pomeron vertex model and the quantum amplitude of the second model to the quantum amplitude of the first model, $|\,\Psi^{3P}\,-\,\Psi^{4P}\,|\,/\,\Psi^{3P}$, see Fig.\ref{NFV2}. \begin{figure}[hptb] \begin{tabular}{ c c} \psfig{file=QS4VSym.eps ,width=115mm} & \psfig{file=QS4VSymContour_1.eps ,width=60mm}\\ & \\ \fig{NFV1}-a & \fig{NFV1}-b \\ & \\ \end{tabular} \caption{\it The quantum amplitude $\Psi(Y,\,q)\,$ of the second model in the form of 3D and contour plots at $\varrho\,=\,5\,$ as functions of the scaled rapidity Y and symmetrical values of the external sources $q_1=q_2$.} \label{NFV1} \end{figure} \begin{figure}[hptb] \begin{center} \psfig{file=QSRatio3V_4V.eps,width=170mm} \end{center} \caption{\it The ratio of the quantum amplitudes $|\,\Psi^{3P}\,-\,\Psi^{4P}\,|\,/\,\Psi^{3P}$ in the form of a 3D plot at $\varrho\,=\,5\,$ as a function of the scaled rapidity Y and symmetrical values of the external sources $q_1=q_2$.} \label{NFV2} \end{figure} \subsection{''Effective'' Pomeron propagator in the second model} We define the ''effective'' Pomeron propagator as in the previous model, \begin{equation}\label{PFV1} P_{eff}(y,q)\,=\,\left(\frac{d\,\Psi(y,q)}{d\,q}\right)_{q=0}\,\cdot\,q\,\,, \end{equation} with the initial condition given by \begin{equation}\label{PFV2} \Psi(y=0,q)\,=\,I(q)\,=\,I(q\,,q_{ext})\,=\,q\,q_{ext}\,. \end{equation} The asymptotic behavior of $\,P_{eff}(y,q)\,$ at asymptotically large rapidities is defined by the expression Eq.\ref{CVF11}: \begin{equation} P_{eff}^{asymp.}(q)\,=\,\,\frac{(\varrho\,q_{ext})\,(\varrho\,q)} {\,1\,-\,e^{-\varrho^2}\,}\,. \end{equation} For the values $q_{ext}\,=\,q\,=\,1\,/\,\varrho\,$, we find that at large rapidity this function reaches the unitarity limit, see the plot in Fig.\ref{NFV3} and a more detailed derivation in ~\cite{BMMSX}. The plot of the Green's function of the theory, \begin{equation}\label{Prop222} P_{amp}(y,q)\,=\,\frac{P_{eff}(y,q)}{q\,q_{ext}\,}, \end{equation} is presented in Fig.\ref{NFV3}.
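The statement that the ground state Eq.\ref{QFV3} has exactly zero energy at the ''magic'' vertex Eq.\ref{FV33} is easy to verify symbolically. A short check of ours (a sketch, not part of the original calculations) that the q-space operator of Eq.\ref{QFV1} annihilates $\phi_{0}(q)\,=\,1\,-\,e^{-\varrho\,q}$ at $\lambda^{'}\,=\,\lambda\,/\,\varrho$:
\begin{verbatim}
import sympy as sp

q, rho, lam = sp.symbols('q rho lambda', positive=True)
lamp = lam / rho                      # the ''magic'' quaternary vertex
phi0 = 1 - sp.exp(-rho * q)           # candidate zero-energy ground state

# q-space operator of Eq.(QFV1) acting on phi0; E_0 = 0 iff this vanishes
H_phi0 = (lamp * q * (lam / lamp - q) * sp.diff(phi0, q, 2)
          + lam * q * (rho - q) * sp.diff(phi0, q))
print(sp.simplify(H_phi0))            # prints 0
\end{verbatim}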
\begin{figure}[hptb] \begin{tabular}{ c c} \psfig{file=green4v.eps,width=85mm} & \psfig{file=green4vro.eps,width=85mm}\\ & \\ \fig{NFV3}-a & \fig{NFV3}-b \\ & \\ \end{tabular} \caption{\it The $P_{amp}(y,q)$ and $P_{eff}(y,q)$ functions of the second model at $\varrho\,=\,5\,$ and $q_1\,=\,q_{2}\,=1/ \varrho\,=0.2\,$ as functions of the scaled rapidity Y.} \label{NFV3} \end{figure} \section{Diffractive dissociation process in RFT-0} As an example of the application of the methods developed in the paper, we present calculations of the diffractive dissociation process in the RFT-0 model with only the triple Pomeron vertex. The algorithm of the calculations is defined as follows. We have the quantum solution of the theory \begin{equation}\label{DD1} \,\Psi_{1}(y,q)\,=\,\sum_{n=0}^{n=\infty}\,\lambda_{n}\, e^{\,-\,E_{n}\,y}\,\phi_{n}(q)\,\, \end{equation} defined over the whole range of rapidity $Y$ at some values of the sources $q_1$ and $q_2$. Let us now consider this solution at the rapidity $Y_1\,<\,Y$. With the help of $\,\Psi_{1}(Y_1,q)\,$ we define another function $\,\Psi_{2}(y,q)\,$, \begin{equation}\label{DD2} \,\Psi_{2}(y,q)\,=\,\sum_{n=0}^{n=\infty}\,\tilde{\lambda}_{n}\, e^{\,-\,E_{n}\,\left( y\,-\,Y_1\right)}\,\phi_{n}(q)\, \end{equation} on the rapidity interval $\,Y_1<\,\,y\,<\,Y\,$ with the following initial condition: \begin{eqnarray}\label{DD3} \,&\,&\,\Psi_{2}(y=Y_1,q)\,= \,\sum_{n=0}^{n=\infty}\,\tilde{\lambda}_{n}\, \,\phi_{n}(q)\,=\, \,(\,\Psi_{1}(Y_1,q)\,)^{2}\,\,. \end{eqnarray} An illustration of this construction is given in Fig.\ref{diffr}. \begin{figure}[t] \begin{center} \psfig{file=diffr.eps,width=110mm} \end{center} \caption{\it The single diffractive dissociation process for rapidity $Y_1$ at total rapidity $Y$.} \label{diffr} \end{figure} The only numbers that we now need to calculate are the coefficients $\tilde{\lambda}_{n}\,$. Their calculation is trivial; they are simply \begin{equation}\label{DD4} \tilde{\lambda}_n\,(Y_1)=\,\frac{\int_{0}^{\infty}\,\phi_{n}(q)\, (\,\Psi_{1}(Y_1,q)\,)^{2}\,e^{-(q-\varrho)^2\,/4}\, F_{W}(q,\varrho)\,dq\, }{\int_{0}^{\infty}\,\phi_{n}^{2}(q)\,F_{W}(q,\varrho)\, e^{-(q-\varrho)^2\,/2}\,dq\,}\,\,, \end{equation} where the weight function $\,F_{W}(q,\varrho)\,$ is the same as in Eq.\ref{Quant9}. The coefficients $\tilde{\lambda}_{n}\,$ fully determine the requested function $\,\Psi_{2}(y,q)\,$, which represents the differential cross section of the single diffraction process at the given and fixed rapidity interval $$ y=Y_2=Y-Y_1\,. $$ Integrating this function over the rapidity interval $\,Y_1<\,\,y\,<\,Y\,$, we obtain as the answer the sum of the total diffractive dissociation cross section on this rapidity interval and the elastic cross section, \begin{equation} \sigma_{SD}^{Tot}\,+\,\sigma_{el}\,=\,\int_{Y_1}^{Y}\,\Psi_{2}(y,q)\,dy\,. \end{equation} So, the value of the single diffractive cross section integrated over the rapidity interval $Y_2$ is the following \begin{equation}\label{DD5} \sigma_{SD}^{Tot}=\,\int_{Y_1}^{Y}\,\Psi_{2}(y,q)\,dy\,-\left( \Psi_{1}(Y,q)\right)^{2}\,, \end{equation} where $Y$ is the total rapidity of the process, $Y_1$ is the rapidity ''gap'' of the process, $Y_2$ is the value of the rapidity taken by the produced diffractive state, and the elastic cross section of the process is defined as $\sigma_{el}=\left( \Psi_{1}(Y,q)\right)^{2}$. In general we do not expect that the numbers given by the RFT-0 theory will be correct.
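The projection Eq.\ref{DD4} is a straightforward quadrature. A numerical sketch (our assumption: the grid q, the eigenfunctions and the squared amplitude $(\Psi_{1}(Y_1,q))^2$ are already available on that grid, e.g. from the finite-difference solver sketched in Subsection 2.1):
\begin{verbatim}
import numpy as np
from scipy.integrate import trapezoid

def lambda_tilde(phi_n, psi1_sq, q, rho):
    # Eq.(DD4): projection of (Psi_1)^2 on the eigenfunction phi_n,
    # with the weight function F_W of Eq.(Quant9)
    w = np.exp(-(q - rho)**2 / 2) / q
    num = trapezoid(phi_n * psi1_sq * np.exp(-(q - rho)**2 / 4) * w, q)
    den = trapezoid(phi_n**2 * w * np.exp(-(q - rho)**2 / 2), q)
    return num / den
\end{verbatim}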
The only interesting quantity in the given model, therefore, is the ratio of the single diffractive cross section to the total cross section, \begin{equation}\label{DD6} R(Y_1)\,=\,\frac{\sigma_{SD}^{Tot}}{\sigma_{tot}}\,, \end{equation} where $\sigma_{tot}=2\Psi_{1}(Y,q)$. Considering this ratio we cannot prove, of course, that we would obtain the same value of $R$ in QCD as well, but, nevertheless, it is interesting to see what new information about this ratio we obtain using the full quantum solution for the amplitudes. Another interesting ratio, which we can calculate in the given framework, is the ratio of the differential single diffraction cross section to the total cross section: \begin{equation}\label{DD7} R_1(Y_1)\,=\,\frac{\Psi_{2}(Y_2)}{\sigma_{tot}}\,. \end{equation} In the $R_1$ ratio the dependence on the diffractive mass $Y_2$ is present, and it allows us to trace the energy dependence of the ratio of the diffractive state to the total cross section. Therefore, the following calculations are presented. We fix the value of the diffractive state $Y_2$ equal to 1-4 units of scaled rapidity as one variable, and as the second variable in the 3D plot we consider the value of the total rapidity $Y$, which we take equal to 4-8 units. These plots we make for two different symmetrical values of the external sources: 0.2 and 0.7. The obtained plots, see Fig.\ref{diffr1} for the ratio $R$ and Fig.\ref{diffr2} for the ratio $R_1$, show the changes of the ratios as the total rapidity changes. \begin{figure}[hptb] \begin{tabular}{ c c} \psfig{file=TDifraction0_2.eps ,width=115mm} & \psfig{file=ContTDifraction0_2.eps,width=60mm}\\ & \\ \fig{diffr1}-a & \fig{diffr1}-b \\ & \\ \psfig{file=TDifraction0_7.eps ,width=115mm} & \psfig{file=ContTDifraction0_7.eps,width=60mm}\\ & \\ \fig{diffr1}-c & \fig{diffr1}-d \\ & \\ \end{tabular} \caption{\it The ratio $R$ for the case of symmetrical sources $q_1=q_2=0.2$ in ~\fig{diffr1}-a - ~\fig{diffr1}-b and for the symmetrical sources $q_1=q_2=0.7$ in ~\fig{diffr1}-c - ~\fig{diffr1}-d correspondingly. The ratios are plotted at $\varrho\,=\,5\,$ as functions of the scaled rapidity Y and the scaled rapidity variable $Y_2$.} \label{diffr1} \end{figure} \begin{figure}[hptb] \begin{tabular}{ c c} \psfig{file=DDifraction0_2.eps ,width=115mm} & \psfig{file=ContDDifraction0_2.eps,width=60mm}\\ & \\ \fig{diffr2}-a & \fig{diffr2}-b \\ & \\ \psfig{file=DDifraction0_7.eps ,width=115mm} & \psfig{file=ContDDifraction0_7.eps,width=60mm}\\ & \\ \fig{diffr2}-c & \fig{diffr2}-d \\ & \\ \end{tabular} \caption{\it The ratio $R_1$ for the case of symmetrical sources $q_1=q_2=0.2$ in ~\fig{diffr2}-a - ~\fig{diffr2}-b and for the symmetrical sources $q_1=q_2=0.7$ in ~\fig{diffr2}-c - ~\fig{diffr2}-d correspondingly. The ratios are plotted at $\varrho\,=\,5\,$ as functions of the scaled rapidity Y and the scaled rapidity variable $Y_2$.} \label{diffr2} \end{figure} \section{Discussion of results} The main results of the paper are the following. First of all, we obtained the full quantum solutions for the models and compared them with the classical solutions, illustrating the relative contribution of the loops to the amplitude. In order to find a useful approximation to the full quantum solution we also considered an eikonalized amplitude which was built with the help of the full two-point Green's function. All these amplitudes were calculated for different values of the external vertices, which allows one to trace the applicability of each approximation separately in different parameter regions.
Another important result is the comparison of the quantum solutions with and without the quaternary Pomeron vertex, which clarifies the contribution of this vertex to the process amplitude. Plots Fig.\ref{EFUN3}-Fig.\ref{EFUN666} represent the results of the calculations in the RFT-0 approach with the triple Pomeron vertex only. The quantum solution of the model is presented in the plot Fig.\ref{EFUN3}, whereas the classical solution is presented in the plot Fig.\ref{EFUN4}. The comparison between these two solutions can be found in the plots Fig.\ref{EFUN5}-Fig.\ref{EFUN6}. From Fig.\ref{EFUN3} we see that our full solution for small values of the external sources does not really reach unitarity. Instead of a ''black'' disk a ''grey'' one is achieved. This may be easily explained if we remember that our initial conditions have an eikonal form, whereas in the s-channel model, see \cite{BMMSX} and references therein, the propagator form of the initial conditions is always applied. In the RFT-0 model the black disk limit is achieved, as it must be, only at large values of the sources, which corresponds to the interaction of nuclei in usual QCD, see \cite{braun1,braun2,BGLM}. Other interesting results are represented in Fig.\ref{EFUN5}-Fig.\ref{EFUN6}. Indeed, the natural question, which in fact is very important in any practical calculations involving the BK equation, is the question of the applicability of the classical solutions of the model: at which values of the external sources is the difference between the classical and quantum solutions acceptable from the point of view of the precision of the amplitude calculations. In real QCD this answer may be obtained only approximately, at best. In the RFT-0 calculations the answer is clear: the classical solution is acceptable only at large values of the external sources. In the symmetrical case of interactions with small sources involved, the relative difference between the amplitudes is very large. The difference decreases rather fast with increasing value of the sources, but it is still not negligible even at $q=0.3\,-\,0.4$, see Fig.\ref{EFUN5}-Fig.\ref{EFUN6}. It is important to underline that one of the main reasons providing the applicability of the classical solution is the presence of the third classical solution, arising at some rapidity $Y_c$ (see Subsection 2.2 and \eq{Class6}), in the framework. The third solution provides the decrease of the classical amplitude at large values of the total rapidity, and without this solution the precision of the model would be even worse. The situation in the non-symmetrical case of interactions is better, see Fig.\ref{EFUN66}-Fig.\ref{EFUN666}. Due to the initially broken symmetry of the interaction, the fan diagrams are dominant from the start and provide a better precision of the classical solution in comparison with the symmetrical case of interactions. Considering the second possible candidate for the role of a ''good amplitude approximation'', the eikonalized function \eq{EikProp}, see Fig.\ref{EFUN10}, we see that at small values of the external sources this amplitude is indeed better than the classical solution. In the region of large values of the sources both amplitudes are more or less equal - the unitarization corrections are small for large sources. We can also compare the effective Pomeron propagator with the asymptotic results obtained in the framework of the s-channel model, see \cite{BMMSX}.
Looking at Fig.\ref{EFUN8} and Fig.\ref{NFV3}, we see that the value of the full propagator, which is considered as an amplitude in the s-channel model, is close to one, especially for the second considered model with the quaternary vertex included, which has a direct relation to the s-channel model, see again \cite{BMMSX}. But, in general, for an arbitrary value of the source there is no unitarization of the propagator, and the only way to achieve unitarization in this case is the eikonalization of the propagator. An interesting problem, which could also be investigated in the framework of RFT-0, is the problem of the influence of the value of the triple Pomeron vertex on the behavior of the amplitude. Changing the value of the parameter $\varrho$ from $5$ to $1$, we change the value of the triple vertex $\lambda$ from $0.04$ to $0.1$. The resulting amplitude is depicted in Fig.\ref{EFUN7}. From this plot we see that already at relatively small values of rapidity this amplitude decreases to zero. This fast decrease of the amplitude at large values of $\lambda$ marks the main difference between the RFT-0 models with and without the quaternary vertex. In RFT-0 without the quaternary vertex the amplitude, as mentioned above, approaches zero, whereas in RFT-0 with the quaternary vertex the amplitude approaches a constant for any value of the triple vertex and a correspondingly adjusted value of the quaternary vertex, see \eq{FV33}, \eq{CVF9} and \eq{CVF11}. Of course, when the value of the triple vertex is small, the two models do not differ much, see Fig.\ref{NFV2}. This difference in the behavior of the amplitudes in the two models may also be analyzed in terms of the s-channel model, see \cite{BMMSX} for the details and explanations. As a possible example of the application of the model we calculated the ratios of the single diffractive and differential single diffractive cross sections to the total cross section in the given framework, see Eq.\ref{DD6} and Eq.\ref{DD7}. The results of these calculations are presented in Fig.\ref{diffr1} - Fig.\ref{diffr2}. The plot Fig.\ref{diffr1} represents the ratio of the single diffractive cross section (integrated over the rapidity of the diffractive state) to the value of the total cross section. In the two cases when the sources are equal to 0.2 and 0.7, we obtain that for a fixed value of the diffractive state this ratio almost does not change with energy. More precisely, in both cases there is a tiny change in the behavior of the ratio at small rapidities 4-5, and a constant behavior at large rapidities 6-8. Tracing the relative contribution of the diffractive state at fixed rapidity in both cases, we see that the contribution of the large mass diffractive state, $Y_2=4$, is approximately two times larger than the contribution from the small, ''elastic'', diffractive state $Y_2=1$, as it must be in reality. We see that the simple RFT-0 model correctly reproduces the main features of real QCD. It is also interesting to note that the relative contribution of the large mass diffractive state is larger for the case of small external sources: $R=0.11$ for the source $q=0.2$ against $R=0.028$ for the source $q=0.7$. Concerning the calculations of the second ratio, $R_1$, see Fig.\ref{diffr2}, we obtain that these two cases are different. At the small source, $q=0.2$, the maximum of the ratio is at the small diffractive state, which shows an almost constant behavior with the total rapidity.
At a large value of the source, $q=0.7$, the situation is the opposite: the maximum contribution comes from the large mass diffractive state. The contribution of the small diffractive state at $q=0.7$ is zero in Fig.\ref{diffr2} at large values of $Y$, which means a constant value of the single diffraction cross section at small values of $Y_2$ and large values of $Y$. In general, considering Fig.\ref{diffr1} - Fig.\ref{diffr2}, we can conclude that the description of the diffractive states in RFT, in the case when we include in the calculations all possible re-scattering corrections, is different from the "naive" diffraction models, where only a part of the corrections is included. Therefore, there is hope that, using the same recipes of calculation in QCD RFT, we will be able to correctly describe diffraction data and their energy dependence. \section{Conclusion} In this paper we performed calculations of the different amplitudes in the framework of the RFT-0 model. Comparing different approximations, we analyzed the applicability of the different approximation schemes and their dependence on the parameters of the model. The main conclusion of the calculations is that we can trust the classical approximation for the amplitude only when the value of the triple Pomeron vertex is small. In this region the difference between the models with and without quaternary vertices is negligible. Nevertheless, an important condition for this approximation, in the case of the interaction of symmetrical particles, is the inclusion of the third classical solution in the set of classical solutions, which arises at some critical value of rapidity. Without this solution the classical approximation is not good, even at not small values of the external sources. For the case of small values of these vertices, the eikonalized Green's function amplitude is a more precise solution than the classical one. Unfortunately, in real QCD this amplitude also cannot be calculated precisely, and therefore we cannot trust the classical solutions at all when the values of the external sources are not large enough. Whether the proton is "large enough" in the case of real QCD is an open question, which, unfortunately, cannot be answered in the given framework. When the value of the triple vertex grows, the picture changes drastically. In this case we can no longer trust the classical solutions of the models. Also, there is a large difference in the behavior of the amplitude in the models with and without the quaternary vertex: its presence changes the asymptotic behavior of the amplitude. In real QCD this could mean that different evolution equations must be applied in different regions of impact parameter space with different values of the coupling constant. If we assume that the influence of the non-perturbative effects consists only in a change of the value of the coupling constant, then we must separate the contribution of the perturbative spots in impact parameter space, whose evolution is described by the BK equation, from the non-perturbative regions where different evolution equations must be applied. In this case the overall amplitude is a sum of the amplitudes described by the different evolution equations in the different regions, with the BK equation applicable only in the regions of high parton density.
This picture will lead, first of all, to the factorization of the non-perturbative effects from the perturbative ones, and it could also explain the possible applicability of the BK equation in the case of proton-proton collisions. Namely, at high enough energies the overall contribution of the "black" spots may be larger than the contribution of the "white" and "grey" ones, and we could describe even inclusive data by the BK equation formalism. The large but constant non-perturbative contributions to the amplitude may in this case be accounted for by adjusting the values of the external sources in the interaction of interest. Concluding, we underline that the RFT-0 model is a very interesting and important testing ground for an initial implementation of different ideas which could further be applied in real QCD calculations as well. \section*{Acknowledgments} I am grateful to Leszek Motyka for his participation in the development of the main themes of the paper and to M.~Braun for his support and interest in the paper during the period of its writing. \newpage
\section{Introduction} Dried colloidal particle films find use in a number of applications such as tapes for photography and magnetic storage \citep{lew02}, porous coated printer papers, coating vitamin tablets, synthetic opals, photonic crystals \citep{zen02}, etc. The macroscopic properties of the film such as its thickness, particle packing and mechanical strength are influenced by the drying rate, interparticle potential, particle size and shape, and the modulus of the particles. When a dispersion of colloidal particles is dried, the particles concentrate, eventually reaching a close packed concentration. The liquid menisci on the top layer of particles compress the packing while the substrate resists transverse deformation of the packing. Consequently, transverse tensile stresses develop in the packing and, when these stresses exceed a critical value, the packing cracks, resulting in a variety of crack patterns. Such cracks occur not only in thin films such as paints and coatings but also in thick systems and over geophysical length scales, as in the case of dried river beds. Most of the experimental investigations of the cracking phenomenon in drying colloidal dispersions have focused on the thin film geometry where stresses have been measured using the classical cantilever bending technique \citep{lew02, mahesh05, jhmn1999}. These measurements show that thin films of monodisperse colloidal dispersions containing identical particles crack at a critical stress that is independent of the particle size but varies inversely with the film thickness \citep{chiu93a, chiu93b, mahesh05, mahesh09a}. In almost all cases, the film nucleates multiple cracks with a crack spacing that varies linearly with film thickness \citep{mahesh05, morris00}. Experiments also suggest that, irrespective of particle size or moduli, each dispersion has a maximum crack-free thickness below which the films do not crack. The critical cracking thickness is found to increase with particle size and moduli in the case of hard polymer and metal oxide particles. A number of investigations have also focused on cracking in confined geometries such as capillary tubes where the dispersion dries from one end, resulting in a compaction front of packed particles. While the studies in this geometry have mainly focused on the crack tip velocity and its relation to the speed of the compaction front \citep{limat95, morris09, Duf03}, it is only recently that Dufresne and co-workers \citep{Duf10} have succeeded in imaging the stress variation near the tip of a propagating (interface) crack at the interface of an elastomer and a saturated colloidal bed and in extracting the stress intensity factor from it. The stress decays as the inverse square root of the distance from the crack tip, in line with the predictions of classical linear theory for fracture in brittle materials. On the theoretical front, Routh and Russel \cite{routh99} have derived a constitutive relation relating the macroscopic stress to the macroscopic strain in a drying film. They considered the viscoelastic deformation of a pair of identical particles due to contact and interfacial forces and related the strain at the particle level to these forces. Next, they volume averaged the forces over all orientations to arrive at the macroscopic stress versus strain relationship for a drying film.
In the absence of particle-solvent interfacial tension, the expression for the macroscopic stress tensor \citep{mahesh05} for identical elastic spheres reduces to \begin{eqnarray} \sigma_{ij} &=& \delta_{ij} \left\lbrace -P - \frac{GM\phi_{rcp}}{140} \left( \epsilon^2_{mm}+2 \epsilon_{nm} \epsilon_{mn} \right) \right\rbrace \nonumber \\ && -\frac{GM\phi_{rcp}} {35} \left( \epsilon_{mm} \epsilon_{ij}+2 \epsilon_{im}\epsilon_{mj} \right) \label{eq:41} \end{eqnarray} where, $ \epsilon_{ij} $ is the macroscopic strain, $ P $ is the capillary pressure, $\phi_{rcp}$ is the random close packing concentration, $ G $ is the shear modulus of the particles and $ M $ is the number of contacting neighbors. The constitutive equation is an improvement over the traditional poroelasticity models \citep{biot1941, biot1955, biot1956} as the former accounts for the nonlinear deformation at the particle level and the influence of particle size, modulus and packing characteristics on the macroscopic deformation field. The model has been successful in predicting not only the stress profile in drying films of both film forming and cracking systems \citep{mahesh04}, but also in predicting many aspects of the cracking mechanism in the latter \citep{mahesh05}. More recently, Russel et al.\ \cite{russel08a} have improved on the above relation by adopting the Hertzian contact mechanics at the particle pair level. The final constitutive relation is also non-linear with the stress varying as the three-halves power of the strain. Using this relation, they determine the capillary pressure necessary either to open an infinite crack in a flawless film or to extend pre-existing flaws of finite lengths. Their results suggest that flaws which are a fraction of the film thickness are sufficient to initiate cracks that would propagate across the sample at pressures modestly greater than that obtained from the energy argument. In a related study, Man and Russel \cite{russel08b} demonstrate experimentally the role of flaws in nucleating a crack and show that the critical stress obtained from the energy argument only gives the lower bound. \begin{figure}[ht] \begin{center} \includegraphics[scale=0.6]{./fig1.pdf} \caption {A crack of length $2a$ embedded in a packed bed. The bed is stressed in `1' direction. We consider the case where $a\ll h$.} \label{crack} \end{center} \end{figure} In this study, we determine the stress field near a crack tip along with the shape of the crack that is present in a two dimensional particle packing saturated with solvent. The flaw is embedded inside the colloidal packing and the size of the flaw is much smaller than any other dimension of the packing, say the film thickness (Figure \ref{crack}). Further, when the crack dimension in the out-of-plane direction is larger than $a$, the only length scale relevant to the problem is $a$, and the situation becomes amenable to plane stress/strain analysis \cite{suo1992}. A packed bed made of an array of colloidal particles can be considered to be a collection of polycrystalline aggregates with pre-existing flaws. These flaws may be attributed to micro-cracks, grain boundaries between the clusters of ordered packing of mono-dispersed particles, dissimilar pores inside the colloidal bed, etc. Nucleation of a crack under these circumstances changes the stress field close to the crack with stress concentration at the crack tip.
In this work, the stress and strain fields are linearized about the pre-crack state to determine the disturbance displacement field immediately after the opening of a mode-I crack. These results also yield the stress intensity factor for the two dimensional elastic field, which is then related to the surface energy using the well known Griffith's criterion for equilibrium cracks. The calculated quantities are then compared with the numerical solution for the full problem. The calculations show that the dimensionless critical capillary pressure required to open a crack varies inversely with the crack length to the two thirds' power and depends on a dimensionless parameter that measures the ratio of the elastic to surface energy. A simple scaling analysis reveals the essence of the results to follow. Since $\sigma \sim E {\epsilon^{o}}^2$, where `$E$' is the effective modulus of the packing and $\epsilon^{o}$ is the characteristic strain in the packing, the elastic energy recovered on the opening of a crack of length `$a$' in a packing of unit thickness is, $\sigma \epsilon^{o} a^2$. Equating this to the surface energy ($\gamma a$) gives $E{\epsilon^{o}}^3 \sim \gamma/a$, so that $\epsilon^{o}\sim(\gamma/Ea)^{1/3}$ and $\sigma \sim E^{1/3}(\gamma/a)^{2/3}$; noting that the capillary pressure is linearly related to the stress then gives the critical capillary pressure for opening the crack, $ \frac{P_c R}{2\gamma} \sim A \left( \frac{R}{a} \right)^{2/3} \left( \frac{ER}{\gamma} \right)^{1/3} $, where `$\gamma$' is the surface tension of the solvent, $R$ is the radius of the particles, and $A$ is a constant. The objective of this paper is to rigorously determine the value of $A$ and investigate the consequence of this result. \section{Nucleation of crack} \begin{figure}[t] \begin{center} \includegraphics[scale=0.38]{./fig2.pdf} \caption {A crack of length $2a$ embedded in a packed bed. The bed is stressed in `1' direction. Shaded region shows the region over which the analysis is performed. $T^{\prime}_{1}$ is the traction along the crack surface.} \label{coord} \end{center} \end{figure} Consider a colloidal packing saturated with water that is confined by solid boundaries at $x_1=\pm L$ with free surfaces at $x_2 = x_3 = \pm h$. As water evaporates, the capillary pressure puts the packing in tension in the $x_1$ direction so that it is free to contract along $x_2$ and $x_3$ (Figure \ref{coord}). In this case, the strain is given by, $\epsilon_{ij} = - \epsilon^{o} \left( \delta_{i2} \delta_{2j} + \delta_{i3}\delta_{3j} \right) $, where $^o$ denotes the pre-crack values. Volume conservation over a unit volume of the bed relates the strain to the particle volume fraction, \begin{equation} \phi^o = \phi \left( 1-\epsilon^{o} \right)^2 \cdot \label{eq:42} \end{equation} where, $\phi^o$ is the volume fraction in the pre-crack state. The strain and stress fields are sought for a crack with extent $-a < x_2 < a$ in the plane stress formulation, \begin{eqnarray} && \epsilon_{ij} = - \epsilon^{o} \left( \delta_{i2} \delta_{2j} + \delta_{i3}\delta_{3j} \right) + \epsilon^{\prime}_{ij} \;\;\mathrm{ , } \nonumber \\ && P = P^o + P^{\prime} \;\;\mathrm{ and, }\;\; \sigma_{ij} = {\sigma}^{o}_{ij} + {\sigma}^{\prime}_{ij} \;\;\mathrm{ , } \label{eq:43} \end{eqnarray} with the perturbed variables represented by the primed quantities and $a\ll L,h$.
Substituting these in the constitutive equation (\ref{eq:41}) and retaining terms linear in the perturbed quantities gives, \begin{eqnarray} && \bar{\sigma}^{\prime}_{11} = -\bar P^{\prime} + ( 3\epsilon^{\prime}_{11} + 2\epsilon^{\prime}_{22} + 2\epsilon^{\prime}_{33} ) \nonumber \\ && \bar{\sigma}^{\prime}_{22} = -\bar P^{\prime} + ( 2\epsilon^{\prime}_{11} + 9\epsilon^{\prime}_{22} + 3\epsilon^{\prime}_{33} ) \nonumber \\ && \bar{\sigma}^{\prime}_{33} = -\bar P^{\prime} + ( 2\epsilon^{\prime}_{11} + 3\epsilon^{\prime}_{22} + 9\epsilon^{\prime}_{33} ) \nonumber \\ && \bar{\sigma}^{\prime}_{12} = 4\epsilon^{\prime}_{12} \nonumber \\ && \bar{\sigma}^{\prime}_{23} = 6\epsilon^{\prime}_{23} \nonumber \\ && \bar{\sigma}^{\prime}_{31} = 4\epsilon^{\prime}_{31} \label{eq:4501} \end{eqnarray} where a bar over a variable implies a dimensionless quantity with stress and pressure rendered dimensionless with $E \equiv \frac{GM\phi_{rcp} \epsilon^{o}}{35} \cdot$ The dimensionless stress for the pre-crack state is, \begin{eqnarray} \bar{\sigma}^{o}_{ij} = \delta_{ij} \left[ - \bar P^o - 2(\epsilon^{o}) \right] - 4(\epsilon^{o}) \left( \delta_{i2} \delta_{2j} + \delta_{i3}\delta_{3j} \right) \cdot \label{eq:46} \end{eqnarray} Since we consider the plane stress case ($\bar \sigma_{3j}=0$) and the bed is stressed only in the $x_1$ direction, $\bar P^o = -6 \epsilon^{o}$, $\bar{\sigma}^{o}_{11}=4 \epsilon^{o}$ and $\bar{\sigma}^{o}_{22}=0 \cdot$ The total amount of particle phase remains constant in the packing, and so the particle volume fractions before and after cracking are related, \begin{equation} \frac{\phi_{rcp}}{\phi} = (1+\epsilon^{\prime}_{11})(1-\epsilon^{o}+\epsilon^{\prime}_{22})(1-\epsilon^{o}+\epsilon^{\prime}_{33}) \quad\mathrm{,} \label{eq:047} \end{equation} where, $\phi_{rcp} $ is the random close packing and the strain is taken to be zero when $\phi=\phi_{rcp}$. The time evolution of stress and strain around the crack can be further subdivided into two limiting cases~\cite{mahesh05}, i.e.\ the short time and the long time limits. In the short time limit, the redistribution of stress and strain caused by the crack formation occurs in the absence of solvent flow, so that the material behaves as incompressible. Thus, in the short time limit and for $ \epsilon^{o} \ll 1$, (\ref{eq:047}) reduces to, \begin{equation} \epsilon^{\prime}_{11} + \epsilon^{\prime}_{22} + \epsilon^{\prime}_{33} = 0. \label{eq:048} \end{equation} Since we shall consider only the plane stress case here ($\bar{\sigma}^{\prime}_{33} = 0$), $\bar P^{\prime} = \epsilon^{\prime}_{22} + 7\epsilon^{\prime}_{33}$. At longer time scales, the liquid flows so as to eliminate pressure variations, giving the required condition for the long time limit, $\bar P^{\prime} =0 $. The perturbed stress and strain are compactly related in the two cases, $\bar{\sigma}^{\prime}_{ij} = C_{ij} \epsilon^{e\prime}_{ij}$ where $\epsilon^{e\prime}_{ij}$ is the engineering strain and $C$ is the stiffness matrix. The latter is given by, \begin{equation} \quad C = \begin{bmatrix} 8 & 6 & 0\\ 6 & 12 & 0\\ 0 & 0 & 2 \end{bmatrix} \label{eq:049} \end{equation} in the short-time limit and, \begin{equation} \quad C = \begin{bmatrix} \frac{23}{9} & \frac{4}{3} & 0 \\ \frac{4}{3} & 8 & 0 \\ 0 & 0 & 2 \end{bmatrix} \label{eq:049a} \end{equation} in the long-time limit.
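The reduction from (\ref{eq:4501}) to (\ref{eq:049}) and (\ref{eq:049a}) is mechanical and can be verified symbolically. The following minimal sketch (ours, not part of the original analysis; it assumes Python with the sympy library) reproduces the first two rows of both stiffness matrices:
\begin{verbatim}
# Consistency check: reduce the linearized relations (eq. 4501)
# to the stiffness matrices (eq. 049) and (eq. 049a).
import sympy as sp

e11, e22, e33, P = sp.symbols("e11 e22 e33 P")

s11 = -P + 3*e11 + 2*e22 + 2*e33
s22 = -P + 2*e11 + 9*e22 + 3*e33
s33 = -P + 2*e11 + 3*e22 + 9*e33

# Short-time limit: incompressibility (eq. 048) plus plane stress s33' = 0.
e33_s = sp.solve(e11 + e22 + e33, e33)[0]      # e33' = -e11' - e22'
P_s = sp.solve(s33.subs(e33, e33_s), P)[0]     # pore pressure from s33' = 0
short = [sp.expand(s.subs({e33: e33_s, P: P_s})) for s in (s11, s22)]

# Long-time limit: P' = 0 plus plane stress s33' = 0.
e33_l = sp.solve(s33.subs(P, 0), e33)[0]
long_ = [sp.expand(s.subs({P: 0, e33: e33_l})) for s in (s11, s22)]

print(short)  # [8*e11 + 6*e22, 6*e11 + 12*e22]       -> rows of eq. 049
print(long_)  # [23*e11/9 + 4*e22/3, 4*e11/3 + 8*e22] -> rows of eq. 049a
# Shear entry: s12' = 4*eps'_12 = 2*(engineering strain), so C_33 = 2.
\end{verbatim}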
While the original constitutive relation in the pre-crack state (\ref{eq:41}) is for an isotropic solid, (\ref{eq:049}) and (\ref{eq:049a}) imply that the relaxation resulting from the presence of the flaw under the imposed conditions is that for an anisotropic solid. A similar observation is noted by Russel and co-workers for the more accurate constitutive relation based on the Hertzian contact mechanics. For convenience, we write the constitutive equations as, $\Delta_{i} = S_{ij} \Sigma_{j}$ where both $\Delta$ and $\Sigma$ are 6-by-1 column vectors. The components of $\Delta$ and ${\Sigma}$ are given by, $\epsilon^{e\prime}_{11}, \epsilon^{e\prime}_{22}, \epsilon^{e\prime}_{33}, \epsilon^{e\prime}_{23}, \epsilon^{e\prime}_{31}, \epsilon^{e\prime}_{12} $ and $\bar {\sigma}^{\prime}_{11}, \bar {\sigma}^{\prime}_{22}, ... , \bar {\sigma}^{\prime}_{12}$ respectively, and the 6-by-6 coefficient matrix, $\mathbf{S}$, is the compliance matrix. The elements of $\mathbf{S}$ are determined from $\mathbf{C}$. Note that the present case corresponds to an orthotropic anisotropy under the plane stress condition. Therefore, $\mathbf{S}$ has only seven non-zero elements. The displacement field in the case of plane strain is easily obtained using the procedure outlined in the next section except that the components of the compliance matrix for the plane stress problem ($S_{ij}$) are replaced with, \[ D_{ij} = S_{ij} - \frac{S_{i3} S_{j3} }{S_{33}},\;\;(i,j=1,2,..., 6) \] for the plane strain case. \section{Asymptotic analysis near a crack tip} The knowledge of the stress fields in the neighborhood of the crack tip is essential in determining the strength of the packed bed. Since the perturbed stress is linear in the perturbed strain, we draw upon the mathematical techniques developed in the solid mechanics literature to determine the stress field near the tip of a crack \cite{sih65,lekh81,rice68,esh57,grif21}. The coordinate system ($\tilde x_1,\tilde x_2$) for this analysis is shown in Figure \ref{coord} where the origin is placed at the crack tip so that $\tilde x_1=\bar x_1, \tilde x_2 = \bar x_2 + \bar a$ and the variables have been rendered dimensionless using the characteristic length of the solution domain. The momentum balance equations in the $\tilde x_1$ and $\tilde x_2$ directions in the absence of body forces are given by, \begin{align} & \pd{\bar{\sigma}^{\prime}_{11}}{\tilde x_1} + \pd{\bar{\sigma}^{\prime}_{12}}{\tilde x_2} = 0,\;\mathrm{and} \nonumber \\ & \pd{\bar{\sigma}^{\prime}_{21}}{\tilde x_1} + \pd{\bar{\sigma}^{\prime}_{22}}{\tilde x_2} = 0. \label{eq:49} \end{align} Following Sih et al.\cite{sih65} and Hoenig\cite{hoenig82}, we relate the stresses to the stress correlation functions, $\chi$ and $\psi$, via \begin{align} \bar{\sigma}^{\prime}_{ij} &= - \pddm{\chi}{\tilde x_i}{\tilde x_j} + \delta_{ij} \pdd{\chi}{\tilde x_m} , \;\mathrm{and}\nonumber \\ \bar{\sigma}^{\prime}_{3i} &= e_{ij} \pd{\psi}{\tilde x_j} \label{eq:410} \end{align} with $e_{ij}$ and $\delta_{ij}$ being the second order alternating tensor and the Kronecker delta respectively, and with $i,j$ taking the values 1 or 2. Note that (\ref{eq:410}) automatically satisfies (\ref{eq:49}).
The above expressions along with the constitutive relation are substituted in the compatibility equations, \begin{eqnarray} & & \pdd{\epsilon^{e\prime}_{11}}{\tilde x_2} + \pdd{\epsilon^{e\prime}_{22}}{\tilde x_1} - \pddm{\epsilon^{e\prime}_{12}}{\tilde x_1}{\tilde x_2} = 0,\; \mathrm{and} \nonumber \\ & & \pddm{\epsilon^{e\prime}_{11}}{\tilde x_2}{\tilde x_3} = \pddm{\epsilon^{e\prime}_{12}}{\tilde x_3}{\tilde x_1} - \pdd{\epsilon^{e\prime}_{23}}{\tilde x_1} + \pddm{\epsilon^{e\prime}_{31}}{\tilde x_1}{\tilde x_2} \end{eqnarray} to give, respectively, \begin{align} & S_{11}\chi_{,2222} + (2S_{12} + S_{66})\chi_{,1122} + S_{22}\chi_{,1111} = 0, \quad \mathrm{and} \label{eq:4101} \\ & S_{44} \psi_{,11} + S_{55}\psi_{,22} = 0. \label{eq:4102} \end{align} The degree of anisotropy in the material can be judged by rewriting (\ref{eq:4101}) differently, \begin{equation} \nabla^4{\chi} + \delta_1 \chi_{,1111} + \delta_2 \chi_{,2222} = 0 \label{eq:4103} \end{equation} where $1 + \delta_{1}= \frac{2S_{22}}{S_{66} + 2S_{12}}$ and $ 1 + \delta_2 = \frac{2S_{11}}{S_{66} + 2S_{12}}$ are indicators of the anisotropy in the material. The difference in the values of $\delta_1$ and $\delta_2$ originates from the fact that the bed is held along the `1' direction and perturbed along the `2' and `3' directions, leading to a directional perturbation of the stress field. When $|\delta_{i}| \ll 1$, $\chi$ satisfies the biharmonic equation, i.e.\ the material is isotropic. In the current problem, the $\delta_i$ are $ -0.12 $ and $ 0.33 $ in the short-time limit and $ -0.24 $ and $1.4 $ in the long-time limit, suggesting that the anisotropy is significant and cannot be ignored. (\ref{eq:4101}) and (\ref{eq:4102}) are a pair of decoupled equations in $\chi$ and $\psi$, \begin{eqnarray} L_{4} \chi &=& 0 \\ L_{2}\psi &=& 0 \label{eq:411} \end{eqnarray} where the differential operators are given by, $L_2 \equiv S_{44} \pdd{}{ \tilde x_1} + S_{55} \pdd{}{ \tilde x_2}$ and $L_4 \equiv S_{11} \frac{\partial^4}{\partial \tilde x_2^4} + (2S_{12}+S_{66}) \frac{\partial^4}{\partial \tilde x_1^2 \partial \tilde x_2^2} + S_{22} \frac{\partial^4}{\partial \tilde x_1^4}$. Lekhnitskii\cite{lekh81} has shown that $L_2$ and $L_4$ can be decomposed into two and four linear operators of first order respectively, of the form $D_k= \partial/ \partial \tilde x_2 - \mu_k \partial / \partial \tilde x_1$ such that $D_1 D_2 D_3 D_4 \chi =0$ and $D_5 D_6 \psi = 0$. Substitution of $D_k$ in $L_4$ and $L_2$ shows that $\mu_k$ are roots of the polynomial operators, $l_4 \equiv S_{11}\mu_k^4 + (2S_{12}+S_{66})\mu_k^2 + S_{22}=0$ and $l_2 \equiv S_{55}\mu_k^2 + S_{44}=0$. Then, the stress correlation functions can be written as, \begin{eqnarray} \chi & = & \displaystyle\sum^{2}_{i=1} \left \{ \chi_i( \tilde x_1 + \mu_i \tilde x_2) + \chi_i( \tilde x_1 + \bar \mu_i \tilde x_2) \right \} \quad \mathrm{and} \nonumber \\ \psi & = & \psi ( \tilde x_1 + \mu_3 \tilde x_2) + \psi ( \tilde x_1 + \bar \mu_3 \tilde x_2) \end{eqnarray} where the bar on $\mu_i$ represents the complex conjugate. Further, Lekhnitskii\cite{lekh81} has shown that for the elastic energy of the packing to be positive, the roots cannot be real.
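As a numerical cross-check (again our own sketch, assuming Python with numpy), the anisotropy indicators and the roots $\mu_k$ of $l_4$ follow directly from the stiffness matrices above:
\begin{verbatim}
# Numerical check: anisotropy indicators delta_1, delta_2 (eq. 4103)
# and the roots mu_k of l4, from the stiffness matrices (eq. 049)/(eq. 049a).
import numpy as np

C = {"short": np.array([[8.0, 6.0, 0.0], [6.0, 12.0, 0.0], [0.0, 0.0, 2.0]]),
     "long":  np.array([[23/9, 4/3, 0.0], [4/3, 8.0, 0.0], [0.0, 0.0, 2.0]])}

for name, Cmat in C.items():
    S = np.linalg.inv(Cmat)                 # compliance matrix
    S11, S12, S22, S66 = S[0, 0], S[0, 1], S[1, 1], S[2, 2]
    d1 = 2*S22/(S66 + 2*S12) - 1
    d2 = 2*S11/(S66 + 2*S12) - 1
    # l4 = S11*mu^4 + (2*S12 + S66)*mu^2 + S22 = 0
    mu = np.roots([S11, 0.0, 2*S12 + S66, 0.0, S22])
    print(name, np.round([d1, d2], 2), np.round(mu, 3))
# Output: deltas of about (-0.11, 0.33) and (-0.23, 1.4), consistent with the
# values quoted above up to rounding, and four complex roots in each limit.
\end{verbatim}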
Consequently, the general expression for the stress function will involve the real parts of the complex conjugate pairs, \begin{eqnarray} \chi_R( \tilde x_1, \tilde x_2) &=& 2 \Re \left\lbrace \displaystyle \sum^{2}_{i=1} \chi_i(z_i) \right\rbrace \nonumber \\ \psi_R( \tilde x_1, \tilde x_2) &=& 2 \Re \left\lbrace \psi(z_3) \right\rbrace \label{eq:412} \end{eqnarray} where $z_i = \tilde x_1 + \mu_i \tilde x_2$ and $\Re$ represents the real part. Since the stresses are related to the derivatives of the above functions, it is convenient to assume the following functional form, \begin{eqnarray} && \frac{ \partial \chi_k (z_k)}{\partial z_k} = G_k(z_k)\; \mathrm{for}\; k\in[1,2] \nonumber \\ \mathrm{and}\; && \psi_3(z_3) = G_3(z_3) \end{eqnarray} so that the stresses are given by, \begin{align} & \bar{\sigma}^{\prime}_{11} = \pdd{\chi_R}{\tilde x_2} = 2\Re\left [\mu^2_1 \frac{\mathrm{d}G_1}{\mathrm{d} z_1} + \mu^2_2 \frac{\mathrm{d}G_2}{\mathrm{d} z_2} \right ], \nonumber \\ & \bar{\sigma}^{\prime}_{22} = \pdd{\chi_R}{\tilde x_1} = 2\Re \left [\frac{\mathrm{d} G_1}{\mathrm{d} z_1} + \frac{\mathrm{d} G_2}{\mathrm{d} z_2} \right ], \nonumber \\ & \bar{\sigma}^{\prime}_{12} = -\pddm{\chi_R}{\tilde x_1}{\tilde x_2} = -2\Re \left [\mu_1 \frac{\mathrm{d} G_1}{\mathrm{d} z_1} + \mu_2 \frac{\mathrm{d} G_2}{\mathrm{d} z_2} \right ], \nonumber \\ & \bar{\sigma}^{\prime}_{31} = \pd{\psi_R}{\tilde x_2} = 2\Re \left [\mu_3 \frac{\mathrm{d} G_3}{\mathrm{d} z_3} \right ], \;\mathrm{and}\nonumber \\ & \bar{\sigma}^{\prime}_{23} = -\pd{\psi_R}{\tilde x_1} = -2\Re \left [\frac{\mathrm{d} G_3}{\mathrm{d} z_3} \right ]. \label{eq:4121} \end{align} We can also determine the strain components in terms of the stress function, for example, \begin{eqnarray} \epsilon^{e\prime}_{11} &=& \pd{\bar u^{\prime}_1}{\tilde x_1} \nonumber \\ &=& 2 \Re \left [ \frac{\mathrm{d} G_1}{\mathrm{d} z_1} (S_{11}\mu^2_1 + S_{12} - S_{16}\mu_1) \right ] \nonumber \\ && + 2 \Re \left [ \frac{\mathrm{d} G_2}{\mathrm{d} z_2} (S_{11}\mu^2_2 + S_{12} - S_{16}\mu_2) \right ] \cdot \end{eqnarray} Integrating the above expression with respect to $z_j$ gives the displacement field in the `1' direction, \begin{eqnarray} \bar u^{\prime}_1 &=& 2 \Re \left\lbrace \displaystyle\sum^{2}_{j=1} p_{1j} G_j(z_j) \right\rbrace \end{eqnarray} where, $p_{1j} = S_{11}\mu^2_j + S_{12} - S_{16}\mu_j \cdot$ \\ Following a similar procedure for the remaining strain components, all the displacements are determined, \begin{equation} \bar u^{\prime}_i = 2 \Re \left\lbrace \displaystyle \sum^{3}_{j=1} p_{ij} G_j (z_j) \right\rbrace \label{eq:41201} \end{equation} where \begin{eqnarray} p_{1i} &=& S_{11}\mu_i^2 + S_{12} - S_{16}\mu_i \nonumber \\ p_{2i} &=& S_{12}\mu_i + S_{22}/\mu_i - S_{26} \nonumber \\ p_{33} &=& S_{45} - S_{44}/\mu_3 \nonumber \\ p_{31} &=& p_{32} = p_{13} = p_{23} = 0 \label{eq:41202} \end{eqnarray} Next, we ascertain the functional form of $G_i$. Rice\cite{rice68} has shown that the J-integral, \\ $ J = \displaystyle\int_{{\mathrm{d}}{\Omega}} \left( W \mathrm{d} \tilde x_1 - \bar {\sigma}^{\prime}_{ij} n_j \frac{\mathrm{d} \bar u_i}{\mathrm{d} \tilde x_2} \mathrm{d} S \right)$, has the same value for all integration paths surrounding crack tips in two dimensional fields of linear or nonlinear elastic materials. Here, $ W $ is the strain energy density, $ n_j $ is the normal to the chosen path, and $S$ is the distance along the path ${\mathrm{d}}{\Omega}$.
Assuming $\frac{\mathrm{d} G_i}{\mathrm{d} z_i} \varpropto {z_i}^{p}$ in the neighbourhood of the crack opening, $W \varpropto {z_i}^{2p}$ and $\bar {\sigma}^{\prime}_{ij} n_j \frac{\mathrm{d} \bar u_i}{\mathrm{d} \tilde x_2} \sim {z_i}^{2p}$. Hence, $ J \sim {z_i}^{2p+1} $. Since the value of $J$ should be independent of the path, $ p = - \frac{1}{2} $. Thus, we assume $G_i = B_i \sqrt{{2 \bar a z_i}/{\pi}}$ for a flaw of size $\bar a $ where $B_i$ is the stress function amplitude. \begin{figure}[t] \begin{center} \includegraphics[scale=0.80]{./fig3.pdf} \caption {Shifted coordinate system for the asymptotic analysis} \label{coasymp} \end{center} \end{figure} For a \underline{stress free} crack surface, the stress and displacement components near the crack tip become, \begin{align} \bar{\sigma}^{\prime}_{11} &= \sqrt{\frac{2\bar a}{\pi \tilde r}} \Re \displaystyle\sum^{2}_{i=1} \frac{B_i \mu_i^2}{\sqrt{\cos\theta + \mu_i \sin\theta}},\nonumber \\ \bar{\sigma}^{\prime}_{22} &= \sqrt{\frac{2\bar a}{\pi \tilde r}} \Re \displaystyle\sum^{2}_{i=1} \frac{B_i}{\sqrt{\cos\theta + \mu_i \sin\theta}}, \nonumber \\ \bar{\sigma}^{\prime}_{12} &= -\sqrt{\frac{2\bar a}{\pi \tilde r}} \Re \displaystyle\sum^{2}_{i=1} \frac{B_i \mu_i}{\sqrt{\cos\theta + \mu_i \sin\theta}},\nonumber \\ \bar{\sigma}^{\prime}_{31} &= \sqrt{\frac{2\bar a}{\pi \tilde r}} \Re \frac{B_3 \mu_3}{\sqrt{\cos\theta + \mu_3 \sin\theta}}, \nonumber \\ \bar{\sigma}^{\prime}_{23} &= -\sqrt{\frac{2\bar a}{\pi \tilde r}} \Re \frac{B_3}{\sqrt{\cos\theta + \mu_3 \sin\theta}}, \; \mathrm{and} \nonumber \\ \bar u^{\prime}_i &= 2\sqrt{\frac{2\bar a\tilde r}{\pi}} \Re \displaystyle\sum^{3}_{j=1} p_{ij} B_j{\sqrt{\cos\theta + \mu_j \sin\theta}}. \label{eq:41301} \end{align} Note that the perturbed normal stress at the crack surface for the current problem has a finite non-zero value ($-\bar {\sigma}^{o}_{11}$) which will require minor modifications to some of the above expressions and is dealt with towards the end of this section. The stress intensity factor ($K$) for a mode I crack is defined as, \begin{equation} \bar K_1 = \Re{\bar{K}_1} = \displaystyle \lim_{\substack{{\tilde x_2}\rightarrow0^- \\ {\tilde x_1} = 0 }} \bar {\sigma}^{\prime}_{11} \sqrt{2 \pi \tilde r} = 2 \sqrt{\bar a} \Re \displaystyle\sum^{2}_{i=1} \frac{B_i \mu^2_i}{\sqrt{-\mu_i}} \label{eq:414} \end{equation} Similarly, $\bar K_2$ and $\bar K_3 $ can be obtained from $ \bar {\sigma}^{\prime}_{12} $ and $ \bar {\sigma}^{\prime}_{13}$. Thus the stress intensity factors for the three modes can be written compactly, \begin{equation} \mathbf{\bar{K}} = -2 \mathbf{i} \sqrt{\bar a} \mathbf{N} \mathbf{I_{\mu} B} \label{eq:41401} \end{equation} where $i=\sqrt{-1} \;,$ \begin{equation} [N] = \begin{bmatrix} \mu^2_1 & \mu^2_2 & 0 \\ -\mu_1 & -\mu_2 & 0 \\ 0 & 0 &\mu_3\\ \end{bmatrix} ,\; \mathrm{and} \quad [I_{\mu}] = \begin{bmatrix} \frac{1}{\sqrt{\mu_1}} & 0 & 0 \\ 0 & \frac{1}{\sqrt{\mu_2}} & 0 \\ 0 & 0 &\frac{1}{\sqrt{\mu_3}}\\ \end{bmatrix} \cdot \label{eq:41402} \end{equation} The perturbed displacements of the crack surface can be found in terms of the distance from the tip along the crack surface, $\theta \rightarrow \frac{\pi}{2}$, $\tilde r=\tilde \zeta$ \begin{equation} \mathbf{\bar u}^{\prime} = \mp \sqrt{\frac{2\tilde \zeta}{\pi}} \mathbf{Q}^{-1} \mathbf{\bar K} \label{eq:41601} \end{equation} where $\mathbf{Q^{-1}} =\mathbf{Im} \left \{ \mathbf{pN^{-1} I^{-2}_{\mu}} \right \}$.
The above analysis gives the functional form of the stress and strain fields close to the crack tip in terms of the unknown $\bar{K}$. In order to determine the stress intensity factor, we assume a finite-sized crack with an elliptical shape such that the minor axis of dimensionless length $2\bar c$ is small compared to the major axis ($2 \bar a$), $\alpha = \frac{\bar c}{\bar a} \ll 1$. Eshelby \cite{eshelby57} has shown that for an elliptical crack, the strain is uniform around the crack. Following Hoenig\cite{hoenig82}, the displacement of the crack surface is given by ($i=1,2,3$), \begin{eqnarray} \bar U^{\prime}_i = A_{1i} \bar{x}_1 = \beta_i \sqrt{\bar a^2- {\bar{x}_2}^2 }\;\; \mathrm{where, } \;\; \beta_i = A_{1i}\,\alpha,\nonumber \label{eq:416} \end{eqnarray} and the strains by, \[ \epsilon^{e\prime}_{11} = \frac{\beta_1}{\alpha}, \; \epsilon^{e\prime}_{12} = \frac{\beta_2}{2\alpha},\; \mathrm{and}\; \epsilon^{e\prime}_{13} = \frac{\beta_3}{2\alpha}. \] Thus, the perturbed stress at the crack face for this simple crack model is related to $\beta_i$, $\bar {\sigma}^{\prime}_{1k} = C_{k l} \beta_{l} $. Here, the origin of the coordinate system ($\bar{x}_1$,$\bar{x}_2$) lies at the center of the ellipse with $\bar x_2$ directed along the major axis (Figure \ref{coord}). Writing the crack face displacement in terms of the coordinate system with the origin placed at the crack tip (Figure \ref{coasymp}), \begin{equation} \bar U^{\prime}_i = \beta_i \sqrt{2\bar a\tilde \zeta} \label{eq:417} \end{equation} where $\tilde \zeta$ is the distance from the crack tip along $\tilde x_2$. Comparing (\ref{eq:41601}) with (\ref{eq:417}) we get \begin{equation} \mathbf{Q}^{-1} \mathbf{\bar K} = [\beta] \sqrt{\pi\bar a} \label{eq:41701} \end{equation} which relates the unknowns $\mathbf{\bar K}$ and $[\beta]$. In order to complete the problem, we determine the elastic energy released from the simple crack model, \begin{equation} \bar \xi = 2\cdot\frac{1}{2} \int^{\bar a}_{-\bar a} \bar\sigma^{o}_{1k} \bar U_k \mathrm{d}{\tilde x_2} = -C_{ik} \beta_i \beta_k \frac{\pi \bar a^2}{2} \label{eq:4163} \end{equation} and equate $\frac{\mathrm{d} \bar \xi}{\mathrm{d} \bar a} = 2 \bar J$ giving, \begin{equation} \bar K_i = \sqrt{\pi \bar a} \bar \sigma^{o}_{1i} \label{eq:41631} \end{equation} where $\bar J$ is the value of the standard $J$-integral \citep{rice68}, determined in the limit as the integration path is shrunk so as to lie along the crack face, \begin{equation} \bar J = \displaystyle\lim_{\delta \rightarrow 0} \frac{1}{\delta} \displaystyle\int^{\delta}_{0} \bar {\sigma}^{\prime}_{1i}(\delta-\tilde r, -\frac{\pi}{2}) \bar u_{i}^{\prime}(\tilde r,\frac{\pi}{2}) \mathrm{d} \tilde r = -\frac{1}{2} \bar K_i \left( Q^{-1}_{il} \bar K_l \right). \label{eq:41632} \end{equation} Comparing (\ref{eq:41701}) and (\ref{eq:41631}) gives the expression for $\beta_i$ in terms of the far field stresses, \[\mathbf{\beta}_i = \mathbf{Q}_{ik}^{-1} \mathbf{\bar{\sigma}^{o}}_{1k} \cdot\] We are now in a position to write the elastic energy recovered (dimensional) due to the formation of a finite length mode-I crack of length $a$, \begin{equation} \xi = -\frac{\pi}{2} a^2 Q^{-1}_{11} \dfrac{(\sigma^{o}_{11})^2}{E} \label{eq:41633} \end{equation} The present problem requires the crack surface to have a normal stress ($- \bar{\sigma}^{o}_{11}$) and the far field perturbed stress to be zero.
Consequently, the complex stress function $G_{k}(z_k)$ in (\ref{eq:4121}) is replaced with $G_{k} (z_k)+ \Gamma_k z_k$ ($k=1,2$), where $\Gamma_k$ are real constants \cite{sih65}. Substituting the new expression for $G_{k}$ and applying the traction boundary condition at the crack surface, we get, \begin{equation} \begin{bmatrix} \Gamma_1 \\ \Gamma_2 \\ \end{bmatrix} = \mathbf{Re} \left \{ \frac{1}{\mu_1 \mu_2 (\mu_1 - \mu_2)} \begin{bmatrix} \mu_2 & \mu_2^2 \\ \mu_1 & \mu_1^2 \\ \end{bmatrix} \begin{bmatrix} -\bar {\sigma}^{o}_{11} \\ 0 \\ \end{bmatrix} \right\} \end{equation} The above expression along with the definition for the stresses (\ref{eq:4121}) suggests that both $\bar{\sigma}^{\prime}_{11}$ and $\bar {\sigma}^{\prime}_{22}$ are influenced by the traction condition. However, neither $\bar{\sigma}^{\prime}_{12}$ nor $\bar u^{\prime}_{1}$ along the crack face is affected, implying that the energy calculations and the corresponding values of the stress intensity factors remain unchanged. The total energy for the system is $\mathcal{E} = \xi + \Gamma$, where $\Gamma=4\gamma a $ corresponds to the surface energy and $\gamma$ is the surface tension. Following Griffith's argument \citep{grif21}, the crack will be in equilibrium when, \begin{eqnarray} && \frac{\mathrm{d} \mathcal{E}}{\mathrm{d} a} = 0 \Rightarrow \sigma^{o}_{11} = \sqrt{\dfrac{4\gamma E}{Q^{-1}_{11} a \pi}} \end{eqnarray} which relates the far field stress to the crack length and surface tension. Since $E$ is a linear function of the pre-crack strain which in turn is related to the far field stress, we have \begin{equation} \sigma^{o}_{11} (a)^{2/3} = \left[ \frac{2\gamma}{Q^{-1}_{11} \pi} \right]^{2/3} \left( \frac{G M \phi_{rcp}}{35}\right)^{1/3} \end{equation} A more useful relation is in terms of the capillary pressure, \begin{equation} \left (-\frac{P^o R}{2 \gamma} \right )\left (\frac{a}{R} \right) ^{2/3} =\left \{ \frac{3}{4} \left[ \frac{ 8}{35 Q^{-2}_{11} \pi^2} \right]^{1/3}\right \} \left( \frac{G M \phi_{rcp}R }{2 \gamma} \right)^{1/3} . \end{equation} Thus the dimensionless critical capillary pressure is related to the dimensionless crack length, \begin{equation} (-\tilde P^o)(\tilde a^{2/3}) = A W^{1/3} \label{eq:41634} \end{equation} where $W=G M \phi_{rcp} R/ 2 \gamma$ represents the balance of the elastic and surface energy and $A$ is equal to 0.45 and 0.35 for the short and long time limit, respectively. \section{Numerical solution} The stress and the displacement fields obtained in the previous section are applicable to regions close to the crack tip. For the full solution, the momentum balance equations (\ref{eq:49}) for the perturbed stresses were solved numerically for the control volume highlighted in Figure \ref{coord} using the finite element method (DIFFPACK$\textsuperscript{\textregistered}$). The boundary conditions are as follows, \begin{eqnarray*} \bar {\sigma}^{\prime}_{11} = & -\bar \sigma^o_{11}\; &\mathrm{for}\; \bar x_1 = 0, 0>\bar x_2> -\bar a, \\ \nonumber \bar u^{\prime}_1 = & 0\; &\mathrm{for}\; \bar x_1=0, -\bar a >\bar x_2>- 1, \\ \nonumber \bar u^{\prime}_2 = & 0\; &\mathrm{for}\;0< \bar x_1<1, \bar x_2=0, \\ \nonumber \bar {\sigma}^{\prime}_{22} = & 0 \; &\mathrm{for}\;0< \bar x_1<1, \bar x_2=-1,\;\mathrm{and} \\ \nonumber \bar {\sigma}^{\prime}_{11} = & 0\; & \mathrm{for}\; \bar x_1 = 1, 0>\bar x_2> -1 \nonumber \end{eqnarray*} where $\bar a \ll 1$ so that the stress and the strain fields close to the cracks are not influenced by the size of the control volume.
Rectangular elements were used with adaptive refinement of the grid near the crack tip. The total number of nodes in the control volume was about 50,000. \section{Results and Discussion} \subsection{Numerical Simulation} The perturbed stress, strain and displacements obtained from the numerical simulations are presented in Figures \ref{grid}-\ref{pru2}. Unless specified otherwise, all results pertain to the short time limit. Figure \ref{grid}(a) and (b) present the initial and deformed grid, respectively. The surface displacements in Figure \ref{grid}(b) have been scaled so as to highlight their magnitude. \begin{figure}[t] \begin{center} \includegraphics[scale=0.28]{./fig4.pdf} \caption {(a) The basic grid before deformation. (b) The scaled surface deformation of the control volume for the short time limit. All displacements have been scaled with one-tenth the maximum displacement ($u^{\prime}_1(0,0)$).} \label{grid} \end{center} \end{figure} As expected, the displacement at the center of the crack is maximum with the tip of the crack moving upwards. Since the perturbed stresses are zero at the control volume boundaries, the effect of the surface displacement at the crack faces can be observed at the boundaries. The equilibrium shape of the crack surface is elliptical and the ratio of the length of the minor to major axis is very small ($\sim10^{-3}$), both of which are in agreement with the asymptotic solution. \begin{figure}[t] \begin{center} \includegraphics[scale=0.28]{./fig5.pdf} \caption {Variation of perturbed stresses for nondimensional half crack length, $\bar a = 0.3$ and $\epsilon^{o} = 7.8\times10^{-4}$ : (a) Contour plot of $\bar{\sigma}^{\prime}_{11}$, (b) Gray scale plot of $\bar{\sigma}^{\prime}_{11}$ close to crack tip, (c) Contour plot of $\bar{\sigma}^{\prime}_{22}$, and (d) Gray scale plot of $\bar{\sigma}^{\prime}_{22}$ close to crack tip.} \label{sigma2233} \end{center} \end{figure} \begin{figure}[t] \begin{center} \includegraphics[scale=0.28]{./fig6.pdf} \caption {Variation of perturbed pressure in the short time limit for $\bar a = 0.3$ : (a) Contour plot of $\bar P^{\prime}$, (b) Gray scale plot of $\bar P^{\prime}$ close to crack tip. Variation of the particle concentration in the long time limit: (c) Contour plot of $\phi$, and (d) Gray scale plot of $\phi$ close to crack tip.} \label{pru2} \end{center} \end{figure} Figure \ref{sigma2233} presents the simulated values of $\bar {\sigma}^{\prime}_{11}$ and $\bar {\sigma}^{\prime}_{22}$ for the control volume. The contour plots (Figure \ref{sigma2233}(a) and (c)) and the gray scale plots of the region close to the crack tip (Figure \ref{sigma2233}(b) and (d)) demonstrate the sharp decrease in stress with increasing distance from the crack tip. The contours are perpendicular to the symmetry surfaces ($\bar x_1 = 0$, $-1<\bar x_2<-0.3$ and $\bar x_2 = 0$, $0<\bar x_1<1$) as expected from the boundary conditions while they decay to zero at $\bar x_1=1$ and $\bar x_2 =-1$. Figure \ref{pru2}(a) and (b) present the perturbed pressure for the short time limit. Interestingly, the pressure is negative close to the tip, suggesting that the solvent will flow towards the tip once the crack nucleates. This is borne out in the simulations for the long time limit (Figure \ref{pru2}(c) and (d)) where the particle concentration has reduced at the tip.
Note that although the particle concentration should be equal to or greater than the close-packed concentration at all times, values of $\phi<\phi_{rcp}$ appear near the crack tip in the long time limit because no such constraint has been imposed in the simulation; these values are not physical. Instead, extra solvent could accumulate at the crack tip between the crack faces. \subsection{Comparison with Asymptotic Solution} Figure \ref{short_t_u} compares the spatial variation of the perturbed displacement, $\bar u^{\prime}_1$, along the crack face obtained from the simulation with that predicted by the asymptotic solution. At the crack tip, $\bar u^{\prime}_1=0$ while for $10^{-4}<\bar x_1< 10^{-2}$, the displacement varies as the square root of the distance from the tip. The disagreement close to and far away from the crack tip is attributed to the limitation on grid refinement in the case of the numerical solution close to the tip (which is unable to capture the large variations in the stress) and to the non-applicability of the asymptotic solution far away from the crack tip. \begin{figure}[t] \begin{center} \includegraphics[scale=0.35]{./fig7.pdf} \caption {Displacement $\bar u^{\prime}_1$ for $\bar a=0.3$ along the crack face.} \label{short_t_u} \end{center} \end{figure} Figure \ref{short_t_sigma22} presents the spatial variation of $\bar{\sigma}^{\prime}_{11} $ along $\bar x_1$ away from the crack tip for $\bar a = 0.3$. As expected, the stress diverges as $\bar x_1^{-\frac{1}{2}}$ close to the crack tip and the prediction matches well with the numerical solution for $10^{-4}<\bar x_1< 10^{-2}$. Far away from the crack tip, the perturbed stresses vanish. \begin{figure}[htp] \begin{center} \includegraphics[scale=0.35]{./fig8.pdf} \caption {Perturbed stress $\bar {\sigma}^{\prime}_{11}$ along $\bar{x} _1$ ($\bar x_2 = -\bar a$) for $\bar a=0.3$.} \label{short_t_sigma22} \end{center} \end{figure} The angular distribution of stresses obtained from the asymptotic solution agrees with that from the numerical solution in Figure \ref{sigma_theta_short} at $|\bar r - \bar a| = 0.012$. The distribution is somewhat similar to that obtained for the isotropic cases \citep{lawn90}. \begin{figure}[htp] \begin{center} \includegraphics[scale=0.32]{./fig9.pdf} \end{center} \caption {Angular variation of perturbed stresses for short time at $|\bar r - \bar a| = 0.012$ for $\bar a =0.1$.} \label{sigma_theta_short} \end{figure} Figure \ref{short_t_sigma} compares the angular distribution of the stresses at various radial distances from the crack tip, both in the short and the long time limits. The magnitude of the stresses at a given location in the long time limit is lower than in the short time limit. This decrease may be attributed to the flow of the solvent that relieves any pressure variation that develops in the short time limit. Compared to the short time limit, the angular variation of the stress in the long time limit shows larger deviations from the isotropic case, as also suggested by the values of $\delta_i$ in (\ref{eq:4103}). \begin{figure}[htp] \begin{center} \includegraphics[scale=0.37]{./fig10.pdf} \caption {Angular variation of perturbed stresses at different radii ($\bar R = \bar r - \bar a $), (a)--(c) in the short time limit, and (d)--(f) in the long time limit for $\bar a=0.3$.} \label{short_t_sigma} \end{center} \end{figure} Griffith's criterion (\ref{eq:41634}) shows that the pressure required to open a crack increases with decreasing crack size.
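To put numbers to this (an illustration of ours; the material parameters are assumptions, not values from the paper), the criterion (\ref{eq:41634}) and the maximum flaw size quoted below can be evaluated for a typical hard-particle dispersion:
\begin{verbatim}
# Illustration of eq. 41634 and of the largest crack-free flaw size.
# The material parameters below are assumptions chosen for illustration.
A_SHORT, A_LONG = 0.45, 0.35   # prefactors for the two limits
P_MAX = 5.3                    # maximum dimensionless capillary pressure

def critical_pressure(a_over_R, W, A=A_SHORT):
    """Dimensionless pressure -P needed to open a flaw of size a/R."""
    return A * W**(1.0/3.0) / a_over_R**(2.0/3.0)

def a_max(W, A=A_SHORT):
    """Largest flaw (in particle radii) surviving the peak pressure."""
    return (A / P_MAX)**1.5 * W**0.5   # ~0.025*sqrt(W) for A = 0.45

# Assumed values: G = 10 GPa, M = 6, phi_rcp = 0.64, R = 100 nm,
# gamma = 0.072 N/m (water).
W = 10e9 * 6 * 0.64 * 100e-9 / (2 * 0.072)
print(W)                         # ~2.7e4
print(a_max(W))                  # ~4, i.e. flaws a few radii wide survive
print(critical_pressure(10, W))  # ~2.9 for a flaw ten radii long
\end{verbatim}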
Since the maximum dimensionless capillary pressure is about $ 5.3 $ \cite{mason95}, the largest allowable flaw which will not crack the sample in the short time limit is, $\tilde a_{\mathrm{max}} = 0.025 \sqrt{W}$, which suggests that packings containing particles of larger size and/or higher shear moduli can resist cracking more effectively. Recently, Tirumkudulu and Russel\cite{mahesh05} have derived the expression for the critical capillary stress to drive an infinite crack through a drying colloidal thin film bound to a substrate, $(-\tilde P^o_{\infty}) (\tilde h^{2/3}) = 0.23 W^{1/3}$. Comparing the critical capillary pressure for the two cases suggests that when $\tilde a \ll \tilde h$, a significantly larger capillary pressure is required to expand a finite flaw in the film compared to that required to drive an infinite crack, \[ \frac{\tilde P^o}{\tilde P^o_{\infty}}\sim \left( \frac{\tilde h }{\tilde a}\right )^{\frac{2}{3}}. \] These results are in line with the recent theoretical results obtained by Russel et al.\cite{russel08a} using the more accurate constitutive relation and experiments measuring the critical capillary pressure for various particle packings\citep{russel08b}. The energy release rate, $\mathcal{G} = 2J$, is related to the stress intensity factor through the standard relation, $\mathcal{G} = \frac{K^2}{E_{\mathrm{eff}}}$ where the effective elastic modulus for the packing, \begin{equation} E_{\mathrm{eff}} = \frac{E}{Q^{-1}_{11}}\;, \end{equation} accounts for the particle size and packing, and also for the anisotropy resulting from the nucleation of the crack. It is important to note the limitations of the analysis presented here. The boundary condition ${\sigma}^{\prime}_{11} = - \sigma^o_{11}$ at the surface of the crack implies perturbed strains of the order $\epsilon^{o}$ close to the crack surface, which is inconsistent with the linearization in (\ref{eq:4501}). The same applies to the diverging perturbed strains at the crack tip as predicted by the linear analysis. The extent of errors introduced by such approximations can be accurately determined only by solving numerically the full non-linear momentum balance equations. However, the recent experimental evidence of diverging stresses close to the tip of a crack in a colloidal packing supports the overall trend predicted by the linear analysis. Finally, the analysis presented here is general, in that the results relating to the asymptotic forms of the stress and displacement components, and the related expression for the energy release rate, can easily be obtained for any other constitutive equation for a saturated packed bed once the stiffness matrix for the linearized equation is known. \section{Conclusions} We present the asymptotic analysis of the deformation field near a crack tip for a mode I crack in a two dimensional colloidal packing saturated with solvent. The stress and strain fields are linearized about the pre-crack state to yield the stress intensity factor for the two dimensional elastic field, which is then related to the surface energy using the well known Griffith's criterion for equilibrium cracks. The calculated quantities are then compared with the numerical solution for the full problem. The main findings can be summarized as follows: \begin{itemize} \item Perturbation in the displacement and stress field due to the presence of the crack introduces anisotropy in the material which can be quantified by (\ref{eq:4103}).
\item The stress and displacement fields close to the crack tip are given by (\ref{eq:41301}) where the expression of $\mathbf{B}$ is obtained from the components of the stiffness matrix, $\mathbf{C}$. \item The critical pressure required to open a flaw of length $2a$ varies inversely with the crack length to the two thirds' power, \[ -P^o = A \left( {GM\phi_{rcp}}\right)^{1/3} \left(\frac{2\gamma}{a}\right)^{2/3},\] where $A$ is equal to 0.45 and 0.35 for the short and long time limit, respectively. It is independent of the particle size. \item The maximum flaw size that can resist cracking and result in a crack-free packing is set by the maximum possible capillary pressure, \[a_{\mathrm{max}} = \left( \frac{A}{5.3} \right)^{3/2} \left( \frac{GM\phi_{rcp} R^3}{2 \gamma}\right)^{1/2}. \] Colloidal beds containing large particles with high shear modulus are less susceptible to cracking. \item When $a \ll h$, the critical capillary pressure required to expand a flaw is much larger than that required to drive an infinite crack in a film of thickness $h$, \[ \frac{P^o}{ P^o_{\infty}}\sim \left( \frac{ h }{ a}\right )^{\frac{2}{3}}. \] \end{itemize} \begin{acknowledgments} The research was financially supported in part by the Department of Science and Technology, India (Project \#07DS032). A.\ S.\ acknowledges IIT Bombay's support for teaching assistantship. \end{acknowledgments}
\section*{Introduction and outline} Higher analogues of Reidemeister torsion and Ray-Singer analytic torsion were developed by J. Wagoner, J.R. Klein, M. Bismut, J. Lott, W. Dwyer, M. Weiss, E.B. Williams, W. Dorabiala, B. Badzioch, the authors of this paper and many others. (\cite{Wagoner:higher-torsion}, \cite{Klein:thesis}, \cite{IK1:Borel2}, \cite{Bismut-Lott95}, \cite{DWW}, \cite{BG2}, \cite{Goette01}, \cite{Goette03},\cite{Goette08}, \cite{I:BookOne}, \cite{BDW09}, \cite{BDKW}). There are three different definitions of the higher torsion due to Igusa-Klein \cite{IK1:Borel2}, \cite{I:BookOne}, Dwyer-Weiss-Williams \cite{DWW} and Bismut-Lott \cite{Bismut-Lott95}, \cite{BG2} which are now known to be related in a precise way \cite{BDKW}, \cite{Goette08}, \cite{I:Axioms0}. In this paper we use the Igusa-Klein (\text{\rm IK}) torsion as formulated axiomatically in \cite{I:Axioms0} (See the review of higher torsion and its basic properties in Section \ref{subsecA13}.) The results can be translated into results for the other higher torsion invariants using the formulas relating the Dwyer-Weiss-Williams (\text{\rm DWW}) smooth torsion, the nonequivariant Bismut-Lott (\text{\rm BL}) analytic torsion and the \text{\rm IK}-torsion. (See \cite{BDKW}, \cite{Goette08}.) Higher Reidemeister torsion invariants are cohomology classes in the base of certain smooth manifold bundles which can sometimes be used to distinguish between different smooth structures on the same topological manifold bundle. The main purpose of this work is to determine which cohomology classes occur as higher Reidemeister torsion invariants of exotic smooth structures on the same topological manifold bundle. We also determine to what extent the higher torsion distinguishes between different smooth structures on the same bundle. Since the higher torsion is a sequence of real cohomology classes which are ``stable'', it can only detect the torsion-free part of the group of stable smooth structures on topological bundles. Following Dwyer, Weiss and Williams we eschew classical smoothing theory by assuming that we are given a fixed linearization (vector bundle structure) on the vertical tangent microbundle of a topological manifold bundle. We also assume that there exists at least one smoothing. With these points in mind, we give a complete answer to these two questions in the relative case. Also, in the process, we give an explicit construction of ``virtually all'' exotic smooth structures on smooth manifold bundles with closed fibers of sufficiently large odd dimension. \subsection{Statement of results} Suppose that $p:M\to B$ is a smooth bundle with fiber $X$. This means that $X,M,B$ are compact smooth manifolds and $B$ is covered by open sets $U$ so that $p^{-1}(U)$ is diffeomorphic to $U\times X$. We always assume that $B,X$ and $M$ are oriented since we need to use Poincar\'e duality. For the purpose of this introduction, we also assume that $X,B$ and $M$ are closed manifolds although we also need to consider disk bundles over $M$. Let $T^\vertical\! M$ be the \emph{vertical tangent bundle} of $M$, i.e. the kernel of the map of tangent bundles $Tp:TM\to TB$ induced by $p$. By an \emph{exotic smooth structure} on $M$ we mean another smooth bundle $M'\to B$ together with a fiberwise tangential homeomorphism $f:M\cong M'$. This in turn means that $f$ is a homeomorphism over $B$ and that $f$ is covered by an isomorphism of vector bundles $T^\vertical\! f:T^\vertical\! M\cong T^\vertical\!
M'$ which is compatible with the topological derivative of the homeomorphism $f$. (See \cite{Second}, subsection 1.3.3.) There are two invariants that we can associate to an exotic smooth structure $M'$ on $M$. One is the higher relative \text{\rm IK}-torsion invariant (Section \ref{subsecA13}) \[ \tau^\text{\rm IK}(M',M)\in \bigoplus_{k>0} H^{4k}(B;\ensuremath{{\field{R}}}) \] and the other is the \emph{relative smooth structure class} $ \Theta(M',M)\in H_{\ast}(M;\ensuremath{{\field{R}}}) $ which is a more complete invariant given by the following theorem which is a reinterpretation of the results of \cite{DWW} as explained in \cite{Second} and \cite{WilliamsNotes06}. \begin{thm}\cite{Second}\label{main result of GIW} Let $\stdone{B}(M)$ be the direct limit over all linear disk bundles $D(M)$ over $M$ of the space of all exotic smooth structures on $D(M)$. Then $\pi_0\stdone{B}(M)$ is a finitely generated abelian group and \[ \pi_0\stdone{B}(M)\otimes\ensuremath{{\field{R}}} \cong \bigoplus_{k>0} H_{\dim B-4k}(M;\ensuremath{{\field{R}}}) \] \end{thm} In particular, any exotic smooth structure $M'\to B$ on $M$ gives an element of $\stdone{B}(M)$ and the corresponding element in the homology of $M$ is called the \emph{smooth structure class} of $M'$ relative to $M$ and will be denoted \[ \Theta(M',M)\in H_{q-4\bullet}(M):= \bigoplus_{k>0} H_{\dim B-4k}(M;\ensuremath{{\field{R}}}) \] We also use the indicated shortcut. The spot $\bullet$ will denote direct sum over $k>0$ as indicated. Coefficients will be in $\ensuremath{{\field{R}}}$ unless otherwise stated. We also always denote the dimension of $B$ by $q$. The first main theorem of this paper is the following formula relating these invariants. \begin{thm}[Theorem \ref{second main theorem}, Corollary \ref{second main theorem: even case}]\label{intro: key result} \[ D\tau^\text{\rm IK}(M',M)=p_\ast\Theta(M',M)\in H_{q-4\bullet}(B) \] where $D:H^{4\bullet}(B)\cong H_{q-4\bullet}(B)$ is Poincar\'e duality and\[ p_\ast:H_{q-4\bullet}(M)\to H_{q-4\bullet}(B) \] is the map in homology induced by $p:M\to B$. \end{thm} This theorem can be interpreted to mean that, up to elements of finite order, differences in stable smooth structure which are not detected by higher torsion invariants are classified by homology classes in the kernel of the mapping $p_\ast$. Combining these theorems, we obtain the answer in the stable case to the question which motivated this project, namely which cohomology classes occur as higher torsion invariants: The union of all higher \text{\rm IK}-torsion invariants of exotic smooth structures on all linear disk bundles $D(M)$ over $M$ spans the Poincar\'e dual of the image of $p_\ast$. The answer in the unstable case is given by the following result which is a reformulation of Corollary \ref{third main theorem}. \begin{thm} Let $p:M\to B$ be a smooth manifold bundle whose base $B$, fiber $X$ and total space $M$ are closed oriented smooth manifolds. Suppose that $\dim X$ is odd and at least $2\dim B+3$. Let $\beta\in H^{4\bullet}(B)$ be a real cohomology class whose Poincar\'e dual is the image of an integral homology class in $M$. Then there exists another smooth bundle $p':M'\to B$ which is fiberwise tangentially homeomorphic to $p$ so that the relative torsion $\tau^\text{\rm IK}(M',M)$ is a nonzero multiple of $\beta$. \end{thm} The construction which produces these exotic smooth structures is a variation of the classical construction of Hatcher.
We call our version of this construction the ``Arc de Triomphe'' (AdT) construction due to its appearance (Figure \ref{AdT figure}). The theorem above is therefore a consequence of Theorem \ref{intro: key result} and the following theorem.

\begin{thm}[AdT Theorem \ref{AdT lemma}]\label{intro: AdT Thm} When $\dim X\ge 2\dim B+3$ is odd, the relative smooth structure classes of the smooth bundles $M'\to B$ given by the AdT construction span the vector space $H_{q-4\bullet}(M)$.
\end{thm}

\subsection{Outline of the proofs}

{ The proofs of the results outlined above are interrelated and revolve around the proof of the key result, Theorem \ref{intro: key result}, which can be restated as follows. We consider two homology invariants for stable exotic smooth structures on smooth bundles $M\to B$. One is the Poincar\'e dual of the higher $\text{\rm IK}$-torsion and the other is the image of the smooth structure class $\Theta_M(M')$ in the homology of the base. Our theorem is that these invariants are equal.

To prove this we note that these invariants are homomorphisms with a common domain, namely the group of isomorphism classes of stable exotic smooth structures on $M$, and common target, namely the direct sum of the real homology groups of $B$ in degrees $q-4k$ where $q=\dim B$. To prove that these homomorphisms are equal, we construct examples of exotic smooth structures for which we can calculate both invariants and show that the invariants agree. Then we show that our examples span the domain, tensored with $\ensuremath{{\field{R}}}$. By linearity, the invariants must agree everywhere.

The examples come from the theory of generalized Morse functions. We start with the main result of \cite{I:GMF}, which we reformulate to say that the singular sets of fiberwise generalized Morse functions on $M$ produce a spanning set in the homology of $M$ in the correct degrees. These singular sets are examples of ``stratified subsets'' of $M$ with coefficients in $BO$. We can replace $BO$ with $G/O$ since they are rationally homotopy equivalent. Then we use the Arc de Triomphe construction which converts stratified subsets of $M$ with coefficients in $G/O$ into exotic smooth structures on $M$. Next we convert the Arc de Triomphe construction into an equivalent ``immersed Hatcher handle'' construction in order to compute its $\text{\rm IK}$-torsion. The immersed Hatcher construction has the property that it is supported on an embedded disk bundle $E\subseteq M$. By the functorial properties of the smooth structure invariant $\Theta$ proved in \cite{Second}, the corresponding formula for higher torsion invariants proved in Theorem \ref{torsion of immersed Hatcher} below and the fact that the two invariants $p_\ast\circ\Theta$ and $D\circ\tau^\text{\rm IK}$ agree on disk bundles, also proved in \cite{Second}, we conclude that they also agree on the immersed Hatcher construction. Therefore, these two invariants agree on the Arc de Triomphe construction and we only need to show that there are enough of these examples to generate a subgroup of the domain of finite index.

To prove this last statement we use Theorem \ref{main result of GIW} above, which is derived from the \text{\rm DWW}-version of smoothing theory and proved in \cite{Second}. The latter result says that the group of isomorphism classes of stable exotic smooth structures on $M$, when tensored with $\ensuremath{{\field{R}}}$, is isomorphic to the direct sum of the real homology groups of the total space in degrees $\dim B-4k$.
But these are exactly the homology groups spanned by elements coming from singular sets of fiberwise GMF's. This proves simultaneously the key result $D\circ\tau^\text{\rm IK}=p_\ast\circ\Theta$ and the AdT Theorem \ref{intro: AdT Thm}. The other theorems follow from these. }

\subsubsection{Functorial properties of exotic smooth structures}

The key property that $D\circ \tau^\text{\rm IK}=p_\ast\circ\Theta$ is known to hold for disk bundles and we use the functorial properties of these homology invariants to conclude that it holds for all smooth bundles. The functorial property of $\Theta$, as proved in \cite{Second}, is as follows (in the special case that $\partial B$ is empty). Suppose that $L$ is a compact smooth $q$-manifold with boundary where $q=\dim B$ and $\lambda:L\to B$ is an immersion. Choose a lifting of $\lambda$ to an embedding $\tilde\lambda:L\to M$. Then a fiberwise neighborhood of the image of $L$ in $M$ is a smooth disk bundle $\pi:E\to L$ with fiber $D^N$ where $N=\dim X$ is the dimension of the fiber of $p:M\to B$. The inclusion map $D(\tilde\lambda):E\to M$ is a smooth embedding over $\lambda$ in the sense that the following diagram commutes.
\[
\xymatrix{
E\ar[d]_\pi\ar[r]^{D(\tilde\lambda)} & M\ar[d]^p\\
L \ar[r]^\lambda& B }
\]
The following naturality statement for the smooth structure class $\Theta$ is proved in \cite{Second}, Corollary 2.4.3.

\begin{thm}[stratified deformation theorem] The following diagram commutes, where the vertical arrows are induced by the embedding $D(\tilde\lambda):E\to M$ and the immersion $\lambda:L\to B$.
\[
\xymatrix{
\pi_0\std{L}{\partial L}(E)\ar[d]^{D(\tilde\lambda)_\ast}\ar[r]^(.4)\Theta & H_{q-4\bullet}(E)\ar[d]^{D(\tilde\lambda)_\ast}\ar[r]^{p_\ast}_\cong & H_{q-4\bullet}(L)\ar[d]^{\lambda_\ast}\\
\pi_0\stdone{B}(M) \ar[r]^(.4)\Theta & H_{q-4\bullet}(M)\ar[r]^{p_\ast}& H_{q-4\bullet}(B) }
\]
where $\std{L}{\partial L}(E)$ is the space of stable exotic smooth structures on $E$ which agree with the given smooth structure of $E$ over $\partial L$ and all homology groups have coefficients in $\ensuremath{{\field{R}}}$.
\end{thm}

\subsubsection{Hatcher handles and Arc de Triomphe}

The AdT construction and the immersed Hatcher handle construction (Section 2) are generalizations of a classical construction of Hatcher (reviewed in Section 1) which produces exotic smooth structures on linear disk bundles. Just as a standard $n$-handle is given by attaching a disk bundle $D^n(\xi)\oplus D^m(\eta)$ to the top $M\times 1$ of $M\times I$ along a fiberwise embedding $S^{n-1}(\xi)\oplus D^m(\eta)\to M$, Hatcher handles are given by attaching thickenings of Hatcher's disk bundle to the top $M\times 1$ of the product $M\times I$ along certain attaching maps given by embeddings $\tilde\lambda:L\to M$ which lie over codimension $0$ immersions $\lambda:L\to B$. We call this the \emph{immersed Hatcher construction}. The reason that $\lambda$ needs to be an immersion is because, following the proofs of Lemmas \ref{first version of AdT Lemma} and \ref{stratified deformation lemma}, we see that $L$ is constructed from a fiberwise generalized Morse function on $M$ and each Morse critical point of $f_b:M_b\to\ensuremath{{\field{R}}}$ of even index gives an element of $L$ mapping to $b$. Thus, we cannot expect $\lambda$ to be an embedding. For fixed $n$ and $m$ there are two kinds of Hatcher handles which we call ``negative'' and ``positive'' Hatcher handles.
The attaching map for the negative Hatcher handle $A^{n,m}(\xi,\eta)$ can be deformed to be on top of the positive Hatcher handle $B^{n,m}(\xi,\eta)$ in such a way that they cancel, as shown in Figure \ref{AdT figure}. We call this the \emph{Arc de Triomphe} (AdT) construction. This construction has as input data a ``stratified subset'' of $M$. This is a pair $(\Sigma,\psi)$ where $\Sigma$ is a smooth oriented $q$-manifold embedded in $M$ with the property that the projection $\Sigma\to B$ has only fold singularities (Definition \ref{stratified set}). The mapping $\psi:\Sigma\to G/O$ gives the data for positive and negative Hatcher handles to be attached along $\Sigma_+,\Sigma_-$, which are the closures of the subsets of $\Sigma$ along which the projection $p:\Sigma\to B$ is orientation preserving or reversing, respectively, and these handles are cancelled using the AdT construction along the fold set $\Sigma_0=\Sigma_-\cap \Sigma_+$. We denote by $\sdgo{B}{\partial_0}(M)$ the group of deformation classes of stratified subsets $(\Sigma,\psi)$ of $M$. The Arc de Triomphe construction thus gives a map
\[
AdT:\sdgo{B}{\partial_0}(M)\to \pi_0\std{B}{\partial_0}(M)
\]
which we show to be additive in Proposition \ref{sd is a group}. One of the key results (Lemma \ref{first version of AdT Lemma}) is that this map is rationally surjective, i.e., its cokernel is finite. To prove this we use the computation of the homotopy type of the space of generalized Morse functions \cite{I:GMF}, which implies that the singular sets $\Sigma(f)$ of fiberwise generalized Morse functions $f:M\to I$, together with suitable multiples $\xi^n$ of their vector bundle data $\xi$ given by the second derivative of $f$, give elements of $\sdgo{B}{\partial_0}(M)$ which map onto a spanning subset of the real homology group
\[
H_{q-4\bullet}(M,M_{\partial_1B})\cong \pi_0\std{B}{\partial_0B}(M)\otimes \ensuremath{{\field{R}}}
\]
In the following diagram, this is expressed by saying that the curved mapping $(-1)^n2D\widetilde{ch}$ from $\ovsdgo{B}{\partial_0B}(M)$ to $H_{q-4\bullet}(M,M_{\partial_1B})$ maps onto a spanning subset, where $D\widetilde{ch}$ is the map (Subsection \ref{ss312}) which sends $(\Sigma,\psi)\in \ovsdgo{B}{\partial_0B}(M)$ to the image in $H_{q-4\bullet}(M)$ of the Poincar\'e dual of the normalized Chern character (Def. \ref{def: normalized Chern character}) of the bundle over $\Sigma$ associated to $\psi$:
\[
\widetilde{ch}(\Sigma,\psi)=\sum_{k>0}(-1)^k\zeta(2k+1)\tfrac12 ch_{4k}(\psi\otimes\ensuremath{{\field{C}}})\in H^{4\bullet}(\Sigma;\ensuremath{{\field{R}}})
\]
where $\zeta(s)=\sum\frac1{n^s}$ is the Riemann zeta function.
\[
\xymatrix{
\sdgo{B}{\partial_0}(M)\ar[rr]_{{AdT}} \ar@/_2pc/[rrr]_{(-1)^n2D\widetilde{ch}} && \pi_0\std{B}{\partial_0}(M)\ar[r]_(.4){\Theta} & H_{q-4\bullet}(M,M_{\partial_1B}) }
\]
We know from \cite{Second} that $\Theta$ is an isomorphism. So, it suffices to show that $\Theta\circ AdT=(-1)^n2D\widetilde{ch}$, i.e., that this diagram commutes. This is the statement of Lemma \ref{Th AdT=2ch}. Finally, we come to the stratified deformation lemma \ref{stratified deformation lemma}, which is used to prove that every AdT construction can be deformed into an immersed Hatcher construction. This crucial lemma allows us to compute the higher \text{\rm IK}-torsion invariant and show that $D\circ\tau^\text{\rm IK}$ agrees with the other invariant $p_\ast\circ \Theta$ on the same collection of exotic smooth structures as those given by the AdT construction.
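In the lowest degree, the normalized Chern character appearing in this formula is, up to a zeta value, a Pontryagin class. For instance, for any real vector bundle $\xi$, the $k=1$ term unpacks (using the standard identity $ch_4(\xi\otimes\ensuremath{{\field{C}}})=p_1(\xi)$ in real cohomology) to
\[
\widetilde{ch}_4(\xi)=-\zeta(3)\tfrac12\,ch_4(\xi\otimes\ensuremath{{\field{C}}})=-\tfrac12\zeta(3)\,p_1(\xi).
\]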
The main theorems then follow as we have outlined above. }

\subsubsection{Stratified subsets}

The Arc de Triomphe construction uses ``stratified subsets'' of the bundle $M$ with coefficients in $G/O$. In the case when $M$ is a closed manifold, these are defined to be closed oriented $q$-submanifolds $\Sigma\subseteq M$, where $q=\dim B$, so that the restriction of the projection map $p:M\to B$ to $\Sigma$ has only \emph{fold singularities}. These are points at which $p$ is given, in local coordinates, by
\[
p(x_1,\cdots,x_q)=(x_1^2,x_2,x_3,\cdots,x_q)
\]
Then $\Sigma$ becomes the union of two submanifolds $\Sigma_+$ and $\Sigma_-$ where $\Sigma_+$, resp. $\Sigma_-$, is the closure of the set of all points at which $p|\Sigma$ is nonsingular and orientation preserving, resp. orientation reversing, and the fold set is $\Sigma_0=\Sigma_+\cap \Sigma_-$. The coefficients are given by a continuous mapping $\Sigma\to G/O$.

A \emph{fiberwise generalized Morse function} (GMF) on $M$ is a smooth map $f:M\to\ensuremath{{\field{R}}}$ with the property that $f$ has only Morse and birth-death singularities on the fibers. This is equivalent to saying that the vertical derivative of $f$, as a section of $T^\vertical\! M$, is transverse to the zero section and that the singular set is a stratified subset of $M$. Then $\Sigma_+$, resp. $\Sigma_-$, is the closure of the union of all Morse critical points of $f$ of even, resp. odd, index and $\Sigma_0$ is the set of degenerate critical points of $f$. The coefficient map $\Sigma\to BO$ is given by the Hessian of $f$ at each critical point. (See \cite{I:GMF}.) Using the fact that $G/O$ is rationally equivalent to $BO$, we can lift a nonzero multiple of the coefficient map to $G/O$. (Take the direct sum of the corresponding vector bundle over $\Sigma$ with itself several times.) The main theorem of \cite{I:GMF} is that the space of GMFs on a single manifold $X$ has the $\dim X$-homotopy type of $Q(BO\wedge X_+)$. Thus a fiberwise GMF is equivalent to a section of a bundle with that fiber, and a standard homotopy theoretic argument proved in Corollary 2.2.2 of \cite{Second} implies that the corresponding stratified sets with coefficients lifted to $G/O$ will represent (a multiple of) any element of $H_{q-4\bullet}(M)$. The AdT construction produces the exotic smooth structure corresponding to this homology class using this stratified set.

\subsection{Comparison to \text{\rm DWW}-torsion}

Our key result (Theorem \ref{intro: key result} above) can be interpreted as saying that the relative higher \text{\rm IK}-torsion is equal to the relative higher \text{\rm DWW}-torsion if we define the latter to be the Poincar\'e dual of the image of the relative smooth structure class in the homology of the base. This proposed definition agrees with the following recent theorem of Badzioch, Dorabiala, Klein and Williams \cite{BDKW}, but the two results do not imply each other, even if the definitions were known to agree, since the absolute higher torsion (\text{\rm DWW}\ or \text{\rm IK}) is not always defined.

\begin{thm}[Badzioch, Dorabiala, Klein and Williams]\label{Thm of BDKW} Suppose that $M\to B$ is a smooth unipotent bundle (Definition \ref{defn:unipotent}). Then, for all $k>0$, the degree $4k$ smooth \text{\rm DWW}-torsion invariant of $M$ is proportional to the \text{\rm IK}-torsion:
\[
\tau_{2k}^{\text{\rm DWW}}(M)=\lambda_k\tau_{2k}^\text{\rm IK}(M)\in H^{4k}(B;\ensuremath{{\field{R}}})
\]
for some nonzero real number $\lambda_k$ depending only on $k$.
\end{thm}

\begin{rem} Dwyer, Weiss and Williams originally defined their higher smooth torsion in the case where the action of $\pi_1 B$ on the homology of the fiber $X$ is trivial. This definition was later extended by Badzioch, Dorabiala and Williams \cite{BDW09} to the unipotent case where $H_\ast(X;\ensuremath{{\field{Q}}})$ has a filtration by $\pi_1B$-submodules so that the associated graded module has a trivial action of $\pi_1B$. In \cite{BDKW} Badzioch, Dorabiala, Klein and Williams show that this extended theory satisfies the axioms for higher torsion given in \cite{I:Axioms0}. Since these axioms characterize exotic higher torsion invariants up to a scalar multiple, the formula $\tau_{2k}^\text{\rm DWW}=\lambda_k\tau_{2k}^\text{\rm IK}$ above holds for all smooth unipotent bundles.

The relation between Theorem \ref{Thm of BDKW} and our second main theorem \ref{intro: key result} is very roughly as follows. Given two smooth structures $M,M'$ on the same unipotent bundle $M\to B$, the two difference torsion invariants are defined and equal to the difference between the absolute torsions of $M,M'$:
\[
\tau^\text{\rm DWW}(M',M)=\tau^\text{\rm DWW}(M')-\tau^\text{\rm DWW}(M)
\]
\[
\tau^{\text{\rm IK}}(M',M)=\tau^{\text{\rm IK}}(M')-\tau^{\text{\rm IK}}(M)
\]
This is proved in \cite{I:BookOne} for the case of $\tau^\text{\rm IK}$ and should be fairly straightforward to prove in the case of $\tau^\text{\rm DWW}$ if the relative \text{\rm DWW}-torsion is defined correctly. Therefore, Theorem \ref{Thm of BDKW} implies that these difference torsions are proportional in the case when the bundles are unipotent. Our Theorem \ref{intro: key result} could be interpreted as giving a conceptual definition of the higher \text{\rm DWW}-difference torsion in terms of \text{\rm DWW}-smoothing theory, showing that it is equal to the higher \text{\rm IK}-difference torsion defined using Morse theory. We believe that our version of the \text{\rm DWW}-difference torsion (defined as the Poincar\'e dual of the image of $\Theta(M',M)$ in $H_{q-4\bullet}(B)$) is equal to $\tau^\text{\rm DWW}(M')-\tau^\text{\rm DWW}(M)$. By our Theorem \ref{second main theorem}, this would be equivalent to showing that the proportionality constant in the theorem of Badzioch, Dorabiala, Klein and Williams is equal to 1:
\[
\lambda_k=1
\]
so that $\tau^\text{\rm DWW}=\tau^\text{\rm IK}$! However, we do not attempt to prove this here.
\end{rem}

When the fibers are closed even dimensional manifolds, the theorem above still holds by Corollary \ref{second main theorem: even case}. However, the relative higher torsion class $\tau^\text{\rm IK}(M',M)$ is equal to zero in that case:
\[
\tau^\text{\rm IK}(M',M)=\tau^\text{\rm IK}(M')-\tau^\text{\rm IK}(M)=0
\]
since $\tau^\text{\rm IK}(M)$ depends only on the vertical tangent bundle of $M$ over $B$ by \cite{I:Axioms0} and $M'$ has the same vertical tangent bundle as $M$ by definition of tangential homeomorphism. This leads to the following conjecture.

\begin{conj}[Rigidity conjecture] The stable smooth structure class vanishes when the fiber is a closed oriented even dimensional manifold:
\[
\Theta(M',M)=0
\]
In other words, rationally stably, there are no exotic smooth structures on manifold bundles with closed oriented even dimensional fibers.
\end{conj}

Theorem \ref{intro: key result} implies that $\Theta(M',M)$ must lie in the kernel of the map $p_\ast$ in the closed even dimensional fiber case since the higher relative torsion is zero in this case.
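In formulas: for closed oriented even dimensional fibers, Theorem \ref{intro: key result} by itself gives only the weaker containment
\[
\Theta(M',M)\in\ker\bigl(p_\ast:H_{q-4\bullet}(M)\to H_{q-4\bullet}(B)\bigr)
\]
while the conjecture asserts that this class vanishes altogether.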
The AdT construction shows that $M\times I$ admits exotic smooth structures if the fiber dimension of $M\times I\to B$ is sufficiently large and odd.

\subsection{Acknowledgments}

Research for this project was supported by the DFG special programme ``Global Differential Geometry'' and the National Science Foundation. An earlier version of this work was presented at the 2006 Arbeitsgemeinschaft at Oberwolfach on ``Higher Torsion Invariants in Differential Topology and Algebraic K-Theory.'' This was a very helpful and enjoyable meeting at which Bruce Williams gave us his famous notes on smoothing theory \cite{WilliamsNotes06}, which we have expanded into a joint paper \cite{Second} giving a thorough exposition of the background material used in this paper. The American Institute of Mathematics in Palo Alto helped us to finish this project by hosting a workshop on higher torsion in 2009. This was a very productive meeting for which the directors of AIM deserve a lot of credit for keeping us focused. The second author would also like to thank the organizers of the CMS meeting at Fredericton, New Brunswick in June, 2010 for the opportunity to present the first completed version of this paper. Finally, we would like to thank the referee for many helpful suggestions, both in exposition and content of this paper and the companion paper \cite{Second}.

\section{Hatcher's example}

Hatcher's famous construction gives smooth disk bundles over $S^{4k}$ which are homeomorphic but not diffeomorphic to $S^{4k}\times D^n$. The exact statement is given below.

\subsection{Homotopy theory}\label{subsecA11}

John Klein helped us to find the lowest dimension in which this part of the construction works. Suppose that $B$ is a compact smooth $q$-manifold with $q=\dim B$ and $\partial B=\partial_0B\cup \partial_1B$ as before. Let
\[
f:B/\partial_0B\to G/O
\]
be a continuous map, i.e., $f$ is a continuous mapping on $B$ which sends $\partial_0B$ to the basepoint of $G/O$, the fiber of $BO\to BG$. This classifies a stable vector bundle over $B$ which is trivial over $\partial_0B$ and trivial over $B$ as a spherical fibration. Take $n\ge q+1$. Then $BO_n\to BO$ is $(q+1)$-connected and therefore this stable vector bundle is given by a unique oriented $n$-plane bundle $\xi$ over $B$ which is trivial over $\partial_0B$. We will show that the sphere bundle $S^{n-1}(\xi)\to B$ of $\xi$ is fiber homotopically trivial. Since $G/O$ is simply connected, we may assume that $q\ge2$ and thus $n\ge3$.

\begin{rem}\label{rem:ch(xi) span H4k(B,d0)} Since $G/O$ is rationally homotopy equivalent to $BO$, the Chern characters of all real vector bundles $\xi$ obtained in this way will span the vector space
\[
\bigoplus_{0<k\le q/4}H^{4k}(B,\partial_0B;\ensuremath{{\field{R}}}).
\]
\end{rem}

Recall that $G_n$ is the topological monoid of all unpointed self-homotopy equivalences of $S^{n-1}$. Taking unreduced suspension we get a mapping $G_n\to F_n$ where $F_n\subset \Omega^nS^n$ is the union of the degree $\pm1$ components. It follows from a theorem of Haefliger \cite{Hf} that $(F_n,G_n)$ is $(2n-3)$-connected ($2n-3\ge n\ge q+1$). Furthermore, the components of $\Omega^nS^n$ are all homotopy equivalent and $\pi_kBG_n=\pi_{k-1}G_n\cong\pi_{k-1} F_n$ is stable and thus finite for $k<n$. (This also follows from the EHP sequence.) Therefore,
\[
[B/\partial_0B,BG_n]\cong[B/\partial_0B,BG]
\]
for $n>q$. So, the composition
\[
B/\partial_0B\xrightarrow{\xi}BO_n\to BG_n
\]
is null homotopic for $n>q$.
This implies that the sphere bundle $S^{n-1}(\xi)$ associated to $\xi$ is fiberwise homotopy equivalent to the trivial bundle:
\[
g:S^{n-1}(\xi)\simeq S^{n-1}\times B
\]
and this trivialization agrees with the given trivialization over $\partial_0B$. Take the fiberwise mapping cone of $g$. This gives a fibration over $B$ whose fibers are contractible $n$-dimensional cell complexes which are homeomorphic to the standard $n$-disk over $\partial_0B$. When we thicken this up we will get an exotic smooth structure on a trivial disk bundle over $B$.

\begin{rem} For any space $X$ recall \cite{Adams, HusemollerFB} that $J(X)$ is the group of stable vector bundles over $X$ modulo the equivalence relation that $\xi\sim\eta$ if the sphere bundles of $\xi$ and $\eta$ are fiberwise homotopy equivalent. The group operation is fiberwise join, which corresponds to direct sum of underlying bundles. If $\xi$ is any vector bundle over $X$ then $J(\xi)$ denotes its image in $J(X)$. If $X$ is a finite complex then it is well known that $J(X)$ is a finite group. (See, e.g. \cite{HusemollerFB}.) The above argument shows that if $J(\xi)$ is trivial in $J(B/\partial_0B)$ and $\dim \xi>\dim B$ then the sphere bundle of $\xi$ is fiberwise homotopically trivial.
\end{rem}

\subsection{Thickening}\label{subsecA12}

We have a family of finite cell complexes over $B$ which we want to thicken to get a manifold bundle. If we embed this fibration in $D^N\times B$ and take a ``regular neighborhood'' we will get a smooth $N$-disk bundle over $B$ which is homeomorphic but not diffeomorphic to $D^N\times B$.

We start by thickening the trivial sphere bundle $S^{n-1}\times B$ to get $S^{n-1}\times I\times D^{m}\times B$. This is the trivial bundle over $B$ with fiber $S^{n-1}\times I \times D^{m}$. We also need this to be embedded in a trivial disk bundle $D^{n}\times D^m\times B$ in a standard way. We will take the obvious embedding
\[
f:S^{n-1}\times I\times D^{m}\into D_2^n\times D^m
\]
given by $f(x,y,z)=\left((1+y)x,z\right)$ where $D^n_2$ is the $n$-disk of radius 2. Then the closure of the complement of the image of $f$ in $D^n_2\times D^m$ is $D^n\times D^m$. Note that $S^{n-1}\times0\times D^m$ is mapped into $S^{n-1}\times D^m$, the side of the ``donut hole''. We also need a fixed orientation preserving embedding $i:D^{n+m}\into S^{n-1}\times I\times D^m$ which we call the \emph{basepoint disk}. Assuming that $n\ge2$, $i$ is unique up to isotopy.
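Explicitly (a routine check of the formula for $f$): since $|(1+y)x|$ runs over $[1,2]$ as $y$ runs over $I$, the image of $f$ is the closed region between the spheres of radius $1$ and $2$,
\[
\operatorname{im}(f)=\{(u,z)\in D_2^n\times D^m : 1\le |u|\le 2\},
\]
so the closure of its complement is $D^n\times D^m$, the ``donut hole'' to be filled in below.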
We attach an $n$-handle $D^n(\xi)\oplus D^m(\eta)$ to this (with $\eta$ necessarily being a complementary bundle to $\xi$) to fill in the donut hole and create a smooth (after rounding corners) bundle over $B$ with fiber
\[
S^{n-1}\times I\times D^{m}\cup D^n\times D^m\cong D^{n+m}
\]
The data needed to attach such a handle embedded in $D_2^n\times D^m\times B$ is a smooth embedding of pairs
\[
D(j):(D^{n}(\xi),S^{n-1}(\xi))\oplus D^m(\eta)\to (D^{n},S^{n-1})\times D^m\times B
\]
\begin{figure}[htbp]
\begin{center}
{
\setlength{\unitlength}{1cm}
{\mbox{
\begin{picture}(7,2.5)
\put(0,-1.5){
\thicklines
\put(0,2){\line(1,0){1.5}}
\put(0,2){\line(0,1){1.5}}
\put(0,3.5){\line(1,0){1.5}}
\put(1.5,2){\line(0,1){1.5}}
\put(5,2){\line(1,0){1.5}}
\put(5,2){\line(0,1){1.5}}
\put(5,3.5){\line(1,0){1.5}}
\put(6.5,2){\line(0,1){1.5}}
\thinlines
\put(1.5,2){\line(1,0){3.5}}
\put(1.5,3.5){\line(1,0){3.5}}
\qbezier(1.5,2.4)(1.9,2.8)(3.25,2.4)
\qbezier(3.25,2.4)(4.6,2)(5,2.4)
\qbezier(1.5,3.1)(1.9,3.5)(3.25,3.1)
\qbezier(3.25,3.1)(4.6,2.7)(5,3.1)
\thicklines
\put(0,.35){
\qbezier(1.5,2.4)(1.9,2.8)(3.25,2.4)
\qbezier(3.25,2.4)(4.6,2)(5,2.4)
}
\thinlines
\put(5.8,2.75){\circle{.5}}
\qbezier(6,3)(6.7,3.6)(7.2,3.5)
\put(7.4,3.4){$i(D^{n+m})$}
%
\put(3.1,3.7){$D^{n}$}
\put(5.1,3.7){$S^{n-1}\times I$}
\put(6.6,2.7){$D^m$}
\put(2,1.4){$D(j)(D^n(\xi)\times 0)$}
\put(3.4,1.8){\line(0,1){0.9}}
\qbezier(3.4,2.7)(3.4,2.5)(3.3,2.5)
\qbezier(3.4,2.7)(3.4,2.5)(3.5,2.5)
}
\end{picture}}
}}
\end{center}
\end{figure}
This embedding $D(j)$ is essentially given by $j$, its restriction to the core $D^n(\xi)\times0$.

\begin{lem}\label{embedding lemma} If $m>n>q$ then there is a smooth fiberwise embedding of pairs:
\[
j:(D^n(\xi),S^{n-1}(\xi))\to (D^n,S^{n-1})\times D^m\times B
\]
over $B$ which is the standard embedding over $\partial_0B$ and which is transverse to $S^{n-1}\times D^m$. Furthermore, if $m\ge q+3$ then this fiberwise embedding will be unique up to fiberwise isotopy.
\end{lem}

\begin{proof} When $q=0$, this holds by transversality. So suppose $q>0$. We use \cite[Thm 6.5]{I:Stability} which says that the inclusion
\[
\Emb((D^n,S^{n-1}),(W^{n+m},\partial_0W))\to \Map((D^n,S^{n-1}),(W^{n+m},\partial_0W))
\]
of the smooth embedding space into the mapping space is $c$-connected where
\[
c=m-n-1+\min(s,n,m-2,n+m-4)
\]
and $s$ is the connectivity of the pair $(W,\partial_0W)$. In our case $s=n-1$. So the condition $m>n>q>0$ implies that $c\ge q$, giving the existence part of the lemma, and if $m\ge q+3$ then either $m\ge n+2$ or $n\ge q+2$ and we get $c>q$, which implies the uniqueness part.
\end{proof}

The embedding $j$ gives an $m$-dimensional normal bundle $\eta$ for $\xi$ and a smooth codimension 0 embedding
\[
D(j):D^{n}(\xi)\oplus D^m(\eta)\to D^{n}\times D^m\times B
\]
Restricting this to $\partial D^n(\xi)\oplus D^m(\eta)$ we get a fiberwise embedding
\[
S(j):S^{n-1}(\xi)\oplus D^m(\eta)\to S^{n-1}\times D^m\times B
\]
We can use $S(j)$ to construct a smooth bundle (with corners rounded):
\[
E^{n,m}(\xi)=D^n(\xi)\oplus D^m(\eta)\cup_{S(j)} S^{n-1}\times I\times D^{m}\times B.
\]
We can also use $D(j)$ to embed this in the trivial disk bundle of the same dimension:
\[
F(j)= D(j)\cup f_B:E^{n,m}(\xi)\into D_2^n\times D^m\times B
\]
where $f_B=f\times id_B$. This is {\bf Hatcher's example}. Since $m>q$, the $m$-plane bundle $\eta$ is the stable complement to $\xi$ and is thus uniquely determined. If $m\ge q+3$ then, up to fiberwise diffeomorphism, $E(\xi)$ is independent of the choice of $j$.
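Concretely, since $D(j)$ embeds $D^n(\xi)\oplus D^m(\eta)$ with codimension $0$ in the trivial bundle $D^n\times D^m\times B$, restricting to the zero section gives
\[
\xi\oplus\eta\cong\epsilon^{n+m},
\]
the trivial $(n+m)$-plane bundle over $B$; since $m>q$, an $m$-plane bundle in this stable class is unique, which is the sense in which $\eta$ is the stable complement of $\xi$ (a routine restatement of the uniqueness assertion above).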
Finally, we note the crucial point that the bundle $E(\xi)$ is canonically diffeomorphic to the trivial bundle over $\partial_0B$.

\subsection{Higher Reidemeister torsion}\label{subsecA13}

We will briefly review the definition and basic properties of higher Reidemeister torsion invariants following \cite{I:Axioms0}, in particular the handlebody formula Theorem \ref{higher torsion of fiberwise handlebody}. Then we will use these formulas to calculate the higher \text{\rm IK}-torsion for Hatcher's example. The analytic torsion of Hatcher's example is computed in \cite{Goette03} as an application of the handlebody formula for analytic torsion proved in that paper.

\begin{defn}\label{defn:unipotent} A smooth bundle $M\to B$ with compact oriented manifold fiber $X$ and connected base $B$ is called \emph{unipotent} if the rational homology of the fiber $H_\ast(X;\ensuremath{{\field{Q}}})$, considered as a $\pi_1B$-module, has a filtration by submodules so that the action of $\pi_1B$ on the subquotients is trivial. For example, any oriented sphere bundle is unipotent.
\end{defn}

\begin{defn}\label{defn:axiomatic higher torsion} A {\bf higher torsion invariant} is a real characteristic class $\tau(M)\in H^{4\bullet}(B;\ensuremath{{\field{R}}})$ of smooth unipotent bundles $M\to B$ with closed fibers satisfying the following two axioms.

(\emph{Additivity}) Suppose that $M=M_0\cup M_1$ where $M_0,M_1$ are unipotent compact manifold subbundles of $M$ which meet along their fiberwise boundary $M_0\cap M_1=\partial^\vertical\! M_0= \partial^\vertical\! M_1$. Then
\[
\tau(M)=\frac12\tau(DM_0)+\frac12\tau(DM_1)
\]
where $DM_i$ is the \emph{fiberwise double} of $M_i$ (the union of two copies of $M_i$ along their fiberwise boundary).

(\emph{Transfer}) Suppose that $M\to B$ is a unipotent bundle with closed fibers and $S^n(\xi)\to M$ is the sphere bundle of an $SO(n+1)$-bundle $\xi$ over $M$. Then $S^n(\xi)$ is a unipotent bundle over both $M$ and $B$ and thus has two higher torsion invariants $\tau_M,\tau_B$. These are required to be related as follows:
\[
\tau_B(S^n(\xi))=\chi(S^n)\tau(M)+tr^M_B(\tau_M(S^n(\xi)))
\]
where $\chi$ is the Euler characteristic and $tr^M_B:H^\ast(M)\to H^\ast(B)$ is the \emph{transfer}. (See \cite{I:Axioms0} for more details.)
\end{defn}

\begin{thm}\cite{I:Axioms0}\label{even torsion is a tangential invariant} If $M\to B$ has closed even dimensional fibers then any higher torsion invariant $\tau(M)$ depends only on the fiberwise tangential homeomorphism type of $M$. In other words, for any exotic smooth structure $M'$ for $M$, we have $\tau(M')=\tau(M)$.
\end{thm}

\begin{rem}\label{rem:extension to not closed fibers} Any higher torsion invariant can be extended to unipotent bundles $M\to B$ with compact oriented fibers using the following formula:
\[
\tau(M):=\frac12\tau(DM)+\frac12\tau(\partial^\vertical\! M)
\]
The sign is positive ($+$) since the double $DM$ has only one copy of $\partial^\vertical\! M$.
\end{rem}

We say that a higher torsion invariant $\tau$ is \emph{stable} (``exotic'' in \cite{I:Axioms0}) if
\[
\tau(M)=\tau(D(\xi))
\]
for any oriented linear disk bundle $D(\xi)$ over $M$ considered as a unipotent bundle over $B$.

\begin{thm}\cite{I:Axioms0}\label{uniqueness of exotic torsion} The higher \text{\rm IK}-torsion $\tau^\text{\rm IK}$ is a stable higher torsion invariant. Conversely, any stable higher torsion invariant is proportional to $\tau^\text{\rm IK}$ with a possibly different proportionality constant in each degree $4k$ (and is zero in other degrees).
\end{thm}

\begin{thm}[Badzioch, Dorabiala, Klein and Williams]\label{Lem of BDKW} The \text{\rm DWW}-higher smooth torsion is a stable higher torsion invariant. Consequently, it is proportional to \text{\rm IK}-higher torsion in every degree.
\end{thm}

\begin{rem} Analytic torsion does not satisfy the stability condition and is therefore not proportional to \text{\rm IK}-torsion or \text{\rm DWW}-smooth torsion. See \cite{Goette08} for a precise formula relating Bismut-Lott analytic torsion to \text{\rm IK}-higher torsion.
\end{rem}

In \cite{I:Axioms0} it is shown that the three properties Additivity, Transfer and Stability imply that the higher torsion of any linear sphere bundle is proportional to the Chern character. For \text{\rm IK}-torsion the proportionality constant is given by the following definition.

\begin{defn}\label{def: normalized Chern character} If $\xi$ is a real vector bundle over $B$, we define the {\bf normalized Chern character} of $\xi$ to be the real cohomology class $\widetilde{ch}(\xi)=\sum_{k>0}\widetilde{ch}_{4k}(\xi)$ where
\[
\widetilde{ch}_{4k}(\xi)=(-1)^k\zeta(2k+1)\tfrac12 ch_{4k}(\xi\otimes\ensuremath{{\field{C}}})\in H^{4k}(B;\ensuremath{{\field{R}}})
\]
and $\zeta$ is the Riemann zeta function.
\end{defn}

\begin{thm}\cite{I:ComplexTorsion} For any linear oriented sphere bundle $S^n(\xi)\to B$, we have
\[
\tau^\text{\rm IK}(S^n(\xi))=(-1)^{n}\widetilde{ch}(\xi)
\]
\end{thm}

To calculate the higher torsion of Hatcher's example, we use the following formula. Suppose that a smooth bundle $M\to B$ has a fiberwise handlebody decomposition:
\[
M=\bigcup D(\xi_i)\oplus D(\eta_i)
\]
where $\xi_i,\eta_i$ are oriented vector bundles over $B$ of dimension $n_i,m_i$ and $D^{n_i}(\xi_i)\oplus D^{m_i}(\eta_i)$ is attached to lower handles along $S^{n_i-1}(\xi_i)\oplus D^{m_i}(\eta_i)$. In other words, $D^{n_i}(\xi_i)\oplus D^{m_i}(\eta_i)$ is an $n_i$-handle with \emph{core} $D^{n_i}(\xi_i)$.

\begin{thm}\label{higher torsion of fiberwise handlebody} The higher \text{\rm IK}-torsion of the fiberwise handlebody $M$ is given by
\[
\tau^\text{\rm IK}(M)=\sum (-1)^{n_i}\widetilde{ch}(\xi_i)
\]
\end{thm}

\begin{rem}\label{rem:handlebody lemma} This theorem is proved in \cite{I:Axioms0}, Lemma 6.6, inductively on the number of handles using the relative additivity property:
\[
\tau^\text{\rm IK}(M\cup D^n(\xi)\oplus D^m(\eta),M)=(-1)^n\widetilde{ch}(\xi)
\]
when the $n$-handle $D^n(\xi)\oplus D^m(\eta)$ is attached to $\partial^\vertical\! M$ along $S^{n-1}(\xi)\oplus D^{m}(\eta)$ by any fiberwise embedding. We will also use this relative formula. Note that the $n$-handle is actually the pair $(D^n(\xi)\oplus D^m(\eta),S^{n-1}(\xi)\oplus D^{m}(\eta))$. But we refer to $D^n(\xi)\oplus D^m(\eta)$ as the $n$-handle with \emph{base} $S^{n-1}(\xi)\oplus D^{m}(\eta)$.
\end{rem}

The following theorem summarizes Hatcher's construction and gives its two main properties proved below.

\begin{thm}\label{thm: Hatcher's example} Suppose that $B$ is a smooth $q$-manifold and $m>n>q$. Suppose that $\xi$ is an $n$-plane bundle over $B$ which is trivial over $\partial_0B\subset\partial B$ so that $J(\xi)=0\in J(B/\partial_0B)$. Then Hatcher's construction gives a smooth bundle $E^{n,m}(\xi)$ over $B$ with fiber $D^{n+m}$.
Furthermore:
\begin{enumerate}
\item This bundle is fiberwise diffeomorphic to the trivial bundle over $\partial_0B$ and fiberwise homeomorphic to the trivial bundle over $B$ with fiber $D^{n+m}$.
\item The higher \text{\rm IK}-torsion is given by
\[
\tau^\text{\rm IK}(E^{n,m}(\xi))=(-1)^n\widetilde{ch}(\xi)
\]
\end{enumerate}
\end{thm}

\begin{proof} The higher torsion calculation follows from Theorem \ref{higher torsion of fiberwise handlebody} since $E^{n,m}(\xi)$ is given by attaching the $n$-handle $D^n(\xi)\oplus D^m(\eta)$ to a trivial bundle. The bundle is topologically trivial by the Alexander trick. (The topological group of homeomorphisms of the disk $D^{n+m}$ which are the identity on the southern hemisphere is contractible.)
\end{proof}

Take $q=4k$, $n=4k+1$, $m\ge 4k+2$ and $B=S^{4k}$. Using the well known fact that the order of the image of the $J$-homomorphism $J:\pi_{4k-1}O\to \pi_{4k-1}^s$, which we denote $a_k$, is the denominator of $B_k/4k$ where $B_k$ is the $k$-th Bernoulli number \cite{Adams}, we get the following.

\begin{cor} For any $k>0$ and $N\ge 8k+3$, Hatcher's construction gives a smooth $N$-disk bundle over $S^{4k}$ which is tangentially homeomorphic to $D^N\times S^{4k}$ but has higher \text{\rm IK}-torsion invariant $\tau^\text{\rm IK}_{2k}\in H^{4k}(S^{4k};\ensuremath{{\field{R}}})$ equal to $\zeta(2k+1)a_{k}$ times the generator of $H^{4k}(S^{4k};\ensuremath{{\field{Z}}})$ for $k$ odd and half of that number when $k$ is even. In both cases this gives a nontrivial element of $\pi_{4k-1}(Dif\!f(D^N)/O_N)\otimes\ensuremath{{\field{R}}}$.
\end{cor}

\begin{proof} It follows from Bott periodicity (\cite{Bott59}, \cite[18.9]{HusemollerFB}) that the Chern character of the stable complex vector bundle over $S^{2k}$ corresponding to a generator of $\pi_{2k}BU=\ensuremath{{\field{Z}}}$ is equal to a generator of $H^{2k}(S^{2k};\ensuremath{{\field{Z}}})$. Also, the homotopy fiber sequence $BO\to BU\to \Omega^6BO$ given by the inclusion map $O\to U$ implies that the generator of $\pi_{4k}BO$ maps to the generator of $\pi_{4k}BU$ for $k$ even and to twice the generator when $k$ is odd. The generator of the kernel of the $J$-homomorphism is $a_k$ times this element. By the theorem above, the higher torsion of this exotic bundle is given by multiplying this element by $\frac12\zeta(2k+1)$, giving the formula in the corollary up to sign. We can make the sign positive by taking the other generator of the kernel of the $J$-homomorphism in Hatcher's construction.
\end{proof}

\section{Variations of Hatcher's construction}

We need several variations and extensions of Hatcher's construction in order to construct a full rank subgroup of the group of all possible tangential smooth structures on a smooth manifold bundle with sufficiently large odd dimensional fibers. The idea is to construct ``positive'' and ``negative'' ``suspensions'' of Hatcher's basic construction which will cancel. We call this the ``Arc de Triomphe'' construction due to the appearance of the figures used to explain the construction. Since the stabilization of bundles with even dimensional fibers includes bundles whose fiber dimensions are arbitrarily large and odd, this construction also produces ``all'' stable tangential smooth structures on bundles with even dimensional fibers.

\subsection{Arc de Triomphe: basic construction}\label{ss:AdT}\label{subsecA21}

There are two ``suspensions'' of $E^{n,m}$ to one higher dimension.
We will see that their union is trivial:
\[
E^{n,m+1}(\xi)\cup E^{n+1,m}(\xi)\cong D^{n+m+1}\times B
\]
This is in keeping with the calculation of their higher torsions:
\[
\tau^\text{\rm IK}(E^{n,m+1}(\xi))+\tau^\text{\rm IK}(E^{n+1,m}(\xi))=(-1)^n\widetilde{ch}(\xi)+(-1)^{n+1}\widetilde{ch}(\xi)=0
\]
and the handlebody theorem \ref{higher torsion of fiberwise handlebody}, which implies that the higher torsion of a union of fiberwise handlebodies is the sum of the torsions of the pieces.

The {\bf positive suspension} of $E^{n,m}(\xi)$ is defined simply as the product (with corners rounded):
\[
\sigma_+E^{n,m}(\xi)=E^{n,m}(\xi)\times I
\]
An examination of the definitions shows that this is the same as $E^{n,m+1}(\xi)$.

The {\bf negative suspension} of $E^{n,m}(\xi)$ uses the embedding $F(j)=D(j)\cup f_B:E^{n,m}(\xi)\into D_2^n\times D^m\times B$ and is defined as follows.
\[
\sigma_-E^{n,m}(\xi)=D_2^n\times D^m\times[-1,0]\times B\cup_{F(j)\times 0} E^{n,m}(\xi)\times I\cup_{F(j)\times 1} D_2^n\times D^m\times [1,2]\times B
\]
This is a subbundle of $D_2^n\times D^m\times[-1,2]\times B$. We claim that $\sigma_-E^{n,m}(\xi)$ is a model for $E^{n+1,m}(\xi)$ over $B$ in the sense that the construction of $E^{n+1,m}(\xi)$, which may not be unique, could give $\sigma_-E^{n,m}(\xi)$. (We view $\xi$ as a stable vector bundle.) Lemma \ref{embedding lemma} then tells us that we have uniqueness after stabilizing just once:
\[
\sigma_-E^{n,m}(\xi)\times I\cong E^{n+1,m}(\xi)\times I=E^{n+1,m+1}(\xi)
\]
since $m+1\ge q+3$. To verify this claim, note that $\sigma_-E^{n,m}(\xi)$ contains the trivial bundle over $B$ with fiber
\[
F=D^n\times D^m\times[-1,0]\cup S^{n-1}\times I\times D^m\times [0,1] \cup D^n\times D^m\times[1,2]
\]
which is diffeomorphic to $S^n\times D^{m+1}$ after its corners are rounded. On this is attached the $(n+1)$-handle $D^n(\xi)\oplus D^m(\eta)\times I$, which is equivalent to $D^{n+1}(\xi)\oplus D^m(\eta)$ after corners are rounded. Since $D^{n+1}(\xi)$ is the core of this handle, the result is $E^{n+1,m}(\xi)$.

When we take the union of the positive and negative suspensions of $E^{n,m}(\xi)$, they cancel. This will follow from the following lemma, which does not require proof.

\begin{lem}\label{first trivial lemma} Suppose that $E_0,E_1$ are compact smooth manifold bundles over $B$ with the same fiber dimension. Let $f:E_0\to E_1$ be a smooth embedding over $B$. Then
\[
E_0\times [0,1]\cup_{f\times 1}E_1\times [1,2]
\]
is fiberwise diffeomorphic to $E_1\times I$ after rounding off corners.
\end{lem}

\begin{rem}\label{basic AdT} The example that we have in mind is
\[
E^{n,m}(\xi)\times [0,1]\cup_{F(j)\times 1} D_2^n\times D^m\times[1,2]\times B\cong D_2^n\times D^m\times I\times B
\]
We denote the construction on the left by $V^{n,m}(\xi)$.
\end{rem}

Next we use another trivial lemma:

\begin{lem}\label{second trivial lemma} Suppose that $\partial^\vertical\! E_1= \partial_0E_1\cup \partial_1E_1$ where $\partial_iE_1$ are smooth manifold bundles over $B$ with the same fiberwise boundary. Let $f,g:\partial_0E_1\to\partial^\vertical\! E_0$ be smooth embeddings over $B$ which are fiberwise isotopic. Then $E_0\cup_f E_1$ and $E_0\cup_g E_1$ are fiberwise diffeomorphic over $B$ after rounding off the corners.
\end{lem}

In our example, $\partial_0E_1$ will be a disk bundle. So, we need the following well-known lemma.
{
\begin{lem}\label{third trivial lemma} Suppose that $D,D_0$ are smooth $n$-disk bundles over $B$ so that $D_0$ is a subbundle of $D$ which is disjoint from the fiberwise boundary $\partial^\vertical\! D$. Let $E\to B$ be another smooth manifold bundle with fiber $F$. Then any two fiberwise embeddings $D\to E$ over $B$ are fiberwise isotopic if and only if their restrictions to $D_0$ are fiberwise isotopic.
\end{lem}

\begin{proof} Necessity of the condition is clear. To prove sufficiency, it suffices to show that there is an isotopy of $D$ into $D_0$, i.e., a smooth family of embeddings $f_t:D\to D$ over $B$ so that $f_0$ is the identity and $f_1(D)\subseteq D_0$. Then, for any two embeddings $g,h:D\to E$ whose restrictions to $D_0$ are fiberwise isotopic, we can first compose these embeddings with the isotopy $f_t$ and then use the given isotopy $g|D_0\circ f_1\simeq h|D_0\circ f_1$.

To construct the isotopy $f_t$ we triangulate the base, construct the isotopy over the simplices one at a time and use the isotopy extension theorem. For each $k\ge -1$ we will construct an embedding $f_k:D\to D$ over $B$ with the following two properties.
\begin{enumerate}
\item $f_k$ is fiberwise isotopic to the identity map $id_D$
\item $f_k(D)$ is contained in $int\,D_0$, the fiberwise interior of $D_0$, over $B^k$, the $k$-skeleton of $B$ under the triangulation.
\end{enumerate}
Start with $k=-1$, when $B^{-1}=\emptyset$. Then $f_{-1}=id_D$ satisfies all conditions. Now suppose that $k\ge0$ and $f_{k-1}$ has been constructed. Then on each $k$-simplex $\Delta^k$ of $B$, the bundle pair $(D,D_0)$ is trivial. So we may assume they are product bundles
\[
(D,D_0)|\Delta^k=(D_2^n\times \Delta^k,D^n\times \Delta^k)
\]
where $D_2^n$ is the disk of radius $2$ in $\ensuremath{{\field{R}}}^n$. We are given that $f_{k-1}$ sends $D$ into $int\,D_0$ over $\partial\Delta^k$. Since $f_{k-1}(D)\subset int\,D_0$ is an open condition, this also holds over a neighborhood of $\partial\Delta^k$. Then there is no problem finding an isotopy of $f_{k-1}$ to some $f_k$ over $B^k$ fixing a neighborhood of $B^{k-1}$ so that $f_k$ sends $D$ into $int\,D_0$ over $B^k$. By the isotopy extension theorem, this isotopy extends to an isotopy over all of $B$, completing the induction. When $k$ reaches the dimension of $B$, we are done.
\end{proof}
}

We use Lemmas \ref{second trivial lemma} and \ref{third trivial lemma} for
\[
E_1 =E^{n,m}(\xi)\times [0,1]\cup_{F(j)\times 1} D_2^n\times D^m\times[1,2]\times B
\]
$\partial_0E_1=E^{n,m}(\xi)\times 0$ and $E_0=M\times [-1,0]$ with
\[
M=E^{n,m}(\xi)\cup_{h_B} D_2^n\times D^m\times B
\]
where $h_B=h\times id_B$ and $h$ is an orientation reversing diffeomorphism of a disk $D_0^{n+m-1}$ embedded in $\partial(D_2^n\times D^m)$ onto another disk $D_1^{n+m-1}$ embedded in $S^{n-1}\times 1\times D^m$ (the outside surface of the donut). The pasting map $h$ needs to be orientation reversing in order for the orientations of the two pieces to agree. Assuming that $n\ge 2$, $h$ is unique up to isotopy. And it is a special case of Lemma \ref{first trivial lemma} that $M$ is fiberwise diffeomorphic to $E^{n,m}(\xi)$. Note that both pieces of $M$ contain the product bundle $S^{n-1}\times I\times D^m\times B$ and each of these contains a basepoint disk. We call the two embeddings $i_0, i_1:D^{n+m}\times B\to M$.
Since these two embeddings have image in the product bundle
\[
(S^{n-1}\times I\times D^m\cup_h D^n_2\times D^m)\times B\subseteq M
\]
which has connected fibers, and since $i_0,i_1$ are equal to fixed orientation preserving embeddings on every fiber, the two embeddings are isotopic on each fiber and therefore the two bundle embeddings $i_0,i_1$ are fiberwise isotopic.

As an example of Lemma \ref{second trivial lemma}, take the mapping $f:\partial_0E_1\to\partial^\vertical\! E_0$ to be the inclusion map
\[
f:E^{n,m}(\xi)\times 0\subseteq M\times 0\subseteq\partial^\vertical\! E_0
\]
and $g:\partial_0E_1\to \partial^\vertical\! E_0$ to be the embedding:
\[
g:E^{n,m}(\xi)\times 0\xrightarrow{F(j)}D_2^n\times D^m\times B\subseteq M\times 0\subseteq\partial^\vertical\! E_0
\]
We claim that $f$ and $g$ are fiberwise isotopic. To see this we restrict both maps to the basepoint disk $i(D^{n+m})\times B\subseteq S^{n-1}\times I\times D^m\times B\subseteq E^{n,m}(\xi)\times 0$. The restriction of $f$ to the basepoint disk is $i_0$ and the restriction of $g$ to the basepoint disk is $i_1$. We have just seen that $i_0$ and $i_1$ are fiberwise isotopic. Therefore, by Lemma \ref{third trivial lemma}, $f$ and $g$ are fiberwise isotopic. Therefore, by Lemma \ref{second trivial lemma},
\[
M\times [-1,0]\cup_f E_1\cong M\times [-1,0]\cup_g E_1
\]
where $\cong$ indicates fiberwise diffeomorphism over $B$. But, when we attach $E_1$ on top of $D_2^n\times D^m\times B\times [-1,0]$ using the map $F(j)$ we get exactly the negative suspension $\sigma_-E^{n,m}(\xi)$. So, we have a diffeomorphism which preserves all the corner sets:
\[
M\times [-1,0]\cup_g E_1= \sigma_-E^{n,m}(\xi)\cup_{h_B}\sigma_+E^{n,m}(\xi)
\]
and
\[
M\times [-1,0]\cup_f E_1= V^{n,m}(\xi)\cup_{h_B}D_2^n\times D^m\times B\times [-1,0]\cong D^{n+m+1}\times B
\]
where $V^{n,m}(\xi)$ is given in Remark \ref{basic AdT}. Since $h$ is unique up to isotopy, any two choices of $h$ will produce fiberwise diffeomorphic bundles. So we get the following. (See Figure \ref{AdT figure}. The notation $E_1=A^{n,m}(\xi,\eta)$ is from subsection \ref{ss: Hatcher handles}.)

\begin{prop}[basic cancellation lemma]\label{basic AdT cancellation lemma} The oriented union of the positive and negative suspensions of $E^{n,m}(\xi)$ glued together along fixed $(n+m)$-disk bundles in the fixed parts of their boundary is fiberwise diffeomorphic to the trivial $(n+m+1)$-disk bundle over $B$:
\[
\sigma_-E^{n,m}(\xi)\cup_{h_B}\sigma_+E^{n,m}(\xi)\cong D^{n+m+1}\times B.
\]
\end{prop}

\begin{center}
\begin{figure}
\includegraphics[width=5.5in]{AdT3.eps}
\caption{Positive and negative Hatcher handles are cancelled using Arc de Triomphe}
\label{AdT figure}
\end{figure}
\end{center}

\subsection{Twisted version}\label{subsecA22}

Remark \ref{rem:ch(xi) span H4k(B,d0)} above and the main theorem (Corollary 2.2.2) of \cite{Second} show that, rationally stably, all exotic smooth structures on trivial disk bundles are given by Hatcher's example. Now we consider nontrivial disk bundles.

Stably, it is easy to construct exotic smooth structures on nontrivial linear disk bundles. If we start with any vector bundle $\xi_0$ over $B$ which is trivial over $\partial_0B$, we can take the associated disk bundle $D^N(\xi_0)$.
The fiberwise product
\[
D^N(\xi_0)\oplus E^{n,m}(\xi)
\]
with corners rounded is a smooth disk bundle fiberwise homeomorphic to $D^N(\xi_0)\times D^{n+m}$ with the same higher torsion as $E^{n,m}(\xi)$, since \text{\rm IK}-torsion has the property that it is invariant under passage to linear disk bundles.

\begin{cor} Given any linear disk bundle $D^N(\xi_0)$ over $B$ which is trivial over $\partial_0B$, the collection of all stable smooth structures on $D^N(\xi_0)$ given by Hatcher's construction spans the vector space
\[
\pi_0\std{B}{\partial_0}(D^N(\xi_0))\otimes\ensuremath{{\field{R}}}\cong H^{4\bullet}(B,\partial_0B)
\]
\end{cor}

Now we give the unstable version of the last corollary and use it to define ``Hatcher handles''. Suppose that $(B,\partial_0 B)$ is a manifold pair as before with $\dim B=q$. Let $\xi,\eta$ be vector bundles over $B$ of dimension $n,m$ so that $\xi$ is trivial over $\partial_0B$ and $J(\xi)=0\in J(B/\partial_0B)$. As in Lemma \ref{embedding lemma} we have the following.

\begin{lem}\label{twisted embedding lemma} If $m>n>q$ then there is a smooth fiberwise embedding of pairs:
\[
j:(D^n(\xi),S^{n-1}(\xi))\to (D^n,S^{n-1})\times D^m(\eta)
\]
over $B$ which is a standard linear embedding over $\partial_0B$ and which is transverse to $S^{n-1}\times D^m(\eta)$. Furthermore, if $m\ge q+3$ then this fiberwise embedding is unique up to fiberwise isotopy.
\end{lem}

Let $\eta_0$ be the unique $m$-plane bundle over $B$ so that $\xi\oplus\eta_0\cong\epsilon^n\oplus\eta$ where $\epsilon^n$ is the trivial $n$-plane bundle over $B$. Then the embedding given by the lemma thickens to a codimension 0 fiberwise embedding
\[
(D(j),S(j)):(D^n(\xi),S^{n-1}(\xi))\oplus D^m(\eta_0)\hookrightarrow (D^n,S^{n-1})\times D^m(\eta)
\]
which is a standard linear embedding over $\partial_0B$. Let $E^{n,m}(\xi,\eta)$ denote the $(n+m)$-disk bundle over $B$ given by
\[
E^{n,m}(\xi,\eta)= D^n(\xi)\oplus D^m(\eta_0) \cup_{S(j)} S^{n-1}\times I\times D^{m}(\eta)
\]
with corners rounded. Up to fiberwise diffeomorphism, this is independent of the choice of $j$ if $m\ge q+3$. As before we have a fiberwise embedding $F(j):E^{n,m}(\xi,\eta)\into D_2^n\times D^m(\eta)$ and we can define the positive and negative suspensions of $E^{n,m}(\xi,\eta)$ to be
\[
\sigma_+E^{n,m}(\xi,\eta)=E^{n,m}(\xi,\eta)\times I
\]
which is fiberwise diffeomorphic to $E^{n,m+1}(\xi,\eta)$ after corners are rounded, and
\[
\sigma_-E^{n,m}(\xi,\eta)=D_2^n\times D^m(\eta)\times [-1,0]\cup_{F(j)\times 0} E^{n,m}(\xi,\eta)\times I\cup_{F(j)\times 1} D_2^n\times D^m(\eta)\times[1,2]
\]
which is a model for $E^{n+1,m}(\xi,\eta)$. As before, the Framing Principle implies that the higher \text{\rm IK}-torsion of this bundle is the normalized Chern character (Def. \ref{def: normalized Chern character}) of $\xi$:

\begin{thm}\label{torsion of twisted Hatcher disk bundle} $E^{n,m}(\xi,\eta)$ is a smooth $(n+m)$-disk bundle over $B$ which is fiberwise diffeomorphic to the linear disk bundle $D^{n+m}(\eta)$ over $\partial_0B$ and fiberwise homeomorphic to $D^{n+m}(\eta)$ over $B$.
Furthermore,
\[
\tau^{\text{\rm IK}}(E^{n,m}(\xi,\eta))=(-1)^{n}\widetilde{ch}(\xi)\in H^{4\bullet}(B,\partial_0B)
\]
\end{thm}

\begin{rem} This theorem can be stated as the commutativity of the following diagram:
\[
\xymatrix{
&G(B,\partial_0B) \ar[rd]^{(-1)^n\widetilde{ch}}\ar[ld]_{E^n(-,\eta)} \\
\pi_0\std{B}{\partial_0}(D(\eta))\ar[rr]^{\tau^\text{\rm IK}}&& H^{4\bullet}(B,\partial_0B) }
\]
where $G(B,\partial_0B)$ is the group of all homotopy classes of pointed maps $\xi:B/\partial_0B\to G/O$. Here $E^n(-,\eta)$ is the map which sends $\xi$ to the direct limit of $E^{n,m}(\xi,\eta)$ as $m$ goes to $\infty$.
\end{rem}

Since the torsion of a linear disk bundle is trivial, the torsion of the disk bundle $E^{n,m}(\xi,\eta)$ is equal to the torsion of the $h$-cobordism bundle given by deleting a neighborhood of a section. The fiberwise boundary of $E^{n,m}(\xi,\eta)$ is a smooth $(n+m-1)$-dimensional sphere bundle over $B$ which is fiberwise tangentially homeomorphic to the linear sphere bundle $S^{n+m-1}(\eta)$.

\begin{cor}\label{cor: torsion of twisted Hatcher sphere bundle} Suppose that $n+m-1$ is odd. Then the vertical boundary $\partial^\vertical\! E^{n,m}(\xi,\eta)$ of this disk bundle is a smooth sphere bundle which is fiberwise tangentially homeomorphic to the linear sphere bundle $S^{m+n-1}(\eta)$ and fiberwise diffeomorphic to this bundle over $\partial_0B$, and the difference torsion is twice the normalized Chern character of $\xi$:
\[
\tau^{\text{\rm IK}}(\partial^\vertical\! E^{n,m}(\xi,\eta),S^{n+m-1}(\eta))=(-1)^{n}2\widetilde{ch}(\xi)\in H^{4\bullet}(B,\partial_0B)
\]
In particular, assuming that $\xi$ is rationally nontrivial, this gives an exotic smooth structure on $S^{n+m-1}(\eta)$.
\end{cor}

\begin{proof} For oriented sphere bundles, the absolute torsion is defined and the difference torsion is just the difference:
\[
\tau^\text{\rm IK}(\partial^\vertical\! E^{n,m}(\xi,\eta),S^{n+m-1}(\eta))=\tau^\text{\rm IK}(\partial^\vertical\! E^{n,m}(\xi,\eta))-\tau^\text{\rm IK}(S^{n+m-1}(\eta))
\]
Each term can be computed using the equation
\[
\tau(E)=\tfrac12\tau(\partial^\vertical\! E)+\tfrac12\tau(DE)
\]
where $DE$ is the vertical double of $E$. (See Remark \ref{rem:extension to not closed fibers}.) If we take $E$ to be the linear disk bundle $E=D^{n+m}(\eta)$, then the triviality of the Igusa-Klein torsion for linear disk bundles implies that
\[
\tau^\text{\rm IK}(S^{n+m-1}(\eta))=-\tau^\text{\rm IK}(S^{n+m}(\eta))
\]
If we take $E=E^{n,m}(\xi,\eta)$, then the fiberwise double $DE$, having closed even dimensional manifold fibers, has the same higher torsion as the linear sphere bundle $S^{n+m}(\eta)$ (by Theorem \ref{even torsion is a tangential invariant}, since $n+m$ is even):
\[
\tau^\text{\rm IK}(\partial^\vertical\! E^{n,m}(\xi,\eta))= 2\tau^\text{\rm IK}(E^{n,m}(\xi,\eta))-\tau^\text{\rm IK}(S^{n+m}(\eta))
\]
The relative torsion is the difference:
\[
\tau^\text{\rm IK}(\partial^\vertical\! E^{n,m}(\xi,\eta))-\tau^\text{\rm IK}(S^{n+m-1}(\eta))= 2\tau^\text{\rm IK}(E^{n,m}(\xi,\eta))=(-1)^n2\widetilde{ch}(\xi)
\]
by Theorem \ref{torsion of twisted Hatcher disk bundle} above.
\end{proof}

\subsection{Hatcher handles}\label{ss: Hatcher handles}\label{subsecA23}

Suppose that $p:M\to B$ is a smooth manifold bundle whose fiber dimension is $N=n+m$ where $m>n>q$. Let $s:B\to M$ be a smooth section of $p$ with image in the fiberwise interior of $M$. Since $m=N-n>q+1$, the space of $n$-frames in $\ensuremath{{\field{R}}}^N$ is $(q+1)$-connected.
So there exists a smooth fiberwise embedding $f:D^{n}\times B\to M$ equal to $s$ along the zero section, and $f$ is uniquely determined up to isotopy by $s$. Let $\eta$ be the vertical normal bundle to the image of $f$ in $M$. This is the unique $m$-plane bundle over $B$ which is stably isomorphic to the pullback along $s$ of the vertical tangent bundle of $M$. Then $f$ extends to a fiberwise embedding
\begin{equation}\label{eq:D(s)}
D(s):D^{n}\times D^{m}(\eta)\into M
\end{equation}
whose image is a tubular neighborhood of the image of the section $s$ and $D(s)$ is determined up to isotopy by $s$. We will use this embedding $D(s)$ to attach positively and negatively suspended Hatcher disk bundles to the top $M\times 1$ of the bundle $M\times I\to B$. We call these \emph{positive} and \emph{negative Hatcher handles}. We will also show that, when the negative Hatcher handle is attached on top of the positive Hatcher handle, they form the Arc de Triomphe, which cancels.

To visualize these three situations, it may help to think of the positive Hatcher handle as two balls attached together on a string with one ball attached to the ground. This configuration is topologically contractible to its attachment point on the ground but not smoothly (Figure \ref{fig: positive Hatcher handle}). The negative Hatcher handle resembles the handle on a briefcase with a flexible membrane filling in the ``hole''. This is also topologically contractible to the base (the briefcase) but not smoothly (Figure \ref{fig: negative Hatcher handle}). The Arc de Triomphe resembles the hook on a coat hanger together with a semicircular membrane attached only to the curved part of the hook. This is smoothly contractible since the membrane smoothly deforms into the metal part and then the metal hook smoothly contracts to the base (Figure \ref{fig: cancelling Hatcher handles}). The term ``Arc de Triomphe'' may be misleading since this structure has one end up in the air and only the other end attached to the ground along a ``stem''. The Arc de Triomphe and negative Hatcher handles are diffeomorphic but they have different properties since the former is attached trivially and the latter is attached nontrivially to the base.

\subsubsection{Positive Hatcher handles}

Let $h_0:D_0^n\into S^{n-1}\times I$ be a fixed smooth embedding where $D_0^n=D^n$ is a copy of the standard $n$-disk. Taking the product with $D^m(\eta)$ we get a fiberwise embedding of $D_0^n\times D^m(\eta)$ into $E^{n,m}(\xi,\eta)$:
\[
h=h_0\times id_{D^m(\eta)}:D_0^n\times D^m(\eta)\into S^{n-1}\times I\times D^m(\eta)\subseteq E^{n,m}(\xi,\eta)
\]
We define the {\bf positive Hatcher handle} to be the pair $(B^{n,m}(\xi,\eta),\partial_0B^{n,m}(\xi,\eta))$ where
\[
B^{n,m}(\xi,\eta)=D_0^n\times D^m(\eta)\times I\cup_{h\times1} E^{n,m}(\xi,\eta)\times [1,2]
\]
and $\partial_0B^{n,m}(\xi,\eta)=D_0^n\times D^m(\eta)\times0$. We can attach $B^{n,m}(\xi,\eta)$ to $M\times I$ along any fiberwise embedding $D(s):\partial_0B^{n,m}(\xi,\eta)\to M\times 1$ where $s:B\to M$ is a smooth section of $M$ as in \eqref{eq:D(s)} above. The result will be denoted:
\[
E_+^{n,m}(M,s,\xi)=M\times I\cup_{D(s)}B^{n,m}(\xi,\eta)
\]
Since the bundle pair $(B^{n,m}(\xi,\eta),\partial_0B^{n,m}(\xi,\eta))$ is fiberwise homeomorphic to the disk bundle pair $D^n\times D^m(\eta)\times (I,0)$, the bundle $E_+^{n,m}(M,s,\xi)$ is fiberwise homeomorphic to the bundle $M\times I$. However, $E_+^{n,m}(M,s,\xi)$ is a smooth bundle (when corners are rounded) whose fibers are $h$-cobordisms.
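Since the stem $D_0^n\times D^m(\eta)\times I$ and the collar piece $S^{n-1}\times I\times D^m(\eta)$ contribute no rationally nontrivial handles, the only contribution to the difference torsion of $E_+^{n,m}(M,s,\xi)$ should come from the single $n$-handle $D^n(\xi)\oplus D^m(\eta_0)$ inside the Hatcher piece, so the relative handlebody formula (Remark \ref{rem:handlebody lemma}) leads one to expect
\[
\tau^\text{\rm IK}(E_+^{n,m}(M,s,\xi),M\times I)=(-1)^{n}\widetilde{ch}(\xi)
\]
as the following theorem confirms.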
\begin{figure}[ht] \begin{center} { \setlength{\unitlength}{3cm} {\mbox{ \begin{picture}(3.5,1.5) \thicklines \put(.5,.5){\line(1,0){2} } \put(2.95,.5){\line(1,0){.45} } \put(0,0){ \qbezier(0,0)(.25,.25)(.5,.5) \line(1,0){3} } \put(3,0){ \qbezier(0,0)(.2,.25)(.4,.5) } \put(.5,.1){$M\times 1$ } % \put(1.2,0){ \put(1.1,.7){ \line(1,0){.5} } \put(1.1,.7){ \line(0,1){.5} \qbezier(0,.5)(.1,.6)(.2,.7) } \put(1.6,1.2){ \line(0,-1){.5} \qbezier(0,0)(.1,.1)(.2,.2) } \put(1.1,1.2){ \line(1,0){.5} } \put(1.3,1.4){ \line(1,0){.5} } \put(1.8,.9){ \line(0,1){.5} \qbezier(0,0)(-.1,-.1)(-.2,-.2) }} \put(1.1,.7){ \line(1,0){.5} } \put(1.1,.7){ \line(0,1){.5} \qbezier(0,.5)(.1,.6)(.2,.7) } \put(1.6,1.2){ \line(0,-1){.5} \qbezier(0,0)(.1,.1)(.2,.2) } \put(1.1,1.2){ \line(1,0){.5} } \put(1.3,1.4){ \line(1,0){.5} \qbezier(0,0)(0,-.01)(0,-.06) } \put(1.7,1.25){ \qbezier(0,0)(.3,-.1)(.75,0.05) \line(0,-1){.5} } \put(1.8,1.35){ \qbezier(0,0)(.2,-.1)(.75,0.05) } \put(1.7,.75){ \qbezier(0,0)(.2,-.1)(.65,0.05) } \put(1.65,.7){ \qbezier(0,0)(.02,.02)(.05,.05) } % \put(2.5,.3){\line(1,0){.3}} \put(2.5,.3){\line(0,1){.4}} \put(2.8,.3){\line(0,1){.4}} \put(2.8,.3){ \qbezier(0,0)(.1,.1)(.15,.15) } \put(2.9,.45){ \line(0,1){.35} } \put(-.1,1){ $E^{n,m}(\xi,\eta)\times [1,2]$ } \put(3,.6){ $D_0^n\times D^m(\eta)\times I$ } % \end{picture}} }} \caption{(Positive Hatcher handle) The positive suspension $\sigma_+E^{n,m}(\xi,\eta)$ is attached to the top $M\times 1$ of $M\times I$ by the ``stem'' $D_0^n\times D^m(\eta)\times I$.} \label{fig: positive Hatcher handle} \end{center} \end{figure} \begin{thm} Let $T$ be a closed fiberwise tubular neighborhood of $s(B)$ in $M$. Then there is a fiberwise homeomorphism $M\times I\to E_+^{n,m}(M,s,\xi)$ which is the identity (and thus a diffeomorphism) on $M\times 0$ and a diffeomorphism on the closure of $(M-T)\times I$. Furthermore the difference torsion is the same as the IK-torsion of $E^{n,m}(\xi,\eta)$: \[ \tau^\text{\rm IK}(E_+^{n,m}(M,s,\xi),M\times I) =\tau^\text{\rm IK}(E^{n,m}(\xi,\eta))=(-1)^{n}\widetilde{ch}(\xi)\in H^{4\bullet}(B,\partial_0B) \] \end{thm} \begin{rem} This theorem can be viewed as the commutativity of the diagram: \[ \xymatrix{ &G(B,\partial_0B) \ar[rd]^{(-1)^n\widetilde{ch}}\ar[ld]_{E^n(-,\eta)}\ar[d]^(.65){E_+^n(M,s,-)} \\ \pi_0\std{B}{\partial_0}(D(\eta))\ar[r]_{s_\ast}\ar@/_2pc/[rr]^{\tau^\text{\rm IK}}& \pi_0\std{B}{\partial_0}(M)\ar[r]_(.4){\tau^\text{\rm IK}}& H^{4\bullet}(B,\partial_0B) } \] \end{rem} Let $M'=\partial_1E_+^{n,m}(M,s,\xi)$ be the top boundary of the $h$-cobordism bundle $E_+^{n,m}(M,s,\xi)$. \begin{cor} $M'$ is fiberwise tangentially homeomorphic to $M$ and, if the fiber dimension $N=n+m$ of $M'$ is odd, then the relative \text{\rm IK}-torsion is equal to twice the normalized Chern character of $\xi$: \[ \tau^\text{\rm IK}(M',M)=(-1)^{n}2\widetilde{ch}(\xi)\in H^{4\bullet}(B,\partial_0B) \] \end{cor} \subsubsection{Negative Hatcher handles} The {\bf negative Hatcher handle} is defined to be the pair $(A^{n,m}(\xi,\eta),\partial_0A^{n,m}(\xi,\eta))$ where \[ A^{n,m}(\xi,\eta)=E^{n,m}(\xi,\eta)\times I\cup_{F(j)\times 1} D_2^n\times D^m(\eta)\times [1,2] \] and $\partial_0A^{n,m}(\xi,\eta)=E^{n,m}(\xi,\eta)\times 0$.
When we attach this to the top of $M\times I$ using the composite map \[ E^{n,m}(\xi,\eta)\xrightarrow{F(j)}D_2^n\times D^m(\eta)\xrightarrow{D(s)}M \] we denote the result by \[ E_-^{n,m}(M,s,\xi)=M\times I\cup_{D(s)\circ F(j)}A^{n,m}(\xi,\eta) \] \begin{figure}[ht] \begin{center} { \setlength{\unitlength}{3cm} {\mbox{ \begin{picture}(3.5,2) \thicklines % \put(1.2,0){ \put(1.1,.7){ \line(1,0){.5} } \put(1.1,.7){ \line(0,1){.5} \qbezier(0,.5)(.1,.6)(.2,.7) } \put(1.6,1.2){ \line(0,-1){.5} \qbezier(0,0)(.1,.1)(.2,.2) } \put(1.6,1.2){ \line(0,1){.5} } \put(1.65,1.7){ \qbezier(0,0)(.1,.1)(.2,.2) } \put(1.1,1.2){ \line(1,0){.5} } \put(1.3,1.4){ \line(1,0){.5} } \put(1.8,.9){ \line(0,1){.5} \qbezier(0,0)(-.1,-.1)(-.2,-.2) } \put(1.8,.9){ \line(0,1){1} } } \put(1.1,.7){ \line(1,0){.5} } \put(1.1,.7){ \line(0,1){.5} \qbezier(0,.5)(.1,.6)(.2,.7) } \put(1.15,1.2){ \qbezier(0,.5)(.1,.6)(.2,.7) } \put(1.3,1.9){ \line(1,0){1.7} } \put(1.3,1.9){ \line(0,-1){.5} } \put(1.1,.7){ \line(0,1){1} } \put(1.1,1.7){ \line(1,0){1.7} } \put(1.6,1.2){ \line(0,-1){.5} \qbezier(0,0)(.1,.1)(.2,.2) } \put(1.1,1.2){ \line(1,0){1.5} } \put(1.3,1.4){ \line(1,0){.5} \qbezier(0,0)(0,-.01)(0,-.06) } \put(1.8,1.4){ \line(1,0){1} } \put(1.7,1.25){ \qbezier(0,0)(.3,-.1)(.75,0.05) \line(0,-1){.5} } \put(1.8,1.35){ \qbezier(0,0)(.2,-.1)(.75,0.05) } \put(1.7,.75){ \qbezier(0,0)(.2,-.1)(.65,0.05) } \put(1.65,.7){ \qbezier(0,0)(.02,.02)(.05,.05) } % \put(0,1){$E^{n,m}(\xi,\eta)\times [0,1]$} \put(-.15,1.6){$D_2^n\times D^m(\eta)\times [1,2]$} \put(.05,1.4){(transparent)} \put(-.8,.4){ \line(1,0){4} } \put(-.8,.4){ \line(2,3){.6} } \put(3.2,.4){ \line(1,2){.45} } \put(-.5,.5){$M\times 1$} \put(1.1,.7){ \line(1,0){.5} } % \end{picture}} }} \caption{(Negative Hatcher handle) $A^{n,m}(\xi,\eta)$ is attached to the top $M\times 1$ of $M\times I$ along its base $E^{n,m}(\xi,\eta)\times 0$.} \label{fig: negative Hatcher handle} \end{center} \end{figure} The negative Hatcher handle is shown in Figure \ref{fig: negative Hatcher handle} and also in the top figure in Figure \ref{AdT figure} where $A^{n,m}(\xi,\eta)=E_1$. \begin{lem} When we attach the negative suspension of $E^{n,m}(\xi,\eta)$ to the top of $M\times I$ along the map $D(s):D_2^n\times D^m(\eta)\times 1\to M\times1$, the result \[ M\times I\cup_{D(s)} \sigma_-E^{n,m}(\xi,\eta) \] is fiberwise diffeomorphic to $E_-^{n,m}(M,s,\xi)$ with higher difference torsion given by \[ \tau^\text{\rm IK}(E_-^{n,m}(M,s,\xi),M\times I) =-\tau^\text{\rm IK}(E^{n,m}(\xi,\eta))=(-1)^{n+1}\widetilde{ch}(\xi)\in H^{4\bullet}(B,\partial_0B) \] \end{lem} \begin{proof} When we attach $D_2^n\times D^m(\eta)\times [1,2]\subseteq \sigma_-E^{n,m}(\xi,\eta)$ to $M\times 1\subseteq \partial^\vertical\! M\times I$ using the map $D(s):D_2^n\times D^m(\eta)\times 1\to M\times 1$, the result is fiberwise diffeomorphic to $M\times I$: \[ M\times I\cup_{D(s)} D_2^n\times D^m(\eta)\times [1,2]\cong M\times I \] since we can pull $D_2^n\times D^m(\eta)\times I$ into $M\times I$ by the trivial Lemma \ref{first trivial lemma}. Therefore, \[ M\times I\cup_{D(s)} \sigma_-E^{n,m}(\xi,\eta)=M\times I\cup_{D(s)} D_2^n\times D^m(\eta)\times [1,2]\cup_{F(j)} A^{n,m}(\xi,\eta) \] is fiberwise diffeomorphic to $E_-^{n,m}(M,s,\xi)=M\times I\cup_{D(s)\circ F(j)} A^{n,m}(\xi,\eta)$. The higher torsion calculation follows from the relative handlebody lemma (Remark \ref{rem:handlebody lemma}).
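In more detail, a sketch of how the sign arises, assuming that the relative handlebody lemma assigns to the single attached handle the torsion of its Hatcher disk bundle and that negative suspension reverses the sign of the torsion: \[ \tau^\text{\rm IK}(E_-^{n,m}(M,s,\xi),M\times I)=-\tau^\text{\rm IK}(E^{n,m}(\xi,\eta))=(-1)^{n+1}\widetilde{ch}(\xi) \] where the last equality is Theorem \ref{torsion of twisted Hatcher disk bundle} above.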
\end{proof} \subsubsection{Cancellation of Hatcher handles} We will take the ``union'' of the two constructions given above and attach both positive and negative Hatcher handles along the same section $s:B\to M$ and show that they cancel. As before, we have a smooth embedding \[ D(s):D^n\times D^m(\eta)\to M \] whose image is a tubular neighborhood of $s(B)$. Inside this disk bundle we create two smaller isomorphic disk bundles using the embeddings \[ j_+,j_-:D^n\times D^m(\eta)\to D^n\times D^m(\eta) \] given by $j_+(x,y)=(\tfrac14(x+2e_n),y)$ where $e_n$ is the last unit vector of $D^n$ and $j_-(x,y)=(\tfrac14(x-2e_n),y)$. Since they are less than half as wide, these two embeddings are disjoint. Suppose that $E^{n,m}(\xi,\eta)$ is a Hatcher disk bundle as in the construction above. We first attach the positive Hatcher handle $B^{n,m}(\xi,\eta)$ along its base $\partial_0B^{n,m}(\xi,\eta)=D^n\times D^m(\eta)\times0$ to the top $M\times 1$ of $M\times I$ using the fiberwise embedding $D(s)\circ j_-$. Next we attach the negative Hatcher handle $A^{n,m}(\xi,\eta)$ to the top of $M\times I$ along its base $\partial_0A^{n,m}(\xi,\eta)=E^{n,m}(\xi,\eta)$ using the composite map \[ E^{n,m}(\xi,\eta)\xrightarrow{F(j)}D^n\times D^m(\eta)\xrightarrow{j_+} D^n\times D^m(\eta)\xrightarrow{D(s)}M \] Let $T$ be the image of $D(s)$ with corners rounded. Thus $T$ is a $D^{n+m}$-bundle over $B$. Let $S=\partial^\vertical\! T$ be the fiberwise boundary of $T$. This is a sphere bundle over $B$. After attaching the positive and negative Hatcher handles to the top of $M\times I$ we get a new bundle \[ W=M\times I\cup_{D(s)\circ j_-} B^{n,m}(\xi,\eta)\cup_{D(s)\circ j_+\circ F(j)}A^{n,m}(\xi,\eta) \] Note that since $B^{n,m}(\xi,\eta)$ and $A^{n,m}(\xi,\eta)$ are both attached in the interior of $T$, this new bundle is the union of $C\times I$ and $T\times I\cup B\cup A$ where $C$ is the closure of $M-T$ and $A,B$ denote the Hatcher handles.
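As a consistency check on the signs (assuming the difference torsion is additive over the two disjointly attached handles, as in the relative handlebody lemma), the positive and negative Hatcher handles contribute opposite torsions: \[ \tau^\text{\rm IK}(W,M\times I)=(-1)^n\widetilde{ch}(\xi)+(-1)^{n+1}\widetilde{ch}(\xi)=0 \] which is consistent with the cancellation proposition below stating that $W$ is fiberwise diffeomorphic to $M\times I$.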
\begin{figure}[ht] \begin{center} { \setlength{\unitlength}{3cm} {\mbox{ \begin{picture}(3.5,2.5) \thicklines \put(.5,.5){\line(1,0){2} } \put(2.95,.5){\line(1,0){.45} } \put(0,0){ \qbezier(0,0)(.25,.25)(.5,.5) \line(1,0){3} } \put(0,0.5){ \put(1.3,1.4){ \line(1,0){1.5} } \put(1.1,1.2){ \line(0,1){.5} \qbezier(0,.5)(.1,.6)(.2,.7) } \put(2.8,1.2){ \line(0,1){.5} \qbezier(0,.5)(.1,.6)(.2,.7) } \put(3,1.4){ \line(0,1){.5} } \put(1.3,1.4){ \line(0,1){.5} } \put(1.3,1.9){ \line(1,0){1.7} } \put(1.1,1.7){ \line(1,0){1.7} } \put(1.1,1.2){ \line(1,0){1.7} } } \put(3,0){ \qbezier(0,0)(.2,.25)(.4,.5) } \put(.5,.1){$M\times 1$ } % \put(1.2,0){ \put(1.1,.7){ \line(1,0){.5} } \put(1.1,.7){ \line(0,1){.5} \qbezier(0,.5)(.1,.6)(.2,.7) } \put(1.6,1.2){ \line(0,-1){.5} \qbezier(0,0)(.1,.1)(.2,.2) } \put(1.1,1.2){ \line(1,0){.5} } \put(1.3,1.4){ \line(1,0){.5} } \put(1.8,.9){ \line(0,1){.5} \qbezier(0,0)(-.1,-.1)(-.2,-.2) }} \put(1.2,0.5){ \put(1.1,.7){ \line(1,0){.5} } \put(1.1,.7){ \line(0,1){.5} \qbezier(0,.5)(.1,.6)(.2,.7) } \put(1.6,1.2){ \line(0,-1){.5} \qbezier(0,0)(.1,.1)(.2,.2) } \put(1.1,1.2){ \line(1,0){.5} } \put(1.3,1.4){ \line(1,0){.5} } \put(1.3,1.4){ \line(0,-1){.5} } \put(1.8,.9){ \line(0,1){.5} \qbezier(0,0)(-.1,-.1)(-.2,-.2) }} \put(1.1,.7){ \line(1,0){.5} } \put(1.1,.7){ \line(0,1){.5} \qbezier(0,.5)(.1,.6)(.2,.7) } \put(1.6,1.2){ \line(0,-1){.5} \qbezier(0,0)(.1,.1)(.2,.2) } \put(1.6,1.7){ \line(0,-1){.5} \qbezier(0,0)(.1,.1)(.2,.2) } \put(1.1,1.2){ \line(1,0){.5} } \put(1.1,1.2){ \line(0,1){.5} } \put(1.3,1.4){ \line(1,0){.5} \qbezier(0,0)(0,-.01)(0,-.06) } \put(1.7,1.25){ \qbezier(0,0)(.3,-.1)(.75,0.05) \line(0,-1){.5} } \put(1.8,1.35){ \qbezier(0,0)(.2,-.1)(.75,0.05) } \put(0,.5){ \put(1.7,1.25){ \qbezier(0,0)(.3,-.1)(.75,0.05) \line(0,-1){.5} } \put(1.8,1.35){ \qbezier(0,0)(.2,-.1)(.75,0.05) } } \put(1.7,.75){ \qbezier(0,0)(.2,-.1)(.65,0.05) } \put(1.65,.7){ \qbezier(0,0)(.02,.02)(.05,.05) } % \put(2.5,.3){\line(1,0){.3}} \put(2.5,.3){\line(0,1){.4}} \put(2.8,.3){\line(0,1){.4}} \put(2.8,.3){ \qbezier(0,0)(.1,.1)(.15,.15) } \put(2.9,.45){ \line(0,1){.35} } \put(1.8,1.4){ \line(1,0){1} } \put(1.6,1.2){ \line(1,0){1} } \put(1.34,1.4){\line(0,1){.5} } \put(1.84,1.4){\line(0,1){.5} } \put(1.8,1.36){\line(0,1){.5} } \put(2.45,1.31){\line(0,1){.5} } \put(1.14,1.7){ \qbezier(0,0)(.1,.1)(.2,.2) } \put(0,.2){ \put(0.15,1.6){$A^{n,m}(\xi,\eta)$} \put(.05,1.4){(transparent)} } \put(0.15,.9){$B^{n,m}(\xi,\eta)$} % \end{picture}} }} \caption{(Arc de Triomphe) The negative Hatcher handle $A^{n,m}(\xi,\eta)$ is attached on top of the positive Hatcher handle $B^{n,m}(\xi,\eta)$ forming the Arc de Triomphe $V^{n,m}(\xi,\eta)$ which is diffeomorphic to $A^{n,m}(\xi,\eta)$ but attached to the top $M\times 1$ of $M\times I$ on the ``stem'' $D_0^n\times D^m(\eta)\times I$.} \label{fig: cancelling Hatcher handles} \end{center} \end{figure} \begin{prop}[second cancellation lemma]\label{second AdT cancellation lemma} $W$ is fiberwise diffeomorphic to $M\times I$ after rounding corners and this diffeomorphism is the identity on $C\times I$ and on $M\times 0$. \end{prop} \begin{proof} The argument is almost the same as in Proposition \ref{basic AdT cancellation lemma}. 
Since $\partial_0A^{n,m}(\xi,\eta)=E^{n,m}(\xi,\eta)$ is a disk bundle attached using the same tangential data as $B^{n,m}(\xi,\eta)$, there is an isotopy of the attaching map ${D(s)\circ j_+\circ F(j)}$ of the negative Hatcher handle $A^{n,m}(\xi,\eta)$ to the mapping \[ E^{n,m}(\xi,\eta)\to E^{n,m}(\xi,\eta)\times 1\subset (E^{n,m}(\xi,\eta)\cup D_0^n\times D^m(\eta))\times I=B^{n,m}(\xi,\eta) \] placing $A^{n,m}(\xi,\eta)$ onto the top side $E^{n,m}(\xi,\eta)\times 1$ of the positive Hatcher handle $B^{n,m}(\xi,\eta)=E^{n,m}(\xi,\eta)\cup D_0^n\times D^m(\eta)\times I$. After moving the attaching map, $A^{n,m}(\xi,\eta)$ is attached on top of $E^{n,m}(\xi,\eta)\times I$ and their union is \[ V^{n,m}(\xi,\eta)=E^{n,m}(\xi,\eta)\times I\cup A^{n,m}(\xi,\eta)=E^{n,m}(\xi,\eta)\times [0,2]\cup D_2^n\times D^m(\eta)\cong A^{n,m}(\xi,\eta) \] which is attached on $M\times 1$ along the image of $D(s)\circ j_-$ by the ``stem'' $D_0^n\times D^m(\eta)$. By Lemma \ref{first trivial lemma}, $V^{n,m}(\xi,\eta)\cup D_0^n\times D^m(\eta)$ is fiberwise diffeomorphic to $D_2^n\times D^m(\eta)\cup D_0^n\times D^m(\eta)$. This is a linear disk bundle and, therefore, attaching it to the top of $T\times I$ gives a bundle $X$ which is fiberwise diffeomorphic to $T\times I$ by a diffeomorphism fixing $S\times I$. This sequence of deformations and diffeomorphisms gives a diffeomorphism $T\times I\cup B\cup A\cong T\times I$ which is the identity on $S\times I$ and, therefore, can be pasted with $C\times I$ to give a fiberwise diffeomorphism $W=C\times I\cup T\times I\cup B\cup A\cong M\times I$ as claimed. \end{proof} \subsection{Immersed Hatcher handles}\label{subsecA24} Since ``Hatcher handles'' are attached in a neighborhood of one point, several of them can be attached at different points at the same time. And, in the AdT construction, there are necessarily two Hatcher handles attached to the same fiber. Let $L$ be a $q$-manifold with boundary $\partial L=\partial_0 L\cup \partial_1 L$ where $\partial_0 L,\partial_1 L$ are $(q-1)$-manifolds meeting along their common boundary. Let $\lambda:L\to B$ be an immersion so that $\lambda^{-1}(\partial_1 B)=\partial_1 L$ and let $\tilde\lambda:L\to M$ be an embedding over $\lambda$. Then the immersed Hatcher handle construction will modify the smooth structure of $M$ in a neighborhood of the image of $\tilde\lambda$. The reason that $\lambda:L \to B$ will be an immersion and not an embedding is because, in the proof of the key result, we will start with an Arc de Triomphe construction and separate the positive and negative Hatcher handles into immersed Hatcher handles. Since the AdT construction requires two handles to be attached over the same point in $B$, the mapping $\lambda:L\to B$ parametrizing the separate handles will be two-to-one near these points. So, we cannot assume that $\lambda$ is an embedding. Suppose as before that $m>n>q$ and let \[ D(\tilde\lambda):D_2^n\times D^m(\eta)\into M \] be a smooth embedding over $\lambda:L\to B$ where $\eta$ is the pull-back along $\tilde\lambda:L\to M$ of the stable vertical tangent bundle of $M$. As before, $D_2^n$ is the disk of radius 2 in $\ensuremath{{\field{R}}}^n$. Let $\xi$ be an $n$-plane bundle over $L$ which is trivial over $\partial_1L$ so that $J(\xi)=0\in J(L/\partial_1L)$ and let $\eta_0$ be the unique $m$-plane bundle over $L$ so that $\xi\oplus\eta_0\cong \eta$.
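Note the simplest case: if $L=B$, $\lambda=id_B$ and $\tilde\lambda=s$ is a section, then, up to the tapering along $\partial_0B$ described below, the construction given below reduces to the positive Hatcher handle $E_+^{n,m}(M,s,\xi)$ of the previous subsection and, since $\lambda_\ast$ is then the identity, the torsion formula of Theorem \ref{torsion of immersed Hatcher} below specializes to \[ \tau^\text{\rm IK}(E_+^{n,m}(M,s,\xi),M\times I)=(-1)^n\widetilde{ch}(\xi)\in H^{4\bullet}(B,\partial_0B) \] in agreement with the theorem for positive Hatcher handles above.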
We define $W=E_+^{n,m}(M,\tilde\lambda,\xi)$ to be the smooth $h$-cobordism bundle over $B$ with $\partial_0W=M$ given by \[ E_+^{n,m}(M,\tilde\lambda,\xi)=M\times I\ \cup_{D(\tilde\lambda)\circ F(j)} B^{n,m}(\xi,\eta) \] where $B^{n,m}(\xi,\eta)$ is the positive Hatcher handle parametrized by $L$. This Hatcher handle will be ``tapered off'' along $\partial_0L$, by which we mean (in the case when $\lambda$ is an embedding) that we construct a fiberwise diffeomorphism $E_+^{n,m}(M,\tilde\lambda,\xi)\cong M\times I$ over $\lambda(\partial_0L)$. When $\lambda$ is an immersion, there will be points $b\in B$ so that $\lambda^{-1}(b)$ contains more than one point. I.e., more than one Hatcher handle will be attached to the fiber $M_b$ of $M$ over $b$. In this case, we will delete those handles corresponding to the elements of $\partial_0L$. By ``tapering off'' we mean that we will make this deletion operation smooth with respect to $b\in B$. To do this, we ``dig a hole'' underneath the Hatcher handle. The idea is that the ``hole'' is perfectly cylindrical, but we fill it with a deformed plug (the Hatcher handle). Over a neighborhood of $\partial_0L$, the Hatcher handle is fiberwise diffeomorphic to the trivial disk bundle. So, over these points, the plug will fit perfectly into the hole and the refilled hole will not be noticeable. When $b$ moves around $B$ and the number of inverse image points in $L$ varies, this trick will make the transition smooth. First we note that the smooth disk bundle over $L$ given by \[ E_L^{n,m+1}(\xi,\eta)=D_2^n\times D^m(\eta)\times I\cup_{F(j)} B^{n,m}(\xi,\eta) \] is fiberwise diffeomorphic to $D_2^n\times D^m(\eta)\times I$ over a small neighborhood of $\partial_0L$. We choose such a diffeomorphism. Let $T$ be the image of $D(\tilde\lambda):D_2^n\times D^m(\eta)\to M$. So $T\times I\subseteq M\times I$ is fiberwise diffeomorphic to $D^n\times D^m(\eta)\times I$. (In the analogy, $T\times I$ is the cylindrical chunk of dirt we pull out of the ``ground'' $M\times I$ creating a cylindrical hole: $(M-T)\times I$. We now fill the hole with $E_L^{n,m+1}(\xi,\eta)$ which is equivalent to $T\times I$ near $\partial_0L$ by the chosen fiberwise diffeomorphism.)
The smooth $h$-cobordism bundle $E_+^{n,m}(M,\tilde\lambda,\xi)$ is given by: \[ E_+^{n,m}(M,\tilde\lambda,\xi)=(M-T)\times I\cup E_L^{n,m+1}(\xi,\eta) \] \begin{thm}[torsion of immersed Hatcher handle]\label{torsion of immersed Hatcher} The higher \text{\rm IK}-difference torsion of this bundle with respect to $M\times I$ is the image under the mapping \[ \lambda_\ast:H^{4\bullet}(L,\partial_0L)\cong H_{q-4\bullet}(L,\partial_1L)\to H_{q-4\bullet}(B,\partial_1B)\cong H^{4\bullet}(B,\partial_0B) \] of the normalized Chern character of $\xi$: \[ \tau^\text{\rm IK} (E_+^{n,m}(M,\tilde\lambda,\xi),M\times I)= \lambda_\ast\left((-1)^n\widetilde{ch}(\xi)\right)\in H^{4\bullet}(B,\partial_0B;\ensuremath{{\field{R}}}) \] \end{thm} \begin{rem} This theorem can be viewed as the commutativity of the diagram: \[ \xymatrix{ G(L,\partial_0L)\ar[r]^{E_L^n(-,\eta)}\ar[dr]_{E_+^n(M,\tilde\lambda,-)}\ar@/^2pc/[rr]^{(-1)^n\widetilde{ch}} & \pi_0\std{L}{\partial_0}(D(\eta))\ar[d]^{D(\tilde\lambda)_\ast} \ar[r]_{\tau^\text{\rm IK}}& H^{4\bullet}(L,\partial_0L;\ensuremath{{\field{R}}})\ar[d]^{\lambda_\ast}\\ \qquad\qquad\qquad\qquad& \pi_0\std{B}{\partial_0}(M)\ar[r]_{\tau^\text{\rm IK}}& H^{4\bullet}(B,\partial_0B;\ensuremath{{\field{R}}}) } \] The commutativity of the upper curved triangle is Theorem \ref{torsion of twisted Hatcher disk bundle}. \end{rem} To prove this, we need to recall the precise statement of the Framing Principle from \cite{I:ComplexTorsion}. Suppose that $W\to B$ is a smooth $h$-cobordism bundle with fiberwise boundary equal to \[ \partial^\vertical\! W=M\cup \partial^\vertical\! M\times I\ \cup M_1 \] and $f:W\to I$ is a fiberwise generalized Morse function equal to $0$ on $M$ and $1$ on $M_1$ and equal to projection to $I$ on $\partial^\vertical\! M\times I$. Suppose that the fiberwise singular set $\Sigma(f)$ of $f$ does not meet $W_{\partial_0B}$. In particular, $W_{\partial_0B}\cong M_{\partial_0B}\times I$. We are in the restricted case when the birth-death points of $f$ are {\bf framed} in the sense that the negative eigenspace bundle of $D^2f$ is trivial over the birth-death points. This implies that, over the set $\Sigma_i(f)$ of Morse points of $f$ of index $i$, the negative eigenspace bundle of $D^2f$ is trivial along $\partial_0\Sigma_i(f)$, which is equal to the set of birth-death points to which $\Sigma_i(f)$ converges. The Framing Principle was proved in this restricted case in \cite{I:BookOne}. In general, the negative eigenspace bundle is a well defined stable vector bundle $\xi=\xi(f)$ on the entire singular set $\Sigma(f)$. It is defined as follows. At each index $i$ critical point $x$ of $f$ let $\xi(x)=\xi_i(x)\oplus \epsilon^{N-i}$ where $\xi_i(x)$ is the $i$-dimensional negative eigenspace of $D^2f$ and $\epsilon^{N-i}$ is the trivial bundle with dimension $N-i$ where $N=n+m+1$ is the dimension of the fiber of $W\to B$. This defines an $N$-plane bundle over $\Sigma_i(f)$. At each cubic point we identify the positive cubic direction with the positive first coordinate direction in $\epsilon^{N-i}$. This has the effect of pasting together these $N$-plane bundles over $\Sigma_i(f)$ and $\Sigma_{i+1}(f)$ along their common boundary for each $i$. The result is an $N$-plane bundle over all of $\Sigma(f)$. The projection mapping $p:(\Sigma(f),\partial\Sigma(f))\to (B,\partial_1B)$ induces a map in cohomology using Poincar\'e duality, assuming that $B$ is oriented. (If $B$ is not oriented then just replace it with the disk bundle of the orientation line bundle.)
\[ p^\Sigma_\ast:H^\ast(\Sigma(f))\cong H_{q-\ast}(\Sigma(f),\partial\Sigma(f))\to H_{q-\ast}(B,\partial_1B)\cong H^\ast(B,\partial_0B) \] Similarly, for each index $i$ we have the push-down operator: \[ p_\ast:H^\ast(\Sigma_i(f),\partial_0\Sigma_i(f))\cong H_{q-\ast}(\Sigma_i(f),\partial_1\Sigma_i(f))\to H_{q-\ast}(B,\partial_1B)\cong H^\ast(B,\partial_0B) \] where $\partial_1\Sigma_i(f)=\Sigma_i(f)\cap\partial\Sigma(f)$ and $\partial_0\Sigma_i(f)$ is the set of birth-death points in the closure of $\Sigma_i(f)$. We use the orientation for $\Sigma_i(f)$ which agrees with the orientation of $B$ and we take the orientation of $\Sigma(f)$ which agrees with the orientation of $\Sigma_i(f)$ for $i$ even. As a result of these sign conventions we have the following observation. \begin{lem} In the restricted case when the birth-death points of $f$ are framed, the image under $p^\Sigma_\ast$ of the Chern character of $\xi(f)$ is equal to the alternating sum of the images under the push-down operators \[ p_\ast:H^{4\bullet}(\Sigma_i(f),\partial_0\Sigma_i(f))\to H^{4\bullet}(B,\partial_0B) \] of the Chern characters of $\xi_i=\xi|\Sigma_i(f)$: \[ p^\Sigma_\ast(ch(\xi\otimes\ensuremath{{\field{C}}}))=\sum_i (-1)^ip_\ast(ch(\xi_i\otimes\ensuremath{{\field{C}}}))\in H^{4\bullet}(B,\partial_0B) \] \end{lem} \begin{thm}[Relative Framing Principle]\label{relative framing principle} Suppose that the manifold $B$ and the stable bundle $\xi=\xi(f)$ are both oriented. Then the higher relative \text{\rm IK}-torsion invariant $\tau^\text{\rm IK}(W,M)\in H^{4\bullet}(B,\partial_0B)$ is given by the higher torsion of the family of acyclic chain complexes $C(f)$ given by $f$ plus the push down of the normalized Chern character of $\xi$: \[ \tau^\text{\rm IK}(W,M)=\tau(C(f))+p^\Sigma_\ast(\widetilde{ch}(\xi))\in H^{4\bullet}(B,\partial_0B) \] \end{thm} \begin{proof} The published version of the Framing Principle \cite{I:ComplexTorsion} assumes that $\partial_0B$ is empty. However, the relative case follows easily from the absolute case in the present setting where we have an $h$-cobordism bundle $W$. Just take the base $\partial_0W=M$ and embed it into the boundary of a very large dimensional trivial disk bundle $B\times D^N$. Let $\nu_M$ be the vertical normal bundle of $M$ in $B\times S^{N-1}$ and let $\nu_W$ be the extension of $\nu_M$ to $W$. Then we have a new bundle: \[ \Delta=B\times D^N\cup D(\nu_W) \] over $B$. Since $D(\nu_W)$ is an $h$-cobordism bundle, this is a smooth $N$-disk bundle over $B$ (after rounding off corners). By additivity and invariance after passing to linear disk bundles, we have: \[ \tau^\text{\rm IK}(W,M)=\tau^\text{\rm IK}(D(\nu_W,\nu_M))=\tau^\text{\rm IK}(\Delta,B\times D^N)=\tau^\text{\rm IK}(\Delta) \] But $\Delta$ is a disk bundle over $B$ which is trivial over $\partial_0B$. So, we can collapse $\partial_0B$ to a point to get a new bundle $\overline\Delta$ over $B/\partial_0B$. The Framing Principle for $\overline\Delta\to B/\partial_0B$ is then equivalent to the relative Framing Principle for $(W,M)$. To do this more precisely, we do the same trick as before, removing a tube $T=D(\nu_M)\times I$ in a collar neighborhood of $B\times S^{N-1}$ and replacing it with $W$. The new fiberwise Morse function will be equal to the distance squared from the origin in $B\times D^N-T$ and equal to $f$ (rescaled to match) on $W$. Now we collapse the bundle over $\partial_0B$.
By construction, the fiberwise generalized Morse function will factor through this quotient bundle and the original Framing Principle applies. \end{proof} \begin{proof}[Proof of Theorem \ref{torsion of immersed Hatcher}] We will start with a fiberwise oriented Morse function on the bundle $E_L^{n,m}(\xi,\eta)\to L$ and then modify it to give a fiberwise oriented generalized Morse function which is framed on the birth-death set. The bundle $E_L=E_L^{n,m}(\xi,\eta)$ is obtained from $D_2^n\times D^m(\eta)\times I$ by attaching two handles with cores of dimension $n-1$ and $n$. (For a more elaborate version of this with more details, see \cite{Goette03}.) This means it has a fiberwise Morse function $f:E_L\to I$ which is equal to the projection map to $I$ in a neighborhood of the bottom $D_2^n\times D^m(\eta)\times 0$ and sides $\partial(D_2^n\times D^m(\eta))\times I$. Furthermore, $f$ will have two critical points over every point $t\in L$. These critical points $x_t,y_t$ have index $n-1$ and $n$ respectively. The vertical tangent bundle of $E_L$ splits as $\epsilon^{n-1}\oplus (\eta\oplus \epsilon^1)$ along the section $x_t$ of $E_L$ where the trivial $(n-1)$-plane bundle $\epsilon^{n-1}$ is the negative eigenspace of $D^2f_t$ along $x_t$. The vertical tangent bundle of $E_L$ along $y_t$ splits as $\xi\oplus (\eta_0\oplus \epsilon^1)$ where the vector bundle $\xi$, which is homotopically trivial in the sense that $J(\xi)=0$, is the negative eigenspace bundle. Along $\partial_0L$, the bundle $\xi$ is trivial and the handle corresponding to $y_t$ is in cancelling position with the handle corresponding to $x_t$ since they are both standard linear handles along $\partial_0L$ by construction. This implies that these critical points can be cancelled along a birth-death set of index $n-1$. Since the negative eigenspace bundle $\xi$ is trivial along this set, this is a framed birth-death set. The new singular set $\Sigma(f)$ is now a $q$-manifold with boundary lying over $\partial_1L$. It has a framed birth-death set and Morse sets in two indices $\Sigma_n(f)$ and $\Sigma_{n-1}(f)$. The descending bundles are $\xi_{n-1}=\epsilon^{n-1}$ and $\xi_n=\xi$. These are oriented bundles since they are homotopically trivial. Also the cellular chain complex is trivial at every point. Therefore, by the Framing Principle, the higher relative \text{\rm IK}-torsion of $E_L^{n,m}(\xi,\eta)$ is \[ \tau^\text{\rm IK}(E_L^{n,m}(\xi,\eta),D^n\times D^m(\eta)\times I)=(-1)^n\widetilde{ch}(\xi)\in H^{4\bullet}(L,\partial_0L) \] From this fiberwise oriented generalized Morse function we can construct a fiberwise oriented generalized Morse function $F$ on $E_+^{n,m}(M,\tilde\lambda,\xi)=(M-T)\times I\cup E_L$ by taking projection to $I$ on the first piece $(M-T)\times I$ and $f$ on the second piece $E_L$. The singular set of $F$ is the image under $D(\tilde\lambda)$ of the singular set of $f$. Consider the following commuting diagram. \[ \xymatrix{ \Sigma_n(f)\ar[r]^\subset\ar[rd]^\simeq&\Sigma(f)\ar[d]\ar[r]^(.45){D(\tilde\lambda)} & \Sigma(F)\ar[d]^p\\ &L \ar[r]^\lambda& B } \] This implies that the image of the push-down of the Chern character of $\xi$ along the map $p$ is equal to the image of the Chern character of $\xi$ under $\lambda$. So, by the relative Framing Principle, we have \[ \tau^\text{\rm IK}(E_+^{n,m}(M,\tilde\lambda,\xi),M)=(-1)^np_\ast(\widetilde{ch}(\xi))=(-1)^n\lambda_\ast(\widetilde{ch}(\xi)) \] as claimed. \end{proof} \section{Main Theorems} There are two main theorems in this paper.
The first concerns the set of possible higher torsion invariants of exotic smooth structures on smooth manifold bundles. The second theorem is that, rationally stably, the immersed Hatcher construction gives all possible exotic smooth structures on smooth manifold bundles with odd dimensional fibers. This is a combination of the following two theorems. First recall from Section 2 of \cite{Second} that \[ \pi_0\std{B}{\partial_0}(M)\otimes\ensuremath{{\field{R}}} \cong H_{q-4\bullet}(M,M_{\partial_1B}) \] where the spot $\bullet$ indicates direct sum over all $k>0$ with real coefficients unless otherwise indicated and the image of an exotic smooth structure $M'$ on $M$ is denoted \[ \Theta_M(M')=\Theta(M',M)\in H_{q-4\bullet}(M,M_{\partial_1B}) \] and we call it the {\bf (rational) exotic structure class} of $M'$. \begin{thm}\label{first main theorem} When the fiber dimension is odd, the rational exotic structure class $\Theta(M',M)$ given by the immersed Hatcher construction $E_+^{n,m}(M,\tilde\lambda,\xi)$ is the image of the Poincar\'e dual of twice the normalized Chern character of $\xi$ under the map in homology induced by the embedding $\tilde\lambda:(L,\partial_1L)\to (M,M_{\partial_1B})$. Thus: \[ \Theta(M',M)=(-1)^n\tilde\lambda_\ast D(2\widetilde{ch}(\xi)) \] where $\widetilde{ch}(\xi)\in H^{4\bullet}(L,\partial_0L)$ is given in Definition \ref{def: normalized Chern character} and $\tilde\lambda_\ast\circ D$ is the composition: \[ H^{4\bullet}(L,\partial_0L) \xrightarrow{\cong} H_{q-4\bullet}(L,\partial_1L)\xrightarrow{\tilde\lambda_\ast} H_{q-4\bullet}(M,M_{\partial_1B}) \] \end{thm} \begin{rem}\label{rem: scalar multiple of integral class} By definition of the normalized Chern character, the exotic structure class $\Theta(M',M)$ lies in the image of \[ H_{q-4\bullet}(M,M_{\partial_1B};\zeta(2k+1)\ensuremath{{\field{Q}}}) \] In particular, $\Theta(M',M)$ is a scalar multiple of an integral class in every degree. \end{rem} \begin{proof} The proof will show the commutativity of the following diagram which is a slightly stronger statement: \[ \xymatrix{ G(L,\partial_0L)\ar[rr]_{top\,E_L^n(-,\eta)}\ar[drr]_{top\,E_+^n(M,\tilde\lambda,-)}\ar@/^2pc/[rrrr]_{D\circ (-1)^n2\widetilde{ch}} && \pi_0\std{L}{\partial_0}(E)\ar[d]^{D(\tilde\lambda)_\ast} \ar[rr]_(.4){D\circ\tau^\text{\rm IK}}&& H_{q-4\bullet}(L,\partial L)\ar[d]^{\lambda_\ast}\ar[dl]^{\tilde\lambda_\ast}\\ && \pi_0\std{B}{\partial_0}(M)\ar[r]_(.4){\Theta}& H_{q-4\bullet}(M,M_{\partial_1B})\ar[r]_{p_\ast} & H_{q-4\bullet}(B,\partial_1B) } \] Here $G(L,\partial_0L)=[L/\partial_0L,G/O]$. The middle portion can be expanded into the following diagram where $E=D^n\times D^m(\eta)$ is the disk bundle over $L$ which is diffeomorphic to a tubular neighborhood of the image of $\tilde\lambda:L\to M$. \[ \xymatrix{ \pi_0\std{L}{\partial_0}(E)\ar[d]_{D(\tilde\lambda)_\ast}\ar[r]_(.45){\Theta_E}\ar@/^2pc/[rr]_{D\circ\tau^\text{\rm IK}} & H_{q-4\bullet}(E,E_{\partial_1})\ar[d]_{D(\tilde\lambda)_\ast}\ar[r]_\cong & H_{q-4\bullet}(L,\partial_1L)\ar[dl]^{\tilde\lambda_\ast}\\ \pi_0\std{B}{\partial_0}(M) \ar[r]^(.45){\Theta_M}& H_{q-4\bullet}(M,M_{\partial_1}) } \] The morphisms $\Theta_E,\Theta_M$ in the second diagram are isomorphisms of vector spaces after tensoring with $\ensuremath{{\field{R}}}$ by Theorem 2.2.3 of \cite{Second} and the vertical maps are all induced by $\tilde\lambda:L\to M$ and $D(\tilde\lambda):E\to M$. The square commutes by Corollary 2.4.3 of \cite{Second}.
The triangle on the right commutes since it comes from a commuting diagram of spaces. The composition of the top two arrows is equal to $D\circ\tau^\text{\rm IK}$ by normalization of $\Theta_E$ (Proposition 2.2.4 of \cite{Second}). Therefore, the second diagram commutes. So, the middle quadrilateral in the first diagram commutes. If we look at the top of the immersed Hatcher handle we get an element \[ top(E_+^{n,m}(M,\tilde\lambda,\xi))\in \std{B}{\partial_0}(M) \] which, by construction, is the image of the Hatcher disk bundle \[ E'=top(E_+^{n,m}(E,0,\xi))\in \std{L}{\partial_0}(E) \] under the stratified map $\std{L}{\partial_0}(E)\to\std{B}{\partial_0}(M)$. The composition of the horizontal mappings on the top row of the first diagram takes $\xi\in G(L,\partial_0L)$ to $\tau^\text{\rm IK}(E')$. And the last statement we need to prove is: \[ \tau^\text{\rm IK}(E')=(-1)^n2\widetilde{ch}(\xi). \] This follows from the following four equations where $E$ is the bottom of $E_+^{n,m}(E,0,\xi)$. Since $E$ is a linear disk bundle, we have $\tau(E)=0$ for any stable torsion invariant $\tau$. \begin{enumerate} \item $\tau(E)=0=\frac12\tau(DE)+\frac12\tau(\partial^\vertical\! E)$ by Remark \ref{rem:extension to not closed fibers}. \item $\tau(E')=\frac12\tau(DE')+\frac12\tau(\partial^\vertical\! E)$ since $\partial^\vertical\! E'=\partial^\vertical\! E$. \item $\tau(\partial^\vertical\! E_+^{n,m}(E,0,\xi))=\frac12\tau(DE)+\frac12\tau(DE')$ by Additivity Axiom \ref{defn:axiomatic higher torsion}. \item $\tau^\text{\rm IK}(\partial^\vertical\! E_+^{n,m}(E,0,\xi))-\tau^\text{\rm IK}(DE)=(-1)^n2\widetilde{ch}(\xi)$ by Corollary \ref{cor: torsion of twisted Hatcher sphere bundle}. \end{enumerate} Indeed, subtracting (1) from (2) gives $\tau(E')=\frac12\tau(DE')-\frac12\tau(DE)$, and substituting (3) into (4) gives $\frac12\tau(DE')-\frac12\tau(DE)=(-1)^n2\widetilde{ch}(\xi)$, so $\tau^\text{\rm IK}(E')=(-1)^n2\widetilde{ch}(\xi)$ as required. This proves the commutativity of the first diagram and the theorem follows. \end{proof} \begin{prop}\label{main proposition} The vector space $H_{q-4\bullet}(M,M_{\partial_1B})$ is spanned by the images of the possible maps \[ G(L,\partial_0L)\to H_{q-4\bullet}(M,M_{\partial_1B}) \] given by $\tilde\lambda_\ast\circ D\circ(-1)^n2\widetilde{ch}=\Theta_M\circ top\, E_+^n(M,\tilde\lambda,-)$ in the theorem above. \end{prop} This proposition is proved below using the Arc de Triomphe construction. \begin{thm}\label{second main theorem} When the fiber dimension $N$ of $M\to B$ is odd and $B$ is oriented, the higher \text{\rm IK}-relative torsion of an exotic smooth structure $M'$ on $M$ over $(B,\partial_0B)$ and the rational exotic smooth structure class $\Theta(M',M)$ are related by \[ D\tau^\text{\rm IK}(M',M)=p_\ast\Theta(M',M) \] where $D$ is Poincar\'e duality and $p_\ast$ is the map in homology induced by $p:M\to B$. In other words, the following diagram commutes. \[ \xymatrix{ \pi_0\std{B}{\partial_0}(M)\ar[r]^(.45){\Theta} \ar@/_2pc/[rr]^{D\circ\tau^\text{\rm IK}} & H_{q-4\bullet}(M,M_{\partial_1B})\ar[r]^{p_\ast} & H_{q-4\bullet}(B,\partial_1B) } \] \end{thm} \begin{proof} The map $p_\ast$ is $\ensuremath{{\field{R}}}$-linear, and Theorem \ref{first main theorem} and Proposition \ref{main proposition} above say that the immersed Hatcher construction gives generators for $\pi_0\std{B}{\partial_0}(M)\otimes\ensuremath{{\field{R}}}\cong H_{q-4\bullet}(M,M_{\partial_1B})$ on which $p_\ast\circ\Theta$ agrees with the Poincar\'e dual of the higher relative \text{\rm IK}-torsion. The theorem follows. \end{proof} We have the following immediate corollary.
\begin{cor}\label{third main theorem} If $M$ is a smooth bundle over $B$ and both fiber and base are oriented manifolds with odd fiber dimension $N\ge 2q+3$, then the possible values of the higher \text{\rm IK}-relative torsion $\tau^\text{\rm IK}(M',M)$, for $M'$ an exotic smooth structure on $M$ which agrees with $M$ over $\partial_0B$, will span the image of the push-down map \[ p_\ast:H^{N+4\bullet}(M,\partial_0M)\to H^{4\bullet}(B,\partial_0B) \] where $\partial_0M=M_{\partial_0B}\cup\partial^\vertical\! M$. \end{cor} The immersed Hatcher construction and the Arc de Triomphe do not work to produce exotic smooth structures on smooth bundles $M\to B$ with closed even dimensional fibers. If the vertical boundary $\partial^\vertical\! M$ is nonempty, these constructions can be used to modify the smooth structure near the vertical boundary and we conjecture that this is the most that can be done. Nevertheless, if there is any way to produce an exotic smooth structure in the case of an even dimensional fiber, it follows easily by reduction to the odd dimensional case that the first part of Theorem \ref{second main theorem} still holds. \begin{cor}\label{second main theorem: even case} Theorem \ref{second main theorem} holds for even dimensional fibers. Thus: \[ D\tau^\text{\rm IK}(M',M)=p_\ast\Theta(M',M)\in H_{q-4\bullet}(B,\partial_1B) \] \end{cor} \begin{proof} If $M,M'\to B$ have even dimensional fibers then $M'\times I,M\times I\to B$ have odd dimensional fibers and we have: \[ \tau^\text{\rm IK}(M',M)=\tau^\text{\rm IK}(M'\times I,M\times I) \] since $\tau^\text{\rm IK}$ is a stable invariant. Since the fibers of $M\times I\to B$ are odd dimensional, Theorem \ref{second main theorem} applies and \[ D\tau^\text{\rm IK}(M'\times I,M\times I)=p_\ast\Theta(M'\times I,M\times I) \] This is equal to $p_\ast\Theta(M',M)$ since $\Theta$ is, by definition, a stable invariant. \end{proof} \subsection{Arc de Triomphe 2}\label{subsecA31} Proposition \ref{main proposition} follows from the Arc de Triomphe construction and the stratified deformation lemma \ref{stratified deformation lemma}. The Arc de Triomphe construction is an extension of the Hatcher construction which rationally stably produces all exotic smooth structures on a compact manifold bundle. The stratified deformation lemma shows that each AdT construction can be deformed into an immersed Hatcher construction. We explained the basic construction in subsection \ref{ss:AdT}. It only remains to describe the full construction and prove the following theorem. \begin{thm}[Arc de Triomphe Theorem]\label{AdT lemma} The AdT construction gives virtually all stable exotic smooth structures on a compact manifold bundle with odd dimensional fibers. In other words, AdT gives all elements in a subgroup of finite index in the group of all stable exotic smooth structures. \end{thm} \begin{rem} If $M\to B$ is a smooth bundle whose fibers are even dimensional, the AdT construction rationally stably produces all exotic smooth structures on $M\times I\to B$. By definition these are stable smooth structures on $M\to B$. So, the theorem implies that \emph{the AdT construction produces virtually all stable smooth structures on all compact manifold bundles}. \end{rem} \subsubsection{AdT construction} The Arc de Triomphe construction goes as follows. Suppose that $M\to B$ is a smooth manifold bundle over a compact oriented $q$-manifold $B$ with odd fiber dimension $N=n+m$ where $m>n>q$.
Suppose $\partial B=\partial_0B\cup \partial_1B$ where $\partial_0B, \partial_1B$ meet along their common boundary. Then we will construct elements of $\utd{B,\partial_0B}(M)$, the space of exotic smooth structures on $M$ relative to $\partial_0M=M_{\partial_0B}\cup \partial^\vertical\! M$. \begin{defn}\label{stratified set} By a {\bf stratified set} \emph{over $B$ with coefficients in $X$} we mean a pair $(\Sigma,\psi)$ where $\Sigma$ is a compact smooth oriented $q$-manifold together with a smooth mapping $\pi:\Sigma\to B$ and $\psi:\Sigma\to X$ is a continuous mapping satisfying the following. \begin{enumerate} \item $\pi$ sends $\partial \Sigma$ to $\partial B$. \item $\pi:\Sigma\to B$ has only fold singularities, i.e. it is given in local coordinates near critical points by $\pi(x_1,\cdots,x_q)=(x_1^2,x_2,\cdots,x_q)$, and the singular set $\Sigma_0$ is a $(q-1)$-submanifold of $\Sigma$ transverse to $\partial \Sigma$. \end{enumerate} Let $\Sigma_+$ and $\Sigma_-$ denote the closures of the subsets of $\Sigma-\Sigma_0$ on which the map $\pi:\Sigma\to B$ is orientation preserving and orientation reversing, respectively. Thus $\Sigma_-\cap \Sigma_+=\Sigma_0$ and $\Sigma_-\cup \Sigma_+=\Sigma$. We say that $(\Sigma,\psi)$ is a {\bf stratified subset} of a smooth bundle $M$ over $B$ if $\Sigma$ is a smooth submanifold of $M$ and $\pi:\Sigma\to B$ is the restriction of $p:M\to B$. \end{defn} \begin{rem}\label{rem: vertical vector field} For any stratified subset $(\Sigma,\psi)$ in $M$ over $B$, there is a nowhere zero vertical vector field $v$ along $\Sigma_0$ which points from $\Sigma_-$ to $\Sigma_+$. If the fiber dimension is greater than the base dimension, this vector field extends to a nowhere zero vertical vector field on all of $\Sigma$. If the fiber dimension is at least two more than the base dimension, this extension is unique up to homotopy. \end{rem} Let $SD^X_{B,\partial_0}(M)$ be the set of stratified deformation classes of stratified subsets $(\Sigma,\psi)$ of $M$ over $B$ with coefficients in $X$ so that $\pi(\Sigma)$ is disjoint from $\partial_0B$. By a {\bf stratified deformation} of stratified subsets $(\Sigma,\psi)\simeq(\Sigma',\psi')$ of $M$ we mean a stratified subset $(S,\Psi)$ of $M\times I$ over $B\times I$ with coefficients in $X$ so that the image of $S$ in $B\times I$ is disjoint from $\partial_0B\times I$ and so that $(\Sigma,\psi),(\Sigma',\psi')$ are the restrictions of $(S,\Psi)$ to $B\times 0,B\times 1$ respectively. Here are two examples that we will use later in the proof of the Stratified Deformation Lemma \ref{stratified deformation lemma}. In both cases, $M=B\times J$ where $J\subset\ensuremath{{\field{R}}}$ is one dimensional. Using Remark \ref{rem: vertical vector field} we will be able to embed these examples into a general stratified subset with sufficiently large fiber dimension. \begin{eg}[$k$-lens]\label{eg:k-lens} By a \emph{$k$-lens} we mean a stratified set $\Sigma$ diffeomorphic to $S^k$ with $\Sigma_+$ and $\Sigma_-$ both diffeomorphic to $D^k$. Here is an explicit example. Let $M=B\times J$ where $(B,\partial_0B)=(D^k,S^{k-1})$ and $J=[0,1]$. Let $\Sigma$ be the ellipsoid given by \[ \Sigma=\{(x,h)\in D^k\times [0,1]\,:\, ||2x||^2+(4h-2)^2=1\} \] with $\Sigma_+$ given by $h\ge\frac12$ and $\Sigma_-$ given by $h\le\frac12$. This set can also be given in polar coordinates by the equation \[ 4r^2+(4h-2)^2=1 \] where $(r,\theta,h)\in [0,1]\times S^{k-1}\times J$.
Since $\theta\in S^{k-1}$ does not occur in the equation, the set $\Sigma$ is given by \emph{spinning} the subset of the $r,h$-plane given by the above equation. \begin{figure}[htbp] \begin{center} { \setlength{\unitlength}{2cm} {\mbox{ \begin{picture}(2,1) \thicklines \qbezier(1,.25)(2,.25)(2,.5) \thinlines \qbezier(1,.75)(2,.75)(2,.5) \put(2,0){ \thicklines \qbezier(-1,.25)(-2,.25)(-2,.5) \thinlines \qbezier(-1,.75)(-2,.75)(-2,.5) } \put(2,.2){$\Sigma_-$} \put(2,.7){$\Sigma_+$} % \put(1,0){ \line(0,1){1} \qbezier(-.2,0)(-.2,-.1)(0,-.1) \qbezier(.2,0)(.2,-.1)(0,-.1) \qbezier(.2,0)(.19,-.05)(.22,-.06) \qbezier(.2,0)(.19,-.05)(.14,-.03) \put(.3,-.1){$S^{k-1}$} % } \end{picture}} }} \label{fig:k-lens} \end{center} \end{figure} \end{eg} \begin{eg}[mushroom]\label{eg:mushroom} Let $M=B\times J$ where $(B,\partial_0B)=(D^{k},\emptyset)$ and $J=[-3,3]$. Let $\Sigma\subset M\times (-1,1)$ be the stratified deformation given in polar coordinates $(r,\theta,h,t)\in [0,1]\times S^{k-1}\times J\times (-1,1)$ by the equation \[ 4(r^2+t^2)=h-\frac{h^3}3+1 \] \begin{figure}[htbp] \begin{center} { \setlength{\unitlength}{.7in} {\mbox{ \begin{picture}(7,1.2) \put(0,.2){ \thinlines \qbezier(1,.3)(1.5,.3)(2,.2) \put(2,0){ \qbezier(-1,.3)(-1.5,.3)(-2,.2) } \put(1.7,.35){$\Sigma_+$} % \put(.95,0){ \line(0,1){1} \qbezier(-.2,0)(-.2,-.1)(0,-.1) \qbezier(.2,0)(.2,-.1)(0,-.1) \qbezier(.2,0)(.19,-.05)(.22,-.06) \qbezier(.2,0)(.19,-.05)(.14,-.03) \put(.3,-.1){$S^{k-1}$} % } \put(.7,-.4){$t=-1,1$} } \put(2.5,.2){ \thinlines \qbezier(1,.75)(1.5,.75)(1.5,.65) \qbezier(1,.55)(1.5,.55)(1.5,.65) \qbezier(1,.3)(1.5,.3)(2,.2) \put(2,0){ \qbezier(-1,.75)(-1.5,.75)(-1.5,.65) \qbezier(-1,.55)(-1.5,.55)(-1.5,.65) \qbezier(-1,.3)(-1.5,.3)(-2,.2) } \put(1.7,.35){$\Sigma_+$} \put(1.2,.85){$T_+$} % \put(.95,0){ \line(0,1){1} \qbezier(-.2,0)(-.2,-.1)(0,-.1) \qbezier(.2,0)(.2,-.1)(0,-.1) \qbezier(.2,0)(.19,-.05)(.22,-.06) \qbezier(.2,0)(.19,-.05)(.14,-.03) \put(.3,-.1){$S^{k-1}$} % } \put(.7,-.4){$t=-\frac12,\frac12$} } \put(5,.2){ \thicklines \qbezier(1,.75)(1.5,.75)(1.5,.65) \qbezier(1.35,.5)(1.5,.5)(1.5,.65) \qbezier(1.35,.5)(1.2,.5)(1.2,.4) \thinlines \qbezier(1.4,.3)(1.8,.25)(2,.28) \qbezier(1.4,.3)(1.2,.3)(1.2,.4) \put(2,0){ \thicklines \qbezier(-1,.75)(-1.5,.75)(-1.5,.65) \qbezier(-1.35,.5)(-1.5,.5)(-1.5,.65) \qbezier(-1.35,.5)(-1.2,.5)(-1.2,.4) \thinlines \qbezier(-1.4,.3)(-1.8,.25)(-2,.28) \qbezier(-1.4,.3)(-1.2,.3)(-1.2,.4) } \put(1.7,.35){$\Sigma_+$} \put(1.2,.85){$T_+$} % \put(.95,0){ \line(0,1){1} \qbezier(-.2,0)(-.2,-.1)(0,-.1) \qbezier(.2,0)(.2,-.1)(0,-.1) \qbezier(.2,0)(.19,-.05)(.22,-.06) \qbezier(.2,0)(.19,-.05)(.14,-.03) \put(.3,-.1){$S^{k-1}$} % } \put(.85,-.4){$t=0$} } \end{picture}} }} \caption{For $t=\pm\frac12$, the stratified subset is a $k$-lens union a regular $\Sigma_+$ component. For $t=\pm1$, this $k$-lens disappears. For $t=0$, the rotated shape resembles a \emph{mushroom}. We call the deformation $t:-1\to 0$ ``planting a mushroom.'' } \label{fig: mushroom} \end{center} \end{figure} \end{eg} A stratified subset is $\Sigma\subset M$ together with a continuous mapping $\psi:\Sigma\to X$. The coefficient spaces $X$ that we are interested in are $X=BSO$, classifying oriented stable vector bundles over $\Sigma$, and $X=G/O=SG/SO$, classifying vector bundles with homotopy trivializations of the corresponding spherical fibration.
The latter is the input for Hatcher's construction and the Arc de Triomphe construction will be a mapping \[ AdT:\sdgo{B}{\partial_0}(M)\to \std{B}{\partial_0}(M) \] The claim is that this map is rationally split surjective. In other words, rationally stably, all exotic tangential smoothings on $M$ are given by the construction that we will now give. The idea of the construction is to attach negative Hatcher handles along $\Sigma_-$ and positive Hatcher handles along $\Sigma_+$ and have them cancel along $\Sigma_0$. The map $\psi:\Sigma\to G/O$ gives the bundle $\xi$ in the Hatcher handle. Suppose that $m>n>q$ and $M\to B$ is a smooth bundle with fiber dimension $m+n$ which we assume is odd ($2q+3$ is the minimum). Suppose we have a stratified subset $\Sigma\subset M$ with coefficient map $\psi:\Sigma\to G/O$. This gives a stable vector bundle $\xi$ over $\Sigma$. Let $\eta$ be the unique $m$-plane bundle over $\Sigma$ isomorphic to the pull-back of the vertical tangent bundle of $M$ and let $\eta_-,\eta_+,\eta_0$ be the restrictions of $\eta$ to $\Sigma_-,\Sigma_+,\Sigma_0$. Then we have an embedding \[ D(\tilde\pi_+):D^n\times D^m(\eta_+)\into M \] lying over the restriction $\pi_+:\Sigma_+\to B$ of $\pi$ to $\Sigma_+$. This gives a tubular neighborhood of $\Sigma_+$. Replacing $+$ with $-$ we get $D(\tilde\pi_-)$ lying over $\pi_-$ giving a thickening of $\Sigma_-$. The embeddings $D(\tilde\pi_+)$ and $D(\tilde\pi_-)$ are disjoint except near $\Sigma_0$. To correct this, we move $D(\tilde\pi_-)$ slightly in the fiber direction near $\Sigma_0$ so that the images of $D(\tilde\pi_+)$ and $D(\tilde\pi_-)$ are disjoint everywhere. We do this move systematically by moving in the direction of, say, the last coordinate vector $e_n$ in $D^n$. The result will be that the image of $D(\tilde\pi_-)$ will no longer contain $\Sigma_-$ close to $\Sigma_0$. Do this in such a way that there is an embedding \[ D(\tilde\pi_0):D^n\times D(\eta_0)\to M \] so that $D(\tilde\pi_-)(x,y)=D(\tilde\pi_0)(\tfrac14(x+2e_n),y)$ and $D(\tilde\pi_+)(x,y)=D(\tilde\pi_0)(\tfrac14(x-2 e_n),y)$. Or, start with the embedding $D(\tilde\pi_0)$ and move the mappings $D(\tilde\pi_+),D(\tilde\pi_-)$ vertically (along the fibers) so that they land in the two halves of the image of $D(\tilde\pi_0)$ as indicated. Take the bundle $M\times I$ over $B$ and, using the map $D(\tilde\pi_+)$, we attach the positive Hatcher handle $B^{n,m}(\xi,\eta_+)$ along its base $\partial_0B^{n,m}(\xi,\eta_+)=D^n\times D^m(\eta_+)\times0$ to the top $M\times 1$ of $M\times I$. Then we attach the negative Hatcher handle $A^{n,m}(\xi,\eta_-)$ to the top of $M\times I$ using the composite map \[ E^{n,m}(\xi,\eta_-)\xrightarrow{F(j)}D^n\times D^m(\eta_-)\xrightarrow{D(\tilde\pi_-)}M \] Since the images of $D(\tilde\pi_+)$ and $D(\tilde\pi_-)$ are disjoint, these attachments are disjoint. Over $\pi(\Sigma_0)$ we have a positive and negative Hatcher handle attached on the interior of the image of $D(\tilde\pi_0)$. Next, we slide the attachment map for the negative Hatcher handle until it ``cancels'' the positive Hatcher handle. It is very easy to see how this works.
Over $\Sigma_0$ the negative Hatcher handle $A^{n,m}(\xi,\eta_0)$ is attached along its base $\partial_0A^{n,m}(\xi,\eta_0)=E^{n,m}(\xi,\eta_0)$ and the positive Hatcher handle is \[ B^{n,m}(\xi,\eta_0)=D^n\times D^m(\eta_0)\cup_{h\times1}E^{n,m}(\xi,\eta_0)\times [1,2] \] By Lemma \ref{third trivial lemma}, we can slide the base $E^{n,m}(\xi,\eta_0)$ of $A^{n,m}(\xi,\eta_0)$ along the top of $M\times 1\cup B^{n,m}(\xi,\eta_+)$ until it is equal to $E^{n,m}(\xi,\eta_0)\times2\subseteq B^{n,m}(\xi,\eta_0)$. We can do this in a precise way since we are working inside of the model which is the image of $D(\tilde\pi_0)$ in $M\times 1$. We extend this deformation (arbitrarily) to $A^{n,m}(\xi,\eta_-)$. Then we will have the desired bundle over $B$ whose fibers are $h$-cobordisms with base equal to the original bundle $M$. We call this new bundle $W(\Sigma,\psi)$ (suppressing $n,m$): \[ W(\Sigma,\psi)=M\times I\cup B^{n,m}(\xi,\eta_+)\cup A^{n,m}(\xi,\eta_-) \] To be sure, we need to round off the corners. And we also need to taper off the cancelling Hatcher handles along $\Sigma_0$. But, along $\Sigma_0$, the two Hatcher handles cancel and we have a local diffeomorphism of $W(\Sigma,\psi)$ with $M\times I$ near $\Sigma_0$. Using this diffeomorphism we can identify $W$ with $M\times I$ along this set and we have a smooth bundle over $B$. The local diffeomorphism exists by Proposition \ref{second AdT cancellation lemma}. The reason that we have a bundle at the end is that, in a neighborhood of the AdT construction along $\Sigma_0$, we either have two Hatcher handles, which are a smooth continuation of what we have at $\Sigma_0$, or we have $M\times I$ locally (which means we are only looking at the portion in the image of $D(\tilde\pi_0)$), and there we are using the diffeomorphism given by Proposition \ref{second AdT cancellation lemma} to identify $M\times I$ with $M\times I$ with the pair of Hatcher handles attached. So, we have local triviality and thus a smooth bundle $W\to B$. Let \[ AdT(\Sigma,\psi)=top(W(\Sigma,\psi)) \] with tangential homeomorphism given by $W$. If we have any deformation of $(\Sigma,\psi)$ then we can apply the same construction to this stratified set over $B\times I$ and we get an isotopy between the two constructions showing that $AdT(\Sigma,\psi)$ changes by an isotopy. \begin{prop}\label{sd is a group} (a) The AdT construction as described above gives a well defined mapping \[ AdT:\sdgo{B}{\partial_0}(M)\to \pi_0\std{B}{\partial_0}(M) \] from the set of stratified deformation classes of stratified subsets $(\Sigma,\psi)$ of $M$ with coefficients in $G/O$ to the space of stable tangential smoothings of $M$. (b) This mapping is a homomorphism of additive groups where addition in $\sdgo{B}{\partial_0}(M)$ is given by disjoint union and addition in $ \pi_0\std{B}{\partial_0}(M)$ is given by the little cubes operad on the stabilization. \end{prop} \begin{proof} It is clear that $\sdgo{B}{\partial_0}(M)$ is a monoid with addition given by disjoint union, using transversality to make any two stratified subsets of $M$ disjoint by a small perturbation. We also have additive inverses given as follows.
For any stratified subset $(\Sigma,\psi)$ in $M$, we claim that there are stratified subsets $(S,\Psi), (T,\Psi)$, each deformable to the empty set by a stratified deformation, making them equal to zero in the group $\sdgo{B}{\partial_0}(M)$, and so that $(S\cup T,\Psi)$ is also deformable into the disjoint union of $(\Sigma,\psi)$ and another stratified subset $(U,\psi')$ of $M$. This makes $(U,\psi')$ the additive inverse of $(\Sigma,\psi)$. The construction of $(S,\Psi), (T,\Psi)$ is as follows. By Remark \ref{rem: vertical vector field}, there is a nowhere zero vertical vector field $v$ along $\Sigma$ so that, along $\Sigma_0$, it points from $\Sigma_-$ to $\Sigma_+$. Using this vector field, we can embed a ``ribbon'' $R_+\cong\Sigma_+\times I$ in $M$ so that $R_+$ contains $\Sigma_+$ as $\Sigma_+\times 0$. In this ribbon we take $S=S_+\cup S_-$ where $S_+=\Sigma_+\times \frac13$ and $S_-=\Sigma_+\times\frac23$. As we approach $\Sigma_0$ we replace $\frac13,\frac23$ by numbers converging to $\frac12$. Let $\Psi:S\to X$ be given by $\Psi(x,t)=\psi(x)$ for all $(x,t)\in \Sigma_+\times I$. \vs2 \underline{Claim 1}. $(S,\Psi)$ is deformable into the empty set. Pf: Deform $S$ inside the ribbon by letting the coordinates $\frac13,\frac23$ converge to $\frac12$. Extend $\Psi$ using the same equation. This gives the null deformation.\vs2 Construct $(T,\Psi)$ inside of the ribbon $R_-\cong \Sigma_-\times I$ containing $\Sigma_-$ as $\Sigma_-\times 1$ in a similar way.\vs2 \underline{Claim 2}. $(S\cup T,\Psi)$ is deformable into the disjoint union of $(\Sigma,\psi)$ and another stratified subset of $M$. Pf: Along $\Sigma_0$ we can merge the bottom of $S$ with the top of $T$, just like the deformation $t=\frac12$ to $t=0$ in Example \ref{eg:mushroom} above. This is illustrated in the following diagram.
\begin{figure}[htbp] \begin{center} { \setlength{\unitlength}{1in} {\mbox{ \begin{picture}(4,1) \thicklines \put(0,0.3){ \qbezier(.1,.5)(1,1.3)(1.8,.5) \qbezier(.1,.5)(.05,.45)(0.1,.4) \qbezier(0.1,.4)(.13,.38)(.2,.45) \qbezier(.1,.5)(.05,.45)(0.1,.4) \qbezier(1.8,.4)(1.77,.38)(1.7,.45) \qbezier(1.8,.5)(1.85,.45)(1.8,.4) \qbezier(.2,.45)(1,1.15)(1.7,.45) \put(.3,.8){$S_-$} \put(.6,.6){$S_+$} } \put(0,1){ \qbezier(.1,-.5)(1,-1.3)(1.8,-.5) \qbezier(.1,-.5)(.05,-.45)(0.1,-.4) \qbezier(0.1,-.4)(.13,-.38)(.2,-.45) \qbezier(.1,-.5)(.05,-.45)(0.1,-.4) \qbezier(1.8,-.4)(1.77,-.38)(1.7,-.45) \qbezier(1.8,-.5)(1.85,-.45)(1.8,-.4) \qbezier(.2,-.45)(1,-1.15)(1.7,-.45) \put(.3,-.9){$T_+$} \put(.6,-.65){$T_-$} } \put(2,.6){$\Rightarrow$} \put(2.3,0){ \put(0,0.3){ \qbezier(.1,.5)(1,1.3)(1.8,.5) \qbezier(.1,.5)(.05,.45)(0.1,.4) \qbezier(0.2,.25)(.13,.35)(.2,.45) \qbezier(1.7,.25)(1.77,.35)(1.7,.45) \qbezier(0.1,.4)(.13,.35)(.1,.3) \qbezier(1.8,.4)(1.77,.35)(1.8,.3) \qbezier(.1,.5)(.05,.45)(0.1,.4) \qbezier(1.8,.5)(1.85,.45)(1.8,.4) \qbezier(.2,.45)(1,1.15)(1.7,.45) \put(.6,.6){$\Sigma_+$} } \put(0,1){ \qbezier(.1,-.5)(1,-1.3)(1.8,-.5) \qbezier(.1,-.5)(.05,-.45)(0.1,-.4) \qbezier(.1,-.5)(.05,-.45)(0.1,-.4) \qbezier(1.8,-.5)(1.85,-.45)(1.8,-.4) \qbezier(.2,-.45)(1,-1.15)(1.7,-.45) \put(.6,-.65){$\Sigma_-$} } } \end{picture}} }} \caption{The sum of the trivial stratified sets $(S,\Psi),(T,\Psi)$ deform to the disjoint union of $(\Sigma,\psi)$ and another stratified set.} \label{figure eight} \end{center} \end{figure} To show that the mapping $AdT$ is additive, we take two smooth structures $\theta_1,\theta_2$ on the stabilized $M\times D^{2k-1}\times I$ which by the stabilization construction are equal to the original smooth structure on $\partial^\vertical\! (M\times D^{2k-1})\times I\cup M\times D^{2k-1}\times 0$ and on the complements of $E_1\times D^{2k}$ and $E_2\times D^{2k}$ respectively. By transversality, these two subsets, the supports of the two exotic smooth structures, are disjoint. Therefore, by Proposition 1.5.10 of \cite{Second}, $\theta_1+\theta_2$ is given by changing the smooth structure of both $E_1$ and $E_2$. This shows that $AdT$ is additive. \end{proof} \begin{rem}\label{formula for inverse of stratified set} The proof above shows that the inverse of $(\Sigma,\psi)\in \sdgo{B}{\partial_0}(M)$ has the form $(\Sigma',\psi')$ where $\psi'$ is the composition \[ \Sigma'\xrightarrow{\rho}\Sigma\xrightarrow{\psi} G/O \] where $\rho:\Sigma'\to \Sigma$ maps a subset $U_+\subset\Sigma_+'$ homeomorphically onto the interior of $\Sigma_-$ and a subset $U_-\subset\Sigma_-'$ homeomorphically onto the interior of $\Sigma_+$. Furthermore, the restriction of $\rho$ to $U_+\cup U_-$ is compatible with the projection to $B$. \end{rem} \begin{prop}\label{prop: induced map on ovAdT} If $\psi:\Sigma\to G/O$ is trivial then so is $AdT(\Sigma,\psi)$. Therefore, $AdT$ induces a homomorphism \[ \overline{AdT}:\ovsdgo{B}{\partial_0}(M)\to \pi_0\std{B}{\partial_0}(M) \] where $\ovsdgo{B}{\partial_0}(M)$ is the quotient of $\sdgo{B}{\partial_0}(M)$ by all $(\Sigma,\psi)$ where $\psi$ is null homotopic. \end{prop} \begin{proof} If $\psi$ is constant then the positive and negative Hatcher handles in the Arc de Triomphe construction are standard disk bundles and attaching these to the top of $M\times I$ will not change its fiber diffeomorphism type.
\end{proof} \subsubsection{Homotopy calculation}\label{ss312} To prove Theorem \ref{AdT lemma} we need calculations in the form of more commuting diagrams. Let \[ D\widetilde{ch}:\sdgo{B}{\partial_0}(M)\to H_{q-4\bullet}(M,\partial_1M) \] be the mapping given by sending $(\Sigma,\psi)$ to the image of the normalized Chern character of the bundle $\xi$ under the mapping \[ \widetilde{ch}(\xi)\in H^{4\bullet}(\Sigma)\cong H_{q-4\bullet}(\Sigma,\partial\Sigma)\xrightarrow{\,j_\ast\,} H_{q-4\bullet}(M,\partial_1M) \] induced by the inclusion $j:(\Sigma,\partial\Sigma)\to (M,\partial_1M)$. Since $\xi$ is an oriented bundle, the Framing Principle applies to prove the following. \begin{lem}\label{IK torsion of AdT3} The following diagram commutes if $n+m$ is odd. \[ \xymatrix{ \ovsdgo{B}{\partial_0}(M)\ar[rr]_{\overline{AdT}} \ar@/_2pc/[rrr]_{(-1)^n2D\widetilde{ch}} && \pi_0\std{B}{\partial_0}(M)\ar@/_2pc/[rr]_{D\circ\tau^\text{\rm IK}} & H_{q-4\bullet}(M,M_{\partial_1B})\ar[r]_{p_\ast} & H_{q-4\bullet}(B,\partial_1B) } \] \end{lem} Although we claim that the Framing Principle implies this lemma, we do not need to verify that directly, since this lemma follows from the next lemma. \begin{lem}\label{representation by immersed Hatcher} Every element of $\ovsdgo{B}{\partial_0}(M)$ is in the image of a homomorphism \[ \Sigma_{\tilde\lambda}:G(L,\partial_0L)\to \ovsdgo{B}{\partial_0}(M) \] where $\lambda:(L,\partial_1L)\to (B,\partial_1B)$ is a codimension $0$ immersion covered by an embedding $\tilde\lambda:L\to M$ which makes the following diagram commute. \[ \xymatrix{ G(L,\partial_0L)\ar[d]_{\Sigma_{\tilde\lambda}} \ar[drr]^{top\,E_+^n(M,\tilde\lambda,-)} \ar[rrr]^{D\circ(-1)^n2\widetilde{ch}} && & H_{q-4\bullet}(L,\partial_1 L)\ar[d]^{\Sigma_{\tilde\lambda}} \\ \ovsdgo{B}{\partial_0}(M)\ar[rr]_{\overline{AdT}} \ar@/_2pc/[rrr]_{(-1)^n2D\widetilde{ch}} && \pi_0\std{B}{\partial_0}(M) & H_{q-4\bullet}(M,M_{\partial_1B}) } \] \end{lem} \begin{proof}[Proof of Lemma \ref{IK torsion of AdT3}] First we note that both maps coming out of $\sdgo{B}{\partial_0}(M)$ factor through $\ovsdgo{B}{\partial_0}(M)$. Each element then lifts to $G(L,\partial_0L)$. Next we chase the diagram at the beginning of the proof of Theorem \ref{first main theorem} to show that the two images of this element in $\bigoplus H_{q-4k}(B,\partial_1B)$ are equal. The diagram in Lemma \ref{representation by immersed Hatcher} above shows that the two images obtained are the same as the two images in the diagram of Lemma \ref{IK torsion of AdT3} which we are proving. \end{proof} \begin{proof}[Proof of Lemma \ref{representation by immersed Hatcher}] The mapping $\Sigma_{\tilde\lambda}$ takes a map $\xi:L\to G/O$ which is trivial over $\partial_0L$ and produces a stratified subset \[ \Sigma_{\tilde\lambda}(\xi)=(\Sigma,\psi) \] where $\Sigma$ is two copies of $L$, thus $\Sigma_-\cong\Sigma_+\cong L$, glued together along $\partial_0L$ and embedded in $M$ using two small perturbations of the embedding $\tilde\lambda:L\to M$. The mapping $\psi$ is equal to $\xi$ on $\Sigma_+$ and is trivial on $\Sigma_-$. Since $\psi$ is trivial on $\Sigma_-$, the negative Hatcher handles in $W(\Sigma,\psi)$ are standard disk bundles. So, the bundle $AdT(\Sigma,\psi)$ will not change if we remove these ``trivial'' Hatcher handles. The result is then equivalent to the immersed Hatcher handle. This shows that the triangle in the diagram commutes.
Commutativity of the (curved) square follows from the definition of $D\widetilde{ch}(\xi)$ on $\sdgo{B}{\partial_0}(M)$, namely, $D\widetilde{ch}\Sigma_{\tilde\lambda}(\xi)$ is the push-forward along the embedding $D(\tilde\lambda):E\to M$ of the Poincar\'e dual of the normalized Chern character of $\xi$ as a bundle over $L$. It remains to prove the element-wise surjectivity statement. This follows from the Stratified Deformation Lemma \ref{stratified deformation lemma}, whose proof we leave until the end. This lemma shows that any stratified subset $(\Sigma,\psi)$ of $M$ can be deformed so that the components of $\Sigma_-$ are contained in disjoint contractible subsets of $\Sigma$. Then we can deform $\psi$ so that it is constant on each component of $\Sigma_-$ and therefore also on $\Sigma_0$. Then let $(L,\partial_0L)=(\Sigma_+,\Sigma_0)$ and let $\lambda:L\to B$ be the map $\pi_+:\Sigma_+\to B$. Let $\tilde\lambda:L\to M$ be the inclusion map of $\Sigma_+$. Then we claim that the image of $(\Sigma,\psi)$ in $\ovsdgo{B}{\partial_0}(M)$ is equal to the image $\Sigma_{\tilde\lambda}(\xi_+)$ of $\xi_+=\psi|\Sigma_+\in G(L,\partial_0L)$. Since we started with an arbitrary element of $\sdgo{B}{\partial_0}(M)$, this will prove the lemma. To see that $(\Sigma,\psi)$ and $\Sigma_{\tilde\lambda}(\xi_+)$ are equal in $\ovsdgo{B}{\partial_0}(M)$, we just take the difference $\Sigma_{\tilde\lambda}(\xi_+)-(\Sigma,\psi)$. The negative of $(\Sigma,\psi)$ given in Remark \ref{formula for inverse of stratified set} has the form $(\Sigma',\psi')$ where $\psi'=\psi\circ\rho:\Sigma'\to\Sigma\to G/O$. But then $\psi'$ is trivial on $\rho^{-1}(\Sigma\backslash U_-)$ and $U_-\subset \Sigma'_-\cong \Sigma_+$ has the same $G/O$ coefficient map as $\Sigma_{\tilde\lambda}(\xi_+)$ has on its positive part. Therefore, the subset $U_-$ of the negative part of $\Sigma'$ cancels the interior of the positive part of $\Sigma_{\tilde\lambda}(\xi_+)$ by a stratified deformation. The result has trivial coefficient map to $G/O$ and therefore is trivial in $\ovsdgo{B}{\partial_0}(M)$ as claimed. \end{proof} \subsubsection{Proof of the AdT Theorem} The Arc de Triomphe Theorem \ref{AdT lemma} will follow from the following first version of the theorem. \begin{lem}\label{first version of AdT Lemma} The mapping \[ D\widetilde{ch}:\sdgo{B}{\partial_0}(M)\to H_{q-4\bullet}(M,\partial_1M) \] is rationally surjective in the sense that its image generates $H_{q-4\bullet}(M,\partial_1M)$ as a vector space over $\ensuremath{{\field{R}}}$. \end{lem} \begin{proof} We review the properties of generalized Morse functions (GMF) as described in \cite{I:GMF}: the singular set of a fiberwise GMF is a stratified set $\Sigma$ together with a coefficient mapping $\Sigma\to BO$. See also \cite{Goette08} for the relationship between generalized Morse functions and analytic torsion. Consider the bundle $M\times I\to B$ and an arbitrary fiberwise generalized Morse function $f:M\times I\to I$ which agrees with the projection map over $\partial_0B$ and in a neighborhood of the vertical boundary. Thus $f=pr_I$ on the set \[ A=\partial_0M\times I\cup M\times \{0,1\} \] The fact that $f$ is a fiberwise GMF is equivalent to the property that $f$ is in general position, so that its singular set $\Sigma(f)$ is a submanifold of $M\times I$, so that the projection map $\Sigma(f)\to B$ has only fold singularities, and so that the Morse points, which are the regular points of the projection $\Sigma(f)\to B$, are stratified by index $i$.
We will use just the sign $(-1)^i$, making $\Sigma_+$ the set of Morse points of even index and $\Sigma_-$ the set of Morse points of odd index of $f$. It is important to note that $\Sigma(f)$ is a manifold with boundary and $\partial\Sigma(f)=\Sigma(f)\cap M_{\partial_1B}\times I$. The singular set is the inverse image of zero under the vertical derivative $D^\vertical\!(f)$ of $f$ and therefore a framed manifold with boundary. (Add the vertical normal bundle to see the framing.) Since the space of all smooth functions on $M\times I$ equal to $pr_I$ on $A$ is contractible and contains a function without critical points, this framed manifold is framed null cobordant and represents the trivial element of the fiberwise framed cobordism group of $M$ relative to $M_{\partial_1B}$, which is $\pi_0\Gamsub{B}{\partial_0} Q_B(M)$ where $Q_B(M)$ is the bundle over $B$ with fiber $Q(X_+)=\Omega^\infty\Sigma^\infty(X_+)$ over $b\in B$ if $X$ is the fiber of $M\times I$ over $b$. The negative eigenspace of $D^2(f)$ gives a stable vector bundle $\xi$ over $\Sigma(f)$. So $\Sigma(f)$, together with $\xi$, gives a stratified subset of $M\times I$ with coefficients in $BO=\colim BO(k)$. Since $\Sigma(f)$ is a framed manifold with boundary which is framed null cobordant when we ignore this vector bundle, we get an element of the kernel of the map from the fiberwise framed cobordism group of $BO\times M$ to that of $M$. This kernel is $\pi_0$ of the fiber of the map: \[ \gamma:\Gamsub{B}{\partial_0}Q_B(BO\times M)\to \Gamsub{B}{\partial_0}Q_B(M) \] In \cite{I:GMF}, it is shown that the space of generalized Morse functions on a manifold $X$ is $\dim X$-equivalent to $Q(BO\wedge X_+)$. If we apply that theorem fiberwise, we get that the space of fiberwise generalized Morse functions on $M\times I$ has the $(n+m-q)$-homotopy type of the fiber of the map $\gamma$ above. However, it is a standard homotopy argument to show that there is a split surjection \[ Q(BO\wedge X_+)\to \Omega^\infty(BO\wedge X_+) \] which is rationally equivalent to the homology of $X$ in every 4th degree since $BO$ is rationally equivalent to $\prod_{k>0}K(\ensuremath{{\field{Z}}},4k)$. Therefore, $\pi_0(fiber(\gamma))$ has a split summand which is rationally isomorphic to the group: \[ H:=H_{q-4\bullet}(M,M_{\partial_1B};\ensuremath{{\field{Q}}}) \] by the basic homotopy calculation (Corollary 2.2.2 of \cite{Second}). This implies that a set of generators for the vector space $H\otimes \ensuremath{{\field{R}}}$ is given by taking $D\widetilde{ch}(\Sigma,\xi)$ for all possible stratified sets $(\Sigma,\xi)\in SD^{BO}_{B,\partial_0}(M\times I)$ given by all fiberwise generalized Morse functions on $M\times I$ fixing the subspace $A$. Using the fact that the group $J(\Sigma)$ is finite with order, say $m$, we know that $J(\xi^m)=0$ in $J(\Sigma)$, and therefore $\xi^m$ lifts to a map $\Sigma\to G/O$. So, these various stratified sets $(\Sigma,\xi^m)\in \sdgo{B}{\partial_0}(M\times I)$ will have $D\widetilde{ch}(\Sigma,\xi^m)$ generating the vector space $H\otimes\ensuremath{{\field{R}}}$ as claimed. \end{proof} \begin{lem}\label{Th AdT=2ch} The following diagram commutes \[ \xymatrix{ \sdgo{B}{\partial_0}(M)\ar[rr]_{{AdT}} \ar@/_2pc/[rrr]_{(-1)^n2D\widetilde{ch}} && \pi_0\std{B}{\partial_0}(M)\ar[r]_(.4){\Theta} & H_{q-4\bullet}(M,M_{\partial_1B}) } \] where $\Theta:M'\mapsto \Theta(M',M)$ gives the rational exotic structure class of $M'$.
\end{lem} This lemma proves the Arc de Triomphe Theorem \ref{AdT lemma}, since we just proved in Lemma \ref{first version of AdT Lemma} that the normalized Chern character is rationally surjective and we know by the smoothing theorem that $\Theta$ is a rational isomorphism. \begin{proof} Take the diagram from Lemma \ref{representation by immersed Hatcher} and add the arrow $\Theta$: \[ \xymatrix{ G(L,\partial_0L)\ar[d]_{\Sigma_{\tilde\lambda}} \ar[drr]^{top\,E_+^n(M,\tilde\lambda,-)} \ar[rrr]^{D\circ(-1)^n2\widetilde{ch}} && & H_{q-4\bullet}(L,\partial_1 L)\ar[d]^{\Sigma_{\tilde\lambda}} \\ \ovsdgo{B}{\partial_0}(M)\ar[rr]_{\overline{AdT}} \ar@/_2pc/[rrr]_{(-1)^n2D\widetilde{ch}} && \pi_0\std{B}{\partial_0}(M)\ar[r]_(.4){\Theta} & H_{q-4\bullet}(M,M_{\partial_1B}) } \] The outside curved square commutes by Theorem \ref{first main theorem}. The map $\Sigma_{\tilde\lambda}$ can be chosen to hit any element of $\ovsdgo{B}{\partial_0}(M)$ by the previous lemma. Therefore, the curved triangle at the bottom commutes. This implies the lemma since the maps factor uniquely through $\ovsdgo{B}{\partial_0}(M)$. \end{proof} \subsection{Stratified deformation lemma}\label{subsecA32} It remains to prove the following lemma, which was used to show that each Arc de Triomphe construction can be deformed into an immersed Hatcher construction. \begin{lem}[Stratified Deformation Lemma]\label{stratified deformation lemma} If the fiber dimension of $M$ is $\ge q+2$, then any element of $\sdgo{B}{\partial_0}(M)$ is represented by a stratified subset $(\Sigma,\psi)$ of $M$ with the property that the components of $\Sigma_-$ are contained in disjoint contractible subsets of $\Sigma$. \end{lem} \begin{proof} This is the same proof which appears in \cite{I:FF} on pages 446--447 with five figures and in \cite{I:ComplexTorsion} on page 73 with one figure. We repeat the argument and pictures here since the statements are not the same, only analogous. To clarify the statement of this lemma, we point out that the mushroom ($t=0$ in Example \ref{eg:mushroom}) is already in the desired form since $\Sigma_-\cong S^{k-1}\times I$ is contained in the contractible subset $\Sigma_-\cup T_+\cong D^k$ of $\Sigma$. Thus the contractible set can contain parts of $\Sigma_+$. The dimension hypothesis implies that all deformations of $\Sigma$ in $M$ can be made into isotopies of smooth embeddings over $B$ by transversality. So, we will not concern ourselves with that point. Also, by Remark \ref{rem: vertical vector field}, there is a nowhere zero vertical vector field along any stratified subset $\Sigma\subseteq M$ which points from $\Sigma_-$ to $\Sigma_+$ along $\Sigma_0$. As in the proof of Proposition \ref{sd is a group}, we can use this to find a ribbon $R_-\cong \Sigma_-\times I$ containing $\Sigma_-$ in $M$. Suppose that $\partial_1B$ is empty. Then we will deform any $(\Sigma,\psi)$ into the desired shape (so that the union of $\Sigma_-$ and a portion of $\Sigma_+$ is a contractible subset of $\Sigma$). When $\partial_1B$ is nonempty, we double $B$ along $\partial_1B$ and double $M$ along $M_{\partial_1B}$, and similarly for $(\Sigma,\psi)$. Then we do the deformation $\ensuremath{{\field{Z}}}/2$-equivariantly. The fixed point sets of the $\ensuremath{{\field{Z}}}/2$ action on the new $B$ and new $M$ are the original $\partial_1B$ and $M_{\partial_1B}$. First choose an equivariant triangulation of $\Sigma_-$ so that the fixed point set is a subcomplex and so that each simplex maps monomorphically into $B$.
Then we will cut apart the set $\Sigma_-$ by deleting a tubular neighborhood of each interior simplex $\Delta^m$, starting with the lowest dimension $m=0$. Let $w$ be a vertex in the interior of $\Sigma_-$. Near $w$ we embed the ribbon $\Sigma_-\times I$ and we will perform the deformation completely inside of this ribbon. ($m=0$) The desired stratified deformation is given in Example \ref{eg:mushroom} but with the labels $\Sigma_-,\Sigma_+$ reversed and with $k=q$. In words, we create a $q$-lens above the point $w$ (above means in the direction of the vector field $v$ of Remark \ref{rem: vertical vector field}) together with the coefficient map sending the entire $q$-lens to $\psi(w)\in G/O$. This is the $t=\frac12$ part of Example \ref{eg:mushroom}. Then we attach the mushroom and cancel the portion of $\Sigma_-$ around $w$ as in the $t=0$ picture of the example. Remembering that we have reversed $\Sigma_-,\Sigma_+$, we see that the new $\Sigma_-$ is the disjoint union of $D^q$ (the ``top'' of the mushroom) and the old $\Sigma_-$ with a disk shaped hole cut out around $w$. In other words, we have \emph{removed} a neighborhood of $w$ from the old $\Sigma_-$. ($m=1$) Next, take a 1-simplex in $\Sigma_-$. Since we have attached mushrooms on the two endpoints, the picture of this 1-simplex is as follows. Since $\Sigma$ is $q$-dimensional, we have the product with $D^{q-1}$ in a small neighborhood of the 1-simplex where the endpoints stay inside the stems of the mushrooms planted on the endpoints. \begin{figure}[htbp] \begin{center} { \setlength{\unitlength}{.7in} {\mbox{ \begin{picture}(5,1) \put(0,0){ \thinlines \qbezier(1,.75)(1.5,.75)(1.5,.65) \qbezier(1.35,.5)(1.5,.5)(1.5,.65) \qbezier(1.35,.5)(1.2,.5)(1.2,.4) \thicklines \qbezier(1.4,.3)(1.8,.25)(2,.28) \qbezier(1.4,.3)(1.2,.3)(1.2,.4) \put(1.7,.35){$\Sigma_-$} } \thicklines \put(2,.27){\line(1,0){1}} \put(3,0){ \thinlines \put(2,0){ \qbezier(-1,.75)(-1.5,.75)(-1.5,.65) \qbezier(-1.35,.5)(-1.5,.5)(-1.5,.65) \qbezier(-1.35,.5)(-1.2,.5)(-1.2,.4) \thicklines \qbezier(-1.4,.3)(-1.8,.25)(-2,.28) \qbezier(-1.4,.3)(-1.2,.3)(-1.2,.4) } \put(1.5,.45){$\times\ D^{q-1}$} } \end{picture}} }} \end{center} \end{figure} We focus attention on a small neighborhood of the 1-simplex in $\Sigma_-$ (ignoring the $\Sigma_-$ tops of the old mushrooms). Then, on the two $\Sigma_+$ segments ($\times D^{q-1}$) we plant two new mushrooms (with $T_+\subseteq \Sigma_+$ tops) and perform the deformation in Figure \ref{fig: deleting an edge}. When one of the endpoints of the 1-simplex lies on the boundary of the original set $\Sigma_-$, we will not have a mushroom and the figure above is not quite accurate in that case. However, we will still have a $\Sigma_+$ segment which meets the boundary of $\Sigma_-$ along $\Sigma_0$ and we can still perform this deformation (plant two mushrooms), and Figure \ref{fig: deleting an edge} will be accurate in this case. When we plant the two new mushrooms, a new $\Sigma_-$ component $S$ diffeomorphic to $S^{q-1}\times I\times S^0$ is created. This is contained in $S\cup T_+\cong D^q\times S^0$. When we do the deformation indicated, we attach a solid $q$-handle to this to form a contractible subset of the new $\Sigma$ containing the new component of $\Sigma_-$ (which is now homotopy equivalent to a wedge of two $(q-1)$-spheres, but with each sphere filled in with a $q$-disk $T_+ \subset \Sigma_+$). We also extend the coefficient map by using the values of $\psi$ on the old 1-simplex in $\Sigma_-$ at the bottom of the figure.
This old 1-simplex is now at the bottom of a 1-lens, and this can be cancelled by the deformation obtained by rotating the lens since the top of the lens and the bottom of the lens have matching coefficient maps. The result is that the set $\Sigma_-$ is changed by the deletion of the 1-simplex. In more standard language, this last step performs surgery on a circle $S^1$ embedded in $\Sigma$ so that half the circle is in $\Sigma_+$ and half is in $\Sigma_-$. This second half is the 1-simplex which has been ``eliminated.'' \begin{figure}[htbp] \begin{center} { \setlength{\unitlength}{.5in} {\mbox{ \begin{picture}(9,1.5) \put(0,0){ \put(0.9,1.3){$T_+$} \thinlines \put(2,0.3){ \qbezier(-1,.75)(-1.5,.75)(-1.5,.65) \qbezier(-1.35,.5)(-1.5,.5)(-1.5,.65) \qbezier(-1.35,.5)(-1.1,.5)(-1.1,.2) \put(-1.17,.18){$\bullet$} \qbezier(-1.39,0)(-1.1,0)(-1.1,.2) } \put(2,0){ \put(0,0){$\Sigma_-$} \qbezier(-1.4,.3)(-1.8,.25)(-1.8,.15)} \put(0,.3){ \qbezier(1,.75)(1.5,.75)(1.5,.65) \qbezier(1.35,.5)(1.5,.5)(1.5,.65) \qbezier(1.35,.5)(1.2,.5)(1.2,.4) \qbezier(1.4,.3)(1.8,.3)(2,.4) \qbezier(1.4,.3)(1.2,.3)(1.2,.4) \qbezier(1,.75)(1.5,.75)(1.5,.65) } \qbezier(.2,.15)(.2,-.1)(2.25,-.1) \qbezier(4.3,.15)(4.3,-.1)(2.25,-.1) \put(2.5,0){ \thinlines \put(0.9,1.3){$T_+$} \put(2,.3){ \qbezier(-1,.75)(-1.5,.75)(-1.5,.65) \qbezier(-1.35,.5)(-1.5,.5)(-1.5,.65) \qbezier(-1.35,.5)(-1.2,.5)(-1.2,.4) \qbezier(-1.4,.3)(-1.8,.3)(-1.8,.4) \qbezier(-1.4,.3)(-1.2,.3)(-1.2,.4) } \put(0,.3){ \qbezier(1.35,.5)(1.5,.5)(1.5,.65) \qbezier(1.35,.5)(1.1,.5)(1.1,.2) \put(1.05,.18){$\bullet$} \qbezier(1.39,0)(1.1,0)(1.1,.2) \qbezier(1,.75)(1.5,.75)(1.5,.65) } \qbezier(1.4,.3)(1.8,.25)(1.8,.15) } } \put(4.6,0.2){ $\Rightarrow$} \put(5,0){ \thinlines \put(2,0.3){ \qbezier(-1,.75)(-1.5,.75)(-1.5,.65) \qbezier(-1.35,.5)(-1.5,.5)(-1.5,.65) } \put(2,0){ \qbezier(-1.4,.3)(-1.8,.25)(-1.8,.15)} \put(0,.3){ \qbezier(1,.75)(1.5,.75)(1.5,.65) \qbezier(1.35,.5)(1.5,.5)(1.5,.65) \qbezier(1.35,.5)(1.2,.5)(1.2,.4) \qbezier(1.4,.3)(1.8,.3)(2,.4) \qbezier(1.4,.3)(1.2,.3)(1.2,.4) \qbezier(1,.75)(1.5,.75)(1.5,.65) } \qbezier(.2,.15)(.2,-.1)(2.25,-.1) \qbezier(4.3,.15)(4.3,-.1)(2.25,-.1) \put(2.5,0){ \thinlines \put(2,.3){ \qbezier(-1,.75)(-1.5,.75)(-1.5,.65) \qbezier(-1.35,.5)(-1.5,.5)(-1.5,.65) \qbezier(-1.35,.5)(-1.2,.5)(-1.2,.4) \qbezier(-1.4,.3)(-1.8,.3)(-1.8,.4) \qbezier(-1.4,.3)(-1.2,.3)(-1.2,.4) } \put(0,.3){ \qbezier(1.35,.5)(1.5,.5)(1.5,.65) \qbezier(1.35,.5)(0,-.5)(-1.85,.5) \qbezier(1.35,-0)(0,-.5)(-1.85,-0) \qbezier(1,.75)(1.5,.75)(1.5,.65) } \qbezier(1.4,.3)(1.8,.25)(1.8,.15) } } \end{picture}} }} \caption{Plant two new mushrooms and cancel the two points indicated with spots.} \label{fig: deleting an edge} \end{center} \end{figure} ($m\ge2$) Suppose by induction that the $(m-1)$-skeleton of $\Sigma_-$ has been removed, where $m\ge2$. Let $D^m$ be what remains of one of the original $m$-simplices of $\Sigma_-$. Then $D^m$ has boundary $S^{m-1}\subseteq\Sigma_0$. Part of this boundary comes from the original boundary of $\Sigma_-$ and the other part comes from the inductive procedure. There are remnants of mushrooms from previous steps in the construction; we need to avoid them and use only those structures which exist in all parts of the boundary of $D^m\subset\Sigma_-$. Since $\Sigma$ is $q$-dimensional, this disk sits in $D^m\times D^{q-m}\subset \Sigma_-$. The next step in the deformation is given by planting the product of $S^{m-1}$ with a mushroom of dimension $q-m+1$. The picture is the same as Figure \ref{fig: deleting an edge}.
So, we do not redraw it. However, we give a new interpretation of the same figure. Take the left hand figure in Figure \ref{fig: deleting an edge}. This is a planar figure which is now being spun around the middle vertical axis over all $\theta\in S^{m-1}$. The tops of the mushrooms, which are given locally by $t=0$ in Example \ref{eg:mushroom}, become diffeomorphic to $S^{m-1}\times I$. For all $z\in D^{q-m}$, we replace these mushrooms with the $t=||z||$ picture from Example \ref{eg:mushroom} and spin around $S^{m-1}$. This gives a stratified set over $D^m\times D^{q-m}$ which contains a new component $S\subset \Sigma_-$ diffeomorphic to $S^{m-1}\times I\times S^{q-m}$. However, with the tops of the mushrooms we get $S\cup T_+\cong S^{m-1}\times D^{q-m+1}$. The deformation (passing from left to right) in Figure \ref{fig: deleting an edge} is to be carried out only for $z$ close to the origin in $D^{q-m}$; otherwise the points indicated with spots are not in the picture. Again, we use the value of the coefficient map on the $m$-simplex in $\Sigma_-$ to extend the value of $\psi$ to the top of the new $m$-lens that we have formed. This deformation performs surgery on an $(m-1)$-sphere in $\Sigma$ which lies in $\Sigma_0$ on the boundary of $S\cup T_+$. This changes $S\cup T_+$ into a $q$-disk. So, the new component of $\Sigma_-$ is contained in a contractible subset of $\Sigma$. On the right hand side of Figure \ref{fig: deleting an edge} we have an $m$-lens which can be eliminated by Example \ref{eg:k-lens} since the values of $\psi$ on the top and bottom match by construction. This performs surgery on an $m$-sphere in $\Sigma$ which meets $\Sigma_-$ in an $m$-disk, which is what remains of the $m$-simplex that we are trying to eliminate. This deformation therefore completes the induction and proves the lemma. \end{proof} This completes the proof of all the theorems in this paper. \bibliographystyle{amsplain}
\section{Introduction} The discovery of charged Higgs bosons at the running Large Hadron Collider (LHC) would be unambiguous evidence for new physics. Important mechanisms to produce $H^{\pm}$ include $gb\to tH^-$; $q\bar{q},gg \to H^+H^-$ and $b\bar{b},gg\to W^\pm H^\mp$ (see \cite{Djouadi:2005gj} for a review and references). The first and the last channel are of particular interest since they allow one to search for CP-violating effects at the LHC associated with physics beyond the Standard Model. Recently, the CP-violating asymmetry for $tH^-/\bar{t}H^+$ production has been calculated in~\cite{Christova:2008jv}. \begin{comment} If the charged Higgs mass is below the $t\bar{b}$ production threshold, the Higgs will mostly decay into $\tau\bar{\nu}_{\tau}$. For heavier mass, the main decay is $H^+\to t\bar{b}$. The first production channel has largest production rate but can suffer from large QCD backgrounds if the Higgs mass is heavy. The $W^\pm H^\mp$ signal is interesting since charged Higgs production in association with $W$ boson is considered as an attractive way out since the $W$ signal can be easily reconstructed. \end{comment} Many studies have been devoted to the $pp\to W^\pm H^\mp$ processes in the Minimal Supersymmetric Standard Model (MSSM) over the last two decades. These studies assume that all the soft supersymmetry-breaking parameters are real and hence that CP violation is absent. The two main partonic processes are $b\bar b$ annihilation and the loop-induced $gg$ fusion. The first study~\cite{Dicus:1989vf} computed the tree-level $b\bar b$ contribution and the $gg$ process with third-generation quarks in the loops using the $m_b=0$ approximation. This calculation was later extended to finite $m_b$, thus allowing the investigation of the process for arbitrary values of $\tan\beta$ (the ratio $v_2/v_1$ of the vacuum expectation values of the two Higgs doublets)~\cite{BarrientosBendezu:1998gd, BarrientosBendezu:1999vd}. The inclusion of the squark-loop contribution to the $gg$ channel was done in \cite{BarrientosBendezu:2000tu, Brein:2000cv}. The next-to-leading order (NLO) corrections to the $b\bar b$ annihilation are more complicated and not yet complete; the full NLO electroweak (EW) corrections are still missing. The Standard Model QCD (SM-QCD) corrections were calculated in~\cite{Hollik:2001hy,Gao:2007wz}, the supersymmetric-QCD (SUSY-QCD) corrections in~\cite{Zhao:2005mu,Rauch:2008fy}, and the Yukawa part of the electroweak corrections in~\cite{Yang:2000yt}. There are also studies on the experimental possibility of observing $W^\mp H^\pm$ production at the LHC with subsequent hadronic $H^- \to \bar{t} b$ decay~\cite{Moretti:1998xq} and leptonic $H^- \to \tau^- \bar{\nu}_\tau$ decay~\cite{Eriksson:2006yt, Hashemi:2010ce}. The aim of this paper is threefold. First, we extend the calculation for $pp\to W^\pm H^\mp$ to the MSSM with complex parameters (complex MSSM, or cMSSM). Second, the full NLO EW corrections to the $b\bar{b}$ annihilation channel are calculated and consistently combined with the other contributions to provide the complete NLO corrections to the $pp\to W^\pm H^\mp$ processes. Third, CP-violating effects arising in the cMSSM are discussed. The important issues related to the neutral Higgs mixing and large radiative corrections to the bottom-Higgs couplings are also systematically addressed. In the cMSSM, new sources of CP violation are associated with the phases of soft-breaking parameters and of the Higgsino-mass parameter $\mu$.
Through loop contributions, CP violation also enters the Higgs sector, which is CP conserving at lowest order (see for example \cite{Accomando:2006ga} for more details and references). As a consequence, the $h$, $H$ and $A$ neutral Higgs bosons in general mix and form the mass eigenstates $h_{1,2,3}$ with both CP-even and CP-odd properties, which can have an important impact on many physical observables. The bottom-Higgs Yukawa couplings are subject to large quantum corrections in the MSSM. We use the usual QCD running bottom-quark mass to absorb large QCD corrections to the LO results. The potentially large SUSY-QCD corrections, in the large $\tan\beta$ limit, are included in the quantity $\Delta m_b$, which is complex in the cMSSM and can be resummed (\sect{running_mb}). The paper is organized as follows. \sect{sect-bbWH} is devoted to the subprocess $b\bar b\to W^\pm H^\mp$, including the issues of effective bottom-Higgs couplings and neutral Higgs mixing. The calculation of the $gg$ fusion part is shown in \sect{sect-ggWH}. Hadronic cross sections and the CP-violating asymmetry are defined in \sect{sec:hadronic}. Numerical results are presented in \sect{sect-results} and conclusions in~\sect{sect-conclusions}. Feynman diagrams, counterterms, and renormalization constants can be found in the Appendices. \section{The subprocess {\boldmath{$b\bar{b}\to W^{\mp}H^{\pm}$}}} \label{sect-bbWH} \begin{figure}[h] \centering \includegraphics[width=0.6\textwidth]{psfig/born_bbWH.pdf} \caption{{\em Tree-level diagrams for the partonic process $b\bar b\to W^\pm H^\mp$. $h_i$ with $i=1,2,3$ denote the neutral Higgs bosons $h$, $H$ and $A$, respectively.}} \label{proc_bbWH_born} \end{figure} At the tree level, there are four Feynman diagrams: three $s$-channel diagrams with a neutral Higgs exchange and a $t$-channel diagram, as shown in \fig{proc_bbWH_born}. The tree-level bottom-Higgs couplings read as follows, \begin{eqnarray} \lambda_{b\bar{b}h}&=&\frac{iem_b}{2s_WM_W}\frac{{\sin{\alpha}}}{{\cos{\beta}}}(P_L + P_R),\nonumber \\ \lambda_{b\bar{b}H}&=&\frac{-iem_b}{2s_WM_W}\frac{{\cos{\alpha}}}{{\cos{\beta}}}(P_L + P_R),\nonumber \\ \lambda_{b\bar{b}A}&=&\frac{em_b}{2s_WM_W}{\tan{\beta}}(P_L - P_R),\nonumber \\ \lambda_{b\bar{t}H^+}&=&\frac{ie}{\sqrt{2}s_WM_W}\left(\frac{m_t}{{\tan{\beta}}}P_L + m_b{\tan{\beta}} P_R\right),\nonumber \\ \lambda_{t\bar{b}H^-}&=&\frac{ie}{\sqrt{2}s_WM_W}\left(m_b{\tan{\beta}} P_L + \frac{m_t}{{\tan{\beta}}} P_R\right), \label{b_H_couplings_tree} \end{eqnarray} where $P_{L,R}=(1\mp \gamma_5)/2$, $s_W =\sin\theta_W$, and $\alpha$ is the tree-level mixing angle of the two CP-even Higgs bosons. In order to obtain reliable predictions, two important issues related to the bottom-Higgs Yukawa couplings and the neutral Higgs mixing have to be addressed. These quantities can get large radiative corrections, as will be detailed in the next two sections. \subsection{Bottom-Higgs couplings} \label{running_mb} In the context of the MSSM, the bottom-Higgs couplings can get large SM-QCD, SUSY-QCD and EW corrections. These large universal corrections can be absorbed into the bottom-Higgs couplings in two steps. First, the SM-QCD corrections are absorbed by using the running bottom-quark mass at one-loop order via \begin{eqnarray} m_b\longrightarrow m_b^{\DRb} (\mu_R)=m_b\left[1 - \frac{\alpha_s}{\pi}\left(\frac{5}{3}-\ln\frac{m_b^2}{\mu_R^2}\right)\right].
\end{eqnarray} We note, in passing, that the relation between the pole mass and the ${\overline{\text{MS}}}$ mass is different: \begin{eqnarray} m_b^{\MSb} (\mu_R)=m_b\left[1 - \frac{\alpha_s}{\pi}\left(\frac{4}{3}-\ln\frac{m_b^2}{\mu_R^2}\right)\right]. \end{eqnarray} It can be proved that, by using the running bottom-quark mass in \eq{b_H_couplings_tree}, the SM-QCD one-loop corrections are independent of $\alpha_s\ln(m_b^2)$ \cite{Braaten:1980yq}. We will therefore replace $m_b=m_b^{\DRb}(\mu_R)$ in \eq{b_H_couplings_tree}. $m_b^{\DRb}$ can be related to the QCD-${\overline{\text{MS}}}$ mass $\overline{m}_b(\overline{m}_b)$, which is extracted from experimental data and is usually taken as an input parameter, at two-loop order as follows \cite{Avdeev:1997sz}: \begin{eqnarray} m_b^{\overline{\text{DR}}}(\mu_R)=m_b^{{\overline{\text{MS}}}}(\mu_R)\left[1 - \frac{\alpha_s}{3\pi} - \frac{\alpha_s^2}{144\pi^2}(73-3n) \right], \end{eqnarray} where $n$ is the number of active quark flavours and the ${\overline{\text{MS}}}$ running mass is evaluated with the two-loop formula \begin{eqnarray} m_b^{\overline{\text{MS}}}(\mu_R) = \begin{cases} U_6(\mu_R, m_t)U_5(m_t, \overline{m}_b)\overline{m}_b(\overline{m}_b) \quad & \text{for}\quad \mu_R > m_t \\ U_5(\mu_R, \overline{m}_b)\overline{m}_b(\overline{m}_b) \quad & \text{for}\quad \mu_R \le m_t \end{cases} \label{mb_evolution} \end{eqnarray} where the evolution factor $U_n$ reads (see {\it e.g.}\ \cite{Carena:1999py}) \begin{eqnarray} U_n(Q_2,Q_1)&=&\left(\frac{\alpha_s(Q_2)}{\alpha_s(Q_1)}\right)^{d_n}\left[1 + \frac{\alpha_s(Q_1) - \alpha_s(Q_2)}{4\pi}J_n\right],\quad Q_2 > Q_1\nonumber \\ d_n&=&\frac{12}{33-2n}, \quad J_n = -\frac{8982 - 504n + 40n^2}{3(33 - 2n)^2}. \end{eqnarray} The second step is to absorb large universal SUSY-QCD and EW corrections into the couplings in \eq{b_H_couplings_tree}. This is achieved by using the following effective bottom-Higgs couplings \cite{Carena:1999py, Guasch:2003cv, Williams:2008phd, Dittmaier:2009np}: \begin{comment} These leading corrections can be resummed to all orders by using the effective low-energy Lagrangian \cite{Guasch:2003cv, Carena:1999py} \begin{eqnarray} {\cal{L}} &=& -\bar\lambda_b\overline{b}_R\left[\phi_1^0 + \frac{\Delta_b}{{\tan{\beta}}}\phi_2^{0*}\right]b_L + \text{h.c.},\nonumber \\ \bar\lambda_b &=&\frac{\sqrt{2}m_b^{\DRb}}{v_1}\frac{1}{1+\Delta_b},\quad v_1=v{\cos{\beta}}, \nonumber \\ \phi_1^0 &=& \frac{1}{\sqrt{2}}\left(v_1 + H{\cos{\alpha}} - h{\sin{\alpha}} + iA{\sin{\beta}} - iG^0{\cos{\beta}} \right),\nonumber \\ \phi_2^0 &=& \frac{1}{\sqrt{2}}\left(v_2 + H{\sin{\alpha}} + h{\cos{\alpha}} + iA{\cos{\beta}} + iG^0{\sin{\beta}} \right), \end{eqnarray} and $\Delta_b$ can be complex, see below.
\end{comment} \begin{eqnarray} \bar\lambda_{b\bar{b}h}&=&\frac{iem_b^{\DRb}}{2s_WM_W}\frac{{\sin{\alpha}}}{{\cos{\beta}}} \left(\Delta_b^1 P_L + \Delta_b^{1*}P_R\right),\nonumber \\ \bar\lambda_{b\bar{b}H}&=&\frac{-iem_b^{\DRb}}{2s_WM_W}\frac{{\cos{\alpha}}}{{\cos{\beta}}}(\Delta_b^{2}P_L + \Delta_b^{2*}P_R),\nonumber \\ \bar\lambda_{b\bar{b}A}&=&\frac{em_b^{\DRb}}{2s_WM_W}{\tan{\beta}}(\Delta_b^{3}P_L - \Delta_b^{3*}P_R),\nonumber \\ \bar\lambda_{b\bar{t}H^+}&=&\frac{ie}{\sqrt{2}s_WM_W}\left(\frac{m_t}{{\tan{\beta}}}P_L + m_b^{\DRb}{\tan{\beta}} \Delta_b^{3*} P_R\right),\nonumber \\ \bar\lambda_{t\bar{b}H^-}&=&\frac{ie}{\sqrt{2}s_WM_W}\left(m_b^{\DRb}{\tan{\beta}} \Delta_b^{3} P_L + \frac{m_t}{{\tan{\beta}}} P_R\right), \label{b_H_couplings_loop} \end{eqnarray} where \begin{eqnarray} \Delta_b^1 &=& \frac{1-\Delta_b/({\tan{\beta}}{\tan{\alpha}})}{1+\Delta_b},\nonumber \\ \Delta_b^2 &=& \frac{1+\Delta_b {\tan{\alpha}}/{\tan{\beta}}}{1+\Delta_b},\nonumber \\ \Delta_b^3 &=& \frac{1-\Delta_b/({\tan{\beta}})^2}{1+\Delta_b}. \end{eqnarray} The leading corrections proportional to ${\cal{O}}(\alpha_s{\tan{\beta}}, \alpha_t{\tan{\beta}}, \alpha{\tan{\beta}})$, with $\alpha_t=h_t^2/(4\pi)$ and $h_t$ being the superpotential top coupling, are included in $\Delta m_b$ \cite{Carena:1999py}. This quantity is UV finite and can be calculated by considering the one-loop corrections to the $H_2^0b\bar{b}$ coupling (which is zero at tree level), where $H_2^0$ is the neutral component of the second Higgs doublet. It can also be extracted from the one-loop bottom-quark self-energy~\cite{Heinemeyer:2004xw, Hofer:2009xb}. In the cMSSM, we find \begin{eqnarray} \Delta m_b&=&\Delta m_b^{SQCD}+\Delta m_b^{SEW},\nonumber \\ \Delta m_b^{SQCD}&=&\frac{2\alpha_s(Q)}{3\pi}M_3^*\mu^*\tan\beta\; I(m_{\tilde{b}_1}^2,m_{\tilde{b}_2}^2,m_{\tilde{g}}^2),\quad Q=(m_{\tilde{b}_1} + m_{\tilde{b}_2} + m_{\tilde{g}})/3,\nonumber \\ \Delta m_b^{SEW}&=&\Delta m_b^{\tilde{H}\tilde{t}}+\Delta m_b^{\tilde{W}}+\Delta m_b^{\tilde{B}},\nonumber \\ \Delta m_b^{\tilde{H}\tilde{t}}&=&\frac{\alpha_t}{4\pi}A_t^*\mu^*\tan\beta\; I(m_{\tilde{t}_1}^2,m_{\tilde{t}_2}^2,\vert\mu\vert^2),\nonumber \\ \Delta m_b^{\tilde{W}}&=&-\frac{\alpha}{8\pi s_W^2}M_2^*\mu^*{\tan{\beta}} \big[ 2\vert U_{11}^{\tilde t}\vert^2 I(m_{\tilde{t}_1}^2,\vert M_2\vert^2,\vert\mu\vert^2) + 2\vert U_{21}^{\tilde t}\vert^2 I(m_{\tilde{t}_2}^2,\vert M_2\vert^2,\vert\mu\vert^2)\nonumber \\ & & + \vert U_{11}^{\tilde b}\vert^2 I(m_{\tilde{b}_1}^2,\vert M_2\vert^2,\vert\mu\vert^2) + \vert U_{21}^{\tilde b}\vert^2 I(m_{\tilde{b}_2}^2,\vert M_2\vert^2,\vert\mu\vert^2) \big],\nonumber \\ \Delta m_b^{\tilde{B}}&=&-\frac{\alpha}{72\pi c_W^2}M_1^*\mu^*{\tan{\beta}} \big[ 3(\vert U_{11}^{\tilde b}\vert^2 + 2\vert U_{12}^{\tilde b}\vert^2)I(m_{\tilde{b}_1}^2,\vert M_1\vert^2,\vert\mu\vert^2)\nonumber \\ & & + 3\, (2\vert U_{22}^{\tilde b}\vert^2 + \vert U_{21}^{\tilde b}\vert^2)I(m_{\tilde{b}_2}^2,\vert M_1\vert^2,\vert\mu\vert^2) +2I(m_{\tilde{b}_1}^2,m_{\tilde{b}_2}^2,\vert M_1\vert^2) \big], \label{eq:deltamb} \end{eqnarray} with the auxiliary function \begin{eqnarray} I(a,b,c)=-\frac{1}{(a-b)(b-c)(c-a)}\left(ab\ln\frac{a}{b} + bc\ln\frac{b}{c} + ca\ln\frac{c}{a}\right). \end{eqnarray} $M_1$, $M_2$, $M_3$ (each with a phase $M_j = \vert M_j\vert e^{i\phi_j}$) and $\mu = \vert\mu\vert e^{i\phi_\mu}$ are the bino ($\tilde B$), wino ($\tilde W$), gluino ($\tilde g$) and Higgsino ($\tilde H$) mass parameters, respectively.
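It is instructive to record the size and phase structure of the dominant correction in a simple limit; the common mass scale $M_S$ below is introduced purely for this illustration and is not used elsewhere in this paper. The function $I$ is totally symmetric in its arguments, with the degenerate value $I(a,a,a)=1/(2a)$. Hence, assuming $m_{\tilde{b}_1}=m_{\tilde{b}_2}=m_{\tilde{g}}=M_S$, the gluino contribution reduces to \begin{eqnarray} \Delta m_b^{SQCD}\simeq\frac{\alpha_s(Q)}{3\pi}\,\frac{M_3^*\mu^*}{M_S^2}\,\tan\beta =\frac{\alpha_s(Q)}{3\pi}\,\frac{\vert\mu\vert}{M_S}\,\tan\beta\; e^{-i(\phi_3+\phi_\mu)}, \end{eqnarray} which exhibits both the $\tan\beta$ enhancement and the explicit dependence on the phases $\phi_3$ and $\phi_\mu$ that drives the CP-violating effects discussed below.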
Here $A_f = \vert A_f\vert e^{i\phi_f}$, with $f$ a fermion index, denotes the soft supersymmetry-breaking trilinear scalar coupling. $\tilde b_i$ and $\tilde t_i$ with $i=1,2$ are the sbottom and stop mass eigenstates, respectively. $U^{\tilde b}$ and $U^{\tilde t}$ are $2\times 2$ mixing matrices. By setting all the phases to zero we obtain the results for the real MSSM (rMSSM), which agree with those given in \cite{Dittmaier:2006cz, Carena:1999py}. Since we are also interested in the effect of the $A_b$ phase, corrections proportional to $A_b$ are resummed by \cite{Carena:2002bb, Guasch:2003cv} \begin{eqnarray} \Delta_b&=&\frac{\Delta m_b}{1+\Delta_1},\nonumber \\ \Delta_1&=&-\frac{2\alpha_s(Q)}{3\pi}M_3^*A_b I(m_{\tilde{b}_1}^2,m_{\tilde{b}_2}^2,m_{\tilde{g}}^2). \end{eqnarray} We remark that $\Delta_b$ is complex and depends on $\phi_{\mu}$, $\phi_{f}$, and $\phi_{i}$ with $i=1,2,3$. The effective couplings in \eq{b_H_couplings_loop} are used in the calculations of the tree-level, SM-QCD and SUSY-QCD contributions to the $b\bar{b}\to W^{\mp}H^{\pm}$ process and the $gg$ fusion. For the NLO EW corrections we use the tree-level couplings \eq{b_H_couplings_tree} with $m_b = m_b^{\DRb}(\mu_R)$. In the explicit one-loop calculations, we have to subtract the $\Delta_b$-related corrections which have already been included in the tree-level contribution, to avoid double counting. This can be done by adding the following counterterms \begin{eqnarray} \delta m_b^h&=&m_b^{\DRb}\left(1+\frac{1}{{\tan{\alpha}}{\tan{\beta}}}\right)(\Delta_b P_L + \Delta_b^* P_R), \nonumber \\ \delta m_b^H&=&m_b^{\DRb}\left(1-\frac{{\tan{\alpha}}}{{\tan{\beta}}}\right)(\Delta_b P_L + \Delta_b^* P_R), \nonumber \\ \delta m_b^A&=&m_b^{\DRb}\left[1+\frac{1}{({\tan{\beta}})^2}\right](\Delta_b P_L - \Delta_b^* P_R),\nonumber \\ \delta m_b^{H^+}&=&m_b^{\DRb}\left[1+\frac{1}{({\tan{\beta}})^2}\right]\Delta_b^* P_R,\nonumber \\ \delta m_b^{H^-}&=&m_b^{\DRb}\left[1+\frac{1}{({\tan{\beta}})^2}\right]\Delta_b P_L \label{dMB_subtraction} \end{eqnarray} to $\delta m_b$ in the corresponding bottom-Higgs-coupling counterterms, as listed in Appendix~\ref{ap:counterterm}. Moreover, \eq{dMB_subtraction} is used with $\Delta_b=\Delta m_b^{SQCD}$ and $\Delta_b=\Delta m_b^{SEW}$ for the SUSY-QCD and EW corrections, respectively. \subsection{Neutral Higgs-boson propagators} \label{sect-ggWH-Higgs-propagator} In the MSSM, the neutral Higgs-boson masses are subject to large radiative corrections, in particular from the Yukawa sector of the theory. As a consequence, the tree-level Higgs masses can be quite different from the physical ones. This important effect should be considered in the NLO calculations of processes with intermediate neutral Higgs exchange. In our calculation, both subprocesses include $s$-channel diagrams with internal neutral Higgs bosons (\fig{proc_bbWH_born} and \fig{proc_ggWH}). For $b\bar{b}\to W^{\mp}H^{\pm}$ there is also a $t$-channel diagram at tree level, which gives the dominant contribution at high energies. Thus, the higher-order corrections to the internal Higgs propagators are not expected to have important effects in this subprocess at high energies. This will be verified in our numerical studies in \sect{sect_results_bbWH_tree}. The situation is different with the $gg$ fusion since the $s$-channel (triangle) contribution is large. The higher-order corrections to the internal Higgs propagators can therefore be significant in this case, as will be confirmed in \sect{sect_results_ggWH}.
This issue has not been addressed in the previous studies. In a general amplitude with internal neutral Higgs bosons that do not appear inside loops, the structure describing the Higgs-exchange part is given by \begin{eqnarray} {\cal{A}}(p^2) = \sum_{ij} \Gamma_i\, \Delta_{ij}(p^2)\, \Gamma_j, \quad i,j= h,H,A, \label{amp_higgs_mixing} \end{eqnarray} where $\Gamma_{i,j}$ are one-particle irreducible Higgs vertices. $p$ is the momentum in the Higgs propagator, which is given in terms of the $3\times 3$ propagator matrix \begin{eqnarray} \Delta(p^2)& =& i[p^2 -{\mathbf{M}}(p^2)]^{-1} , \nonumber \\ {\mathbf{M}}(p^2)& =& \begin{pmatrix} m_h^2 - \hat{\Sigma}_{hh}(p^2) & - \hat{\Sigma}_{hH}(p^2) & - \hat{\Sigma}_{hA}(p^2) \\ - \hat{\Sigma}_{hH}(p^2) & m_H^2 - \hat{\Sigma}_{HH}(p^2) & - \hat{\Sigma}_{HA}(p^2) \\ - \hat{\Sigma}_{hA}(p^2) & - \hat{\Sigma}_{HA}(p^2) & m_A^2 - \hat{\Sigma}_{AA}(p^2) \end{pmatrix} . \label{eq:propagatormatrix} \end{eqnarray} $m_{i}$ ($i=h,H,A$) are the lowest-order Higgs-boson masses, and $\hat{\Sigma}_{ij}$ the renormalized self-energies. The physical masses can be found by diagonalizing the above matrix \cite{Frank:2006yh}. By using this propagator matrix we effectively resum all the one-loop corrections to the neutral Higgs self-energies. In our calculation, we keep the full propagator matrix and \eq{amp_higgs_mixing} whenever neutral Higgs bosons are exchanged connecting three-point vertices in the tree-level $b\bar{b}$ contributions and in the $gg$ fusion diagrams. As a consequence, when including the NLO EW corrections, we have to discard all Feynman diagrams containing diagonal and nondiagonal $h_ih_j$ self-energies to avoid double counting (see \fig{proc_bbWH_EW}). Whenever neutral Higgs bosons appear inside a loop, the tree-level expressions are used for propagators and couplings. The renormalized Higgs self-energies in \eq{eq:propagatormatrix} are calculated at NLO by using the hybrid on-shell and $\overline{\text{DR}}$ scheme (see \sect{sect-bbWHew} and \cite{Frank:2006yh} for details). Our results have been successfully checked against the ones of FeynHiggs~\cite{Frank:2006yh,Degrassi:2002fi,Heinemeyer:1998np,Heinemeyer:1998yj}. It is noted that FeynHiggs has the option to include the leading two-loop ${\cal{O}}(\alpha_s \alpha_t)$ corrections in the cMSSM~\cite{Heinemeyer:2007aq, Hahn:2010te}. We have verified that the effects of these two-loop corrections are negligible in our numerical analysis and we thus chose to perform the numerical evaluation with the one-loop self-energies. To quantify the effect of the neutral Higgs propagators we introduce two approximations for the subprocess $b\bar{b}\to W^{\mp}H^{\pm}$: the improved Born approximation (IBA), which includes both the $\Delta_b$ resummation and the neutral Higgs mixing resummation, and the simpler version IBA1, which contains only the resummed $\Delta_b$ together with tree-level Higgs-boson masses and couplings. By LO we refer to the tree-level $b\bar{b}\to W^{\mp}H^{\pm}$ contribution with $m_b = m_b^{\overline{\text{DR}}}(\mu_R)$ and the tree-level Higgs sector. \subsection{SM-QCD corrections} \label{sect-bbWHsmqcd} \begin{comment} As mentioned in the introduction, these corrections have been calculated first in \cite{Hollik:2001hy} and subsequently in \cite{Gao:2007wz}. The latter claimed that, by using the same input parameters, they could not obtain the same results as the former. However, no quantitative comparison was shown.
In this work, we recompute the SM-QCD corrections and try to clear up this ambiguity in the literature. \end{comment} The NLO contribution includes the virtual and real gluonic corrections. The virtual corrections, displayed by the Feynman graphs in~\fig{proc_bbWH_SMQCD_virt}, contain an extra gluon in the loops. The calculation is done by using the technique of constrained differential renormalization (CDR) \cite{delAguila:1998nd}, which is, at the one-loop level, equivalent to regularization by dimensional reduction \cite{Siegel:1979wq, Hahn:1998yk}. We have also checked by explicit calculations that it is equivalent to dimensional regularization \cite{'tHooft:1972fi} in this case. Concerning renormalization, the bottom-quark mass appearing in the Yukawa couplings is renormalized by using the $\overline{\text{DR}}$ scheme. This means that the running $m_b^{\overline{\text{DR}}}(\mu_R)$ (see \sect{running_mb}) is used in the Yukawa couplings and the one-loop counterterm reads \begin{eqnarray} \delta m_b^{\overline{\text{DR}}} = -m_b\frac{C_F\alpha_s}{4\pi}3C_{UV}, \end{eqnarray} where $C_F=4/3$ and $C_{UV}=1/\varepsilon - \gamma_E + \ln(4\pi)$ in $D = 4 - 2\varepsilon$ space-time dimensions, with $\gamma_E$ denoting Euler's constant. The bottom-quark mass related to the initial state (in the kinematics $p_{b, \bar{b}}^2=m_b^2$ and the spinors) is treated as the pole mass since the correct on-shell (OS) behavior must be assured. Indeed, the $m_b^{{\text{OS}}}$ effect here is very small and can be neglected. As mentioned in \sect{running_mb}, the final results are independent of $\ln(m_b^{{\text{OS}}})$. We will therefore set $m_b^{{\text{OS}}}=m_b^{\overline{\text{DR}}}(\mu_R)$ everywhere in this paper. The finite wave-function normalization factors for the bottom quarks can be taken care of by using the OS scheme for the wave-function renormalization. For the top quark, the pole mass is used throughout this paper. Accordingly, the mass counterterm is calculated by using the OS scheme (Appendix~\ref{ap:counterterm}). The real QCD corrections consist of the processes with external gluons, \begin{eqnarray} b + \bar{b} & \to & W^{-} + H^{+} + g,\nonumber \\ b + g &\to& b + H^{+} + W^{-},\nonumber \\ \bar{b} + g &\to& \bar{b} + W^{-} + H^{+}, \label{pro_NLO_bbWHqcd_real} \end{eqnarray} corresponding to the Feynman diagrams shown in \fig{proc_bbWH_realQCD}. For the gluon-radiation process, soft and collinear divergences occur. The soft singularities cancel against those from the virtual corrections, while the collinear singularities are regularized by the bottom-quark mass. The gluon--bottom-induced processes are infrared finite but contain collinear singularities, which are regularized by the bottom-quark mass as well. After adding the virtual and real corrections, the result is collinear divergent and proportional to $\ln(m_b^2/\hat{s})$, where $\sqrt{\hat{s}}$ is the center-of-mass energy. These singularities are absorbed into the bottom and gluon parton distribution functions (PDFs), as discussed in~\sect{sec:hadronic}. Following the line of~\cite{Boudjema:2009pw}, we apply both the dipole subtraction scheme~\cite{Catani:1996vz, Dittmaier:1999mb} and the two-cutoff phase space slicing method~\cite{Baur:1998kt} to extract the singularities from the real corrections. The two techniques give the same results within the integration errors. However, the error of the dipole subtraction scheme is much smaller than that of the phase space slicing method.
We will therefore use the dipole subtraction scheme in the numerical analysis. \subsection{Subtracting the on-shell top-quark contribution} \label{sect-subtraction-OS} A special feature of the gluon-induced processes in~(\ref{pro_NLO_bbWHqcd_real}) is the appearance of on-shell top quarks decaying into $b W$ (and $b H^+$ when kinematically allowed), which requires a careful treatment and has been discussed in the previous literature, {\it e.g.}\ in \cite{Beenakker:1996ch, Tait:1999cf, Frixione:2008yi}. Our approach is similar to the one described in~\cite{Beenakker:1996ch,Tait:1999cf}, with the difference that we perform the zero top-quark width limit. We demonstrate the procedure in terms of the process $\bar{b}g\to W^{-}H^{+}\bar{b}$. The Feynman diagrams (\fig{proc_bbWH_realQCD}c) include a subclass involving the decay $\bar{t}\to \bar{b}W^-$. When the internal $\bar{t}$ can be on-shell, the propagator pole must contain a finite width $\Gamma_t$, which is regarded here as a regulator: \begin{eqnarray} \frac{i}{q^2-m_t^2}\longrightarrow \frac{i}{q^2-m_t^2+im_t\Gamma_t}. \end{eqnarray} This on-shell contribution is primarily $\bar{t}H^+$ production and should therefore not be considered an NLO contribution. For the genuine NLO correction, the on-shell top contribution has to be discarded in a gauge-invariant way. Starting from the full set of diagrams, the squared matrix element reads as follows, \begin{eqnarray} \vert M\vert^2 = \vert M_{{\text{OS}}}\vert^2 + 2\operatorname{Re}[M_{{\text{OS}}}M_{\text{non-OS}}^*] + \vert M_{\text{non-OS}}\vert^2, \label{Msquared} \end{eqnarray} where the subscripts $_{{\text{OS}}}$ and $_{\text{non-OS}}$ denote the contribution of the on-shell $\bar{t}$ diagrams and the remainder, respectively. The OS part, differential in the $bW$ invariant mass, to be subtracted can be identified as \begin{eqnarray} \frac{d\sigma^{\bar{b}g\to W^{-}H^{+}\bar{b}}}{dM_{bW}^2}\bigg\vert_{OS}^{\text{sub}} = \sigma^{\bar{b}g\to H^{+}\bar{t}} {\rm{Br}}(\bar{t}\to \bar{b}W^-) \frac{m_t\Gamma_t }{\pi[(M_{bW}^2-m_t^2)^2+m_t^2\Gamma_t^2]}, \label{bbar_glu_OS1} \end{eqnarray} where ${\rm{Br}}(\bar{t}\to \bar{b}W^-)=\Gamma_{\bar{t}\to \bar{b}W^-}^{LO}/\Gamma_t$. The ratio on the right-hand side (rhs) of \eq{bbar_glu_OS1} approaches $\delta(M_{bW}^2-m_t^2)$ when $\Gamma_t\to 0$. The subtracted NLO contribution, regularized with the help of $\Gamma_t$, can be written in the following way, \begin{eqnarray} \sigma^{\bar{b}g\to W^{-}H^{+}\bar{b}}_{\text{reg}}(\Gamma_t)&=&\int dM_{bW}^2\left(\frac{d\sigma_{{\text{OS}}}^{\bar{b}g\to W^{-}H^{+}\bar{b}}}{dM_{bW}^2} -\sigma^{\bar{b}g\to H^{+}\bar{t}}\frac{m_t\Gamma_t{\rm{Br}}(\bar{t}\to \bar{b}W^-)}{\pi[(M_{bW}^2-m_t^2)^2 + m_t^2\Gamma_t^2]}\right)\nonumber \\ &+&\sigma^{\bar{b}g\to W^{-}H^{+}\bar{b}}_{\text{inter}} +\sigma^{\bar{b}g\to W^{-}H^{+}\bar{b}}_{\text{non-OS}}, \label{bbar_gluon_reg1} \end{eqnarray} where the interference and non-OS terms arise from the second and third terms in \eq{Msquared}.
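For completeness, we recall the elementary limit that underlies this subtraction. The Breit--Wigner kernel in \eq{bbar_glu_OS1} is a Cauchy kernel in the variable $x=M_{bW}^2-m_t^2$ with width parameter $\epsilon=m_t\Gamma_t$, and therefore \begin{eqnarray} \lim_{\Gamma_t\to 0}\,\frac{m_t\Gamma_t}{\pi[(M_{bW}^2-m_t^2)^2+m_t^2\Gamma_t^2]} =\lim_{\epsilon\to 0}\,\frac{1}{\pi}\,\frac{\epsilon}{x^2+\epsilon^2} =\delta(M_{bW}^2-m_t^2), \end{eqnarray} so that in the zero-width limit the subtraction term removes exactly the resonant $\bar{t}H^+$ production rate times the branching ratio.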
\begin{figure}[t] \centering \includegraphics[width=0.6\textwidth]{psfig/GamTq_1000_BbarG.pdf} \caption{{\em Dependence of the partonic cross section $\sigma^{\bar{b}g\to W^{-}H^{+}\bar{b}}_{\text{reg}}$ on the width regulator $\Gamma_t$.}} \label{Bbarglu_Gamtq_parton} \end{figure} There is a strong cancellation between the first term on the rhs of \eq{bbar_gluon_reg1} and the rest after subtraction of the collinear part, which makes the result of \eq{bbar_gluon_reg1} very small, yielding an essentially linear dependence on $\Gamma_t$, as displayed in \fig{Bbarglu_Gamtq_parton}. We can thus perform the limit $\Gamma_t \to 0$ and obtain a gauge-invariant expression by \begin{eqnarray} \sigma^{\bar{b}g\to W^{-}H^{+}\bar{b}}_{\text{reg}}=\lim_{\Gamma_t\to 0}\sigma^{\bar{b}g\to W^{-}H^{+}\bar{b}}_{\text{reg}}(\Gamma_t). \label{sigma_reg_GamT_lim} \end{eqnarray} \begin{comment} In \cite{Frixione:2008yi} (see also \cite{Weydert:2009vr} and references therein), the two methods called Diagram Removal (DR) and Diagram Subtraction (DS) have been proposed and implemented in {\tt MC@NLO}. The former explicitly breaks gauge invariance. The latter is not defined in the zero width limit and therefore depends on the value of $\Gamma_t$ (taken to be the physical width there) and on the form of the subtraction function. Although it is claimed that the differences between DR and DS are small in their cases, this statement depends on the processes and physical observables. We proceed to calculate the limit in \eq{sigma_reg_GamT_lim}. \end{comment} \begin{figure}[h] \centering \includegraphics[width=0.6\textwidth]{psfig/MHp_14000_BbarG_fin.pdf} \caption{{\em The finite hadronic cross section $\sigma^{\bar{b}g\to W^{-}H^{+}\bar{b}}_{\text{reg}}$ after subtracting the OS top-quark and the collinear-singularity contributions as a function of $M_{H^\pm}$.}} \label{Bbarglu_MHp_hadron} \end{figure} \begin{comment} In \cite{Beenakker:1996ch} the cross section is obtained by calculating the rhs of \eq{bbar_gluon_reg2} with a very small decay width, but the value is not given. We note that the numerical integration is not easy with a very small decay width since the spin correlations are not included in the subtraction function. We have tried to do this with $\Gamma_t=1$ in our case and compared to the extrapolation ($\Gamma_t=0$) results, see \fig{Bbarglu_MHp_hadron}. We observe that the two results do not always agree within the integration errors and with the same statistics the errors of the extrapolation method are significantly smaller. The price to pay is that one has to calculate the cross section for at least two different values of $\Gamma_t$ in order for extrapolation to work. \end{comment} \fig{Bbarglu_MHp_hadron} shows that the finite gluon-induced contribution obtained in this way at the hadronic level (after proper subtraction of the collinear part) is very small for large values of $M_{H^\pm}$, but it can be of some significance when the charged Higgs boson is light. The procedure for the process $bg\to W^{-}H^{+}b$ is completely analogous. For low masses, $M_{H^\pm} < m_t$, the intermediate on-shell top quark can also decay into $H^+b$. This additional OS contribution can be extracted by using the same extrapolation method.
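In practice, the limit in \eq{sigma_reg_GamT_lim} can be taken numerically. Since the dependence on the regulator is essentially linear (\fig{Bbarglu_Gamtq_parton}), it suffices to evaluate the regularized cross section at two small regulator values $\Gamma_1\neq\Gamma_2$ (illustrative choices; any two values in the linear regime will do) and extrapolate linearly, \begin{eqnarray} \sigma_{\text{reg}}(0)\;\simeq\;\sigma_{\text{reg}}(\Gamma_1) -\Gamma_1\,\frac{\sigma_{\text{reg}}(\Gamma_2)-\sigma_{\text{reg}}(\Gamma_1)}{\Gamma_2-\Gamma_1}\,. \end{eqnarray}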
For completeness, we list here the expressions for the decay widths of $t\to b W^+$ and $t\to b H^+ $ at lowest order, \begin{align} \Gamma^{LO}_{t\to b W^+} &= \frac{\alpha}{16 m_t^3M_W^2s_W^2}(m_t^2-M_W^2)^2(m_t^2+2M_W^2),\\ \Gamma^{LO}_{t\to b H^+} &= \frac{\alpha}{16 m_t^3M_W^2s_W^2}(m_t^2-M_{H^\pm}^2)^2\left[(m_b^{\DRb}\tan\beta)^2 |\Delta_b^3|^2+\frac{m_t^2}{\tan^2\beta}\right], \end{align} where the $b$-quark mass has been neglected in the kinematics. \subsection{SUSY-QCD corrections} \begin{comment} As mentioned in the introduction, the SUSY-QCD corrections have been calculated in the rMSSM \cite{Zhao:2005mu, Rauch:2008fy}. In this paper, we extend this calculation for the cMSSM. \end{comment} The NLO SUSY-QCD contribution consists only of the virtual one-loop corrections, visualized by the Feynman diagrams with gluino loops in \fig{proc_bbWH_SUSY}. The only divergent part is the top-quark self-energy, which is renormalized in the on-shell scheme. As discussed in \sect{running_mb}, large corrections proportional to $\alpha_s M_3^* \mu^*\tan\beta$ have been summed up to all orders in the bottom-Higgs couplings included in the IBA. We therefore have to subtract this part from the explicit one-loop SUSY-QCD corrections to avoid double counting. \subsection{Electroweak corrections} \label{sect-bbWHew} The full NLO EW contributions to the processes $b\bar{b}\to W^{\mp}H^{\pm}$ in the cMSSM have not been computed yet. They comprise both virtual and real corrections. For the virtual part, \fig{proc_bbWH_EW} illustrates the various classes of one-loop Feynman diagrams. As before, the calculation is performed using the CDR technique. We have also worked out all the necessary counterterms in the cMSSM and implemented them in {\texttt{FeynArts-3.4}} \cite{Hahn:2000kx, Hahn:2001rv}. Explicit expressions for the counterterms can be found in \appen{ap:counterterm}. For the Higgs field renormalization and $\tan\beta$, we use the $\overline{\text{DR}}$ renormalization scheme as specified in \cite{Frank:2006yh}. Hence, the correct OS behavior of the external $H^\pm$ must be ensured by including the finite wave-function renormalization factor \cite{Hollik:2010dh} \begin{eqnarray} \sqrt{Z_{H^-H^+}}=1 - \frac{1}{2}\operatorname{Re}\frac{\partial}{\partial p^2}\hat\Sigma_{H^-H^+}(p^2)\big\vert_{p^2=M_{H^\pm}^2}, \end{eqnarray} where $\hat\Sigma_{H^-H^+}(p^2)$ is the renormalized $H^\pm$ self-energy. The other renormalization constants are determined according to the OS scheme. To make the EW corrections independent of $\ln m_f$ from the light fermions $f\neq t$, we use the fine-structure constant at $M_Z$, $\alpha = \alpha(M_Z)$, as an input parameter. This means that we have to modify the charge counterterm as \begin{eqnarray} \delta Z_e^{\alpha(M_Z)}&=&\delta Z_e^{\alpha(0)} - \frac{1}{2}\Delta\alpha(M_Z^2),\nonumber \\ \Delta\alpha(M_Z^2)&=&\frac{\partial \Sigma_T^{AA}}{\partial k^2}\bigg\vert_{k^2=0}-\frac{\operatorname{Re}\Sigma_T^{AA}(M_Z^2)}{M_Z^2}, \end{eqnarray} where the photon self-energy includes only the light-fermion contribution, to avoid double counting. The real EW contributions correspond to the processes with external photons, \begin{eqnarray} b + \bar{b} &\to& W^{-} + H^{+} + \gamma,\nonumber \\ b + \gamma &\to& b + H^{+} + W^{-},\nonumber \\ \bar{b} + \gamma &\to& \bar{b} + W^{-} + H^{+}, \label{pro_NLO_bbWHew_real} \end{eqnarray} described by the Feynman diagrams of~\fig{proc_bbWH_realEW}.
They are calculated in the same way as the real QCD corrections, discussed in \sect{sect-bbWHsmqcd} and \sect{sect-subtraction-OS}. Naively, we would expect this photon contribution to be much smaller than the one from the gluon, due to the smallness of the EW coupling $\alpha$ and of the photon PDF. This is not always true, however, since the photon couples to the $W^\pm$ and $H^\pm$ as well. The soft singularities are completely cancelled, as in the case of QCD. The EW splitting $\gamma\to H^+ H^-$ (similarly for $\gamma \to W^+ W^-$), on the other hand, can introduce large collinear corrections in the limit $M_{H^\pm}/Q\to 0$, where $Q$ is a typical energy scale. The constraint $M_{H^\pm} > M_W$ prevents those splittings from becoming divergent. We observe, however, that the finite corrections (after subtracting the collinear bottom-photon and the OS top-quark contributions) from the above $\bar{b}\gamma$ process are still larger than the corresponding QCD ones for $M_{H^\pm}<200{~\text{GeV}}$, {\it e.g.}\ for $M_{H^\pm}=150{~\text{GeV}}$ and $\sqrt{s}=14{~\text{TeV}}$ by a factor of 2. The photon-induced contribution should thus be included in the NLO calculations for $W^\pm/H^\pm$ production at high energies. This requires the knowledge of the photon density in the proton, which at present is contained in the set MRST2004qed~\cite{Martin:2004dh} of PDFs. \section{The subprocess {\boldmath{$gg\to W^{\mp}H^{\pm}$}}} \label{sect-ggWH} The subprocess $gg\to W^{\mp}H^{\pm}$ is loop-induced; in the MSSM it receives quark- and squark-loop contributions. \fig{proc_ggWH} summarizes the various one-loop Feynman diagrams, which involve three- and four-point vertex functions. Since the (s)quarks are always coupled to a Higgs boson, the one-loop amplitude is proportional to (s)quark-Higgs couplings. The dominant contributions therefore arise from the diagrams with the third-generation (s)quarks. As in \cite{Brein:2000cv}, the contribution from the first two generations of (s)quarks is neglected in this paper. Compared to the previous work~\cite{Brein:2000cv}, our calculation is improved by using the effective bottom-Higgs couplings and the resummed neutral Higgs propagators. It turns out that these improvements sizably affect both the cross section and the CP-violating asymmetry. We have checked our results against those of~\cite{Brein:2000cv} for the case of the real MSSM using the tree-level couplings and Higgs propagators and found good agreement. \begin{figure}[h] \centering \includegraphics[width=0.7\textwidth]{psfig/ggWH_quark_landau.pdf} \caption{{\em Feynman diagrams that can produce three-point Landau singularities.}} \label{fig:quark_singularity} \end{figure} We notice an interesting feature related to anomalous thresholds. Fig.~1b of \cite{Brein:2000cv} shows a very sharp peak close to the normal $t\bar{t}$ threshold. Careful observation reveals that the peak position is slightly above $2m_t$ and that the structure is clearly more singular than the normal thresholds in Fig.~1a of \cite{Brein:2000cv}. This is indeed an anomalous threshold corresponding to the three-point Landau singularity (see \cite{ninh_bbH2, ninh_thesis} and references therein) of the triangle and box diagrams in \fig{fig:quark_singularity}.
A simple calculation following \cite{ninh_bbH2} yields the peak position at \begin{eqnarray} \hat{s}_{\text{peak}} &=& \frac{1}{2m_b^2}\big[(M_{H^\pm}^2+M_W^2)(m_t^2+m_b^2)-(m_b^2-m_t^2)^2- M_{H^\pm}^2M_W^2 \nonumber \\&& - \lambda^{1/2}(m_t^2,m_b^2,M_{H^\pm}^2) \lambda^{1/2}(m_t^2,m_b^2,M_{W}^2)\big], \label{eq:landau_s_peak} \end{eqnarray} with $\lambda(x,y,z)=x^2+y^2+z^2-2(xy+yz+xz)$. The partonic cross section is divergent at $\hat{s}=\hat{s}_{\text{peak}}$, but the result is finite at the hadronic level, {\it i.e.}\ after integrating over $\hat{s}$, since this singularity is logarithmic and thus integrable. The conditions for this anomalous threshold to lie in the physical region can also be given~\cite{ninh_bbH2}, \begin{eqnarray} 2m_t &\le& \sqrt{\hat{s}} \le \sqrt{\frac{m_t}{m_b}[(m_t+m_b)^2-M_W^2]},\nonumber \\ m_b+m_t &\le& M_{H^\pm} \le \sqrt{2(m_t^2+m_b^2) -M_W^2}. \end{eqnarray} Similarly, three-point Landau singularities can occur in the squark diagrams. \section{Hadronic cross section and CP asymmetry} \label{sec:hadronic} The LO hadronic cross section, in terms of the LO partonic $b\bar{b}$ annihilation cross section, is given by \begin{eqnarray} \sigma^{pp}_{LO}= \int {\rm{d}} x_1{\rm{d}} x_2[F_b^{p}(x_1, \mu_F)F_{\bar{b}}^{p}(x_2, \mu_F){\hat{\sigma}}^{b\bar{b}}_{LO}(\alpha^2,\mu_R)+(1\leftrightarrow 2)], \end{eqnarray} where $F_{b/\bar{b}}^p(x,\mu_F)$ is the bottom PDF at momentum fraction $x$ and factorization scale $\mu_F$. Other $q\bar{q}$ subprocesses ($q=u,d,c,s$) are neglected due to the smallness of the light-quark-Higgs couplings. The NLO hadronic cross section reads as follows, \begin{eqnarray} \sigma^{pp}_{NLO}&=&\sum_{i,j}\frac{1}{1+\delta_{ij}}\int {\rm{d}} x_1{\rm{d}} x_2[F_i^{p}(x_1, \mu_F)F_j^{p}(x_2, \mu_F){\hat{\sigma}}^{ij}_{NLO}(\alpha^2,\alpha^2\alpha_s,\alpha^3,\alpha^2\alpha_s^2,\mu_R)\nonumber \\ &+&(1\leftrightarrow 2)], \end{eqnarray} where $i,j$ = ($b, \bar{b}, g,\gamma$) and \begin{eqnarray} {\hat{\sigma}}^{ij}_{NLO}&=&{\hat{\sigma}}^{b\bar{b}}_{IBA}(\alpha^2)+\Delta_{\text{SM-QCD}}{\hat{\sigma}}^{ij}_{NLO}(\alpha^2\alpha_s) +\Delta_{\text{SUSY-QCD}}{\hat{\sigma}}^{ij}_{NLO}(\alpha^2\alpha_s)\nonumber \\ &+&\Delta_{EW}{\hat{\sigma}}^{ij}_{NLO}(\alpha^3)+{\hat{\sigma}}^{gg}(\alpha^2\alpha_s^2) \end{eqnarray} contains the various NLO contributions at the parton level, discussed in the previous sections.
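For orientation, hadronic cross sections of this type are plain two-dimensional convolutions and can be evaluated by direct quadrature. In the following minimal sketch, the PDFs and the partonic cross section are hypothetical placeholder callables at fixed scales; they are not the actual MRST2004qed densities or the partonic expressions used in our numerics.
\begin{verbatim}
# Minimal sketch of the LO hadronic convolution
#   sigma_pp = int dx1 dx2 [F_b(x1) F_bbar(x2) sigma_hat(x1*x2*s) + (1 <-> 2)].
# 'pdf_b', 'pdf_bbar', 'sigma_hat' are placeholder callables (fixed mu_F, mu_R).
import numpy as np

def sigma_pp_LO(pdf_b, pdf_bbar, sigma_hat, s_had, shat_min, n=400):
    """Midpoint-rule quadrature over the unit square in (x1, x2)."""
    x = (np.arange(n) + 0.5) / n
    total = 0.0
    for x1 in x:
        for x2 in x:
            shat = x1 * x2 * s_had
            if shat < shat_min:          # below threshold, sigma_hat vanishes
                continue
            lum = pdf_b(x1) * pdf_bbar(x2) + pdf_b(x2) * pdf_bbar(x1)
            total += lum * sigma_hat(shat)
    return total / (n * n)               # midpoint weights (1/n)^2
\end{verbatim}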
\begin{comment} We use the zero-mass bottom quark approximation. It means that in the calculation of the hard cross sections, the bottom quark mass will be neglected as much as possible, except in the treatment of collinear divergences related to initial state radiation and in the bottom-Higgs Yukawa coupling which can be regarded as an independent parameter. The bottom distribution function is used in this approach. \end{comment} The mass singularities of the type $\alpha_s\ln(m_b)$ and $\alpha\ln(m_b)$, arising from collinear initial-state splittings, are absorbed into the quark distributions. We use the MRST2004qed set of PDFs~\cite{Martin:2004dh}, which includes ${\cal{O}}(\alpha_s)$ QCD and ${\cal{O}}(\alpha)$ photonic corrections. As explained in \cite{Diener:2005me}, the consistent use of these PDFs requires the ${\overline{\text{MS}}}$ factorization scheme for the QCD corrections, but the DIS scheme for the photonic ones. We therefore redefine the (anti-)bottom PDF as follows, \begin{eqnarray} q(x)& =& q(x, \mu_F^2) -\frac{\alpha_s C_F}{2\pi} \int_x^1\frac{dz}{z} q\left(\frac xz, \mu_F^2\right) \bigg\{ \ln\left(\frac{\mu_F^2}{m_b^2}\right) [P_{qq}(z)]_+\nonumber \\ && - \,[P_{qq}(z)(\ln(1-z)^2 +1)]_+ + C_{qq}^{{\overline{\text{MS}}}}(z) \bigg\} \nonumber \\ && -\frac{\alpha Q_b^2}{2\pi} \int_x^1\frac{dz}{z} q\left(\frac xz, \mu_F^2\right) \bigg\{ \ln\left(\frac{\mu_F^2}{m_b^2}\right) [P_{qq}(z)]_+\nonumber \\ && - \,[P_{qq}(z)(\ln(1-z)^2 +1)]_+ + C_{qq}^{{\text{DIS}}}(z) \bigg\} \nonumber \\ &&-\, \frac{\alpha_sT_F }{2\pi}\int_x^1\frac{dz}{z} g\left(\frac xz, \mu_F^2\right) \bigg[ \ln\left(\frac{\mu_F^2}{m_b^2}\right) P_{qg}(z) + C_{qg}^{{\overline{\text{MS}}}}(z) \bigg]\nonumber \\ &&-\, \frac{3\alpha Q_b^2}{2\pi}\int_x^1\frac{dz}{z} \gamma\left(\frac xz, \mu_F^2\right) \bigg[ \ln\left(\frac{\mu_F^2}{m_b^2}\right) P_{q\gamma}(z) + C_{q\gamma}^{{\text{DIS}}}(z) \bigg] , \label{pdf_redifined} \end{eqnarray} with $C_F = 4/3$, $T_F=1/2$. The splitting functions are given by \begin{eqnarray} P_{qq}(z) = \frac{1+z^2}{1-z}, \quad P_{qg}(z) = P_{q\gamma}(z) = z^2 + (1-z)^2,\end{eqnarray} and the $[\ldots]_{+}$ prescription is understood in the usual way, \begin{eqnarray} \int_x^1dzf(z)\left[\frac{g(z)}{1-z}\right]_{+}=\int_x^1dz\frac{[f(z)-f(1)]g(z)}{1-z}-f(1)\int_0^xdz\frac{g(z)}{1-z}. \end{eqnarray} Following the standard conventions of QCD, the factorization schemes are specified by \begin{eqnarray} C_{qq}^{{\overline{\text{MS}}}}(z) &=& C_{qg}^{{\overline{\text{MS}}}}(z) = 0,\nonumber \\ C_{qq}^{{\text{DIS}}}(z) &=& \left[P_{qq}(z)\left(\ln\left(\frac{1-z}{z}\right) - \frac{3}{4}\right) + \frac{9+5z}{4} \right]_+ ,\nonumber \\ C_{q\gamma}^{{\text{DIS}}}(z) &=& P_{q\gamma}(z)\ln\left(\frac{1-z}{z}\right) - 8z^2 + 8z - 1. \end{eqnarray} Having constructed in this way the hadronic cross sections $\sigma(pp\to W^\pm H^\mp)$, we can define the CP-violating asymmetry at the hadronic level in the following way, \begin{eqnarray} \delta^{\text{CP}}_{pp}&=&\frac{\sigma(pp\to W^{-}H^{+})-\sigma(pp\to W^{+}H^{-})}{\sigma(pp\to W^{-}H^{+})+\sigma(pp\to W^{+}H^{-})}. \label{eq_delta_cp_pp} \end{eqnarray} The numerator gets contributions from the NLO-$b\bar{b}$ corrections (the LO is CP conserving) and the loop-induced $gg$ process. However, the latter is much larger than the former due to the dominant gluon PDF. The CP-violating effect is therefore mainly generated by the $gg$ channel. The LO-$b\bar{b}$ contribution adds only to the CP-invariant part and therefore reduces the magnitude of the CP asymmetry.
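As an implementation aside, the $[\ldots]_{+}$ prescription used in the PDF redefinition above translates directly into the two-term formula quoted there. The sketch below evaluates $\int_x^1 \mathrm{d}z\, f(z)\,[g(z)/(1-z)]_{+}$ with arbitrary smooth test functions, chosen purely for illustration.
\begin{verbatim}
# Numerical evaluation of int_x^1 dz f(z) [g(z)/(1-z)]_+ via the two-term
# decomposition given above. 'f' and 'g' are arbitrary test functions.
from scipy.integrate import quad

def plus_convolution(f, g, x):
    # Subtracted piece: the integrand is finite at z -> 1 by construction.
    reg, _ = quad(lambda z: (f(z) - f(1.0)) * g(z) / (1.0 - z), x, 1.0)
    # Endpoint piece: -f(1) int_0^x dz g(z)/(1-z).
    end, _ = quad(lambda z: g(z) / (1.0 - z), 0.0, x)
    return reg - f(1.0) * end

# Illustration: f(z) = z (mock PDF factor), g(z) = 1 + z^2 (P_qq numerator).
print(plus_convolution(lambda z: z, lambda z: 1.0 + z * z, 0.1))
\end{verbatim}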
\section{Numerical studies} \label{sect-results} \subsection{Input parameters} \label{sect-input} We use the following set of input parameters for the SM sector \cite{Amsler:2008zzb, :2009ec}, \begin{comment} The other quark masses are effective parameters adjusted to reproduce the hadronic contribution to the photonic vacuum polarization of \cite{Jegerlehner:2001ca}. \end{comment} \begin{equation} \begin{aligned} \alpha_s(M_Z) &= 0.1197, \quad &\alpha(M_Z)&=1/128.926, \\ M_{W} &= 80.398{~\text{GeV}}, \quad& M_Z&= 91.1876{~\text{GeV}}, \\ m_t & =173.1{~\text{GeV}}, \quad &\overline{m}_b(\overline{m}_b)& = 4.2{~\text{GeV}}. \end{aligned} \end{equation} We take here $\alpha_s = \alpha_s^{{\overline{\text{MS}}}}(\mu_R)$ at three-loop order \cite{Amsler:2008zzb}. $\overline{m}_b(\overline{m}_b)$ is the QCD-${\overline{\text{MS}}}$ $b$-quark mass, while the top-quark mass is understood as the pole mass. The CKM matrix elements are approximated by $V_{td}=V_{ts}=0$ and $V_{tb}=1$. \begin{comment} As discussed in \ssect{sec:calc-ren} we use a variant of the $G_\mu$ scheme with $\alpha_{G_\mu}$ at leading order leading to NLO corrections that are of ${\cal O}(\alpha_{G_\mu}^3\alpha(0))$. Using $\alpha_{G_\mu}$ as coupling we calculate $\Delta r=3.0792\times 10^{-2}$ for $M_H=120{~\text{GeV}}$ and $\Delta r=3.1577\times 10^{-2}$ for $M_H=150{~\text{GeV}}$. \end{comment} For the soft SUSY-breaking parameters, we use the adapted CP-violating benchmark scenario (CPX)~\cite{Williams:2007dc,Carena:2000ks}, \begin{eqnarray}\begin{aligned} |\mu| &= 2{~\text{TeV}},\, |M_2|=200{~\text{GeV}},\, |M_3| = 1{~\text{TeV}},\, |A_t|=|A_b|=|A_\tau|=900{~\text{GeV}},\\ M_{\tilde Q}&=M_{\tilde D}=M_{\tilde U}=M_{\tilde L}=M_{\tilde E}=M_{\text{SUSY}}=500{~\text{GeV}} . \end{aligned}\end{eqnarray} Since the Yukawa couplings of the first two fermion generations, being proportional to the small fermion masses, are neglected in our calculations, we set $A_f=0$ for $f=e,\mu,u,d,c,s$. The values of $M_1$ and $M_2$ are connected via the GUT relation $|M_1|= 5/3\tan^2\theta_W |M_2|$. We can set $\phi_2 = 0$ while keeping $\phi_1$ as a free parameter. The complex phases of the trilinear couplings $A_t$, $A_b$, $A_\tau$ and the gaugino-mass parameters $M_i$ with $i=1,2,3$ are chosen by default as \begin{eqnarray} \phi_{t}=\phi_{b}=\phi_{\tau}=\phi_{3}=\phi_1=\frac{\pi}{2}, \end{eqnarray} unless specified otherwise. The phase of $\mu$ is chosen to be zero in order to be consistent with the experimental limits on electric dipole moments. We will study the dependence of our results on $\tan\beta$, $M_{H^\pm}$, $\phi_t$, and $\phi_3$ in the numerical analysis. The $\phi_b$ dependence is not very interesting, since it is similar to, but much weaker than, that of $\phi_t$. The scale of $\alpha_s$ in the SUSY-QCD resummation of the effective bottom-Higgs couplings \eq{eq:deltamb} is set to $Q=(m_{\tilde{b}_1} + m_{\tilde{b}_2} + m_{\tilde{g}})/3$. If not otherwise specified, we set the renormalization scale equal to the factorization scale, $\mu_R=\mu_F$, in all numerical results. Our default choice for the factorization scale is $\mu_{F0} = M_W+ M_{H^\pm}$. Our study is performed for the LHC at $7{~\text{TeV}}$ and $14{~\text{TeV}}$ center-of-mass energy. In the numerical analysis, we will focus on the latter, since the total cross section is about an order of magnitude larger there. Important results will be shown for both energies. \subsection{Checks on the results} \label{sect-ggWH-qcd-gauge} The results in this paper have been obtained by two independent calculations. We have produced, with the help of {\texttt{FeynArts-3.4}}\ and {\texttt{FormCalc-6.0}}\ \cite{Hahn:1998yk}, two different Fortran~77 codes. Loop integrals are calculated by using {\texttt{LoopTools/FF}}\ \cite{Hahn:1998yk, ff}. The phase-space integration is done by using the Monte Carlo integrators {\texttt{BASES}}~\cite{bases} and {\texttt{VEGAS}}~\cite{Lepage:1977sw}. The results of the two codes are in full agreement. In addition, we have performed a number of further checks: For the process $gg\to W^{\mp}H^{\pm}$, we have verified that the results are QCD gauge invariant. This can easily be done in practice by changing the numerical value of the gluon polarization vector $\epsilon_{\mu}(p,q)$, where $p$ is the gluon momentum and $q$ is an arbitrary reference vector. QCD gauge invariance means that the squared amplitudes are independent of $q$. More details can be found in \cite{Boudjema:2007uh}.
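The principle behind this check can be illustrated in a few lines: for a conserved current $J$ (i.e., $p\cdot J=0$), the polarization-summed square built with a reference vector $q$ is independent of $q$. The following toy sketch demonstrates only this mechanism; the actual test is, of course, performed on the full helicity amplitudes of the process.
\begin{verbatim}
# Toy illustration of the gauge-invariance test: the q-dependent term of the
# axial-gauge polarization sum drops out for a conserved current (p.J = 0).
# 'J' is a made-up stand-in for a real amplitude, not taken from our codes.
import numpy as np

g = np.diag([1.0, -1.0, -1.0, -1.0])      # Minkowski metric
dot = lambda a, b: a @ g @ b

def pol_summed_square(J, p, q):
    # Sum over physical polarizations: -J.J + 2 (p.J)(q.J) / (p.q).
    return -dot(J, J) + 2.0 * dot(p, J) * dot(q, J) / dot(p, q)

p = np.array([1.0, 0.0, 0.0, 1.0])        # light-like gluon momentum
J = np.array([0.3, 0.7, -0.2, 0.3])       # conserved toy current: p.J = 0

for q in (np.array([1.0, 1.0, 0.0, 0.0]),
          np.array([3.0, 0.0, 2.0, -1.0])):
    print(pol_summed_square(J, p, q))     # identical output for both q
\end{verbatim}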
As already mentioned, we compared our results also to those of \cite{Brein:2000cv} for the rMSSM and obtained good agreement. For the process $b\bar{b}\to W^{\mp}H^{\pm}$, besides the common checks of UV and IR finiteness, we compared our virtual EW corrections to those obtained by using {\texttt{SloopS}}\ \cite{Baro:2008bg,Baro:2009gn}, and the SUSY-QCD corrections to the results of Rauch~\cite{Rauch:2008fy} for the case of vanishing phases. Again, good agreement was found. \subsection{\boldmath{$pp/b\bar{b}\to W^{\mp}H^{\pm}$}: LO and improved-Born approximations} \label{sect_results_bbWH_tree} In this section, we study the effect of the bottom-Higgs coupling resummation described in \sect{running_mb} and of the Higgs propagator matrix discussed in \sect{sect-ggWH-Higgs-propagator}. \begin{figure}[] \begin{center} \mbox{\includegraphics[width=0.49\textwidth, height=0.5\textwidth]{psfig/TB_14000_born3_all.pdf} \hspace*{0.001\textwidth} \includegraphics[width=0.49\textwidth, height=0.5\textwidth]{psfig/MHp_14000_born3_all.pdf}} \caption{\label{bb_LO_all}{\em The leading order (LO) cross section with $m_b=m_b^{\overline{\text{DR}}}$ and the two improved Born approximations (IBA) as functions of $\tan\beta$ (left) and $M_{H^\pm}$ (right). $\sigma_{IBA1}$ includes the $\Delta_b$ resummation but not the Higgs mixing resummation, while $\sigma_{IBA}$ includes both. The lower panels show the corresponding relative corrections with respect to the LO result.}} \end{center} \end{figure} \begin{figure}[] \begin{center} \mbox{\includegraphics[width=0.49\textwidth, height=0.5\textwidth]{psfig/PHITP_14000_born3_all.pdf} \hspace*{0.001\textwidth} \includegraphics[width=0.49\textwidth, height=0.5\textwidth]{psfig/PHIGL_14000_born3_all.pdf}} \caption{\label{bb_LO_all_phase}{\em Similar to \fig{bb_LO_all}, but with $\phi_t$ (left) and $\phi_3$ (right) varied instead.}} \end{center} \end{figure} The results for the approximations IBA and IBA1 defined in section \ref{sect-ggWH-Higgs-propagator} are illustrated in \fig{bb_LO_all}, which shows the dependence on $\tan\beta$ in the left panel and on the mass $M_{H^\pm}$ in the right panel. The relative correction $\delta$, with respect to the LO cross section, is defined as $\delta = (\sigma_{\text{IBA}} -\sigma_{\text{LO}})/\sigma_{\text{LO}}$. For small values of $\tan\beta$, the left-chirality contribution proportional to $m_t/\tan\beta$ is dominant, while the right-chirality contribution proportional to $m_b\tan\beta$ dominates at large $\tan\beta$. The cross section has a minimum around $\tan\beta = 8$. The effect of the $\Delta_b$ resummation is best understood in terms of \fig{bb_LO_all} and \fig{bb_LO_all_phase}. The important point is that $\Delta_b$ is a complex number and only its real part can interfere with the LO amplitude. Thus, the $\Delta_b$ effect is minimal at $\phi_{t,3}=\pm \pi/2$, where the dominant terms $\Delta m_b^{\rm SQCD}$ and $\Delta m_b^{\tilde{H}\tilde{t}}$ are purely imaginary, and largest at $\phi_{t,3}=0,\pm \pi$. $\phi_t$ enters via the EW corrections and $\phi_3$ via the SUSY-QCD contributions. \fig{bb_LO_all_phase} shows that the $\Delta_b$ effect can be more than $150\%$.
In \fig{bb_LO_all}, where $\Delta_b$ is mostly imaginary, we see the effect of order ${\cal{O}}(\Delta_b^2)$, which is about $-15\%$ at $\tan\beta=10$. We also observe that the Higgs-mixing resummation in the $s$-channel diagrams has a much smaller impact, less than $10\%$, as expected. \subsection{\boldmath{$pp/b\bar{b}\to W^{\mp}H^{\pm}$}: full NLO results} \begin{figure}[] \begin{center} \mbox{\includegraphics[width=0.49\textwidth, height=0.5\textwidth]{psfig/TB_14000_bb_NLO_all.pdf} \hspace*{0.001\textwidth} \includegraphics[width=0.49\textwidth, height=0.5\textwidth]{psfig/MHp_14000_bb_NLO_all.pdf}} \caption{\label{bb_NLO_all}{\em The cross section obtained by using the IBA and including various nonuniversal NLO corrections as functions of $\tan\beta$ (left) and $M_{H^\pm}$ (right). The lower panels show the corresponding relative corrections to the IBA result.}} \end{center} \end{figure} \begin{figure}[] \begin{center} \mbox{\includegraphics[width=0.49\textwidth, height=0.5\textwidth]{psfig/PHITP_14000_bb_NLO_all.pdf} \hspace*{0.001\textwidth} \includegraphics[width=0.49\textwidth, height=0.5\textwidth]{psfig/PHIGL_14000_bb_NLO_all.pdf}} \caption{\label{bb_NLO_all_phase}{\em Similar to \fig{bb_NLO_all}, but with $\phi_t$ (left) and $\phi_3$ (right) varied instead.}} \end{center} \end{figure} In this section, we investigate the effects of the SUSY-QCD, SM-QCD, and EW contributions at NLO. As in the previous section, we present here two sets of plots. In \fig{bb_NLO_all} we show the dependence of the total cross sections on $\tan\beta$ and $M_{H^\pm}$ at the default CPX phases, in particular $\phi_{t}=\phi_{3}=\pi/2$. As explained above, the ${\cal{O}}(\Delta_b)$ effect is turned off in this CPX scenario. The SUSY-QCD and EW NLO terms are therefore small at large $\tan\beta$, as shown in \fig{bb_NLO_all} (left). The SM-QCD correction is about $-20\%$ for small $\tan\beta$ and changes sign around $\tan\beta =11$ due to the competition between the $b\bar{b}$- and the $g$-induced contributions. All the NLO contributions for different values of $\tan\beta$ and $M_{H^\pm}$ can be found in \tab{table_NLO}. \fig{bb_NLO_all_phase} shows the dependence of the total cross sections on $\phi_t$ and $\phi_3$ for $\tan\beta=10$ and $M_{H^\pm}=200{~\text{GeV}}$. The EW corrections depend strongly on $\phi_t$, and the SUSY-QCD corrections on $\phi_3$. At $\phi_{t}=\phi_{3}=0,\pm\pi$ the effects are largest. The remaining EW and SUSY-QCD corrections, beyond the ${\cal{O}}(\Delta_b)$ contribution, are still rather large. \begin{figure}[t] \centering \includegraphics[width=0.6\textwidth]{psfig/vert_self_tbH.pdf} \caption{{\em Diagrams that can introduce large SUSY-QCD (left) and EW (right) corrections.
$G^\pm$ are the charged Goldstone bosons.}} \label{diag_tbH_subleading} \end{figure} In particular, the SUSY-QCD correction contains the following term, \begin{eqnarray} \tilde\Delta_{t}&=&\frac{2\alpha_s}{3\pi}M_3^*\mu^*\tan\beta J(m_{\tilde{g}}^2),\nonumber \\ J(m^2)&=&\vert U_{11}^{\tilde b}\vert^2 \vert U_{12}^{\tilde t}\vert^2 I(m^2, m_{\tilde{t}_1}^2, m_{\tilde{b}_1}^2) + \vert U_{21}^{\tilde b}\vert^2 \vert U_{12}^{\tilde t}\vert^2 I(m^2, m_{\tilde{t}_1}^2, m_{\tilde{b}_2}^2)\nonumber \\ &+& \vert U_{11}^{\tilde b}\vert^2 \vert U_{22}^{\tilde t}\vert^2 I(m^2, m_{\tilde{t}_2}^2, m_{\tilde{b}_1}^2) + \vert U_{21}^{\tilde b}\vert^2 \vert U_{22}^{\tilde t}\vert^2 I(m^2, m_{\tilde{t}_2}^2, m_{\tilde{b}_2}^2), \label{eq:sub_leading_Htb} \end{eqnarray} which can be included in the top-Yukawa part of the charged Higgs couplings as follows, \begin{eqnarray} \tilde\lambda_{b\bar{t}H^+}&=&\frac{ie}{\sqrt{2}s_WM_W}\left(\frac{m_t}{{\tan{\beta}}}(1-\tilde\Delta_{t})P_L + m_b^{\DRb}{\tan{\beta}} \Delta_b^{3*}P_R\right),\nonumber \\ \tilde\lambda_{t\bar{b}H^-}&=&\frac{ie}{\sqrt{2}s_WM_W}\left(m_b^{\DRb}{\tan{\beta}} \Delta_b^{3}P_L + \frac{m_t}{{\tan{\beta}}}(1-\tilde\Delta^{*}_{t}) P_R\right). \label{eq:sub_leading_Htb_couplings} \end{eqnarray} \begin{comment} \begin{eqnarray} \tilde\Delta_{b}&=&\frac{\alpha_t}{4\pi}A_t^*\mu^*\frac{1}{\tan\beta} J(\vert\mu\vert^2),\nonumber \\ \tilde\lambda_{b\bar{t}H^+}&=&\frac{ie}{\sqrt{2}s_WM_W}\left(\frac{m_t}{{\tan{\beta}}}(1-\tilde\Delta_{t})P_L + m_b^{\DRb}{\tan{\beta}} \Delta_b^{3*}(1-\tilde\Delta^{*}_{b}) P_R\right),\nonumber \\ \tilde\lambda_{t\bar{b}H^-}&=&\frac{ie}{\sqrt{2}s_WM_W}\left(m_b^{\DRb}{\tan{\beta}} \Delta_b^{3}(1-\tilde\Delta_{b}) P_L + \frac{m_t}{{\tan{\beta}}}(1-\tilde\Delta^{*}_{t}) P_R\right). \label{eq:sub_leading_Htb_couplings} \end{eqnarray} \end{comment} This term originates from the left diagram in \fig{diag_tbH_subleading} and is important for small $\tan\beta$. This finding agrees with the discussion in \cite{Carena:2002bb}, where other subleading corrections are also discussed. If the couplings \eq{eq:sub_leading_Htb_couplings} are used, we find that the improved LO results move significantly closer to the full NLO results in \fig{bb_NLO_all_phase} (right). The situation in the left part of \fig{bb_NLO_all_phase} is due to the EW corrections. It indicates that there are still large corrections proportional to $A_t\mu\alpha_t/(4\pi)$, which can be associated with the right diagram in \fig{diag_tbH_subleading}. \begin{comment} \begin{eqnarray} \Delta_{G^\pm H^\pm} &=& \frac{3\alpha_t}{4\pi}A_t^* \mu^* \sin^2\beta K(M_{H^\pm}^2)\vert_{\text{fin}},\nonumber \\ K(m^2)&=&\vert U_{11}^{\tilde b}\vert^2 \vert U_{12}^{\tilde t}\vert^2 B_{0}(m^2, m_{\tilde{t}_1}^2, m_{\tilde{b}_1}^2) + \vert U_{21}^{\tilde b}\vert^2 \vert U_{12}^{\tilde t}\vert^2 B_{0}(m^2, m_{\tilde{t}_1}^2, m_{\tilde{b}_2}^2)\nonumber \\ &+& \vert U_{11}^{\tilde b}\vert^2 \vert U_{22}^{\tilde t}\vert^2 B_{0}(m^2, m_{\tilde{t}_2}^2, m_{\tilde{b}_1}^2) + \vert U_{21}^{\tilde b}\vert^2 \vert U_{22}^{\tilde t}\vert^2 B_{0}(m^2, m_{\tilde{t}_2}^2, m_{\tilde{b}_2}^2), \end{eqnarray} The index $_{\text{fin}}$ means that only the UV-finite part of $K$ is taken. \end{comment} The SM-QCD corrections (and the EW corrections to a lesser extent) have a striking structure for small masses $M_{H^\pm} < m_t$ (\fig{bb_NLO_all}, right part). This is due to the finite contribution of the process $bg\to W^{-}H^{+}b$.
When $M_{H^\pm} < m_t$, the intermediate top quark can be on shell and decay to $H^+b$. As discussed in \sect{sect-subtraction-OS}, this OS contribution has to be properly subtracted. The structure indicates that the OS top-quark effect cannot be completely removed; this quantum effect on the $W^-H^+$ production rate is an interesting feature, which was not discussed in previous studies \cite{Hollik:2001hy, Gao:2007wz}. \subsection{\boldmath{$pp/gg\to W^{\mp}H^{\pm}$}: neutral Higgs-propagator effects} \label{sect_results_ggWH} \begin{figure}[] \begin{center} \mbox{\includegraphics[width=0.49\textwidth, height=0.5\textwidth]{psfig/PHITP_14000_gg.pdf} \hspace*{0.001\textwidth} \includegraphics[width=0.49\textwidth, height=0.5\textwidth]{psfig/PHITP_14000_CPasym_gg.pdf}} \caption{\label{gg_PHITP}{\em The cross section (left) and CP asymmetry (right) as functions of $\phi_{t}$.}} \end{center} \end{figure} Even though the $gg$-fusion subprocess is loop induced, its contribution can be of the same order as the tree-level $b\bar{b}\to W^{\mp}H^{\pm}$ contribution. Neutral Higgs bosons are exchanged in the $s$-channel and can be described by using the effective bottom-Higgs couplings and the full Higgs-propagator matrix. The impact of the latter on the total cross section and CP asymmetry is large, as can be seen from \fig{gg_PHITP}. The cross section can be reduced by $20\%$ at $\phi_t=\pm\pi$, while the CP asymmetry increases by about $25\%$ at $\phi_t=\pm\pi/2$. This is consistent with the discussion in \sect{sect-ggWH-Higgs-propagator}. We also observe that the $gg$ contribution is very sensitive to $\phi_t$. \subsection{\boldmath{$pp\to W^{\mp}H^{\pm}$}: total results at $7{~\text{TeV}}$ and $14{~\text{TeV}}$} The total production cross section for the $W^-H^+$ final state at the LHC is shown in \fig{pp_NLO_all} and \fig{pp_NLO_all_phase}, as well as in \tab{table_NLO}. The cross section increases by an order of magnitude when the center-of-mass energy goes from $7{~\text{TeV}}$ to $14{~\text{TeV}}$. The $gg$ contribution is largest for small $\tan\beta$ and large $M_{H^\pm}$, while the $b\bar{b}$ channel dominates when $\tan\beta > 12$ and, approximately, $M_{H^\pm}<200{~\text{GeV}}$. In the right panel of \fig{pp_NLO_all}, one can see a small bump in the $gg$ contribution around $M_{H^\pm}=200{~\text{GeV}}$, attributed to the three-point Landau singularities discussed in \sect{sect-ggWH}. The total cross section depends strongly on the phases $\phi_t$ and $\phi_3$, as can be seen from \fig{pp_NLO_all_phase}. The $gg$ contribution is almost independent of $\phi_3$, since the gluino does not appear at the one-loop level (the contribution through the $\Delta_b$ resummation is a higher-order effect). The CP-violating asymmetry is shown in \fig{pp_NLO_CP} as a function of $\tan\beta$ and $M_{H^\pm}$, and in \fig{pp_NLO_CP_phase} versus $\phi_t$ and $\phi_3$. The uncertainty bands obtained by varying the renormalization and factorization scales (we set $\mu_R=\mu_F$ for simplicity) in the range $\mu_{F0}/2 < \mu_F < 2\mu_{F0}$ are shown only in \fig{pp_NLO_CP}, since the uncertainty depends strongly on $\tan\beta$ and in particular on $M_{H^\pm}$, but not on the phases. A more detailed account of the scale uncertainty of our results is given in the next section. As discussed at the end of \sect{sec:hadronic}, the CP-violating effect is dominantly generated by the gluon-gluon fusion channel. The $b\bar{b}$ channel contributes significantly to the symmetric cross section and thus to the denominator of the CP asymmetry.
It is therefore easy to understand why $\delta_{CP}$ is small for large $\tan\beta$ and small $M_{H^\pm}$, as seen in \fig{pp_NLO_CP}. The dependence on $\phi_3$ is explained by the same reasoning: the numerator is independent of $\phi_3$, while the denominator, including $\sigma_{b\bar{b}}$, has a minimum at $\phi_3=0$. The CP asymmetry is therefore maximal around $\phi_3=0$. \begin{table}[] \begin{footnotesize} \begin{center} \caption{\label{table_NLO}{ \em The total cross section in${~\text{fb}}$ for $pp/b\bar{b} \to W^- H^+$ including the IBA and various nonuniversal NLO corrections and for $pp/gg \to W^- H^+ $ at $\sqrt{s}= 14 {~\text{TeV}}$. The charged Higgs-boson masses are given in${~\text{GeV}}$.}} \vspace*{0.5cm} \begin{tabular}{l c r@{.}l r@{.}l r@{.}l r@{.}l r@{.}l r@{.}l} \hline $\tan\beta$ & $M_{H^\pm}$ &\multicolumn{2}{c}{ $\sigma_{\text{IBA}}$} &\multicolumn{2}{c}{ $\Delta_{\text{EW}}$} &\multicolumn{2}{c}{$\Delta_{\text{SMQCD}}$} &\multicolumn{2}{c}{$\Delta_{\text{SUSYQCD}}$} &\multicolumn{2}{c}{ $\sigma_{gg}$} &\multicolumn{2}{c}{all}\\ \hline \hline 5 & 200 & 11&241(1) & -1&0383(3) & -2&012(3) & -0&00821(1) & 13&194(1) & 21&377(3) \\ 10 & 200 & ~7&2568(9) & -0&1989(5) & -0&178(1) & -0&00721(2) & ~7&9428(5)& 14&815(2) \\ 20 & 200 & 12&546(2) & 0&1881(6) & ~0&752(3) & -0&03570(6) & ~7&9968(6)& 21&447(4) \\ 10 & 150 & 12&497(1) & -0&2574(5) & -0&561(2) & 0&00191(4) & ~8&7064(5)& 20&387(3) \\ 10 & 400 & ~1&2907(2) & -0&00530(7) & ~0&0328(2) & -0&008954(7)& ~4&4386(3)& ~5&7477(4) \\ 10 & 600 & ~0&35740(5)& -0&00832(2) & ~0&01594(5)& -0&006263(4)& ~2&7481(1)& ~3&1069(2) \\ \hline \end{tabular}\end{center} \end{footnotesize} \end{table} \begin{figure}[] \begin{center} \mbox{\includegraphics[width=0.49\textwidth, height=0.5\textwidth]{psfig/TB_14000_7000_bbNLO_gg.pdf} \hspace*{0.001\textwidth} \includegraphics[width=0.49\textwidth, height=0.5\textwidth]{psfig/MHp_14000_7000_bbNLO_gg.pdf}} \caption{\label{pp_NLO_all}{\em The cross section as a function of $\tan\beta$ (left) and $M_{H^\pm}$ (right).}} \end{center} \end{figure} \begin{figure}[] \begin{center} \mbox{\includegraphics[width=0.49\textwidth, height=0.5\textwidth]{psfig/PHITP_14000_7000_bbNLO_gg.pdf} \hspace*{0.001\textwidth} \includegraphics[width=0.49\textwidth, height=0.5\textwidth]{psfig/PHIGL_14000_7000_bbNLO_gg.pdf}} \caption{\label{pp_NLO_all_phase}{\em The cross section as a function of $\phi_{t}$ (left) and $\phi_{3}$ (right).}} \end{center} \end{figure} \begin{figure}[] \begin{center} \mbox{\includegraphics[width=0.49\textwidth, height=0.5\textwidth]{psfig/TB_14000_7000_CPasym.pdf} \hspace*{0.001\textwidth} \includegraphics[width=0.49\textwidth, height=0.5\textwidth]{psfig/MHp_14000_7000_CPasym.pdf}} \caption{\label{pp_NLO_CP}{\em CP asymmetry as a function of $\tan\beta$ (left) and $M_{H^\pm}$ (right).
Within the band, the scale $\mu_R=\mu_F$ is varied in the range $\mu_{F0}/2<\mu_F<2\mu_{F0}$.}} \end{center} \end{figure} \begin{figure}[] \begin{center} \mbox{\includegraphics[width=0.49\textwidth, height=0.5\textwidth]{psfig/PHITP_14000_7000_CPasym.pdf} \hspace*{0.001\textwidth} \includegraphics[width=0.49\textwidth, height=0.5\textwidth]{psfig/PHIGL_14000_7000_CPasym.pdf}} \caption{\label{pp_NLO_CP_phase}{\em CP asymmetry as a function of $\phi_{t}$ (left) and $\phi_{3}$ (right).}} \end{center} \end{figure} \subsection{Scale dependence} \begin{figure}[] \begin{center} \mbox{\includegraphics[width=0.49\textwidth, height=0.5\textwidth]{psfig/SCL_14000_7000_bbNLO.pdf} \hspace*{0.001\textwidth} \includegraphics[width=0.49\textwidth, height=0.5\textwidth]{psfig/SCL_14000_7000_CPasym_pp.pdf}} \caption{\label{bb_NLO_SCL}{\em The cross section (left) and CP asymmetry (right) as functions of the renormalization and factorization scales ($\mu_R=\mu_F$).}} \end{center} \end{figure} In this section we discuss the scale dependence of the total cross sections and CP asymmetries. Since the calculation of the loop-induced subprocess $gg\to W^{\mp}H^{\pm}$ includes only the leading-order contribution (with improvements on the bottom-Higgs couplings and neutral Higgs-mixing propagators), there is no cancellation of the renormalization/factorization-scale dependence in this channel. We therefore concentrate on the scale dependence of the $b\bar{b}\to W^{\mp}H^{\pm}$ cross section calculated at NLO, see \fig{bb_NLO_SCL} (left). We set $\mu_R = \mu_F$ for simplicity. The remaining uncertainty of the NLO scale dependence is approximately $9\%$ ($9\%$) when $\mu_F$ is varied between $\mu_{F0}/2$ and $2\mu_{F0}$, compared to approximately $14\%$ ($7\%$) for the IBA, at $14{~\text{TeV}}$ ($7{~\text{TeV}}$) center-of-mass energy. The uncertainty is defined as $\delta = [\vert\sigma(\mu_{F0}/2)-\sigma(\mu_{F0})\vert + \vert\sigma(2\mu_{F0})-\sigma(\mu_{F0})\vert]/\sigma(\mu_{F0})$. The IBA scale dependence looks quite small because we have set both the renormalization and factorization scales equal, leading to an ``accidental'' cancellation. The IBA cross section increases as $\mu_F$ increases, while it decreases as $\mu_R$ increases. We recall that $\mu_F$ enters via the bottom distribution functions and $\mu_R$ appears in the running $b$-quark mass. This accidental cancellation depends strongly on the value of $\tan\beta$. We have verified, by studying the renormalization and factorization scale dependence separately, that including the NLO corrections significantly reduces each scale dependence. \begin{table}[] \begin{footnotesize} \begin{center} \caption{\label{table_scale}{ \em Cross sections in${~\text{fb}}$ for $pp/b\bar{b} \to W^- H^+$ and $pp/gg \to W^- H^+$ at different values of the factorization (renormalization) scale.
The CP asymmetries in percent are also shown.}} \vspace*{0.5cm} \begin{tabular}{l r@{.}l r@{.}l r@{.}l r@{.}l r@{.}l r@{.}l r@{.}l r@{.}l} \hline & \multicolumn{8}{c|}{$\sqrt{s}=7{~\text{TeV}}$} & \multicolumn{8}{c}{$\sqrt{s}=14{~\text{TeV}}$}\\ \hline $\mu_R=\mu_F$ &\multicolumn{2}{c}{ $\sigma_{\text{IBA}}$} &\multicolumn{2}{c}{ $\sigma^{b\bar{b}}_{\text{NLO}}$} &\multicolumn{2}{c}{$\sigma^{gg}$} &\multicolumn{2}{c}{$\delta_{\text{CP}}$} &\multicolumn{2}{c}{ $\sigma_{\text{IBA}}$} &\multicolumn{2}{c}{ $\sigma^{b\bar{b}}_{\text{NLO}}$} &\multicolumn{2}{c}{$\sigma^{gg}$} &\multicolumn{2}{c}{$\delta_{\text{CP}}$} \\ \hline $\mu_{F0}/2$ & 1&1028(2) & 1&0434(3) & 1&42088(9) & 8&207(8) & 6&6774(8) & 6&633(2) & 10&4606(6) & 8&380(7)\\ $\mu_{F0}$ & 1&1544(1) & 1&0870(2) & 1&02168(6) & 6&967(8) & 7&2568(9) & 6&873(1)& 7&9428(5) & 7&457(8)\\ $2\mu_{F0}$ & 1&1790(1) & 1&1445(2) & 0&7631(5) & 5&868(7) & 7&6648(9) & 7&224(1)& 6&2204(4) & 6&591(8)\\ \hline \end{tabular}\end{center} \end{footnotesize} \end{table} Concerning the CP asymmetries, the scale dependence is shown in \fig{bb_NLO_SCL} (right). We again set $\mu_R = \mu_F$ here. If $\mu_F$ is varied between $\mu_{F0}/2$ and $2\mu_{F0}$, the uncertainty is approximately $24\%$ ($34\%$) at $14{~\text{TeV}}$ ($7{~\text{TeV}}$) center-of-mass energy. This uncertainty is so large because the dominant contribution to the CP asymmetries (the subprocess $gg\to W^{\mp}H^{\pm}$) is calculated only at LO. In \tab{table_scale} we show the values of the cross sections for the two subprocesses as well as the CP asymmetries. The scale-dependence uncertainty of the $gg\to W^{\mp}H^{\pm}$ process is indeed very large. It is mainly due to the running strong coupling $\alpha_s(\mu_R)$, which depends logarithmically on the renormalization scale. \section{Conclusions} \label{sect-conclusions} In this paper, we have studied the production of charged Higgs bosons in association with a $W$ gauge boson at the LHC in the context of the complex MSSM. The NLO EW, SM-QCD, and SUSY-QCD contributions to the $b\bar{b}$ annihilation are calculated together with the loop-induced $gg$ fusion. Special care is devoted to the use of the effective bottom-Higgs couplings and the neutral Higgs-boson propagator matrix. Moreover, the CP-violating asymmetry, dominantly generated by the $gg$-fusion parton subprocess, has been investigated. We have shown that the $\Delta_b$ and the Higgs-mixing resummations can have large effects on the production rates and the CP asymmetry. Numerical results have been presented for the CPX scenario. It is shown that the production rate and the CP asymmetry depend strongly on $\tan\beta$, $M_{H^\pm}$, and the phases $\phi_t, \phi_3$. Large production rates prefer small $\tan\beta$, small $M_{H^\pm}$, and phases $\phi_t, \phi_3$ near $\pm \pi$. Large CP asymmetries occur at small $\tan\beta$, for $M_{H^\pm}$ of about $250{~\text{GeV}}$, and for $\phi_t \approx \pm\pi/2$ and $\phi_3 =0$. We have also studied the dependence of the results on the renormalization and factorization scales. For the $b\bar{b}$ subprocess, the NLO corrections significantly reduce the scale dependence, while the $gg$ fusion suffers from a large scale uncertainty, mainly due to the running $\alpha_s(\mu_R)$. This makes the final results, in particular the CP asymmetry, depend significantly on the scales.
A two-loop calculation would be needed to reduce this uncertainty to the level of a few percent.\\ \noindent{\bf Acknowledgments}\\ We are grateful to Fawzi Boudjema for discussions and for sending us the code ${\texttt{SloopS}}$. This work was supported in part by the European Community's Marie-Curie Research Training Network under contract MRTN-CT-2006-035505 `Tools and Precision Calculations for Physics Discoveries at Colliders' (HEPTOOLS). \newpage
\section{Introduction} In single-molecule experiments on molecular motors, it has been a widely adopted strategy to visualize continuous stepwise motion by attaching a large probe particle.\cite{Svoboda:1993, Noji:1997, Rief:2000, Yasuda:2001, Greenleaf:2007} Recently, this technique has also been put into use for monitoring conformational changes in proteins that stochastically switch between two or more metastable states.\cite{Shiroguchi:2007, Shiroguchi:2011} Compared to using a fluorescent dye, using a probe particle has the following advantages: First, advances in technology now enable the monitoring of the particle at ultra-high temporal and spatial resolutions of up to $9.1$ \textmu s\cite{Ueno:2010} and $0.1$ nm,\cite{Nugent-Glandorf:2004, Greenleaf:2007} respectively. Second, the particle can be manipulated under optical microscopes, which provides insights into single-molecule mechanics\cite{Rief:2000, Smith:2001, Uemura:2003, Itoh:2004, Watanabe-Nakayama:2008} and energetics.\cite{Liphardt:2002, Toyabe:2010} However, one problem that remains in this method is that the observed motion of the probe particle does not precisely reflect the protein motion. In typical experiments, since the probe is large and loosely connected to the protein, the motion of the probe is usually delayed. To study protein dynamics in detail, the motion and the physical parameters of the protein must be estimated from the observed trajectory of the probe particle. To date, there has been a systematic effort to develop both theoretical and numerical frameworks for determining a discrete-state model of proteins from single-molecule fluorescence spectroscopy.\cite{Geva:1998, Berezhkovskii:2000, Cao:2000, Witkoskie:2004a, Flomenbom:2005, Gopich:2006} The framework was recently combined with Bayesian statistics,\cite{Witkoskie:2004b} which allows one to analyze the entire sequence of single-molecule data, and it was experimentally demonstrated that this method yields more reliable estimates than the conventional correlation analysis.\cite{Witkoskie:2008} By using the entire time series, a method to extract an effective energy landscape has also been developed recently.\cite{Baba:2007, Baba:2011} For the time-series analysis of probe particles, there are several numerical approaches to estimating the underlying stepwise trajectories of molecular motors from the motion of probe particles, and these have been applied to various kinds of experiments.\cite{Chung:1991, Nan:2005, Kerssemakers:2006, Carter:2008, Bozorgui:2010} However, to the best of the authors' knowledge, all of these approaches attempt to discretize the observed trajectories, i.e., the probe trajectories, into several discrete states. More importantly, these approaches do not incorporate the dynamics of the entire system, namely, the thermal fluctuations of both the protein and the probe, and the response delay of the probe motion. In contrast, a sampling technique for reaction pathways in continuous space, the so-called transition path sampling (TPS),\cite{Dellago:1998, Bolhuis:2002} is based on Langevin dynamics.
However, this requires a significant computational cost to search for the dominant pathways, making it inefficient in the presence of multiple reaction pathways.\cite{Autieri:2009} Although an effective method of estimating dominant reaction pathways (DRPs) has recently been developed,\cite{Faccioli:2006, Sega:2007, Autieri:2009} it is not straightforwardly applicable to the analysis of time-series data, because the method performs the path sampling using not constant time steps but constant displacement steps. In the present article, we consider a Langevin system that consists of two Brownian particles (one visible and the other hidden) connected with each other. On the basis of this model, we propose a method to efficiently estimate the DRPs of the hidden variable from the trajectories of the visible variable. Although the model is very simple, it can be considered as a crude description of single-molecule experimental setups under appropriate approximations. We assume that the model and all the parameters have been determined, and focus on the estimation of the DRP of the hidden variable. As will be shown later, even though the model is already given, finding the most probable trajectory of the hidden variable using the trajectory of the visible variable remains non-trivial. For parameter estimation in the presence of hidden degrees of freedom, we have proposed a general framework, whose practical utility was demonstrated with a simple Langevin model.\cite{Miyazaki:2011} In this framework, the DRP plays a central role in the parameter estimation. We close the present article with remarks on the practical application of the proposed method to parameter estimation. Once the model is given, the path probability can be expressed in terms of the Onsager--Machlup path probability.\cite{Onsager:1953, Machlup:1953, Hunt:1981} Thus, in principle, we can apply a standard maximum likelihood estimation to the hidden trajectory. However, we find in general that the conventional optimization algorithm requires too high a computational cost to find the DRP. Here, we develop an effective approximation technique with the aid of perturbation theory. The schematic procedure of our method is as follows. First, on the basis of a rough estimate of the DRP, we solve one differential equation in the forward-time direction. Next, by substituting this solution, we solve another differential equation in the reverse-time direction and obtain a better estimate of the DRP. By repeating this procedure, we can systematically increase the accuracy of the estimate. Since we solve these differential equations by alternating between the forward and reverse (backward) directions, we name this algorithm the Go-and-Back method. In Sec.~II, we introduce the working model and discuss its validity. Then, we explain the problem with the gradient descent method and derive the Go-and-Back method. In Sec.~III, we examine two models of typical single-molecule experiments to investigate the effectiveness of our method. \begin{figure}[tbp] \centerline{\includegraphics{fig_model.eps}} \caption{Models for the numerical experiments. Model A: $x(t)$ stochastically switches between two local minima of $V^{\rm eff}(x)$ by means of thermal noise (equilibrium state). Model B: $x(t)$ stochastically steps down the tilted effective potential $V^{\rm eff}(x) \equiv V(x) - fx$, where $V(x)$ is a periodic function and $f$ is the driving force (non-equilibrium steady state).
The parameters in these models are chosen as follows. Model A: $V^{\rm eff}(x)$ is a polynomial function defined as $V^{\rm eff}(x) \equiv \sum_{i=1}^8 a_i x^i$, where $a_1 = 2.57$, $a_2 = 114.4$, $a_3 = -14.0$, $a_4 = -191.9$, $a_5 = -81.3$, $a_6 = 196.4$, $a_7 = 29.1$, and $a_8 = -51.5$.\footnote{In typical experiments, the bimodal distribution of the probe particle is best fitted by two Gaussian functions. Thus, we roughly fitted two Gaussian functions of different means and variances with an eighth-order polynomial function in order to reproduce experimental results.} Model B: $V(x) \equiv A \sin(2\pi x/l)$, where $A = 20$, $l = 1$, and $f = 40$. In both models, $\gamma = 2$, $\Gamma = 20$, $k = 160$, $k_{\mathrm{B}}T = 4.11$, and $\Delta t = 0.01$.} \label{f.model} \end{figure} \section{Framework} \subsection{Model} A protein typically consists of a few hundred amino acids. Therefore, in general, a huge number of degrees of freedom must be considered to study the molecular dynamics of proteins. However, recent experimental studies\cite{Adachi:2007, Toyabe:2010, Hayashi:2010, Shiroguchi:2007, Shiroguchi:2011} on motor proteins clarified that only a few degrees of freedom dominate the large-scale conformational changes. In the particular case of the rotary molecular motor F$_1$-ATPase, it has been experimentally shown that the energy conservation of the entire system is explained by considering only the one-dimensional (rotational) motion.\cite{Toyabe:2010, Hayashi:2010} In addition, normal mode analysis of motor proteins\cite{Cui:2004, Cui:2006, Togashi:2007, Togashi:2010} implies that the low-frequency modes (a few degrees of freedom) correspond to such large-scale motion (\textmu s--ms) and that they are distinct from the higher-frequency modes (a huge number of degrees of freedom) that may correspond to the local conformational fluctuations and the catalytic reactions at the active site (ps--ns). Therefore, within the typical time resolution of an optical microscope (\textmu s--ms), the large number of high-frequency modes is eliminated from the dynamics of proteins, and thus the dynamics can be approximated by low-dimensional overdamped Langevin equations. Taking into account the above facts, we consider a Langevin system that consists of two Brownian particles interacting with each other: \begin{eqnarray} \gamma \dot{x} &=& -\partial_x[V^{\rm eff}(x) + U(x,y)] + \xi, \label{e.model_x} \\ \Gamma \dot{y} &=& F(y) -\partial_y U(x,y) + \eta, \label{e.model_y} \end{eqnarray} where $\gamma$ and $\Gamma$ are the friction coefficients, and $\xi(t)$ and $\eta(t)$ are zero-mean white Gaussian noises with variances $2\gamma k_{\mathrm{B}}T$ and $2\Gamma k_{\mathrm{B}}T$, respectively. If $x(t)$ and $y(t)$ are regarded as the dominant degrees of freedom of the protein and the probe particle, respectively, Eqs.~(\ref{e.model_x}) and~(\ref{e.model_y}) can be considered as a crude model of single-molecule experiments.\cite{Julicher:1997, Reimann:2002, Miyazaki:2011} Note that we consider the simplest case where both $x(t)$ and $y(t)$ are one-dimensional variables, because typical single-molecule experiments monitor only one-dimensional motion; the following calculation is straightforwardly extended to higher-dimensional $x(t)$ and $y(t)$.
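The model equations are straightforward to integrate numerically. The following minimal Euler--Maruyama sketch generates trajectory pairs $[x,y]$ of the kind analyzed below; the force callables are placeholders for the derivatives of the potentials, and the parameter values may be taken, e.g., from the caption of Fig.~\ref{f.model}.
\begin{verbatim}
# Minimal Euler-Maruyama integration of the two-particle Langevin model.
# Veff_x, U_x, U_y, F are placeholder callables for the force terms.
import numpy as np

def simulate(x0, y0, dt, n_steps, gamma, Gamma, kBT,
             Veff_x, U_x, U_y, F, seed=0):
    rng = np.random.default_rng(seed)
    x = np.empty(n_steps + 1); x[0] = x0
    y = np.empty(n_steps + 1); y[0] = y0
    sx = np.sqrt(2.0 * kBT * dt / gamma)   # noise amplitude of x per step
    sy = np.sqrt(2.0 * kBT * dt / Gamma)   # noise amplitude of y per step
    for k in range(n_steps):
        x[k + 1] = (x[k] + dt * (-Veff_x(x[k]) - U_x(x[k], y[k])) / gamma
                    + sx * rng.standard_normal())
        y[k + 1] = (y[k] + dt * (F(y[k]) - U_y(x[k], y[k])) / Gamma
                    + sy * rng.standard_normal())
    return x, y
\end{verbatim}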
In the present model, $\gamma$ corresponds to the sum of the internal friction coefficient of the protein and the viscous friction coefficient between the protein and the medium, while $\Gamma$ corresponds to the viscous friction coefficient of the probe particle. For simplicity, we assume that $\gamma$ and $\Gamma$ are position-independent. $U(x, y)$ corresponds to the energy potential of the elastic linker between the protein and the probe particle. $V^{\rm eff}(x) \equiv V(x) - fx$ is the effective energy potential of the protein, where $V(x)$ corresponds to the energy landscape profile of the protein along the reaction coordinate and $f$ corresponds to the ``driving force'' provided by a catalytic reaction such as ATP hydrolysis. In actual experiments, a trapping force or a spatially constant load is sometimes applied to the probe particle. We also incorporate such an external force into the model equations, denoted by $F(y)$. We assume that $U(x,y)$, $V^{\rm eff}(x)$, and $F(y)$ are independent of $t$. Here, let us discuss the validity of the working model by using the two examples displayed in Fig.~\ref{f.model}. First, we consider a protein that has two chemical states, A and B, and stochastically goes back and forth between the two states. We suppose that the two chemical states exhibit different conformations, so that we can observe the switching motion by attaching a probe particle. Here, if the off rates of A (substrate) and B (product) from the protein are slower than the switching rates between the two states and the observation period, the entire system is well approximated as an equilibrium state. In addition, if the switching rates are slower than the global relaxation timescale of the protein structure, namely, if the transition rates are well described by Kramers' model,\cite{Kramers:1940} the entire dynamics can be modeled by a double-well potential $V(x)$ with $f = 0$ and $F = 0$ (Fig.~\ref{f.model}a). Next, we suppose that a molecular motor translocates in one direction with regular steps driven by catalytic reactions. For example, the rotary molecular motor F$_1$-ATPase rotates counterclockwise with regular 120$^\circ$ steps by hydrolyzing ATP.\cite{Noji:1997} In a typical observation period (several minutes), the entire system can be regarded as a nonequilibrium steady state, because the concentrations of ATP and the products (ADP and P$_i$) are almost constant in this period. If ATP molecules are abundant in the medium and the rate-limiting reaction of each step is the global conformational change of the protein by its thermal fluctuation, the phenomenological model can be described by a tilted periodic potential (Fig.~\ref{f.model}b). In this manner, the dynamics of proteins and attached probe particles can be approximated by the simple Langevin model under appropriate conditions. Of course, the present model has several limitations when applied to actual experiments, owing to the simple approximations, especially the position-independent $\gamma$ and the time-independent $V^{\rm eff}(x)$. The details will be discussed in Sec.~IV. In what follows, we assume that $y(t)$ is observed with sufficiently high temporal and spatial resolution, while $x(t)$ is hidden. We also assume that the entire set of system parameters is given. Then, our task here is to estimate the most probable trajectory of $x(t)$ from the trajectory of $y(t)$. \begin{widetext} \subsection{Path probability} We denote the set of the trajectories from time $t = 0$ to $t = \tau$ as $[x, y]$ and the entire set of system parameters as $\vec \Pi = (\Pi_1, \Pi_2, \cdots, \Pi_p)$. Given a value of $\vec \Pi$, we can calculate the path probability $P([x,y] | \vec \Pi)$ as follows.
First, $P([x,y] | \vec \Pi)$ is decomposed into the initial and transition probabilities as \begin{eqnarray} P([x,y]|\vec \Pi) = P_{\rm init}(x_0, y_0 | \vec \Pi)P_{\rm tr}((x_0, y_0) \to [x,y] | \vec \Pi), \label{e.path} \end{eqnarray} where $x_0$ and $y_0$ denote $x(0)$ and $y(0)$, respectively. Next, the transition probability can be expressed in terms of the Onsager--Machlup path-probability:\cite{Onsager:1953, Machlup:1953, Hunt:1981} \begin{eqnarray} P_{\rm tr}((x_0, y_0) \to [x,y] | \vec \Pi) &=& C \exp[-\beta S([x,y]; \vec \Pi)], \label{e.transition} \end{eqnarray} where $\beta^{-1}\equiv k_{\mathrm{B}}T$ and the action functional $S([x,y]; \vec \Pi)$ is defined as \begin{eqnarray} S([x,y]; \vec \Pi) &\equiv& \frac{1}{4\gamma} \int_0^{\tau} [\gamma \dot{x} + V^{\rm eff}_x(x) + U_x(x, y)]^2 \ \d t - \frac{k_{\mathrm{B}}T}{2\gamma} \int_0^{\tau} [V^{\rm eff}_{xx}(x) + U_{xx}(x, y)] \ \d t \nonumber \\ && + \frac{1}{4\Gamma} \int_0^{\tau} [\Gamma \dot{y} - F(y) + U_y(x, y)]^2 \ \d t - \frac{k_{\mathrm{B}}T}{2\Gamma} \int_0^{\tau} [ -F_y(y) + U_{yy}(x, y)] \ \d t, \label{e.action} \end{eqnarray} where the total and partial differentiations are denoted using the same notation such as ${V^{\rm eff}}' \equiv V^{\rm eff}_x$ and $\partial_{xx} U \equiv U_{xx}$. When the trajectory $[x, y]$ is time-discretized by $\Delta t$, the normalization constant becomes $C = [\sqrt{\gamma\Gamma}/ (4\pi k_{\mathrm{B}}T \Delta t) ]^N$, where $\tau \equiv N\Delta t$. Therefore, if we adopt an appropriate approximation for the initial distribution $P_{\rm init}(x_0, y_0 | \vec \Pi)$,\footnote{If the system is locally equilibrated, we can adopt the Boltzmann distribution.} we can compute the path probability by Eqs.~(\ref{e.path})--(\ref{e.action}). Once we obtain a concrete expression for the path probability, we can estimate the DRP by a standard maximum likelihood estimation with respect to $[x]$. In Bayesian statistics, the maximum likelihood estimator (MLE) is included in the maximum {\it a posteriori} (MAP) estimator as a special case. It has been clarified in our previous work that the MAP estimator does not coincide with the true trajectory of $x(t)$.\cite{Miyazaki:2011} However, when the motion of $x(t)$ is stepwise like that of molecular motors, we find that the MAP estimator seems to be a good estimator of the stepping motion of $x(t)$. To maintain consistency with our previous work and also for convenience when considering the application of the Go-and-Back method to parameter estimation, we refer to the MLE as the MAP estimator in the present article, which we denote by $[\hat{x}]$ in the following sections. \subsection{Gradient descent} The gradient descent method is the most widely used optimization algorithm. Before proceeding to the Go-and-Back method, let us consider why this standard method is inefficient for the present problem. To maximize $P([x,y]|\vec \Pi)$ with respect to $[x]$, an initial condition for $[x]$ and a boundary condition are required. Here, we adopt the Dirichlet boundary condition. (For the initial condition, for instance we can adopt $[x] = [y]$.) In this case, $x_0$ is fixed, and thus we avoid having to consider the initial distribution $P_{\rm init}(x_0, y_0 | \vec \Pi)$. Therefore, the maximization of $P([x,y]|\vec \Pi)$ is replaced by the minimization of action functional $S([x,y];\vec \Pi)$ with respect to $[x]$. 
Once we obtain a concrete expression for the path probability, we can estimate the DRP by a standard maximum likelihood estimation with respect to $[x]$. In Bayesian statistics, the maximum likelihood estimator (MLE) is included in the maximum {\it a posteriori} (MAP) estimator as a special case. It has been clarified in our previous work that the MAP estimator does not coincide with the true trajectory of $x(t)$.\cite{Miyazaki:2011} However, when the motion of $x(t)$ is stepwise, like that of molecular motors, we find that the MAP estimator seems to be a good estimator of the stepping motion of $x(t)$. To maintain consistency with our previous work, and also for convenience when considering the application of the Go-and-Back method to parameter estimation, we refer to the MLE as the MAP estimator in the present article, which we denote by $[\hat{x}]$ in the following sections. \subsection{Gradient descent} The gradient descent method is the most widely used optimization algorithm. Before proceeding to the Go-and-Back method, let us consider why this standard method is inefficient for the present problem. To maximize $P([x,y]|\vec \Pi)$ with respect to $[x]$, an initial condition for $[x]$ and a boundary condition are required. Here, we adopt the Dirichlet boundary condition. (For the initial condition, for instance, we can adopt $[x] = [y]$.) In this case, $x_0$ is fixed, and thus we avoid having to consider the initial distribution $P_{\rm init}(x_0, y_0 | \vec \Pi)$. Therefore, the maximization of $P([x,y]|\vec \Pi)$ is replaced by the minimization of the action functional $S([x,y];\vec \Pi)$ with respect to $[x]$. By introducing a ``virtual time'' $s$ and replacing $x(t)$ with $x(t,s)$, we obtain the MAP estimator $[\hat{x}]$ as the solution of the following partial differential equation in the limit of $s \to \infty$: \begin{eqnarray} \pder{x(t,s)}{s} &=& -\frac{\delta S([x,y];\vec \Pi)}{\delta x} \nonumber \\ &=& - \left. \pder{W(x,y)}{x} \right|_{x = x(t,s)} + \frac{\gamma}{2} \frac{\partial^2 x(t, s)}{\partial^2 t}, \label{e.relaxation} \end{eqnarray} where \begin{eqnarray} W(x, y) &\equiv& \frac{1}{4\gamma}[V^{\rm eff}_x(x) + U_x(x,y)]^2 - \frac{k_{\mathrm{B}}T}{2\gamma}[V^{\rm eff}_{xx}(x) + U_{xx}(x,y)]\nonumber \\ &&+ \frac{1}{4\Gamma}[ - F(y) + U_y(x,y)]^2 - \frac{k_{\mathrm{B}}T}{2\Gamma}[ -F_y(y) + U_{yy}(x,y)]. \label{e.w} \end{eqnarray} Here, if we regard $s$ in Eq.~(\ref{e.relaxation}) as a ``real'' time $t$, and $t$ in Eq.~(\ref{e.relaxation}) as a spatial coordinate $x$, the form of Eq.~(\ref{e.relaxation}) is similar to the time-dependent Ginzburg--Landau (TDGL) equation for non-conserved systems.\cite{Gunton:1983} When $W(x, y)$ takes the form of a multi-well function, it is known in general that, after the fast relaxation of the local fluctuations, the global relaxation (the kink motion) takes a very long time.\cite{Kawasaki:1982, Carr:1989} Therefore, in the presence of hopping motion of $x(t)$ between multiple local minima, the gradient descent method may require a large computational cost. \end{widetext} \subsection{Go-and-Back method} As mentioned in the previous section, the gradient descent method requires a large computational cost. On the basis of perturbation theory, and under the reasonable assumption that $\gamma/\Gamma \ll 1$, we solve this problem as shown below. To solve the Euler--Lagrange equation \begin{eqnarray} \frac{\delta S([x,y];\vec \Pi)}{\delta x} = \pder{W(x,y)}{x} - \frac{\gamma}{2} \ddot{x} = 0, \label{e.E-L} \end{eqnarray} we decompose the equation into two first-order ordinary differential equations by introducing $v(t)$, which satisfies \begin{eqnarray} \gamma \dot x = - [V^{\rm eff}_x(x) + U_x(x,y)] + v. \label{e.led} \end{eqnarray} $v(t)$ is a deterministic variable which mimics the random force $\xi(t)$ [see Eq.~(\ref{e.model_x})]. Substituting Eq.~(\ref{e.led}) into Eq.~(\ref{e.E-L}) and using Eq.~(\ref{e.model_y}), we finally get \begin{eqnarray} \gamma \dot{v} &=& [V^{\rm eff}_{xx}(x) + U_{xx}(x,y)] v - k_{\mathrm{B}}T[V^{\rm eff}_{xxx} (x) + U_{xxx} (x,y)] \nonumber \\ && + \frac{\gamma}{\Gamma} U_{xy}(x,y)[U_y(x,y) - U_y(x^*, y)] \nonumber \\ && + \frac{\gamma}{\Gamma} U_{xy}(x, y)\cdot \eta^*, \label{e.dot_v} \end{eqnarray} where $\cdot$ represents It\^{o}-type stochastic calculus\cite{Mortensen:1969} and $x^*$ represents the true position of $x$ at time $t$. Since $[y]$ is realized under the true $[x]$, we must use $x^*$ instead of $x$ in Eq.~(\ref{e.model_y}). Here, the unknown variables $x^*$ and $\eta^*$ are involved in the third and fourth terms on the right-hand side of Eq.~(\ref{e.dot_v}). Fortunately, the contributions of these two terms are negligible. First, $[U_y(x,y) - U_y(x^*, y)]$ is statistically close to $0$ if $[x] \simeq [\hat{x}]$. Similarly, $\left\langle U_{xy}(x, y)\cdot \eta^* \right\rangle = 0$; this holds independently of $[x]$. Moreover, considering the typical case of single-molecule experiments, we can naturally assume $\gamma/\Gamma \ll 1$.
Therefore, in principle, we may obtain a good approximate solution of $[\hat x]$ by solving the following differential equations simultaneously: \begin{eqnarray} \gamma \dot x &=& -G_x(x,y) + v, \label{e.led_g} \\ \gamma \dot{v} &=& G_{xx}(x,y) v - k_{\mathrm{B}}T G_{xxx}(x,y), \label{e.v_g} \end{eqnarray} where $G(x,y) \equiv V^{\rm eff}(x) + U(x,y)$. Here, Eq.~(\ref{e.led}) is rewritten as Eq.~(\ref{e.led_g}), and Eq.~(\ref{e.dot_v}) is approximated by Eq.~(\ref{e.v_g}). However, a difficulty remains: $G_{xx}(x,y)$ on the right-hand side of Eq.~(\ref{e.v_g}) is positive in the typical case when $x(t)$ is fluctuating around the bottom of the energy potential. Thus, as soon as we numerically integrate Eqs.~(\ref{e.led_g}) and~(\ref{e.v_g}), $v(t)$ becomes unstable and finally diverges. In contrast, if we try to solve these equations in the reverse-time direction, $x(t)$ immediately becomes unstable for the same reason.\footnote{When $x$ is close to the bottom of the effective potential, $-G_x(x, y) \simeq - G_{xx}(x_{\rm b}, y) (x - x_{\rm b})$, where $x_{\rm b}$ represents the bottom position. Since $G_{xx}(x_{\rm b}, y)$ is positive, Eq.~(\ref{e.led_g}) will be destabilized if the equation is integrated in the reverse-time direction.} To overcome this initial-value problem, we exploit the fact that the second term on the right-hand side of Eq.~(\ref{e.v_g}) is small. In actual experiments, the data are discretized with $\Delta t$. This time interval is sub-millisecond in typical cases, which is much longer than the timescale of the local relaxation of proteins (ps--ns). Within this coarse timescale, the rough energy landscape is effectively smoothed. In addition, $x(t)$ spends most of the time in the potential well. In this region, the second derivative of the effective potential $G_{xx}(x, y)$ is dominant and the higher-order derivatives are negligible. Hence, the second term in Eq.~(\ref{e.v_g}) is small for most of the time. We introduce a perturbation parameter $\varepsilon$,\footnote{$\varepsilon$ is a nondimensional parameter, which guarantees the smallness of the second term in the right-hand side of Eq.~(\ref{e.v_g}). After the calculation, we substitute $\varepsilon = 1$ into the result.} and rewrite Eq.~(\ref{e.v_g}) as \begin{eqnarray} \gamma \dot{v} &=& G_{xx}(x,y) v - \varepsilon \ k_{\mathrm{B}}T G_{xxx}(x,y). \label{e.v_ge} \end{eqnarray} We expand $x(t)$ and $v(t)$ in power series of $\varepsilon$, and we define the $i$-th order approximation as \begin{eqnarray} x_{(i)}(t) &\equiv& \sum_{j=0}^i \varepsilon^j x_{(j)}(t), \label{e.x_purt} \\ v_{(i)}(t) &\equiv& \sum_{j=0}^i \varepsilon^j v_{(j)}(t). \label{e.v_purt} \end{eqnarray} Note that the present definition of the $i$-th term differs from the usual one: the $i$-th order approximation includes all terms from orders $0$ to $i$. This definition is crucial for dramatically simplifying the algorithm. In addition, we introduce a proper approximation for the zeroth-order term of $v(t)$: \begin{eqnarray} v_{(0)}(t) = 0. \label{e.v_0} \end{eqnarray} (For the reason, see Appendix A.) Then, once we adopt appropriate boundary conditions for $x_{(i)}(0)$ and $v_{(i)}(\tau)$, the higher-order approximate solutions of $[\hat{x}]$ can be obtained successively as follows. (For details of the derivation, see Appendix A.) \newpage \textbf{Go-and-Back method:} \begin{itemize} \item[A.]
Using $v_{(i)}(t)$, we solve \begin{eqnarray} \gamma \dot{x}_{(i)} &=& -G_x (x_{(i)},y) + v_{(i)} + O(\varepsilon^{i+1}) \label{e.gb_x} \end{eqnarray} \underline{from $t = 0$ to $t = \tau$} and obtain $x_{(i)}(t)$. \item[B.] Using $x_{(i)}(t)$, we solve \begin{eqnarray} \gamma \dot{v}_{(i+1)} &=& G_{xx}(x_{(i)},y) v_{(i+1)} - \varepsilon \ k_{\mathrm{B}}T G_{xxx}(x_{(i)},y)\nonumber \\ && + O(\varepsilon^{i+2}) \label{e.gb_v} \end{eqnarray} \underline{from $t = \tau$ to $t = 0$} and obtain $v_{(i+1)}(t)$. \item[C.] Alternate between Step A and Step B. \end{itemize} \begin{figure*}[t] \centerline{\includegraphics{fig_tc_dw_gb_arxiv.eps}} \caption{Estimation result of model A by the Go-and-Back method. (top) The trajectory of $y(t)$ [red line]. (bottom) The true trajectory of $x(t)$ [gray line], and the estimated trajectory of $x(t)$, denoted as $\hat{x}(t)$ [blue line]. The iteration number of the optimization process is $i=50$, and the data length is $\tau = 100$. For the present parameter setting, the relaxation timescale of $x(t)$ is shorter than $\Delta t$. Therefore, we use the linearly interpolated $[y]$ with the step size $\Delta t/10$ to stably integrate Eqs.~(\ref{e.gb_x}) and (\ref{e.gb_v}).} \label{f.tc_dw_gb} \end{figure*} \begin{figure*}[tbp] \centerline{\includegraphics{fig_tc_step_gb_arxiv.eps}} \caption{Estimation result of model B by the Go-and-Back method. (top) The trajectory of $y(t)$ [red line]. (bottom) The true trajectory of $x(t)$ [gray line] and the estimated trajectory $\hat{x}(t)$ [blue line]. $i=50$ and $\tau = 50$. For the present parameter setting, the relaxation timescale of $x(t)$ is shorter than $\Delta t$. Therefore, we use the linearly interpolated $[y]$ with the step size $\Delta t/10$ to stably integrate Eqs.~(\ref{e.gb_x}) and (\ref{e.gb_v}).} \label{f.tc_step_gb} \end{figure*} \section{Examples} To investigate the practical utility of the Go-and-Back method, we examine the two models illustrated in Fig.~\ref{f.model}. We numerically integrate the model equations [Eqs.~(\ref{e.model_x}) and~(\ref{e.model_y})] from $t =0$ to $t=\tau$ and obtain the true trajectory set $[x,y]$. Then, we assume that we can only monitor $[y]$, and estimate $[x]$ from $[y]$. Figures~\ref{f.tc_dw_gb} and~\ref{f.tc_step_gb} display the estimation results for model A and model B obtained by the Go-and-Back method, respectively. Although the estimated trajectory of $x(t)$, denoted as $[\hat{x}]$, does not coincide with the true trajectory $[x]$, $[\hat{x}]$ seems to be a good estimate in both examples. In particular, the stepwise motion of $x(t)$ in model B is precisely reproduced from the noisy $y(t)$ (Fig.~\ref{f.tc_step_gb}). \begin{figure}[tbp] \centerline{\includegraphics{fig_dw_gb_BC}} \caption{Effect of the boundary condition. The boundary condition $\{x_{(i)}(0), v_{(i)}(\tau)\}$ of model A is varied randomly ($n=20$), and the standard deviations of $\hat{x}(t)$ at each $t$ are plotted. The diminishing times at both boundaries are evaluated by fitting exponential functions. $x_{(i)}(0)$ is varied by Gaussian noise with mean $y(0)$ and S.D. $=0.5$ [half the distance between the two stable states]. $v_{(i)}(\tau)$ is varied by zero-mean Gaussian noise with variance $2\gamma k_{\mathrm{B}}T$ [the same power as $\xi(t)$].} \label{f.BC} \end{figure} To apply the Go-and-Back method, a boundary condition is required. (The initial condition is included in the algorithm [Eq.~(\ref{e.v_0})]; its validity will be discussed below.) For both examples (Figs.~\ref{f.tc_dw_gb} and \ref{f.tc_step_gb}), $\{ x_{(i)}(0), v_{(i)}(\tau)\} = \{y(0), 0\}$ is adopted.
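To make Steps A--C concrete, the following minimal sketch implements the iteration with a simple Euler discretization, using the boundary conditions $\{ x_{(i)}(0), v_{(i)}(\tau)\} = \{y(0), 0\}$ adopted above; the derivative callables of $G(x,y)$ are placeholders.
\begin{verbatim}
# Minimal Euler-discretized sketch of the Go-and-Back iteration (Steps A-C).
# Gx, Gxx, Gxxx are placeholder callables for derivatives of
# G(x, y) = V_eff(x) + U(x, y); y is the observed trajectory with step dt.
import numpy as np

def step_A(y, v, dt, gamma, Gx):
    # Step A: integrate gamma*xdot = -Gx(x, y) + v forward, x(0) = y(0).
    x = np.empty_like(y)
    x[0] = y[0]
    for k in range(len(y) - 1):
        x[k + 1] = x[k] + dt * (-Gx(x[k], y[k]) + v[k]) / gamma
    return x

def step_B(x, y, dt, gamma, kBT, Gxx, Gxxx):
    # Step B: integrate gamma*vdot = Gxx*v - kBT*Gxxx backward, v(tau) = 0.
    v = np.empty_like(y)
    v[-1] = 0.0
    for k in range(len(y) - 1, 0, -1):
        v[k - 1] = v[k] - dt * (Gxx(x[k], y[k]) * v[k]
                                - kBT * Gxxx(x[k], y[k])) / gamma
    return v

def go_and_back(y, dt, gamma, kBT, Gx, Gxx, Gxxx, n_iter=50):
    v = np.zeros_like(y)                    # zeroth order: v_(0)(t) = 0
    for _ in range(n_iter):
        x = step_A(y, v, dt, gamma, Gx)     # x_(i) from v_(i)
        v = step_B(x, y, dt, gamma, kBT, Gxx, Gxxx)  # v_(i+1) from x_(i)
    return step_A(y, v, dt, gamma, Gx)      # final forward pass: [x_hat]
\end{verbatim}
In practice, as noted in the captions of Figs.~\ref{f.tc_dw_gb} and \ref{f.tc_step_gb}, the observed trajectory $[y]$ may have to be interpolated on a finer grid to integrate these equations stably.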
To investigate the stability of the Go-and-Back method, we vary the boundary condition of model A at random. As a result, the effect of the boundary condition diminishes almost instantly (Fig.~\ref{f.BC}). Less than 0.1\% of the total trajectory length at both ends is affected by the boundary condition. Thus, the estimates are almost insensitive to the choice of the boundary condition. The Go-and-Back method allows us to estimate the diminishing time scale from Eq.~(\ref{e.led_g}) by a second-order approximation of $G(x, y)$ around the bottom of the effective potential. For the present model, the diminishing time scale is approximated as $\tau_{\rm dim} \sim 0.001$,\footnote{$\tau_{\rm dim} \sim \gamma / G_{xx}(x_{\rm b}) = \gamma/(a_1 + k)$, where $x_{\rm b}$ represents the bottom position of the effective potential.} which is almost consistent with the numerical results (Fig.~\ref{f.BC}). \begin{figure}[tbp] \centerline{\includegraphics{fig_dw_gb_IC_hist}} \caption{Effect of the initial condition. Zero-mean white Gaussian noise with the variance $0.25\times 2\gamma k_{\mathrm{B}}T$ is added to the original initial condition of the Go-and-Back method [Eq.~(\ref{e.v_0})], and the difference between the estimate from the noise-added initial condition $\hat{x}_{\rm noise}(t)$ and the original estimate $\hat{x}(t)$ is evaluated at each $t$ ($10^{4}$ data points). See also Fig.~S1.} \label{f.IC} \end{figure} We also examine the validity of the initial condition adopted in the algorithm. We apply noise to the original initial condition [Eq.~(\ref{e.v_0})] and evaluate the dependence of the estimate on it. As a result, the original estimate and the estimate obtained from the noise-added initial condition overlap almost completely (Fig.~\ref{f.IC} and Fig.~S1). The difference is $10^{-4}$ on average, which is $10^4$ times smaller than the distance between the two stable states, and $10^3$ times smaller than the thermal fluctuation of $x(t)$.\footnote{The standard deviation of the thermal fluctuation of $x(t)$ is $\sqrt{2k_{\mathrm{B}}T \Delta t / \gamma}$ in the case of discrete $t$. For model A, S.D. $= 0.2$.} We note that the Go-and-Back method is based on the Euler-Lagrange equation [Eq.~(\ref{e.E-L})], which means we cannot guarantee that the obtained estimate is the global minimum. However, such robustness against the choice of the initial condition suggests that the estimate may be the globally optimal solution, at least for the present model. To conclude, the Go-and-Back method incorporates an appropriate initial condition that yields a stable solution, and the method is quite robust against the choice of the boundary condition. \begin{figure}[tbp] \centerline{\includegraphics{fig_s_dw}} \caption{Relaxation properties of the action functional $S([x,y] ; \vec \Pi)$ in the optimization process. (left) Go-and-Back method. The data are taken from the numerical experiment shown in Fig.~\ref{f.tc_dw_gb} (model A). (right) Gradient descent method. We use the same $[y]$ as for the Go-and-Back method (Fig.~\ref{f.tc_dw_gb}, top). For the initial condition, we test the raw data of $[y]$ and the moving-averaged (MA) trajectories of $[y]$ with bin sizes 20, 50, 100, and 200. (The time lengths are 0.2, 0.5, 1, and 2, respectively.)} \label{f.s_dw} \end{figure} Next, we compare the Go-and-Back method with the conventional gradient descent method.
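Before turning to the numerical comparison, we note that the gradient descent baseline is conceptually simple; a minimal sketch of our own follows (the discretized action $S([x,y];\vec \Pi)$ enters only through the callable \texttt{action}, and its gradient is taken here by finite differences, whereas the FTCS scheme used below evaluates the functional derivative directly):
\begin{verbatim}
import numpy as np

def gradient_descent(x0, y, action, h=1e-4, n_iter=10**5, dx=1e-6):
    # x0: initial guess for [x], e.g. raw or moving-averaged [y]
    x = x0.copy()
    for _ in range(n_iter):
        g = np.zeros_like(x)
        for k in range(1, len(x) - 1):   # Dirichlet ends stay fixed
            xp = x.copy(); xp[k] += dx
            xm = x.copy(); xm[k] -= dx
            g[k] = (action(xp, y) - action(xm, y)) / (2 * dx)
        x -= h * g                       # descend along -dS/dx
    return x
\end{verbatim}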
We use the same $[y]$ as for the Go-and-Back method (Fig.~\ref{f.tc_dw_gb}, top) and adopt the FTCS (forward-time centered-space) scheme\cite{Press:2007_chap20} for the minimization of $S([x,y];\vec \Pi)$. To apply the FTCS scheme, both initial and boundary conditions are required. For the boundary condition, we adopt the Dirichlet condition $\{\hat{x}_0, \hat{x}_\tau\} = \{y_0, y_{\tau}\}$, and for the initial condition, we adopt the raw data of $[y]$ or moving-averaged trajectories of $[y]$ with different bin sizes, in order to investigate the effect of the initial condition. Figure~\ref{f.s_dw} compares the relaxation properties of $S([x,y];\vec \Pi)$ for the two methods. In the case of the Go-and-Back method, after a quick relaxation the value of $S$ fluctuates slightly around the minimum value. In some cases, we can recognize that the method overcomes a large barrier of $S([x,y] ; \vec \Pi)$ (Figs.~S2 and S3).\footnote{The Go-and-Back method is not an algorithm that searches for the global minimum of $S([x,y] ; \vec \Pi)$ in $[x]$-space, but one that systematically calculates the higher-order approximation of the solution of the Euler-Lagrange equation [Eq.~(\ref{e.E-L})]. Therefore, a monotonic decrease of $S([x,y] ; \vec \Pi)$ does not mean that the estimate gets stuck in a local minimum.} For the present model, $i=15$ is enough for the solution to converge. As one expects, the gradient descent method requires far more iteration steps for the relaxation than the Go-and-Back method (the relaxation is $10^4$ times slower). Moreover, the converged value of $S([x,y]; \vec \Pi)$ in the case of the gradient descent method depends strongly on the initial condition. In the present case, roughly smoothed data (MA 50) yield the smallest value of $S([x,y];\vec \Pi)$, but it is still slightly larger than that of the Go-and-Back method. Such a result implies that the optimization processes are trapped in local minima. Indeed, $\hat{x}(t)$ varies among the initial conditions. Two examples are shown in Fig.~\ref{f.tc_dw_ftcs}. Even for the best optimized solution of the gradient descent method (MA 50), the motion of $x(t)$ cannot be precisely estimated (Fig.~\ref{f.tc_dw_ftcs}, bottom). We also apply the gradient descent method to model B and obtain features similar to those observed for model A (Figs.~S3 and S4). \begin{figure*}[t] \centerline{\includegraphics{fig_tc_dw_ftcs_arxiv.eps}} \caption{Two examples of the estimation results of model A by the gradient descent method. The true trajectories of $x(t)$ [gray line] and the estimated trajectories $\hat{x}(t)$ [blue line] are shown. (top) The raw data of $[y]$ are adopted for the initial condition. (bottom) A moving-averaged trajectory of $[y]$ with bin size 50 is adopted for the initial condition. The orange arrows mark the regions where $x(t)$ cannot be precisely estimated. In both figures, $i=10^5$.} \label{f.tc_dw_ftcs} \end{figure*} \section{Conclusion and Discussion} We developed a method to estimate the most probable trajectory of the hidden variable from the trajectory of the probe particle. The method naturally incorporates Langevin dynamics. Therefore, although the difficulty of the model settings still remains, if we carefully choose the model, we may extract much more information from experimental data than by the conventional step analysis. By use of simple models of single-molecule experiments, we numerically verified that the proposed method provides reasonable estimates.
Compared to the conventional gradient descent method, our proposed method reduces the computational cost by more than $10^3$-fold. In the case of the gradient descent method, the choice of the initial conditions is crucial for obtaining accurate estimates: a wrong choice leaves the optimization process trapped in local minima. Although several techniques, for example simulated annealing, can be adopted to overcome this problem, a case-by-case treatment is required to improve efficiency. In contrast, our method naturally incorporates an appropriate initial condition. Besides the simplicity of its implementation, the proposed method only requires a boundary condition, and the results are quite robust with respect to this choice. Therefore, our method is easy to use, which is important for general users. As we mentioned above, the present model has several limitations when applied to actual experiments. In the present model, it is assumed that the friction coefficient of the protein, denoted by $\gamma$, is independent of $x(t)$. This term originates from the internal friction of the protein and the viscous friction between the protein and the medium. In general, the internal friction may vary along the reaction coordinate. Indeed, position-dependent coefficients have been obtained from model systems and several kinds of proteins by simulation.\cite{Hummer:2005b, Best:2006, Yang:2006, Best:2010} Therefore, although the constant approximation is not valid for all kinds of proteins, our assumption may be appropriate for a protein that has a large globular domain that tilts or shifts as a whole in its structural transition, because the internal friction is then negligible compared to the large viscous friction. In addition, the present model assumes that the effective energy potential of the protein $V^{\rm eff}(x)$ is time independent. However, recent single-molecule studies on cholesterol oxidase\cite{Lu:1998} and $\beta$-galactosidase\cite{English:2005} showed direct evidence that the enzymatic turnover events are not a Markovian process but are correlated with the previous history. The result indicates that the protein has slow conformational fluctuations, and this time dependence should appear in $V^{\rm eff}(x)$. Although, as far as we know, similar slow dynamics have never been observed in motor proteins, one has to verify in advance whether such slow dynamics are present in the protein of interest. By contrast, if the substrate of the enzymatic reaction is not abundant in the medium, so that the rate-limiting step of the turnover is substrate binding, then at least on and off states should be incorporated into the model. Namely, $V^{\rm eff}(x)$ should have two (or more) states and switch stochastically.\cite{Julicher:1997, Reimann:2002, Harada:2005a, Harada:2005b} For this case, a new efficient estimation algorithm must be developed on the basis of the switching state model. In actual applications of our model to experiments, we must estimate the parameters of the entire model in advance. We have proposed a general framework of parameter estimation in the presence of hidden degrees of freedom.\cite{Miyazaki:2011} According to this framework, which is based on Bayesian statistics, we can precisely estimate the model parameters by maximizing the marginalized path probability with respect to the parameters. The marginalized path probability, called a marginal likelihood, is calculated by integrating over all possible trajectories of the hidden variables.
In this calculation, the DRP, or the MAP estimator in the language of Bayesian statistics, plays the central role. For the analytical approach, the Wentzel--Kramers--Brillouin (WKB) approximation around the MAP estimator may work well when the temperature of the system is sufficiently low.\cite{Hunt:1981, Miyazaki:2011} For the numerical approach, we can utilize the MAP estimator as the initial condition of the Markov-chain Monte Carlo (MCMC) method\cite{Bishop:2006, Dellago:1998}. Using the proposed method, the next step is to investigate how effectively we can obtain reasonable estimates of the model and the hidden trajectories simultaneously. \begin{acknowledgments} The authors thank M.~Y.~Matsuo and S.~Toyabe for fruitful discussions. The authors also thank M.~Opper, M.~Sano, M.~Ichikawa, and K.~Yoshikawa for helpful comments, and K.~Shiroguchi, M.~Sugawa, T.~Nishizaka, and T.~Okamoto for discussions on single-molecule experiments. This work was supported by the JSPS Research Fellowships for Young Scientists, No.~20-4954 (to M.~M.), and a grant from MEXT, No.~20740239 (to T.~H.). \end{acknowledgments}
\section{MANUSCRIPT} \textbf{The field of plasmonics \cite{barnes_surface_2003} offers a route to control light fields with metallic nanostructures through the excitation of Surface Plasmon Polaritons (SPPs) \cite{ozbay_plasmonics:_2006,polman_applied_2008}. These surface waves, bound to a metal-dielectric interface, tightly confine electromagnetic energy \cite{schuller_plasmonics_2010}. Active control over SPPs has potential for applications in sensing \cite{prodan_hybridization_2003}, photovoltaics \cite{atwater_plasmonics_2010}, quantum communication \cite{altewischer_plasmon-assisted_2002,akimov_generation_2007}, nano circuitry \cite{ebbesen_surface-plasmon_2008,engheta_circuits_2007}, metamaterials \cite{liu_plasmonic_2009,ergin_three-dimensional_2010} and super-resolution microscopy \cite{fang_sub-diffraction-limited_2005}. We achieve here a new level of control of plasmonic fields using a digital spatial light modulator. Optimizing the plasmonic phases via feedback, we focus SPPs at a freely pre-chosen point on the surface of a nanohole array with high resolution. Digital addressing and scanning of SPPs without mechanical motion will enable novel interdisciplinary applications of advanced plasmonic devices in cell microscopy, optical data storage and sensing.} Positioning and focusing waves in transparent media requires fine tuning of the phase profile so that waves converge and constructively interfere at a point. A conventional lens uses refraction to redirect the waves to the focus and a well-designed lens shape to align the phase vectors of these waves. Due to the fixed geometric shape of the lens, the position of the focus can only be controlled by mechanically moving the lens or changing the angle of incidence of the incident beam. Focusing and controlling the position where waves constructively interfere in complex structures require new, more versatile methods. Optical wavefront shaping has become a popular method that makes it possible to focus light even inside completely disordered materials \cite{VellekoopOL08, cizmar_in_2010}. Positioning and focusing SPP waves in a controlled way is important for nanophotonic applications. To date, plasmonics offers only limited flexibility in the control of light fields: as with the conventional lens, the geometry is typically fixed, so for a given optical frequency the locations of optical field enhancement are also fixed. Recently, some breakthroughs have been made in active control, in which the intensity of the light fields is influenced in time, either through pump-probe \cite{dionne_plasmostor:metaloxidesi_2009, macdonald_ultrafast_2009, utikal_all-optical_2010} or through coherent control \cite{durach_toward_2007}. Only in specific cases does this control also lead to spatial selectivity \cite{aeschlimann_adaptive_2007, li_highly_2008, volpe_controllingoptical_2009}. However, in these experiments the spatial selectivity is limited to a few modes predefined by the sample structure. We demonstrate here a new level of control of SPP wavefronts. This control allows us to tune any SPP interference phenomenon with unprecedented flexibility. Specifically, we show that we can generate and focus SPPs and scan the focus on a nanohole array with an electronically controlled spatial light modulator and a standard helium-neon laser. Because the light-to-SPP conversion process is coherent, the structured optical wavefront is projected onto the SPP wavefront.
This conversion gives us full phase control of the SPPs, allowing us to shape the SPP wavefronts digitally. Because we use optimization loops to determine the necessary wavefront, our method is applicable to any plasmonic structure. Such flexible and digital control of SPPs is a large step towards interdisciplinary applications of advanced plasmonics. The sample is a nanohole array, similar to those typically used for Enhanced Optical Transmission (EOT) experiments \cite{garcia-vidal_light_2010} and recently suggested for super-resolution \cite{sentenac_subdiffraction_2008}. Our sample is composed of a 200 nm gold film deposited on top of a 1 mm BK7 glass substrate. The array covers an area of 30 x 30 ${\mu}\mathrm{m}^{2}$ and the hole period is 450 nm. Square holes were milled with sides of 177 nm. The SPP wavelength at the gold-air interface for incident radiation of $\lambda_0$~=~633 nm is given by \begin{equation}\label{equation1} \lambda_{S}= \lambda_0 \mathrm{Re}\sqrt{\frac{\varepsilon_m+\varepsilon_d}{\varepsilon_m\varepsilon_d}}, \end{equation} with $\varepsilon_m$ and $\varepsilon_d$ the dielectric constants of gold and air, respectively. Using tabulated bulk values for $\varepsilon_m$ \cite{johnson_optical_1972}, we found $\lambda_{S}$~=~600 nm. Our aim is to digitally control the amplitude and phase of SPPs locally on the surface of the sample. This control is achieved by imaging a Spatial Light Modulator (SLM) onto the surface of the sample, thus mapping each unit cell (pixel) of the SLM to a corresponding area on the sample. Amplitude and phase control of the SLM is achieved via the 4-pixel technique \cite{vanPuttenSLM08}, where four adjacent pixels are grouped into a superpixel. We divide the SLM into 32 x 32 superpixels and independently control the amplitude and phase of each superpixel. \begin{figure}[t!!!] \centering\includegraphics[width=0.45\textwidth]{Figure1.pdf}\\ \caption{Experimental setup. The Spatial Light Modulator (SLM) is projected onto the sample via the two-lens imaging system $L_1$ and $L_{OBJ}$. The demagnification is 650 times. Every image point on the sample is formed with a different average angle of incidence (shown for three pixels). The amplitude and phase of each pixel of the SLM are independently controlled with a computer. The sample is a nanohole grating engraved on a gold film. The blue arrows illustrate the propagation of Surface Plasmon Polaritons (SPPs) launched from two pixels of the SLM. The amplitudes and phases of the SPPs are effectively clamped to those of the launching pixels. The surface of the sample is imaged onto the camera via $L_{OBJ}$ and $L_2$.} \label{Figure_setup} \end{figure} A diagram of the setup is given in Fig.~\ref{Figure_setup}. Based on SPP momentum conservation, we designed the imaging system such that plasmons are launched toward the center. The light reflected from the sample is imaged on the detector. This light includes both the direct reflection of the illuminating beam and the scattered light from SPPs. Thus the resulting image is a combination of both the SLM amplitude pattern and the generated SPPs. To separate plasmonic from optical effects, we spatially design the amplitude of the incident light to define four bright plasmon launching areas and one central dark arena. Any intensity detected inside the arena is purely plasmonic. The designed amplitude profile for focusing experiments is a four-block pattern of fully ``on" (A=1) superpixels on an ``off" (A=0) background.
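To give an impression of how such a profile is composed, the following fragment builds a 32 x 32 superpixel amplitude mask with four fully ``on" launching blocks around a dark central arena (a sketch of our own; the block size follows the 10~x~8 superpixel blocks quoted below, while the exact placement is our own choice):
\begin{verbatim}
import numpy as np

n = 32
A = np.zeros((n, n))              # amplitude per superpixel (0 = off)
phase = np.zeros((n, n))          # uniform phase before optimization

def block(mask, row, col, height, width):
    mask[row:row + height, col:col + width] = 1.0

block(A, 2, 11, 8, 10)            # top launching block
block(A, 22, 11, 8, 10)           # bottom
block(A, 11, 2, 10, 8)            # left
block(A, 11, 22, 10, 8)           # right
field = A * np.exp(1j * phase)    # complex field written to the SLM
\end{verbatim}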
The resulting illuminated areas on a bare gold substrate are visible in Fig.~\ref{amplitudes}a. The overall phase is constant. Each ``on" block is 10~x~8 superpixels in size. Because no SPPs are launched on bare gold due to momentum mismatch, the image of Fig.~\ref{amplitudes}a is used as a background reference and as a measure of the contrast ratio between the ``on" and ``off" areas. The observed contrast is nearly three orders of magnitude, confirming that no photons enter the SPP arena. When the designed amplitude profile is projected onto the hole array, SPPs propagate into the central dark arena. The nanohole array has a dual role: it is used to launch SPPs (bright rectangles in Figs.~\ref{amplitudes}b-\ref{amplitudes}d) and to visualize the launched SPPs through their out-of-plane scattering in the central arena. The SPPs are launched only along the direction of the incident polarization, as seen in Figs.~\ref{amplitudes}c and \ref{amplitudes}d, consistent with expectations \cite{van_oosten_nanohole_2010}. In Fig.~\ref{amplitudes}b, the sample is illuminated both by the structured amplitude profile and by an additional white light source, revealing both the hole array grating itself (the fast amplitude modulation) and the laser light. This figure also demonstrates the optical resolution of the setup: sufficient to resolve the presence of the hole array pattern but not the shape of the holes. \begin{figure}[t] \centering\includegraphics[width=0.4\textwidth]{Figure2.pdf}\\%{amplitude02-OneSample.eps}\\ \caption{Amplitude projection for a uniform phase profile (no optimization). In each image the bright rectangles are the illuminated (amplitude=1) SPP launching areas. The SPPs are observed in the dark (amplitude=0) central SPP arena. (\textbf{a}) Bare gold reference (no SPPs launched). The dashed lines demarcate the SLM area. (\textbf{b})-(\textbf{d}) SLM projected on the 450 nm hole array. (\textbf{b}) SLM image plus white light illumination to observe the hole array. (\textbf{c}) SPPs launched toward the central SPP arena. (\textbf{d}) Vertical polarization of incident light (horizontal polarization for the other images).} \label{amplitudes} \end{figure} When two counterpropagating SPP waves interfere, a standing wave pattern of intensity is created. The observed period of the fringe pattern is clearly not half the SPP wavelength, as would be expected for SPPs propagating on an ideally smooth and non-corrugated sample. The measured fringe period is $1\pm0.05~\mu$m. We attribute the fringe patterns to a Moir\'e effect between the true standing SPP wave and the periodicity of the arrays. Now we present experiments of SPP focusing with digital phase control. The achieved SPP focusing is shown in Fig.~\ref{optimized}. We use a phase optimization loop \cite{VellekoopOC2008} to focus SPPs at a pre-chosen target. This loop yields the optimal phase $(\widetilde{\phi})$ for each superpixel as well as its relative contribution $(C)$ to the focus. The amplitude profile is the same as for the bare gold case, with four launching areas and a central dark arena where only SPPs can propagate. The incident polarization is diagonal with respect to the grating lines, so as to have all available angles ($2\pi$ range) contributing to the focus, thereby maximizing the NA and resolution. \begin{widetext} \begin{figure}[t!]\ \centering \includegraphics[width=0.9\textwidth]{Figure3.pdf}\\%{PhaseOptimizationFinalOther.eps} \caption{Dynamic focusing of SPPs.
(\textbf{a}) The relative phases of the superpixels are optimized to focus SPPs in the center of the SPP arena. The intensity in the target spot is purely plasmonic and 20 times higher than the average background of an unstructured plasmonic wavefront. The focus size is diffraction limited by the detecting optics. (\textbf{b}) and (\textbf{c}) Demonstration of SPP focusing on freely chosen targets in the SPP arena. (\textbf{d}) Background reference of an unstructured SPP wavefront (uniform phase profile). In achieving the focus of image (\textbf{a}) we recorded the maps of optimal phases (\textbf{a1}) and of relative contributions (\textbf{a2}) of the superpixels. Due to reciprocity these maps coincide with the phase and amplitude Green's function of an SPP source at the target. The amplitude map shows the decaying nature of the SPPs. (\textbf{e}) Quantitative analysis of the SPP focusing showing vertical cuts of (\textbf{b}) and (\textbf{d}). These cuts are normalized to the peak intensity of the bare gold case, also included in the graph.} \label{optimized}\end{figure} \end{widetext} Successful focusing at the center of the SPP arena is shown in Fig.~\ref{optimized}a. The structured SPP wavefront produces an intensity in the designated target that is at least 20 times higher than the average SPP background of an unstructured wavefront. The measured size of the plasmonic focus is 420 nm, consistent with the diffraction limit of our optics. The flexibility of the method (scanning the focus) is demonstrated in Figs.~\ref{optimized}b and \ref{optimized}c, which show the SPP focus relocated without mechanical motion to controlled positions in the plasmonic arena. Thus the plasmonic arena is our field-of-view. We interpret SPP focusing in terms of Green's functions connecting the electric fields at any two points. We idealize every ``on" superpixel $n$ as a light source positioned at $\textbf{r}_{n}$ with phase $\phi(\textbf{r}_n)$ and strength $A(\textbf{r}_n)=1$. The amplitude of the electric field (normalized to the incident field) at the target $\textbf{r}_{0}$ due to these sources is \begin{equation}\label{equation3} E\left(\textbf{r}_{0},\left\{\phi(\textbf{r}_n)\right\}\right)= \sum_{n=1}^{N}g\left(\textbf{r}_{0},\textbf{r}_{n}\right)\exp{\left[i\phi(\textbf{r}_{n})\right]}, \end{equation} where $g\left(\textbf{r}_{0},\textbf{r}_{n}\right)$ is the Green's function connecting each source to the target, and the sum runs over all the ``on" superpixels of the amplitude profile. The target field is maximal when all source contributions are in phase. The optimal phase for superpixel $n$ is $\widetilde{\phi}(\textbf{r}_n,\textbf{r}_0)=-\arg[g\left(\textbf{r}_{0},\textbf{r}_{n}\right)]$. Imposing this phase on the superpixel yields an intensity increase of $C(\textbf{r}_n,\textbf{r}_0)= \left|g\left(\textbf{r}_{0},\textbf{r}_{n}\right)\right|$. We can write \begin{equation}\label{equation4} g\left(\textbf{r}_{n},\textbf{r}_{0}\right)= C(\textbf{r}_n,\textbf{r}_0) \exp{\left[i\widetilde{\phi}(\textbf{r}_n,\textbf{r}_0)\right]}. \end{equation} Equation \ref{equation4} implies that when a focus is achieved in the SPP arena, the recorded optimal phases and relative contributions of the superpixels give the Green's function for a plasmonic source located at that exact focus point. Thus the superpixels of the SLM effectively behave as amplitude and phase sensitive detectors.
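A minimal sketch of such a sequential phase-optimization loop (our own illustration; \texttt{measure} stands for writing the phase pattern to the SLM and reading the camera intensity at the chosen target $\textbf{r}_0$):
\begin{verbatim}
import numpy as np

def optimize_phases(on_pixels, measure, n_steps=8):
    # Sequentially step each "on" superpixel through n_steps test
    # phases and keep the phase that maximizes the target intensity.
    phases = {p: 0.0 for p in on_pixels}
    contrib = {}
    test = 2 * np.pi * np.arange(n_steps) / n_steps
    for p in on_pixels:
        I = []
        for phi in test:
            phases[p] = phi
            I.append(measure(phases))
        c = np.sum(np.asarray(I) * np.exp(-1j * test)) / n_steps
        phases[p] = -np.angle(c)   # optimal phase, ~ -arg g(r_0, r_n)
        contrib[p] = np.abs(c)     # relative contribution, ~ |g(r_0, r_n)|
    return phases, contrib
\end{verbatim}
Here the first Fourier component of the measured intensity versus test phase yields both $\widetilde{\phi}$ and $C$ for each superpixel, i.e. precisely the maps shown in Figs.~\ref{optimized}(a1) and (a2).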
These results are valid for any Green's function or nanostructure and can be extended to the time domain \cite{FinkPRL} and to the transfer matrix approach \cite{popoff_measuringtransmission_2010}. For a perfectly smooth sample with no corrugations, the SPP Green's function is simply a cylindrical wave in two dimensions (the Hankel function $H_{0}^{(1)}(Kr)$, with $K$ the complex-valued SPP momentum). Our digitally measured Green's function includes the light-to-SPP coupling and is therefore considerably more complex. With digital plasmonics we demonstrate the first ``\emph{black-box}" with nanoscale SPP outputs and text file inputs. Specifically, we focus SPPs on hole arrays and scan the focus freely over a field-of-view (SPP arena) without any mechanical translation. In achieving such dynamic focusing we recorded amplitude and phase Green's functions. These digital records, which contain the full complexity of the Green's function, are used as self-calibrated inputs. The method can be extended to any plasmonic structure and to the time domain. This digital plasmonic workbench is anticipated to enable interdisciplinary applications in microscopy, optical data storage and bio-sensing. We thank Elbert van Putten and Jean Cesario for stimulating and helpful discussions. For sample fabrication we thank Hans Zeijlermaker. This work is part of the research program of the ``Stichting voor Fundamenteel Onderzoek der Materie", which is financially supported by the ``Nederlandse Organisatie voor Wetenschappelijk Onderzoek".
\section{Introduction} \noindent Since the discovery of high-temperature superconductivity in Cu-O frameworks \cite{Bednorz1986}, a large family of different Cu-O based systems has been studied, with the dimensionality varying from quasi-zero-dimensional systems (like Li$_2$CuO$_2$ or NaCuO$_2$) over one-dimensional networks (e.\,g. Sr$_2$CuO$_3$) to two-dimensional systems (such as Sr$_2$CuO$_2$Cl$_2$ or the high-temperature superconductors). The dimension of a system and the associated electronic and magnetic pathway joining neighboring Cu ions, which depends upon the manner in which the CuO$_4$ plaquettes are arranged, play a key role for the electronic excitations. The compound Ca$_{x}$Sr$_{14-x}$Cu$_{24}$O$_{41}$ is a so-called quasi-one-dimensional system and shows additional complexity, since it consists of two different types of copper oxide networks---CuO$_2$ (edge-sharing) chains and Cu$_2$O$_3$ (two-leg) ladders---which are separated by strings of Sr or Ca atoms. These networks are arranged in layers, which are oriented in the crystallographic $ac$-plane and stacked in an alternating manner along the perpendicular $b$-axis (see Fig.\,\ref{fig1}). Both subsystems, ladders and chains, have orthorhombic symmetry, but are structurally incommensurate.\cite{McCarron1998,Siegrist1988} The discovery of superconductivity in Ca$_{13.6}$Sr$_{0.4}$Cu$_{24}$O$_{41}$ at a high pressure of 3\,GPa \cite{Uehara1996} attracted much attention to the spin-ladder system, because it was the first superconducting copper oxide material with a non-square lattice.\cite{Nagata1997} A remarkable property of Ca$_{x}$Sr$_{14-x}$Cu$_{24}$O$_{41}$ is that superconductivity only occurs when Sr is replaced by Ca. Upon this substitution, the nominal valence of Cu remains unchanged, but the change of chemical pressure within the lattice causes a transfer of holes from the chain to the ladder subsystem.\cite{Kato1996,Nuecker2000,Koitzsch2010} Therefore, the evolution of the electronic and magnetic structure upon Ca addition is one of the key issues for understanding superconductivity and other physical properties \cite{Kataev2001,Ammerahl2000}, although the exact hole distribution in these compounds is still under debate.\cite{Nuecker2000,Osafune1997,Rusydi2007,Magishi1998} \noindent Another interesting property of the spin-ladder compounds is a tendency to form a charge density wave (CDW) phase depending on the Ca content \cite{Blumberg2002,Vuletic2003,Hess2004}, which may prevent the occurrence of superconductivity. An insulating hole crystal phase, in which the carriers are localized through many-body interactions, was predicted \cite{Dagotto1992} and has been reported.\cite{Abbamonte2004,Carr2002,Friedel2002} In summary, the complex phase diagram as well as the effect of dimensionality and the impact of temperature on the electronic structure of these compounds are not yet fully understood.
\begin{figure}[h] \includegraphics[width=0.95\textwidth]{Fig1} \caption{\label{fig1}Schematic representation of the crystal structure of Ca$_{x}$Sr$_{14-x}$Cu$_{24}$O$_{41}$.} \end{figure} Electron energy-loss spectroscopy (EELS) is a useful tool for the investigation of materials at all levels of complexity in the electronic spectrum.\cite{Fink2001} The EELS cross section is basically proportional to $\operatorname{Im}$[-1/$\epsilon(\omega,\bf q)$] (called the loss function), where $\epsilon(\omega,\bf q)$ = $\epsilon_1(\omega,\bf q)$ + $i \epsilon_2(\omega,\bf q)$ is the momentum- and energy-dependent complex dielectric function. In this way, EELS probes the electronic excitations of the solid under investigation. Furthermore, it allows momentum-dependent measurements of the loss function, i.\,e. the observation of non-vertical transitions within the band structure of a solid, the identification of dipole-forbidden excitations \cite{Knupfer1999, Atzkern2000} and the determination of the dispersion of excitons, interband transitions or charge carrier plasmons.\cite{Nuecker1989,Wang1990,Nuecker1991,Romberg1990,Schuster2007} As the dispersion of a charge carrier plasmon is related to the Fermi velocity, EELS studies also provide valuable insights into further fundamental electronic parameters. \section{Experimental} \noindent Single crystals of Ca$_{11}$Sr$_3$Cu$_{24}$O$_{41}$ were grown by using the travelling solvent floating zone method.\cite{Ammerahl1998} For the EELS measurements, thin films ($\sim$ 100\,nm) were cut along the crystal $b$-axis from these single crystals using an ultramicrotome equipped with a diamond knife. The films were then put onto standard transmission electron microscopy grids and transferred into the spectrometer. All measurements were carried out at room temperature with a dedicated transmission electron energy-loss spectrometer \cite{Fink1989} employing a primary electron energy of 172\,keV. The energy and momentum resolution were set to $\Delta E$\,=\,80\,meV and $\Delta q$\,=\,0.035\,\AA$^{-1}$, respectively. Before measuring the loss function, the thin films were characterized by \textit{in situ} electron diffraction, in order to orient the crystallographic axes with respect to the transferred momentum. From the measured loss function, the real and imaginary parts of the dielectric function $\epsilon(\omega)$, and consequently the optical conductivity $\sigma$, were calculated via the well-known Kramers-Kronig relations.\cite{Fink1989} Prior to the Kramers-Kronig analysis, the measured spectra were corrected by subtracting the contributions of multiple scattering and by eliminating the contribution of the direct beam; the latter was done by fitting the plasmon peak with a model function, which provides the low-energy tail of the loss function down to zero energy.\cite{Nuecker1989} We note that in our case the quasi-elastic background does not alter the plasmon position, as described in \cite{Chen1977,Batson1983,Bertoni2010}. \section{Results and Discussion} \begin{figure}[h] \includegraphics[width=0.7\textwidth]{Fig2} \caption{\label{fig2}The momentum dependence of the EELS spectra of Ca$_{11}$Sr$_3$Cu$_{24}$O$_{41}$ for $q$ parallel to the crystallographic $a$\,-\,axis ($q$ is increasing from top to bottom spectra).
The upturn towards 0\,eV is due to the quasi-elastic line.} \end{figure} \noindent In Fig.\,\ref{fig2} we show the evolution of the loss function with increasing $q$ in an energy range between 0.5\,-\,10\,eV for a momentum transfer perpendicular to the ladders/chains (crystallographic $a$\,-\,axis). The data have been normalized to the high-energy region between 9 and 10\,eV, where they are almost momentum independent. We can clearly identify a well-pronounced double-peak structure with maxima at 3\,-\,3.5\,eV and at 5\,eV. The spectral weight of the first excitation feature---compared to the second---decreases with increasing momentum transfer. Fig.\,\ref{fig3} displays the corresponding data for a momentum transfer parallel to the crystallographic $c$\,-\,axis, i.\,e. parallel to the ladders and chains in Ca$_{11}$Sr$_3$Cu$_{24}$O$_{41}$. Again, there is a double-peak feature between 3 and 5\,eV, and the intensity of the lower-energy peak decreases upon increasing momentum. \begin{figure}[h] \includegraphics[width=0.7\textwidth]{Fig3} \caption{\label{fig3}The momentum dependence of the EELS spectra of Ca$_{11}$Sr$_3$Cu$_{24}$O$_{41}$ for $q$ parallel to the crystallographic $c$\,-\,axis ($q$ is increasing from top to bottom spectra). The upturn towards 0\,eV is due to the quasi-elastic line.} \end{figure} \noindent In addition, Fig.\,\ref{fig3} reveals a further excitation feature around 1\,eV for momentum transfers parallel to the ladder direction, which is absent perpendicular to it. This additional excitation clearly disperses to higher energies with increasing $q$. Furthermore, the peak width increases with increasing momentum, indicating a damping of this excitation that also increases with $q$. According to resistivity data \cite{Motoyama1997}, Ca$_{11}$Sr$_3$Cu$_{24}$O$_{41}$ shows metallic behavior along the $c$\,-\,direction, which is also in line with the appearance of a plasma edge close to 1\,eV in the corresponding reflectivity spectra.\cite{Osafune1997, Ruzicka1998} Consequently, we ascribe the peak around 1\,eV to the so\,-\,called Drude plasmon (or charge carrier plasmon) caused by the collective excitation of the free charge carriers. This is analogous to what has been observed for other doped cuprate systems.\cite{Wang1990,Nuecker1991,Knupfer1994} \begin{figure}[h] \includegraphics[width=0.7\textwidth]{Fig4} \caption{\label{fig4}Plasmon dispersion in Ca$_{11}$Sr$_3$Cu$_{24}$O$_{41}$ along the $c$\,-\,direction. Within the error bars the plasmon energy scales linearly with momentum, and the bandwidth amounts to $\approx$ 400\,meV in the considered momentum range. The gray curve represents the fit with a polynomial function [cf. equation (2)].} \end{figure} \noindent In order to further quantify the behavior of the 1\,eV plasmon, we present in Fig.\,\ref{fig4} the evolution of the peak position in the range 0.15\,\AA$^{-1}$ to 0.35\,\AA$^{-1}$ for $q \parallel$ $c$. Due to the strong damping of the plasmon and the low cross section for higher momentum transfers, data for a momentum transfer above $q$\,=\,0.35\,\AA$^{-1}$ are not included in Fig.\,\ref{fig4}. It can be seen that the plasmon dispersion in Ca$_{11}$Sr$_3$Cu$_{24}$O$_{41}$ is positive, with a bandwidth of at least 400\,meV.
This is in very good agreement with the dispersion found in planar cuprates such as Bi$_2$Sr$_2$CaCu$_2$O$_{8-\delta}$.\cite{Nuecker1991} Moreover, the plasmon dispersion is linear in $q$, which is in contrast to what one would expect for an ``ordinary'' metallic plasmon, where it should be quadratic.\cite{Sturm1982,Pines1963} We note that for Bi$_2$Sr$_2$CaCu$_2$O$_{8-\delta}$ a quadratic plasmon dispersion has been reported. \begin{figure} \includegraphics[width=0.7\textwidth]{Fig5} \caption{\label{fig5}Angular dependence of the EELS response of Ca$_{11}$Sr$_3$Cu$_{24}$O$_{41}$ for $q$ = 0.15\,\AA$^{-1}$ measured in the low energy range between 0\,-\,10\,eV. The upper spectrum (red line) corresponds to $q \parallel$ $a$, and the lower spectrum (purple line) represents $q \parallel$ $c$.} \end{figure} \begin{figure} \includegraphics[width=0.7\textwidth]{Fig6} \caption{\label{fig6}Angular dependence of the EELS intensity for $q$ = 0.15\,\AA$^{-1}$ measured at room temperature in the range between 0.4\,-\,2\,eV. The horizontal dashed bars correspond to the two main crystallographic directions, i.\,e. along the $c$\,-\, and $a$\,-\,axes, respectively.} \end{figure} \noindent In order to obtain a more detailed picture, we have measured the loss function of Ca$_{11}$Sr$_3$Cu$_{24}$O$_{41}$ as a function of the angle in the $ac$\,-\,plane at a constant momentum transfer of $q$\,=\,0.15\,\AA$^{-1}$, as presented in Figs.\,\ref{fig5} and \ref{fig6}. We can identify a clear anisotropy between these directions. In particular, the excitation at 3 - 4\,eV shifts to higher energy on approaching the $c$\,-\,direction, while the spectral feature at 5\,eV remains located at about 5\,eV. The excitation seen at 1\,eV for a momentum transfer parallel to the $c$\,-\,direction shows a distinct behavior. Its energy decreases on leaving the $c$-direction, and it becomes invisible near the $a$\,-\,direction. The observed energy variation scales as $\omega_p (\Theta)$\,=\,$\cos (\Theta) \cdot \omega_p$, which follows the prediction from random-phase-approximation calculations for the charge carrier plasmon excitation of a quasi-one-dimensional metal.\cite{Williams1974} Resistivity data \cite{Motoyama1997,Kojima2001} also demonstrate a strong anisotropy of the conductivity, with metallic behavior in the $c$\,-\,direction. These facts strongly support our conclusion above that this excitation is indeed a result of the plasmon oscillation of the charge carriers in the Cu-O ladder network.\\ \noindent In the following we discuss the origin and character of the observed excitations in the energy range of 3\,-\,5\,eV. In this context it is important to consider the structure of Ca$_{11}$Sr$_3$Cu$_{24}$O$_{41}$. Between two nearest-neighbor copper atoms there are essentially two different bond configurations possible in cuprates, which differ in the angle between the two copper atoms and the relevant $p$-orbital(s) of a bridging oxygen: a 180$^\circ$ and a 90$^\circ$ bond configuration. In the case of the ladders, the 180$^\circ$ bond configurations form the legs and rungs of the ladder, while the copper atoms on neighboring ladders are connected via the 90$^\circ$ bond configuration (see also Fig.\,\ref{fig1}). The Cu-O chains in Ca$_{11}$Sr$_3$Cu$_{24}$O$_{41}$ are built from edge-sharing CuO$_4$ units, i.\,e. two copper atoms are connected via 90$^\circ$ Cu-O-Cu bonds.
The latter bonding geometry hardly allows delocalization of electronic states, since hopping of a hole (or electron) along the chain involves a change of the orbital at the oxygen site. This, for instance, causes the electronic excitations to be localized (non-dispersing), as seen for undoped chains in Li$_2$CuO$_2$.\cite{Atzkern2000} Moreover, the excitations in these undoped chains are also virtually isotropic within the plane of the CuO$_4$ units. When such chains are doped, the resulting electronic states and excitations will still be localized due to the bonding geometry, as is also evidenced by the results of x-ray absorption studies.\cite{Hu2002} We therefore attribute the excitation at 5 eV in Ca$_{11}$Sr$_3$Cu$_{24}$O$_{41}$, which does not disperse and which is isotropic within the $ac$\,-\,plane, to excitations from the Cu-O chains in the compound.\\ \noindent In contrast, in the 180$^\circ$ bond configuration a delocalized charge-transfer excitation is also possible in undoped cuprates. This represents an excited hole which has moved to the O2$p$ states of a neighbouring Cu-O plaquette, forming a Zhang-Rice singlet state there.\cite{Zhang1998,Wang1996,Zhang1988} In addition, this delocalized excitation has a lower excitation energy because of the energy gain associated with the formation of the Zhang-Rice singlet. Considering the structure of the Cu-O ladders, such excitations are most likely also anisotropic for momentum transfers along and perpendicular to the ladder direction. Although their spectral weight is reduced upon doping, they remain present up to rather high doping levels (about 10\,-\,15\% in planar cuprates \cite{Nuecker1991,Schuster2010}). Thus, the excitation between 3\,-\,4\,eV most likely results from electronic excitations in the Cu-O ladders. The results of recent resonant inelastic x-ray scattering (RIXS) studies of Ca$_{x}$Sr$_{14-x}$Cu$_{24}$O$_{41}$ \cite{Wray2008,Wray2008_2,Ishii2007} are in very good agreement with our assignment of the spectral structures at 3 to 5 eV as seen by EELS. We note that, ignoring the resonance process in RIXS, EELS and RIXS probe the same dynamic response function. It is therefore very surprising that the available RIXS data on Ca$_{11.5}$Sr$_{2.5}$Cu$_{24}$O$_{41}$ miss the low-energy plasmon excitation at 1\,eV. It is unclear at present whether this is due to limitations in the experimental parameters such as resolution, or whether it has an intrinsic origin related to the resonant process itself.\\ \noindent Finally, the dispersion of the charge carrier plasmon can also help to gain further insight into the microscopic nature of the electronic system. For a simple metal in the long-wavelength limit, the plasmon dispersion is expected to be quadratic, i.\,e. $\omega (q)\,=\,\omega_p + \mathrm{A}q^2$, where the coefficient A can be expressed as $\mathrm{A}\,=\,\frac{\hbar^2}{m}\alpha$ with $\alpha = \alpha_1 + \alpha_2$ \cite{Sing1999,SingPhD}, and \begin{align} \alpha_1 = \frac{m v_F^2}{2 \hbar \omega_p} \qquad \mathrm{and} \qquad \alpha_2 = \frac{me^2}{24 \hbar\epsilon_{\infty}\epsilon_0 \omega_p} \left\langle v_q \left(\textbf{e} \frac{\partial}{\partial\textbf{k}}\right)^2 v_q \right\rangle. \end{align} \noindent Here, $m$ is the free-electron mass, $v_F$ is the Fermi velocity, $\epsilon_{\infty}$ is the background dielectric constant, and $\omega_p$ is the plasma frequency. To calculate $\alpha_2$, one has to consider the second derivative of the Fermi velocity component parallel to $q$, $v_q$.
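These expressions are straightforward to evaluate numerically. A minimal sketch (our own illustration using SciPy's physical constants; $\alpha_2$ is neglected here, which is justified below):
\begin{verbatim}
from scipy import constants as c

def alpha_1(v_F, hbar_omega_p_eV):
    # alpha_1 = m v_F^2 / (2 hbar omega_p), as defined above
    omega_p = hbar_omega_p_eV * c.e / c.hbar     # plasma frequency [1/s]
    return c.m_e * v_F**2 / (2 * c.hbar * omega_p)

def A_coefficient(v_F, hbar_omega_p_eV):
    # A = (hbar^2 / m) * alpha_1, converted from J m^2 to eV A^2
    A_SI = c.hbar**2 / c.m_e * alpha_1(v_F, hbar_omega_p_eV)
    return A_SI / c.e * 1e20
\end{verbatim}
With the values derived below for $v_F$ and $\hbar\omega_p$, this reproduces, within rounding, $\alpha_1 \approx 0.96$ as quoted in the text.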
\par Interestingly, the dispersion of the plasmon in Ca$_{11}$Sr$_3$Cu$_{24}$O$_{41}$ along the ladder direction (see Fig.\,\ref{fig4}) has a bandwidth which is comparable to that observed for the optimally doped high-temperature superconductors. This seems reasonable, taking into account that the doping level of the ladders in Ca$_{11}$Sr$_3$Cu$_{24}$O$_{41}$ is about 0.15 - 0.2 holes per Cu unit, as reported from recent angle-resolved photoemission experiments \cite{Koitzsch2010}. However, the plasmon in Ca$_{11}$Sr$_3$Cu$_{24}$O$_{41}$ scales linearly with $q$, which is in contrast to the high-T$_c$ materials at optimal doping, and also in contrast to the expectation for a free-electron-gas-like electronic system. \par In order to investigate the long-wavelength limit of the plasmon dispersion in Ca$_{11}$Sr$_3$Cu$_{24}$O$_{41}$ in more detail, we have fitted the dispersion curve using a polynomial function \begin{align} \omega(q)\,=\,\omega_p + \mathrm{A}q^2 + \mathrm{B}q^4. \end{align} The parameter A then represents the plasmon behavior for small momenta, i.\,e. long wavelengths. This fit provides us with the following results (see also Fig.\,\ref{fig4}): $\hbar\omega_p$ = (0.83$\pm$0.02)\,eV, A\,=\,(7.47$\pm$0.75)\,eV\AA$^2$ and B\,=\,(-25.28$\pm$5.48)\,eV\AA$^4$. For a one-dimensional system, the plasma frequency $\omega_p$ can be written as \cite{Sing1999} \begin{align} \omega_p^2 = \frac{4e^2}{\hbar \epsilon_{\infty} \epsilon_0 \pi a b} |v_F|. \end{align} Note that this expression takes the number of Cu sites within the unit cell of the ladder into account. \par To be able to derive the mean Fermi velocity from the expression above, knowledge of the background dielectric constant $\epsilon_{\infty}$ is required. We have determined this parameter via a Kramers-Kronig analysis (KKA) of the measured loss function. Subsequently, we have described the resulting dielectric function within the Drude-Lorentz model with one Drude and a number of Lorentz oscillators. In the left panel of Fig.\,\ref{fig7} we show the optical conductivity ($\sigma = \omega \epsilon_2$) as obtained from our KKA together with the corresponding fit. This figure demonstrates that our model description of the data is very good. The value of the plasma frequency $\omega_p$\,=\,0.89\,eV obtained from the fit of $\sigma$ is in very good agreement with that provided by the fit of the plasmon dispersion (cf. Fig.\,\ref{fig4}). In addition, this value is also in good agreement with the data from reflectivity measurements.\cite{Ruzicka1998} \begin{figure}[ht] \includegraphics[width=0.45\textwidth]{Fig7a} \includegraphics[width=0.45\textwidth]{Fig7b} \caption{\label{fig7}Left panel: The optical conductivity for Ca$_{11}$Sr$_{3}$Cu$_{24}$O$_{41}$ as derived by a Kramers-Kronig transformation of the EELS intensity (black circles) and the fit (red line). Right panel: The real part of the dielectric function after subtraction of the Drude contribution, which provides a value for the background dielectric constant of $\epsilon_{\infty}$\,$\approx$\,7.6.} \end{figure} The background dielectric constant can now be read off the real part of the dielectric function after subtracting the Drude (i.\,e. charge carrier) contribution (see Fig.\,\ref{fig7}, right panel); we obtain $\epsilon_{\infty}$\,$\approx$\,7.5 - 8. We thus arrive at a mean Fermi velocity for the conduction electrons in the ladder of $v_F\,\approx$\,530000\,$\frac{\mathrm{m}}{\mathrm{s}}$.
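The inversion of the one-dimensional plasma-frequency expression above for $v_F$ amounts to simple bookkeeping; a minimal sketch (our own illustration; the lattice parameters $a$ and $b$ of the ladder layer enter as inputs and their numerical values are not restated here):
\begin{verbatim}
from scipy import constants as c

def fermi_velocity(hbar_omega_p_eV, eps_inf, a, b):
    # invert omega_p^2 = 4 e^2 |v_F| / (hbar eps_inf eps_0 pi a b)
    # a, b: lattice parameters of the ladder layer, in metres
    omega_p = hbar_omega_p_eV * c.e / c.hbar
    return (omega_p**2 * c.hbar * eps_inf * c.epsilon_0
            * c.pi * a * b / (4 * c.e**2))
\end{verbatim}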
Taking this value, we can now calculate the parameter $\alpha_1$ to be about 0.96 (and consequently a value for A of 7.25\,eV\AA$^2$), which is very close to what we obtained from the fit for this coefficient, A\,=\,7.47\,eV\AA$^2$. This good correspondence indicates that (i) our description of the long-wavelength limit is consistent and (ii) the contribution of $\alpha_2$ to the long-wavelength plasmon dispersion is small. Indeed, a calculation of $\alpha_2$ using the expression given above and taking into account the tight-binding description of the conduction bands in Ca$_{11}$Sr$_3$Cu$_{24}$O$_{41}$ \cite{Arai1997} yields \(|\alpha_2|\,\le\,0.05\) (for a more detailed description of this procedure see \cite{Sing1999_2}). Thus, $\alpha_2$ contributes less than 10$\%$ to the dispersion coefficient A. We can now conclude that the long-wavelength limit of the plasmon dispersion in Ca$_{11}$Sr$_3$Cu$_{24}$O$_{41}$ can be well rationalized within an RPA-like description of simple metals with a mean Fermi velocity of 530000\,$\frac{\mathrm{m}}{\mathrm{s}}$, which is in reasonable agreement with the tight-binding description of the conduction bands in Ca$_{11}$Sr$_3$Cu$_{24}$O$_{41}$ \cite{Arai1997}, and which also agrees well with recent results from angle-resolved photoemission.\cite{Koitzsch2010} \par However, the quasi-linear dispersion revealed in Fig. \ref{fig4} cannot be rationalized within the framework of a simple metal. Deviations from the expectation of a quadratic plasmon dispersion have been reported in the past. Already the heavier alkali metals show vanishing or even negative plasmon dispersions, which has initiated a lot of theoretical work.\cite{Felde1989} Over the years, different reasons for these observations have been discussed, including local field effects, interband transitions and the anisotropy of the effective mass. This emphasizes that the dispersion of the charge carrier plasmon can be a very complex quantity. \par Previously, the plasmon dispersion in quasi-one-dimensional metallic systems has been investigated theoretically and experimentally for a few compounds. Within the RPA it has been predicted \cite{Williams1974} that the plasmon dispersion in one-dimensional metals can be substantially modified by local field effects, i.\,e. the inhomogeneous character of the electron gas. This modification can even cause a negative plasmon dispersion in the case of a tight-binding description of the electronic bands.\cite{Williams1974} Experimental studies of the plasmon dispersion in (TaSe$_4$)$_2$I \cite{Sing1998} and K$_{0.3}$MoO$_3$ \cite{Sing1999} found a quasi-linear dispersion, which could be explained as predominantly an effect of the band structure in these materials. Moreover, going to lower doping levels of about 0.1 holes per Cu atom in two-dimensional cuprate structures, the plasmon dispersion is drastically reduced compared to optimal doping (i.\,e. about 0.15 holes per copper atom). The bandwidth of the plasmon in Ca$_{1.9}$Na$_{0.1}$CuO$_2$Cl$_2$ is only half of that observed for the doping level of 0.15 holes per copper unit.\cite{Schuster2010} Also, the plasmon dispersion at this lower doping in planar cuprates is essentially linear, in contrast to optimal doping, but reminiscent of our results for Ca$_{11}$Sr$_3$Cu$_{24}$O$_{41}$.
Since at lower doping (so-called underdoping) the planar cuprates enter a pseudogap phase, the origin of which is still under debate, the variations of the plasmon behavior might be closely connected with the peculiar properties in this pseudogap region. In this context, it is important to note that there is evidence that the electronic degrees of freedom in the cuprate ladders are also quite complex. The data from angle-resolved photoemission \cite{Koitzsch2010} show a substantially reduced spectral weight close to the Fermi level, and the optical reflectivity \cite{Osafune1997, Ruzicka1998} is different from that of a simple metallic material, but indicates additional electronic excitations in the energy range of the plasmon. In addition, for the Ca$_{x}$Sr$_{14-x}$Cu$_{24}$O$_{41}$ compounds the formation of a hole crystal \cite{Abbamonte2004,Carr2002,Friedel2002,Hess2004} (i.\,e. a charge density wave) has been reported. These findings suggest that also in the cuprate ladders there might be a phase quite similar to the pseudogap phase in the planar cuprates, with complex electronic degrees of freedom and interactions. \par \noindent In relation to these previous findings, we conclude that for Ca$_{11}$Sr$_3$Cu$_{24}$O$_{41}$ band-structure and local field effects as well as the peculiar physics of underdoped cuprates have to be considered in the future, in order to rationalize the measured plasmon dispersion, and further theoretical developments are required to achieve a conclusive picture of this interesting physics. \section{Summary} \noindent To summarize, employing EELS we investigated the dispersion of low-lying charge-transfer excitations and of the charge carrier plasmon in the spin-ladder system Ca$_{11}$Sr$_3$Cu$_{24}$O$_{41}$. We found a strong anisotropy of the spectral structures, with the charge carrier plasmon only visible for momentum transfers parallel to the crystallographic $c$\,-\,direction. A well-pronounced two-peak structure is seen at 3\,-\,5\,eV, and can be qualitatively assigned to localized and delocalized charge-transfer excitations. The plasmon dispersion is quasi-linear along the legs of the ladder, which is in contrast to what is observed for cuprate high-temperature superconductors. A comparison of the fit of the dispersion curve (using a polynomial function) with the value for the plasma frequency (obtained by a fit of the optical conductivity after a Kramers-Kronig analysis of the measured loss function) shows a very good agreement and consistency of both fits. This indicates that the long-wavelength limit of the plasmon dispersion can be described within an RPA-like description. We furthermore calculated the mean Fermi velocity to be about 530000\,$\frac{\mathrm{m}}{\mathrm{s}}$, which agrees well with a tight-binding description of Ca$_{11}$Sr$_3$Cu$_{24}$O$_{41}$ and with results from angle-resolved photoemission. The linearity of the plasmon dispersion cannot be rationalized within the framework of a simple metal. Phenomena such as local fields, interband transitions, or the influence of the band structure, as well as many-body effects in cuprates, may be responsible for this behavior. \begin{acknowledgments} \noindent We thank R. Sch\"onfelder, R. H\"ubel and S. Leger for technical assistance. This work has been supported by the Deutsche Forschungsgemeinschaft (grant number KN 393/13). \end{acknowledgments}
\section*{Introduction} Motivated by trying to categorify the essential ingredients in the definition of cluster algebras by Fomin and Zelevinsky, the authors of \cite{bmrrt} introduced the cluster category $\mathcal C_Q$ associated with a finite acyclic quiver $Q$. The notion was later generalized by Amiot \cite{a}, dealing with quivers which are not necessarily acyclic. Let $\mathbb{K}$ be an algebraically closed field of characteristic zero. Cluster categories are special cases of Hom-finite, triangulated 2-Calabi-Yau $\mathbb{K}$-categories (2-CY categories). In such categories, the cluster tilting objects, or more generally, maximal rigid objects, play a special role for the categorification of cluster algebras. For cluster categories in the acyclic case these two classes coincide, but in general maximal rigid objects in 2-CY categories are not necessarily cluster tilting. The cluster-tilted algebras are the finite dimensional algebras obtained as endomorphism algebras of cluster tilting objects in cluster categories. These, and the more general class of 2-Calabi-Yau-tilted algebras, are of independent interest, and have been studied by many authors, see \cite{k2, rei,rei2}. As a natural generalization, one also considers the endomorphism algebras of maximal rigid objects in 2-CY categories, here called {\em 2-endorigid algebras}. When $\Gamma = \operatorname{End}_{\mathcal C}(T)$ is a 2-Calabi-Yau-tilted algebra, it is not known if the category $\mathcal C$ is determined by $\Gamma$, but this is known to be true in the case of acyclic cluster categories \cite{kr2}. However, if we consider 2-endorigid algebras, then one frequently obtains the same algebras starting with different 2-CY categories. In this paper we investigate this phenomenon. We restrict to the case where the 2-CY categories in question only have a finite number of isomorphism classes of indecomposable objects. Also in this case, it is known that the 2-endorigid algebras are of finite representation type \cite{iy}. In \cite{a} and \cite{xz}, the structure of triangulated categories with finitely many indecomposables was studied. Such categories have Serre functors, and hence there is an associated AR-quiver. Here orbit categories of the form $D^b(\operatorname{mod} \mathbb{K} Q)/ \varphi$ play a special role, where $Q$ is a Dynkin quiver, and $D^b(\operatorname{mod} \mathbb{K} Q)$ is the bounded derived category of the path algebra $\mathbb{K} Q$. These are exactly the standard categories, i.e. those which can be identified with the mesh category of their AR-quiver, which are 2-CY and have only a finite number of indecomposable objects. In \cite{bikr}, such orbit categories with the 2-CY property were classified. And as an application of that classification, the 2-CY-tilted algebras of finite representation type, coming from orbit categories, were classified in \cite{bo} (one case was missed, as was noticed in \cite{l}, see Section \ref{subsection: list} for details.) These classifications are crucial for our investigations. Our main result is a complete classification of the 2-endorigid algebras associated to standard 2-CY categories of finite type. In fact, we show that all such algebras, with one single exception, already appear in the classification of \cite{bo}. 
In order to prove this we show that the following holds in almost all cases: If we fix a 2-CY (orbit) category $\mathcal C$ of finite type, then there is an associated 2-CY category $\mathcal C'$ with cluster tilting objects, such that the full additive subcategories generated by the rigid objects in $\mathcal C$ and $\mathcal C'$ are equivalent. It is known that in the case of standard 2-CY categories, the 2-CY-tilted algebras of finite representation type are Jacobian \cite{bo} (when the algebraically closed field $\mathbb{K}$ is of characteristic zero): There is a potential (i.e. a sum of cycles), such that the algebra is the Jacobian of its Gabriel quiver with respect to this potential. Moreover, all Jacobians are 2-CY-tilted, by the work of Amiot \cite{a}. However, as indicated, we point out that there is a 2-endorigid algebra which is not 2-CY-tilted, and therefore also not Jacobian. In section 1, we give some background material on maximal rigid and cluster tilting objects. In section 2 we give our version of the classification of 2-CY orbit categories, and in particular we describe the rigid objects in these categories. Then, in section 3, we define functors identifying the subcategories of rigids in the relevant cases. In section 4, we give the example of a 2-endorigid algebra of finite type which is not 2-CY-tilted. \subsection*{Notation} Unless stated otherwise, $\mathbb{K}$ will be an algebraically closed field of characteristic zero. We write $\Sigma$ for the shift functor in any orbit category, and $[1]$ for the shift in any derived category. We will use the following notation: \[\mathcal{A}_{n,t} = \db{A}{(2t+1)(n+1)-3}/\tau^{t(n+1)-1}[1]\] \[\mathcal{D}_{n,t} = \db{D}{2t(n+1)}/\tau^{n+1}\varphi^n,\] where $\varphi$ is induced by an automorphism of order 2 of ${\rm D}_{2t(n+1)}$. The orbit categories that we consider are triangulated, by a theorem of Keller, see~\cite{k}. \section{Background} In this section, we give some background material on cluster tilting and maximal rigid objects in $\operatorname{Hom}$-finite triangulated 2-Calabi-Yau categories over an algebraically closed field $\mathbb{K}$. Let $d$ be a non-negative integer. A $\operatorname{Hom}$-finite, triangulated $\mathbb{K}$-category $\mathcal C$ is called {\em d-Calabi-Yau} or d-CY for short, if we have a natural isomorphism $$D\operatorname{Hom}(X,Y) \simeq \operatorname{Hom}(Y,X[d])$$ for objects $X,Y$ in $\mathcal C$, where $D = \operatorname{Hom}_{\mathbb{K}}(\ ,\mathbb{K})$ is the ordinary $\mathbb{K}$-duality. A main example here is the cluster category $\mathcal C_Q$ associated with a finite (connected) acyclic quiver $Q$ \cite{bmrrt}. Here $\mathcal C_Q$ is the orbit category $D^b(\operatorname{mod} \mathbb{K} Q)/ \tau^{-1}[1]$, where $\tau$ is the AR-translation on the bounded derived category $D^b(\operatorname{mod} \mathbb{K} Q)$. The cluster categories have been shown to be triangulated \cite{k}. Another main example is the stable category $\underline{\operatorname{mod} \Lambda}$, where $\Lambda$ is a preprojective algebra of Dynkin type, investigated in \cite{gls}. An object $M$ in a triangulated category is called {\em rigid} if $\operatorname{Ext}^1(M,M) = 0$, and {\em maximal rigid} if it is maximal with respect to this property. Let $\operatorname{add} M$ denote the additive closure of $M$. If also $\operatorname{Ext}^1(M,X) =0$ implies $X\in\operatorname{add} M$, then $M$ is said to be {\em cluster tilting}. 
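As a guiding example (a standard illustration, recalled here only for the reader's convenience): in the cluster category $\mathcal C_{A_2}$, the indecomposable objects correspond, under the geometric model of \cite{ccs}, to the five diagonals of a pentagon, and $\operatorname{Ext}^1$ between two indecomposable objects is non-zero precisely when the corresponding diagonals cross. Hence every indecomposable object is rigid, and the basic maximal rigid objects correspond to the five triangulations of the pentagon, i.e. the five pairs of non-crossing diagonals; each of these objects is in fact cluster tilting, in accordance with \cite{bmrrt}.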
For the cluster categories $\mathcal C_Q$ and the stable module categories $\underline{\operatorname{mod} \Lambda}$, the maximal rigid objects are also cluster tilting \cite{bmrrt, gls}, but this is not the case in general. An object $\overline{T}$ is called an {\em almost complete cluster tilting} object in $\mathcal C_Q$ if there is an indecomposable object $X$, not in $\operatorname{add} \overline{T}$, such that $\overline{T} \amalg X$ is a cluster tilting object. It was shown in \cite{bmrrt} that if $\overline{T}$ is an almost complete cluster tilting object in $\mathcal C_Q$, then there is a unique indecomposable object $Y \not \simeq X$, such that $T^{\ast} = \overline{T} \amalg Y$ is a cluster tilting object. There is an interesting property of cluster tilting objects which does not hold for maximal rigid objects: for $T$ a cluster tilting object in a $2$-CY category $\mathcal C$, there is an equivalence of categories $\mathcal C/\operatorname{add} \Sigma T \to \operatorname{mod} \operatorname{End}(T)$, by \cite{bmr,kr}. For a connected 2-Calabi-Yau category, either all maximal rigid objects are cluster tilting, or none of them are \cite{zz}. Moreover, if for a maximal rigid object $M$ there are no loops or 2-cycles in the quiver of $\operatorname{End}(M)$, then $M$ is cluster tilting \cite{birs, xo}. The main sources of examples of maximal rigid objects which are not cluster tilting are 1-dimensional hypersurface singularities \cite{bikr} and cluster tubes \cite{bkl, bmv, v, y}. The 2-Calabi-Yau-tilted algebras $\Gamma$ satisfy some nice homological properties: they are Gorenstein of dimension $\leq 1$, and $\operatorname{Sub} \Gamma$ is a Frobenius category whose stable category $\underline{\operatorname{Sub}} \Gamma$ is 3-Calabi-Yau \cite{kr}. Here $\operatorname{Sub} \Gamma$ denotes the full additive subcategory of $\operatorname{mod} \Gamma$ generated by the submodules of objects in $\operatorname{add} \Gamma$, and $\underline{\operatorname{Sub}} \Gamma$ denotes the corresponding stable category, that is, the category with the same objects, but with $\operatorname{Hom}$-spaces given as the $\operatorname{Hom}$-spaces in $\operatorname{mod} \Gamma$ modulo maps factoring through projective objects. By \cite{zz}, the 2-endorigid algebras are also Gorenstein of dimension $\leq 1$. \section{Rigid objects in triangulated orbit categories of finite type}\label{section: rigid} \subsection{The classification}\label{subsection: list} In \cite{a}, Amiot classified all standard triangulated categories with finitely many indecomposable objects. By using geometric descriptions in type $\rm{A}$ \cite{ccs} and in type $\rm{D}$ \cite{s}, and direct computations in type $\rm{E}$, Burban--Iyama--Keller--Reiten extracted from Amiot's list all 2-Calabi--Yau triangulated categories with cluster tilting objects, as well as those with non-zero maximal rigid objects (see the appendix of \cite{bikr}). In this section, we give a restatement of the results in the appendix of \cite{bikr}. We note two changes from their lists: \begin{enumerate} \item[(L1)] The orbit category $\db{E}{8}/\tau^4$ has cluster tilting objects (this case was first noticed by Ladkani in \cite{l}); \item[(L2)] The orbit category $\db{D}{4}/\tau^2\varphi$, where $\varphi$ is induced by an automorphism of $D_4$ of order 2, has non-zero maximal rigid objects which are not cluster tilting.
\end{enumerate} \begin{proposition}[Amiot; Burban--Iyama--Keller--Reiten]\label{proposition: list CTO} The standard, 2-Calabi--Yau, triangulated categories with finitely many indecomposable objects and with cluster tilting objects are exactly the cluster categories of Dynkin types $A$, $D$ or $E$ and the orbit categories: \begin{itemize} \item[-] \emph{(Type $\rm{A}$)} $\db{A}{3n}/\tau^n[1]$, where $n\geq 1$; \item[-] \emph{(Type $\rm{D}$)} $\db{D}{kn}/(\tau\varphi)^n$, where $n\geq 1$, $k>1$, $kn\geq 4$ and $\varphi$ is induced by an automorphism of ${\rm D}_{kn}$ of order 2; \item[-] \emph{(Type $\rm{E}$)} $\db{E}{8}/\tau^4$ and $\db{E}{8}/\tau^8$. \end{itemize} \end{proposition} \begin{proof} These categories are described in a table of the appendix of \cite{bikr}, and our description is based on that. We explain why and how our description in the cases of type $\rm{A}$ and of $\rm{D}_4$ differs from that of \cite{bikr}. Apart from the cluster category, the orbit categories of $\db{A}{m}$ appearing in the table of \cite{bikr} are given by the automorphisms: \begin{itemize} \item[] $(\tau^{\frac{m}{2}}[1])^{\frac{m+3}{3}}$, if 3 divides $m$ and $m$ is even; \item[] $\tau^{\frac{m+3}{6}+\frac{m+1}{2}}[1]$, if 3 divides $m$ and $m$ is odd. \end{itemize} We simplify this description by using the fact that in the triangulated category $\db{A}{m}$, we have \begin{equation} \label{eq:cy} \tau^{-(m+1)} = [2]. \end{equation} Note that this is sometimes referred to as a {\em fractional Calabi--Yau property}. Let $m=3n$. Assume first that $n$ is even. We then have: \begin{multline*}(\tau^{\frac{m}{2}}[1])^{\frac{m+3}{3}} = (\tau^{3\frac{n}{2}}[1])^{n+1} = (\tau^{3n+1})^{\frac{n}{2}}\tau^n[n+1] = (\tau^{3n+1})^{\frac{n}{2}}\tau^n[2]^\frac{n}{2} [1] \\ = (\tau^{3n+1} [2])^{\frac{n}{2}} ( \tau^n[1] ) = \tau^n[1], \end{multline*} where the last equality follows from (\ref{eq:cy}). Assume now that $n$ is odd. Then, we have $$\tau^{\frac{m+3}{6}+\frac{m+1}{2}}[1] = \tau^{2n+1}[1] = (\tau^n[1])^{-1},$$ where (\ref{eq:cy}) is used for the last equation. In both cases, the orbit category is $\db{A}{3n}/\tau^n[1]$, since an automorphism and its inverse generate the same cyclic group, and hence the same orbit category. The orbit categories of $\db{D}{4}$ appearing in the table of \cite{bikr} are given by the automorphisms $\tau^k\sigma$, where $k$ divides 4, $\sigma$ is induced by an automorphism of ${\rm D}_4$ satisfying $\sigma^\frac{4}{k} = 1$, and $(k,\sigma)\neq (1,1)$. We thus have: \begin{itemize} \item[] if $k=1$, then $\sigma$ is of order 2; \item[] if $k=2$, then $\sigma$ is either the identity or of order 2; \item[] if $k=4$, then $\sigma$ is the identity and the orbit category is the cluster category of type $\rm{D}_4$. \end{itemize} We claim that if $k=2$ and $\sigma$ is of order 2, then the corresponding orbit category has non-zero maximal rigid objects, but does not have cluster tilting objects. Let thus $\sigma$ be of order 2. By computing the Hom-hammocks in the Auslander--Reiten quiver: \[\xymatrix@-1pc{ & d \ar@{->}[dr] & & \Sigma d \ar@{->}[dr] & & d \\ a \ar@{->}[r]\ar@{->}[ur]\ar@{->}[dr] & c \ar@{->}[r] & \Sigma a \ar@{->}[r]\ar@{->}[ur]\ar@{->}[dr] & \Sigma b \ar@{->}[r] & a \ar@{->}[r]\ar@{->}[ur]\ar@{->}[dr] & b \\ & b \ar@{->}[ur] & & \Sigma c \ar@{->}[ur] & & c, }\] one finds that $d$ and $\Sigma d$ are the only non-zero rigid objects, and that there are no non-zero morphisms from $d$ to $b$ or $c$. This shows that $d$, and therefore also $\Sigma d$, are maximal rigid objects which are not cluster tilting. This explains (L2).
\end{proof} \begin{proposition}[Amiot; Burban--Iyama--Keller--Reiten]\label{proposition: list max rigid} The standard, 2-Calabi--Yau, triangulated categories with finitely many indecomposable objects and with non-zero maximal rigid objects which are not cluster tilting are exactly the orbit categories: \begin{itemize} \item[-] \emph{(Type A)} $\db{A}{(2t+1)(n+1)-3}/\tau^{t(n+1)-1}[1]$, where $n\geq 1$ and $t>1$; \item[-] \emph{(Type D)} $\db{D}{2t(n+1)}/\tau^{n+1}\varphi^n$, where $n,t\geq 1$, and where $\varphi$ is induced by an automorphism of ${\rm D}_{2t(n+1)}$ of order 2; \item[-] \emph{(Type E)} $\db{E}{7}/\tau^2$ and $\db{E}{7}/\tau^5$. \end{itemize} \end{proposition} \begin{proof} Type $\rm{A}$ deserves a few comments. The tables in the appendix of \cite{bikr} list all orbit categories of $\db{A}{m}$ with non-zero maximal rigid objects which are not cluster tilting. They are given by the following automorphisms: \begin{itemize} \item[I.] $(\tau^\frac{m}{2}[1])^k$, where $m$ is even; $k$ divides $m+3$; $k\neq 1$; $k\neq m+3$; and if 3 divides $m$, then $k\neq\frac{m+3}{3}$; \item[II.] $\tau^{k + \frac{m+1}{2}}[1]$, where $m$ is odd; $k$ divides $\frac{m+3}{2}$; $\frac{m+3}{2k}$ is odd; $k\neq \frac{m+3}{2}$; and if 3 divides $m$, then $k\neq\frac{m+3}{6}$. \end{itemize} As in the proof of Proposition \ref{proposition: list CTO}, we use the property given by equation (\ref{eq:cy}) in order to give a uniform description of all the cases above. Note first that if $k=\frac{m+3}{3}$ or if $k=\frac{m+3}{6}$, then 3 divides $m$. Therefore, the condition ``if 3 divides $m$'' above is redundant. Assume first we are in case I above, so $m$ is even and we can write $m+3 = uk$, where $u$ and $k$ are greater than 1 and $u\neq 3$. We then have: \begin{multline*} (\tau^\frac{m}{2}[1])^k = (\tau^\frac{uk-3}{2}[1])^k = \tau^{k\frac{uk-3}{2}}[1][k-1] = \tau^{k\frac{uk-3}{2}}[1][2]^\frac{k-1}{2}\\ = \tau^{k\frac{uk-3}{2}}[1](\tau^{-uk+2})^\frac{k-1}{2} = \tau^{\frac{u-1}{2}k-1}[1]. \end{multline*} Replacing $u$ by $2t+1$ and $k$ by $n+1$ gives $$m= uk-3 = (2t+1)(n+1)-3 \text{ and } \frac{u-1}{2}k-1 = t(n+1) -1.$$ Hence, we obtain the orbit categories $\db{A}{(2t+1)(n+1)-3}/\tau^{t(n+1)-1}[1]$ (where $t>1$ and $n\geq 1$). Assume now we are in case II, so that $m$ is odd and we can write $m+3 = 2uk$, where $u$ is odd and greater than 3. We then have: $$\tau^{k + \frac{m+1}{2}}[1] = \tau^{k+uk-1}[1] = \tau^{\frac{u+1}{2}2k-1}[1] = (\tau^{\frac{u-1}{2}2k-1}[1])^{-1},$$ where the last equation follows from equation (\ref{eq:cy}). Replacing $u$ by $2t+1$, and $2k$ by $n+1$ gives $$m = 2uk-3= (2t+1)(n+1)-3 \text{ and } \frac{u-1}{2}2k-1= t(n+1) -1,$$ and also in this case we obtain the orbit categories $\db{A}{(2t+1)(n+1)-3}/\tau^{t(n+1)-1}[1]$ (where $t>1$, $n\geq 1$). \end{proof} \begin{remark} For a given value of $n$, the orbit categories \[\db{A}{(2t+1)(n+1)-3}/\tau^{t(n+1)-1}[1]\] share some similarities, and are compared in section~\ref{section: comparisons}. Note that when $t=1$, we have $$\db{A}{(2t+1)(n+1)-3}/\tau^{t(n+1)-1}[1] = \db{A}{3n}/\tau^{n}[1].$$ Hence, by Proposition \ref{proposition: list CTO} this orbit category has cluster tilting objects. On the other hand, if $t>1$ it has non-zero maximal rigid objects which are not cluster tilting. This family can be expanded by including the cluster tubes, thought of as a limit obtained when $t$ goes to infinity.
This point of view will be corroborated in sections \ref{section: comparisons} and \ref{section: endoalg}, where the endomorphism algebras of the maximal rigid objects in these categories are shown to be independent of the specific value of $t$. \end{remark} \subsection{The rigid objects}\label{subsection: rigid} We will now describe indecomposable rigid objects in the orbit categories listed in subsection \ref{subsection: list}, and then consider the additive subcategories generated by the set of rigid objects. \subsubsection{Type $\rm{A}$}\label{subsubsection: max rigid A} In order to compute the rigid objects in the orbit categories $\mathcal{A}_{n,t} = \db{A}{(2t+1)(n+1)-3}/\tau^{t(n+1)-1}[1]$, we use the geometric description \cite{ccs} of the cluster category of type $\rm{A}$. The following lemma was implicitly used in the appendix of \cite{bikr}. \begin{lemma}\label{lemma: rigid A} \begin{enumerate} \item There is a bijection between isomorphism classes of basic objects in $\mathcal{A}_{n,t}$ and collections of arcs of the $(2t+1)(n+1)$-gon which are stable under rotation by $\frac{2\pi}{2t+1}$. Such a bijection is given in figure~\ref{figure: arcs type A17} for $t=2, n=3$ and is sketched in figure~\ref{figure: Ant} for the general case. \item Under the bijection above, rigid objects correspond to non-crossing collections of arcs. In particular: \begin{enumerate} \item The isomorphism classes of \sloppy indecomposable rigid objects in $\mathcal{A}_{n,t}$ are parametrised by the arcs $[i\;(i+2)],\ldots,[i\;(i+n+1)]$ for $i=1,\ldots,n+1$. \item The maximal non-crossing collections correspond to (isoclasses of) basic maximal rigid objects and such an object is cluster tilting if and only if the collection of arcs is a triangulation (if and only if $t=1$). 
\end{enumerate} \end{enumerate} \end{lemma} \begin{center} \begin{figure} \begin{tikzpicture}[scale=0.5, fl/.style={->,shorten <=6pt, shorten >=6pt,>=latex}] \draw (0,0) node[circle, fill=black!15, scale=1.2] {} ; \draw (1,1) node[circle, fill=black!15, scale=1.2] {} ; \draw (2,2) node[circle, fill=black!15, scale=1.2] {} ; \draw (8,0) node[circle, fill=black!15, scale=1.2] {} ; \draw (9,1) node[circle, fill=black!15, scale=1.2] {} ; \draw (10,2) node[circle, fill=black!15, scale=1.2] {} ; \draw (4,16) node[circle, fill=black!15, scale=1.2] {} ; \draw (5,15) node[circle, fill=black!15, scale=1.2] {} ; \draw (6,14) node[circle, fill=black!15, scale=1.2] {} ; \draw (12,16) node[circle, fill=black!15, scale=1.2] {} ; \draw (13,15) node[circle, fill=black!15, scale=1.2] {} ; \draw (14,14) node[circle, fill=black!15, scale=1.2] {} ; \foreach \x in {1,2,3,4} { \pgfmathparse{12-\x}\let\z\pgfmathresult ; \foreach \y in {2,3,...,\z} { \newcount\u ; \pgfmathsetcount{\u}{\x+\y} ; \draw (2*\x+\y-4,\y-2) node[scale=0.7] {\x$\;$\the\u} ; \draw[fl] (\y-4+2*\x,\y-2) -- (\y-3+2*\x,\y-1) ; \draw[fl] (\y-3+2*\x,\y-1) -- (\y-2+2*\x,\y-2) ; } ; } ; \draw[thick, dashed, blue] (-1.1,-0.5) -- ++(7.6,0) -- ++(6.5,6.5) -- ++(-3.9,3.9) -- cycle ; \begin{scope}[xshift=8cm] \foreach \x in {1,2,3,4} { \pgfmathparse{12-\x}\let\z\pgfmathresult ; \foreach \y in {2,3,...,\z} { \newcount\w ; \pgfmathsetcount{\w}{\x+\y} ; \draw (2*\x+\y-4,\y-2) node[scale=0.7] {\x$\;$\the\w} ; \draw[fl] (\y-4+2*\x,\y-2) -- (\y-3+2*\x,\y-1) ; \draw[fl] (\y-3+2*\x,\y-1) -- (\y-2+2*\x,\y-2) ; } ; } ; \end{scope} \begin{scope}[xshift=240] \draw[thick, dashed, blue] (-1.5,-0.5) -- ++(7.6,0) -- ++(6.5,6.5) -- ++(-3.9,3.9) -- cycle ; \end{scope} \begin{scope}[xshift=4cm, yshift=16cm, rotate=180, xscale=-1] \foreach \x in {1,2,3,4} { \pgfmathparse{12-\x}\let\z\pgfmathresult ; \foreach \y in {2,3,...,\z} { \newcount\r ; \pgfmathsetcount{\r}{\x+\y} ; \draw (2*\x+\y-4,\y-2) node[scale=0.7] {\x$\;$\the\r} ; \draw[fl] (\y-4+2*\x,\y-2) -- (\y-3+2*\x,\y-1) ; \draw[fl] (\y-3+2*\x,\y-1) -- (\y-2+2*\x,\y-2) ; } ; } ; \end{scope} \begin{scope}[xshift=4.2cm, yshift=16.2cm, rotate=180, xscale=-1] \draw[thick, dashed, blue] (-1.3,-0.3) -- ++(7.6,0) -- ++(6.5,6.5) -- ++(-3.9,3.9) -- cycle ; \end{scope} \begin{scope}[xshift=12cm, yshift=16cm, rotate=180, xscale=-1] \foreach \x in {1,2,3,4} { \pgfmathparse{12-\x}\let\z\pgfmathresult ; \foreach \y in {2,3,...,\z} { \newcount\r ; \pgfmathsetcount{\r}{\x+\y} ; \draw (2*\x+\y-4,\y-2) node[scale=0.7] {\x$\;$\the\r} ; \draw[fl] (\y-4+2*\x,\y-2) -- (\y-3+2*\x,\y-1) ; \draw[fl] (\y-3+2*\x,\y-1) -- (\y-2+2*\x,\y-2) ; } ; \draw (9.5-\x+2*\x,10.5-\x) node[circle, fill=white, scale=1.2] {} ; } ; \end{scope} \begin{scope}[xshift=12.2cm, yshift=16.2cm, rotate=180, xscale=-1] \draw[thick, dashed, blue] (-1.3,-0.3) -- ++(7.6,0) -- ++(6.5,6.5) -- ++(-3.9,3.9) -- cycle ; \end{scope} \draw (3,5) node[scale=1.5] {$\cdots$} ; \draw (6,8) node[scale=1.5] {$\cdots$} ; \draw (6,12) node[scale=1.5] {$\cdots$} ; \draw (19,2) node[scale=1.5] {$\cdots$} ; \end{tikzpicture} \caption{A bijection between $\frac{2\pi}{5}$-periodic collections of arcs of the icosagon (the $(2t+1)(n+1)$-gon for $t=2$, $n=3$) and isomorphism classes of basic objects in $\mathcal{A}_{3,2}$.
The maximal rigid object of Corollary~\ref{corollary: max rigid A} is highlighted in grey.} \label{figure: arcs type A17} \end{figure} \end{center} \begin{proof} Let $n,t\geq 1$, let $N = (2t+1)(n+1)$, let $\mathcal{A}_{n,t}$ be the triangulated orbit category $\db{A}{N-3}/\tau^{t(n+1)-1}[1]$ and let $\mathcal{C}_{\rm{A}_{N-3}}$ be the cluster category of type $\rm{A}_{N-3}$. Using that $\mathcal{A}_{n,t}$ is 2-CY, the universal property of orbit categories yields a functor $\mathcal{C}_{\rm{A}_{N-3}} \stackrel{F}{\longrightarrow} \mathcal{A}_{n,t}$. Note that this covering functor commutes with shift functors, since both are induced by the shift in the derived category $\db{A}{N-3}$. In the cluster category $\mathcal{C}_{\rm{A}_{N-3}}$, we have $\tau^{t(n+1)-1}[1] = \tau^{t(n+1)}$. Moreover, in the derived category $\db{A}{N-3}$, we have $\tau^{N-2} = [-2]$. Therefore, $\tau$ is of order $N$ in $\mathcal{C}_{\rm{A}_{N-3}}$, and $\tau^{n+1}$ is of order $2t+1$. Since $\gcd(t,2t+1) = 1$, the automorphism $\tau^{t(n+1)}$ is also of order $2t+1$ and generates the same group as $\tau^{n+1}$. The functor $F$ is thus a $(2t+1)$-covering functor, with $F(\tau^{n+1}X)$ isomorphic to $FX$ for any object $X$. Since $F$ commutes with shifts, we have, for any two objects $X,Y$ in $\mathcal{A}_{n,t}$: \[\operatorname{Ext}^1_{\mathcal{A}_{n,t}}(X,Y) = \operatorname{Hom}_{\mathcal{A}_{n,t}}(X,\Sigma Y) \simeq \bigoplus_{FY'\simeq Y} \operatorname{Hom}_{\mathcal{C}_{A_{N-3}}}(X',\Sigma Y') = \bigoplus_{FY'\simeq Y} \operatorname{Ext}^1_{\mathcal{C}_{A_{N-3}}}(X',Y'),\] where $X'$ is any object of $\mathcal{C}_{A_{N-3}}$ with $FX'\simeq X$. We can thus use the description of the cluster category $\mathcal{C}_{A_{N-3}}$ in terms of diagonals of the $N$-gon \cite{ccs} in order to compute the rigid indecomposable objects in $\mathcal{A}_{n,t}$: Isomorphism classes of indecomposable objects in $\mathcal{A}_{n,t}$ are in bijection with collections of $2t+1$ diagonals of the $N$-gon which are stable under the automorphism sending a diagonal $[i\;j]$ to $[(i+n+1)\;(j+n+1)]$. Moreover, such a collection corresponds to a rigid indecomposable object in $\mathcal{A}_{n,t}$ if and only if none of its diagonals cross. This shows that isomorphism classes of indecomposable rigid objects in $\mathcal{A}_{n,t}$ are parametrised by the arcs $[i\;(i+2)],\ldots,[i\;(i+n+1)]$ for $i=1,\ldots,n+1$. Consider a maximal collection $\mathfrak{A}$ of non-crossing arcs, stable under rotation by $\frac{2\pi}{2t+1}$, that is not a triangulation. Then there exists an arc $\gamma$, not in $\mathfrak{A}$, which does not cross any arc in the collection (such an arc will correspond to a non-rigid indecomposable object). Necessarily, none of the rotations of $\gamma$ by multiples of $\frac{2\pi}{2t+1}$ cross any arc in the collection. This implies that the maximal rigid object corresponding to $\mathfrak{A}$ is not cluster tilting.
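As an illustration of the parametrisation, take $n=3$ and $t=2$ (figure~\ref{figure: arcs type A17}), so that $N=20$: the indecomposable rigid objects correspond to the rotation-orbits of the arcs $[i\;(i+2)]$, $[i\;(i+3)]$ and $[i\;(i+4)]$ for $i=1,\ldots,4$, which gives $n(n+1) = 12$ isomorphism classes, in accordance with Table 2 below.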
\end{proof} \begin{center} \begin{figure} \begin{tikzpicture}[scale=0.475, fl/.style={->,shorten <=5pt, shorten >=5pt,>=latex}] \foreach \y in {0,1,2} { \draw[fl] (1+\y,\y) -- (1+\y +1,\y +1) ; \draw[fl] (2+\y,\y +1) -- (2+\y +1,\y ) ; \draw (1+\y,\y) node[circle, fill=black!15, scale=1.2] {} ; \newcount\u ; \pgfmathsetcount{\u}{3+\y} ; \draw (1+\y,\y) node[scale=0.7] {1$\;$\the\u} ; } ; \draw[fl] (3,0) -- (4,1) ; \draw[fl] (4,1) -- (5,2) ; \draw[fl] (4,1) -- (5,0) ; \draw (6,0.5) node[scale=1.5] {$\cdots$} ; \draw (7,2) node[scale=1.5] {$\cdots$} ; \foreach \x in {1,2,3,4} { \foreach \y in {0,1,2} { \draw[fl] (5+2*\x +\y,\y) -- (5+2*\x +\y +1,\y +1) ; \draw[fl] (5+2*\x +\y +1,\y +1) --(5+2*\x +\y +2,\y) ; } ; } ; \foreach \y in {0,1,2} { \draw[fl] (5+2*5 +\y,\y) -- (5+2*5 +\y +1,\y +1) ; \draw (5+2*5 +\y,\y) node[circle, fill=black!15, scale=1.2] {} ; \newcount\u ; \pgfmathsetcount{\u}{3+\y} ; \draw (5+2*5 +\y,\y) node[scale=0.7] {1$\;$\the\u} ; } ; \begin{scope}[xshift=-1cm, yshift=-1cm] \draw (7,6) node[circle, fill=black!15, scale=1.2] {} ; \draw (7,6) node[scale=0.7] {1$\; n\!+\!2$} ; \draw[loosely dotted] (4.2,3.2) -- (6,5) ; \draw[fl] (6,5) -- (7,6) ; \draw[fl] (7,6) -- (8,5) ; \draw[fl] (7,6) -- (8,7) ; \draw[loosely dotted] (8,5) -- (9.8,3.2) ; \draw[loosely dotted] (8.2,7.2) -- (15.8,14.8) ; \draw[fl] (15,14) -- (16,15) ; \draw[fl] (14,13) -- (15,14) ; \draw (15,14) node[scale=0.7] {1\;$u$} ; \draw[loosely dotted] (15.2,14.2) -- (16.8,15.8) ; \draw[thick, dashed, blue] (0.5,0.5) -- (15,15) -- ++(7,-7) -- ++(-7.5,-7.5) --cycle ; \draw[dashed, red] (5,6.5) -- ++(20,0) ; \end{scope} \begin{scope}[xshift=10cm, yshift=20cm, rotate=180, xscale=-1] \foreach \y in {0,1,2} { \draw[fl] (1+\y,\y) -- (1+\y +1,\y +1) ; \draw[fl] (2+\y,\y +1) -- (2+\y +1,\y ) ; \draw (1+\y,\y) node[circle, fill=black!15, scale=1.2] {} ; \newcount\u ; \pgfmathsetcount{\u}{3+\y} ; \draw (1+\y,\y) node[scale=0.7] {1$\;$\the\u} ; } ; \draw[fl] (3,0) -- (4,1) ; \draw[fl] (4,1) -- (5,2) ; \draw[fl] (4,1) -- (5,0) ; \draw (6,0.5) node[scale=1.5] {$\cdots$} ; \draw (7,2) node[scale=1.5] {$\cdots$} ; \foreach \x in {1,2,3,4} { \foreach \y in {0,1,2} { \draw[fl] (5+2*\x +\y,\y) -- (5+2*\x +\y +1,\y +1) ; \draw[fl] (5+2*\x +\y +1,\y +1) --(5+2*\x +\y +2,\y) ; } ; } ; \foreach \y in {0,1,2} { \draw[fl] (5+2*5 +\y,\y) -- (5+2*5 +\y +1,\y +1) ; \draw (5+2*5 +\y,\y) node[circle, fill=black!15, scale=1.2] {} ; \newcount\u ; \pgfmathsetcount{\u}{3+\y} ; \draw (5+2*5 +\y,\y) node[scale=0.7] {1$\;$\the\u} ; } ; \begin{scope}[xshift=-1cm, yshift=-1cm] \draw (7,6) node[circle, fill=black!15, scale=1.2] {} ; \draw (7,6) node[scale=0.7] {1$\; n\!+\!2$} ; \draw[loosely dotted] (4.2,3.2) -- (6,5) ; \draw[fl] (6,5) -- (7,6) ; \draw[fl] (7,6) -- (8,5) ; \draw[fl] (7,6) -- (8,7) ; \draw[loosely dotted] (8,5) -- (9.8,3.2) ; \draw[loosely dotted] (16.2,3.2) -- ++(2.8,2.8) ; \draw[dashed, red] (3,6.5) -- ++(17,0) ; \end{scope} \end{scope} \draw[loosely dotted] (16.2,3.2) -- ++(11.8,11.8) ; \draw[fl, green] (6.5,5) -- ++(9.8,9.8) ; \draw (13,0) node[scale=.5] {$n\!+\!1\;n\!+\!3$} ; \draw (14,1) node[scale=.5] {$n\!+\!1\;n\!+\!4$} ; \draw (15,2) node[scale=.5] {$n\!+\!1\;n\!+\!5$} ; \end{tikzpicture} \caption{A fundamental domain for $\mathcal{A}_{n,t}$ inside the derived category is encircled by a dotted blue line. Below the bottom (and above the top) dotted red line lie all rigid indecomposable objects. The Hom-hammock of $(1\,n+2)$ is emphasized by a dotted rectangle. 
The green arrow gives rise to a loop in the quiver $\qc_{\mathcal{R}_{\cat}}$. Here $u$ equals $(t+1)(n+1)$.} \label{figure: Ant} \end{figure} \end{center} \begin{center} \begin{figure} \begin{tikzpicture}[scale=0.7] \draw[very thick] (0,0) circle (5) ; \foreach \a in {1,6,...,21} { \draw (-14.4*\a+104.4:5) edge[very thick, color=black!80, out={-15.6-14.4*\a}, in={204.4-14.4*\a}] (-14.4*\a+75.6:5) ; \draw (-14.4*\a+104.4:5) edge[very thick, color=black!60, out={-25.6-14.4*\a}, in={194.4-14.4*\a}] (-14.4*\a+61.2:5) ; \draw (-14.4*\a+104.4:5) edge[very thick, color=black!40, out={-35.6-14.4*\a}, in={184.4-14.4*\a}] (-14.4*\a+46.8:5) ; \draw (-14.4*\a+104.4:5) edge[very thick, color=black!20, out={-45.6-14.4*\a}, in={174.4-14.4*\a}] (-14.4*\a+32.4:5) ; } ; \foreach \x in {1,2,...,25} { \draw (-14.4*\x+104.4:5) node {$\bullet$} ; \draw (-14.4*\x+104.4:5.6) node {\x} ; } ; \end{tikzpicture} \caption{A collection of arcs of the icosikaipentagon corresponding to a maximal rigid object in $\mathcal{A}_{4,2}$.} \label{figure: icosikaipentagon} \end{figure} \end{center} \begin{center} \begin{figure} \begin{tikzpicture}[scale=0.5] \draw[very thick] (0,0) circle (5) ; \foreach \a in {1,6,...,21} \draw[very thick, color=black!40] (-14.4*\a+104.4:5) -- (-14.4*\a+3.6:5) ; \foreach \x in {1,2,...,25} { \draw (-14.4*\x+104.4:5) node {$\bullet$} ; \draw (-14.4*\x+104.4:5.7) node {\x} ; } ; \end{tikzpicture} \caption{A collection of arcs of the icosikaipentagon corresponding to a non-rigid indecomposable object of $\mathcal{A}_{4,2}$.} \label{figure: non-rigid type A} \end{figure} \end{center} \begin{remark} For an example of an arc corresponding to an indecomposable object which is not rigid, see figure~\ref{figure: non-rigid type A}. \end{remark} Let $\mathcal{R}_{\mathcal{A}_{n,t}}$ be the full additive subcategory of $\mathcal{A}_{n,t}$ generated by the rigid objects. We will show in section \ref{section: comparisons} that this category (up to equivalence) only depends on $n$. Here we provide a first step towards that result. Recall that for an additive $\operatorname{Hom}$-finite Krull-Schmidt category $\mathcal{U}$, the quiver $\mathcal{Q}_{\mathcal{U}}$ of $\mathcal{U}$ has vertices corresponding to the isomorphism classes of indecomposable objects, and there are $\dim \operatorname{Irr}(X,Y)$ arrows from the vertex corresponding to $X$ to the vertex corresponding to $Y$, where $\operatorname{Irr}(X,Y)$ is the space of irreducible maps from $X$ to $Y$. \begin{proposition}\label{proposition: max rigid A} The quiver $\mathcal{Q}_{\mathcal{R}_{\mathcal{A}_{n,t}}}\!\!\!\,$ is isomorphic to the quiver $\mathcal{Q}_n$ depicted in figure~\ref{figure: quiver}. \end{proposition} \begin{proof} Consider the Auslander-Reiten quiver of $\mathcal{A}_{n,t}$ depicted in figure \ref{figure: Ant}. Clearly, the irreducible maps in $\mathcal{A}_{n,t}$ with source and target in $\mathcal{R}_{\mathcal{A}_{n,t}}$, are also irreducible in $\mathcal{R}_{\mathcal{A}_{n,t}}$. It is also straightforward to verify, by computations in the derived category $D^b(\mathbb{K} A_{(2t+1)(n+1)-3})$, that the map from $(1 \; n+2)$ to $(1 \; n+2)$ (and all shifts of this) is irreducible in $\mathcal{R}_{\mathcal{A}_{n,t}}$, and that there are no further irreducible maps in $\mathcal{R}_{\mathcal{A}_{n,t}}$. Hence the quiver $\mathcal{Q}_{\mathcal{R}_{\mathcal{A}_{n,t}}}$ is isomorphic to the quiver $\mathcal{Q}_n$ depicted in figure \ref{figure: quiver}. 
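Note that, under the parametrisation of Lemma~\ref{lemma: rigid A}, the vertices of $\mathcal{Q}_n$ carrying a loop are precisely those corresponding to the longest rigid arcs $[i\;(i+n+1)]$, $i = 1,\ldots,n+1$; these loops account for the loop $\alpha$, with $\alpha^2 = 0$, in the endomorphism algebra computed in Corollary~\ref{corollary: max rigid A} below.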
\end{proof} As a special case of the computations necessary for the proof of Proposition \ref{proposition: max rigid A} we also obtain the following. Note that the cluster tilting case $t=1$ of this fact can also be found in~\cite{bo}. \begin{corollary}\label{corollary: max rigid A} Let $n,t\in\mathbb{N}$ and let $T$ be the maximal rigid object of the orbit category $\db{A}{(2t+1)(n+1)-3}/\tau^{t(n+1)-1}[1]$ corresponding to the collection of arcs generated by $[1\;3],[1\;4],\ldots,[1\;n+2]$ (see Lemma~\ref{lemma: rigid A}). Then the endomorphism algebra of $T$ is given by the quiver \[ \xymatrix{ 1 \ar@{->}[r] & 2 \ar@{->}[r] & 3 \ar@{..}[r] & n-1 \ar@{->}[r] & n \ar@(ur,dr)^{\alpha} }, \] with ideal of relations generated by $\alpha^2$. \end{corollary} \begin{remark} See figure \ref{figure: icosikaipentagon} for the collection of arcs corresponding to the maximal rigid object in Corollary \ref{corollary: max rigid A}. \end{remark} \begin{center} \begin{figure} \begin{tikzpicture}[scale=0.8, fl/.style={->,shorten <=6pt, shorten >=6pt,>=latex}] \coordinate (13) at (0,0) ; \coordinate (14) at (1,1) ; \coordinate (15) at (2,2) ; \coordinate (1n+1) at (4,4) ; \coordinate (1n+2) at (5,5) ; \coordinate (24) at (2,0) ; \coordinate (25) at (3,1) ; \coordinate (2n+1) at (5,3) ; \coordinate (2n+2) at (6,4) ; \coordinate (2n+3) at (7,5) ; \coordinate (35) at (4,0) ; \coordinate (3n+3) at (8,4) ; \coordinate (13b) at (12,0) ; \coordinate (14b) at (13,1) ; \coordinate (1nb) at (15,3) ; \coordinate (1n+1b) at (16,4) ; \coordinate (1n+2b) at (17,5) ; \coordinate (nn+2) at (8,0) ; \coordinate (nn+3) at (9,1) ; \coordinate (nn+4) at (10,2) ; \coordinate (n2n+1) at (13,5) ; \coordinate (n+1n+3) at (10,0) ; \coordinate (n+1n+4) at (11,1) ; \coordinate (n+12n+1) at (14,4) ; \coordinate (n+12n+2) at (15,5) ; \coordinate (13b) at (12,0) ; \coordinate (14b) at (13,1) ; \coordinate (1nb) at (15,3) ; \coordinate (1n+1b) at (16,4) ; \coordinate (n+1n+5) at (12,2) ; \draw[fl] (nn+2) -- (nn+3) ; \draw[fl] (nn+3) -- (n+1n+3) ; \draw[fl] (nn+3) -- (nn+4) ; \draw[fl] (nn+4) -- (n+1n+4) ; \draw[fl] (n2n+1) -- (n+12n+1) ; \draw[fl] (13) -- (14) ; \draw[fl] (14) -- (15) ; \draw[fl] (14) -- (24) ; \draw[fl] (15) --(25) ; \draw[fl] (1n+1) --(1n+2) ; \draw[fl] (1n+1) --(2n+1) ; \draw[fl] (1n+2) --(2n+2) ; \draw[fl] (24) --(25) ; \draw[fl] (25) --(35) ; \draw[fl] (2n+1) --(2n+2) ; \draw[fl] (2n+2) --(2n+3) ; \draw[fl] (2n+3) --(3n+3); \draw[fl] (n+1n+3) --(n+1n+4) ; \draw[fl] (n+1n+4) --(13b) ; \draw[fl] (n+12n+1) --(n+12n+2) ; \draw[fl] (n+12n+2) --(1n+1b) ; \draw[fl] (13b) --(14b) ; \draw[fl] (1n+1b) --(1n+2b) ; \draw[fl] (n+12n+1) --(1nb) ; \draw[fl] (1nb) --(1n+1b) ; \draw[fl] (n+1n+4) --(n+1n+5) ; \draw[fl] (n+1n+5) --(14b) ; \draw (n2n+1) edge[out=125, in=65, loop, distance=1cm, fl] (n2n+1) ; \draw (1n+2) edge[out=125, in=65, loop, distance=1cm, fl] (1n+2) ; \draw (2n+3) edge[out=125, in=65, loop, distance=1cm, fl] (2n+3) ; \draw (n+12n+2) edge[out=125, in=65, loop, distance=1cm, fl] (n+12n+2) ; \draw (1n+2b) edge[out=125, in=65, loop, distance=1cm, fl] (1n+2b) ; \draw[loosely dotted, shorten <=6pt, shorten >=6pt] (15) --(1n+1) ; \draw[loosely dotted, shorten <=6pt, shorten >=6pt] (25) --(2n+2) ; \draw[loosely dotted] (5,1) --(7,1) ; \draw[loosely dotted] (9,4) --(11,4) ; \draw[loosely dotted, shorten <=6pt, shorten >=6pt] (n+1n+5) --(n+12n+1) ; \draw[loosely dotted, shorten <=6pt, shorten >=6pt] (14b) --(1nb) ; \draw[loosely dotted, shorten <=6pt, shorten >=6pt] (nn+4) --(n2n+1) ; \draw (13) node[scale=0.5] {13} ; 
\draw (14) node[scale=0.5] {14} ; \draw (15) node[scale=0.5] {15} ; \draw (24) node[scale=0.5] {24} ; \draw (25) node[scale=0.5] {25} ; \draw (1n+1) node[scale=0.5] {$1(n+1)$} ; \draw (1n+2) node[scale=0.5] {$1(n+2)$} ; \draw (2n+1) node[scale=0.5, fill=white] {$2(n+1)$} ; \draw (2n+2) node[scale=0.5] {$2(n+2)$} ; \draw (2n+3) node[scale=0.5] {$2(n+3)$} ; \draw (n+1n+3) node[scale=0.5] {$(n+1)(n+3)$} ; \draw (n+1n+4) node[scale=0.5] {$(n+1)(n+4)$} ; \draw (n+12n+1) node[scale=0.5] {$(n+1)(2n+1)$}; \draw (n+12n+2) node[scale=0.5] {$(n+1)(2n+2)$}; \draw (13b) node[scale=0.5] {13} ; \draw (14b) node[scale=0.5] {14} ; \draw (1nb) node[scale=0.5] {$1n$} ; \draw (1n+1b) node[scale=0.5] {$1(n+1)$} ; \draw (1n+2b) node[scale=0.5] {$1(n+2)$} ; \draw (nn+3) node[scale=0.5] {$n(n+3)$} ; \draw (nn+4) node[scale=0.5] {$n(n+4)$} ; \draw (nn+2) node[scale=0.5] {$n(n+2)$}; \draw (n2n+1) node[scale=0.5] {$n(2n+1)$}; \draw (n+1n+5) node[scale=0.5] {$(n+1)(n+5)$}; \end{tikzpicture} \caption{The quiver $\mathcal{Q}_n$} \label{figure: quiver} \end{figure} \end{center} \subsubsection{Type $\rm{D}$}\label{subsubsection: max rigid D} Let $n,t\geq 1$ and let $P_{n,t}$ be a once-punctured $2t(n+1)$-gon. We denote by $\rho$ the automorphism on the tagged arcs (see \cite{s,fst}) obtained by rotating by $\frac{\pi}{t}$ and switching tags, as in figure~\ref{figure: non-rigid type D}. Recall that $\mathcal{D}_{n,t}$ is the orbit category $\db{D}{2t(n+1)}/\tau^{n+1}\varphi^n$. \begin{landscape} \begin{center} \begin{figure} \begin{tikzpicture}[scale=0.80, fl/.style={->,shorten <=6pt, shorten >=6pt,>=latex}, fli/.style={dotted,->,shorten <=8pt, shorten >=8pt,>=space}] \newcommand\cs{0.37} \foreach \x in {0,1,...,4} { \foreach \y in {3,4,...,6} { \newcount\r ; \pgfmathsetcount{\r}{\x +\x + \y} ; \pgfmathparse{int(\x+\y)}\let\z\pgfmathresult ; \pgfmathparse{int(\x +1)}\let\h\pgfmathresult ; \draw (\r -3 ,\y-3) node[scale=\cs] {\h$\;$\z}; }; }; \foreach \x in {0,1,...,5} { \draw[fl] (2*\x+1, 1) -- (2*\x + 2,2) ; \draw[fl] (2*\x, 2) -- (2*\x +1,1) ; \foreach \y in {0,1,...,3} { \draw[fl] (2*\x,2*\y) -- (2*\x + 1,2*\y+1) ; \draw[fl] (2*\x +1,2*\y +1) -- (2*\x+2,2*\y) ; }; }; \draw (13, 1) node[scale=0.5] {$\cdots$}; \draw (13, 2) node[scale=0.5] {$\cdots$}; \foreach \x in {0,1,...,3} { \draw[fl] (2*\x+16, 1) -- (2*\x + 17,2) ; \draw[fl] (2*\x +15, 2) -- (2*\x +16,1) ; \foreach \y in {0,1} { \draw[fl] (2*\x +15,2*\y) -- (2*\x + 16,2*\y+1) ; \draw[fl] (2*\x +16,2*\y +1) -- (2*\x+17,2*\y) ; }; }; \foreach \x in {0,2,...,8,15,17,19,21,23} { \draw (\x , 10) node[scale=\cs] {$\bullet$}; }; \foreach \x in {1,3,...,9,16,18,20,22} { \draw (\x , 10.2) node[scale=\cs] {$\bullet$}; }; \foreach \x in {1,3,...,9,16,18,20,22} { \draw (\x , 11) node[scale=\cs] {$\bullet$}; }; \foreach \x in {1,3,...,7,16,18,20,22} { \draw (\x , 9) node[scale=\cs] {$\bullet$}; }; \foreach \x in {0,2,...,6,15,17,19,21,23} { \draw (\x , 8) node[scale=\cs] {$\bullet$}; }; \draw (0 , 2) node[scale=\cs] {n+1$\;$n+5}; \draw (1 , 3) node[scale=\cs] {n+1$\;$n+6}; \draw (10 , 0) node[scale=\cs] {6$\;$8}; \draw (12 , 0) node[scale=\cs] {7$\;$9}; \draw (11 , 1) node[scale=\cs] {6$\;$9}; \draw (12 , 2) node[scale=\cs] {6$\;$10}; \draw (15 , 0) node[scale=\cs] {n+1$\;$n+3}; \draw (17 , 0) node[scale=\cs] {1$\;$3}; \draw (19 , 0) node[scale=\cs] {2$\;$4}; \draw (21 , 0) node[scale=\cs] {3$\;$5}; \draw (23 , 0) node[scale=\cs] {4$\;$6}; \draw (16 , 1) node[scale=\cs] {n+1$\;$n+4}; \draw (18 , 1) node[scale=\cs] {1$\;$4}; \draw (20, 1) node[scale=\cs] {2$\;$5}; \draw (22 , 1) 
node[scale=\cs] {3$\;$6}; \draw (15 , 2) node[scale=\cs] {n$\;$n+4}; \draw (17 , 2) node[scale=\cs] {n+1$\;$n+5}; \draw (19 , 2) node[scale=\cs] {1$\;$5}; \draw (21 , 2) node[scale=\cs] {2$\;$6}; \draw (23 , 2) node[scale=\cs] {3$\;$7}; \draw (16 , 3) node[scale=\cs] {n$\;$n+5}; \draw (18 , 3) node[scale=\cs] {n+1$\;$n+6}; \draw (20 , 3) node[scale=\cs] {1$\;$6}; \draw (22 , 3) node[scale=\cs] {2$\;$7}; \draw (0 , 4) node[scale=\cs] {n$\;$2n}; \draw (2 , 4) node[scale=\cs] {n+1$\;$2n+1}; \draw (4 , 4) node[scale=\cs] {1$\;$n+1}; \draw (6 , 4) node[scale=\cs] {2$\;$n+2}; \draw (8 , 4) node[scale=\cs] {3$\;$n+3}; \draw (10 , 4) node[scale=\cs] {4$\;$n+4}; \draw (12 , 4) node[scale=\cs] {5$\;$n+5}; \draw (1 , 5) node[scale=\cs] {n$\;$2n+1}; \draw (3 , 5) node[scale=\cs] {n+1$\;$2n+2}; \draw (5 , 5) node[scale=\cs] {1$\;$n+2}; \draw (7 , 5) node[scale=\cs] {2$\;$n+3}; \draw (9 , 5) node[scale=\cs] {3$\;$n+4}; \draw (11 , 5) node[scale=\cs] {4$\;$n+5}; \draw (0 , 6) node[scale=\cs] {n-1$\;$2n+1}; \draw (2 , 6) node[scale=\cs] {n$\;$2n+2}; \draw (4 , 6) node[scale=\cs] {n+1$\;$2n+3}; \draw (6 , 6) node[scale=\cs] {1$\;$n+3}; \draw (8 , 6) node[scale=\cs] {2$\;$n+4}; \draw (10 , 6) node[scale=\cs] {3$\;$n+5}; \draw (12 , 6) node[scale=\cs] {4$\;$n+6}; \draw (1 , 7) node[scale=\cs] {n-1$\;$2n+2}; \draw (3 , 7) node[scale=\cs] {n$\;$2n+3}; \draw (5 , 7) node[scale=\cs] {n+1$\;$2n+4}; \draw (7 , 7) node[scale=\cs] {1$\;$n+4}; \draw (9 , 7) node[scale=\cs] {2$\;$n+5}; \draw (11 , 7) node[scale=\cs] {3$\;$n+6}; \draw (15 , 4) node[scale=\cs] {n-1$\;$2n-1}; \draw (17 , 4) node[scale=\cs] {n$\;$2n}; \draw (19 , 4) node[scale=\cs] {n+1$\;$2n+1}; \draw (21, 4) node[scale=\cs] {1$\;$n+1}; \draw (23, 4) node[scale=\cs] {2$\;$n+2}; \draw (16 , 5) node[scale=\cs] {n-1$\;$2n}; \draw (18 , 5) node[scale=\cs] {n$\;$2n+1}; \draw (20 , 5) node[scale=\cs] {n+1$\;$2n+2}; \draw (22 , 5) node[scale=\cs] {1$\;$n+2}; \draw (15 , 6) node[scale=\cs] {n-2$\;$2n}; \draw (17 , 6) node[scale=\cs] {n-1$\;$2n+1}; \draw (19 , 6) node[scale=\cs] {n$\;$2n+2}; \draw (21 , 6) node[scale=\cs] {n+1$\;$2n+3}; \draw (23 , 6) node[scale=\cs] {1$\;$n+3}; \draw (16 , 7) node[scale=\cs] {n-2$\;$2n+1}; \draw (18 , 7) node[scale=\cs] {n-1$\;$2n+2}; \draw (20, 7) node[scale=\cs] {n$\;$2n+3}; \draw (22 , 7) node[scale=\cs] {n+1$\;$2n+4}; \draw (8 , 8) node[scale=\cs] {1$\;$u-2}; \draw (9 , 9) node[scale=\cs] {1$\;$u-1}; \draw (10 , 10) node[scale=\cs] {1$\;$u}; \draw (11 , 10.2) node[scale=\cs] {1$\;$u+1}; \draw (11 , 11) node[scale=\cs] {1$\;$u+2}; \draw (10 , 8) node[scale=\cs] {2$\;$u-1}; \draw (11 , 9) node[scale=\cs] {2$\;$u}; \draw (12 , 10) node[scale=\cs] {2$\;$u+1}; \draw (12 , 8) node[scale=\cs] {3$\;$u}; \foreach \x in {0,1,...,5} { \draw[fl] (2*\x+1, 5) -- (2*\x + 2,6) ; \draw[fl] (2*\x, 6) -- (2*\x +1,5) ; \foreach \y in {2,3} { \draw[fl] (2*\x,2*\y) -- (2*\x + 1,2*\y+1) ; \draw[fl] (2*\x +1,2*\y +1) -- (2*\x+2,2*\y) ; }; }; \draw (13, 5) node[scale=0.5] {$\cdots$}; \draw (13, 6) node[scale=0.5] {$\cdots$}; \foreach \x in {0,1,...,3} { \draw[fl] (2*\x+16, 5) -- (2*\x + 17,6) ; \draw[fl] (2*\x +15, 6) -- (2*\x +16,5) ; \foreach \y in {2,3} { \draw[fl] (2*\x +15,2*\y) -- (2*\x + 16,2*\y+1) ; \draw[fl] (2*\x +16,2*\y +1) -- (2*\x+17,2*\y) ; }; }; \foreach \x in {0,1,...,5} { \draw[fl] (2*\x+1, 9) -- (2*\x + 2,10) ; \draw[fl] (2*\x, 10) -- (2*\x +1,9) ; \foreach \y in {4,5} { \draw[fl] (2*\x,2*\y) -- (2*\x + 1,2*\y+1) ; \draw[fl] (2*\x +1,2*\y +1) -- (2*\x+2,2*\y) ; }; }; \draw (13, 9) node[scale=0.5] {$\cdots$}; \draw (13, 10) 
node[scale=0.5] {$\cdots$}; \foreach \x in {0,1,...,3} { \draw[fl] (2*\x+16, 9) -- (2*\x + 17,10) ; \draw[fl] (2*\x +15, 10) -- (2*\x +16,9) ; \foreach \y in {4,5} { \draw[fl] (2*\x +15,2*\y) -- (2*\x + 16,2*\y+1) ; \draw[fl] (2*\x +16,2*\y +1) -- (2*\x+17,2*\y) ; }; }; \foreach \x in {0,1,...,5} { \draw[fl] (2*\x,10) -- (2*\x + 1,10.2) ; \draw[fl] (2*\x+1,10.2) -- (2*\x + 2,10) ; }; \foreach \x in {0,1,...,3} { \draw[fl] (2*\x+15,10) -- (2*\x + 16,10.2) ; \draw[fl] (2*\x+16,10.2) -- (2*\x + 17,10) ; }; \foreach \x in {1,2,...,7} { \foreach \y in {1,2} { \draw[fli] (2*\x -1, 4*\y-1) -- (2*\x, 4*\y) ; }; }; \foreach \x in {8,9,...,11} { \foreach \y in {1,2} { \draw[fli] (2*\x, 4*\y-1) -- (2*\x+1, 4*\y) ; }; }; \draw[thick, dashed, blue] (-1.5,-0.5) -- ++(16.7,0) -- ++(6,6) -- ++(-16.7,0) -- cycle ; \end{tikzpicture} \caption{The Auslander-Reiten quiver of $\mathcal{D}_{n,t}$. The objects in $\mathcal{R}_{\mathcal{D}_{n,t}}$ are in the area inside the dashed blue lines. Here $u=2t(n+1)$.} \label{figure: arcs type D} \end{figure} \end{center} \end{landscape} \begin{lemma}\label{lemma: rigid D} \begin{enumerate} \item There is a bijection between isomorphism classes of basic objects in $\mathcal{D}_{n,t}$ and collections of arcs of $P_{n,t}$ which are stable under $\rho$. Such a bijection is illustrated in figure~\ref{figure: arcs type D}. \item Under the above bijection, rigid objects correspond to non-crossing collections of arcs. In particular: \begin{enumerate} \item The isomorphism classes of indecomposable \sloppy rigid objects in $\mathcal{D}_{n,t}$ are parametrised by the arcs $[i\;(i+2)],\ldots,[i\;(i+n+1)]$ for $i=1,\ldots,n+1$. \item The maximal non-crossing collections which are stable under $\rho$ correspond to (isoclasses of) basic maximal rigid objects. \end{enumerate} \end{enumerate} \end{lemma} \begin{center} \begin{figure} \begin{tikzpicture}[scale=0.7] \draw[very thick] (0,0) circle (3) ; \foreach \x in {1,2,...,12} { \draw (-30*\x+120:3) node {$\bullet$} ; \draw (-30*\x+120:3.6) node {\x} ; } ; \draw (0,0) node {$\bullet$} ; \draw (0,-0.5) node {$0$} ; \draw[thick] (90:3) -- (0,0) node[midway, left, scale=0.85] {$\alpha$} ; \draw[thick] (60:3) -- (0,0) node[midway, right, scale=0.85] {$\tau\alpha$} node[near end, sloped, rotate=90, scale=0.75] {$\bowtie$} ; \begin{scope}[xshift=10cm] \draw[very thick] (0,0) circle (3) ; \foreach \a in {1,9,17} \draw[thick, color=black!50] (-15*\a+90:3) -- (0,0) ; \foreach \b in {5,13,21} \draw[thick, color=black!50] (-15*\b+90:3) -- (0,0) node[near end, sloped, rotate=90, scale=0.8] {$\bowtie$} ; \draw (0,0) node {$\bullet$} ; \foreach \x in {1,2,...,24} { \draw (-15*\x+105:3) node {$\bullet$} ; \draw (-15*\x+105:3.4) node[scale=0.8] {\x} ; } ; \end{scope} \end{tikzpicture} \caption{Action of $\tau$ on a tagged arc (left) and a non-rigid indecomposable object of $\mathcal{D}_{3,3}$ (right).} \label{figure: non-rigid type D} \end{figure} \end{center} \begin{proof} \sloppy The proof is similar to that of Lemma~\ref{lemma: rigid A}. There is a $2t$-covering functor from the cluster category $\mathcal{C}_{{\rm D}_{2t(n+1)}}$ to the triangulated orbit category $\mathcal{D}_{n,t} = \db{D}{2t(n+1)}/\tau^{n+1}\varphi^n$. We note that $\varphi$ acts on arcs by switching tags and that $\tau$ acts on arcs $[i\; 0]$ with an endpoint at the puncture 0 by sending it to $[i+1\; 0]$ and by switching tags.
Therefore an arc with an endpoint at the puncture corresponds to a non-rigid indecomposable object in $\mathcal{D}_{n,t}$, and the rest of the proof is similar to that in type $\rm{A}$ above. \end{proof} Consider now the full additive subcategory $\mathcal{R}_{\mathcal{D}_{n,t}}$ generated by the rigid objects in $\mathcal{D}_{n,t}$. We will show, in Section \ref{section: comparisons}, that $\mathcal{R}_{\mathcal{D}_{n,t}}$ is equivalent to $\mathcal{R}_{\mathcal{A}_{n,t}}$. For this, we will need the following. \begin{proposition}\label{proposition: max rigid D} The quiver $\mathcal{Q}_{\mathcal{R}_{\mathcal{D}_{n,t}}}$ is isomorphic to the quiver $\mathcal{Q}_n$ depicted in figure~\ref{figure: quiver}. \end{proposition} \begin{proof} Consider the Auslander-Reiten quiver of $\mathcal{D}_{n,t}$ depicted in figure \ref{figure: arcs type D}. Clearly, the irreducible maps in $\mathcal{D}_{n,t}$ with source and target in $\mathcal{R}_{\mathcal{D}_{n,t}}$ are also irreducible in $\mathcal{R}_{\mathcal{D}_{n,t}}$. To proceed, we will need some basic facts about Hom-hammocks in the derived category $\db{D}{N}$, for $N$ even. First note that, in the derived category $\db{D}{N}$, we have $\tau^{-N+1} = [1]$. Thus $\tau^{-N+2} = \tau [1]$ is a Serre functor in $\db{D}{N}$: For any $X,Y\in\db{D}{N}$, there are bi-natural isomorphisms $\operatorname{Hom}_{\db{D}{N}}(X,Y) \simeq D\operatorname{Hom}_{\db{D}{N}}(Y,\tau^{-N+2}X)$. In particular, the Hom-hammock of any object $X$ ends in $\tau^{-N+2}X$ and is symmetric with respect to the vertical line (the blue line in figure~\ref{figure: Hom-hammocks}) going through $\tau^{-\frac{N}{2}+1}X$. Without any computations, we thus obtain that the Hom-hammocks have the shape given in figure~\ref{figure: Hom-hammocks}, where a part of the Hom-hammock of the indecomposable object denoted by $j$ is described. The left-hand side of the figure is easily computed, since all meshes involved are commutative squares. The rectangle on the left-hand side indicates some indecomposable objects $X$ such that $\dim \operatorname{Hom} (j,X) = 1$. Outside this rectangle, to its left and to its right, the zeros indicate that all morphisms from $j$ to the indecomposable objects in these regions are zero morphisms. The star indicates a part of the Hom-hammock that we do not compute. The right-hand side of the figure is deduced from the left-hand side by symmetry. We have indicated some specific indecomposable objects in the figure. They are related by the following equalities: $u = \tau^{-j+1}(1)$, $x = \tau^{-j+1}(N-j-1)$, $a=\tau^{-1}(x) = \tau^{-j}(N-j-1)$, $b = \tau^{-N+2}(1)$, $c=\tau^{-N+2}(j)$ and $y = \tau^{-N+n+1}(n)$. Using these Hom-hammocks, it is easy to verify that there is a non-zero map from $n$ to $y$, which becomes an irreducible endomorphism in the category $\mathcal{R}_{\mathcal{D}_{n,t}}$. For this, note that there is a one-dimensional subspace of morphisms from $n$ to $y = \left(\tau^{n+1}\right)^{1-2t}(n)$ (factoring through $N-1$) which do not factor through any indecomposable object in the $\tau$-orbit of $1,\ldots,n-1$. The same will obviously hold for the shifts of this map. We claim that there are no other irreducible maps in $\mathcal{R}_{\mathcal{D}_{n,t}}$. This can be checked using the Hom-hammocks of figure~\ref{figure: Hom-hammocks}. We leave the details to the reader, but point out the following useful fact.
Note that the only indecomposable objects in the rectangles of figure~\ref{figure: Hom-hammocks} that belong to the $\tau^{n+1}$-orbit of $1,2,\ldots,n$ are $1,2,\ldots,n$ and $y$. We claim that any morphism from some $j$, with $1\leq j\leq n$, to $y$ factors through $n$. This holds since $\dim\operatorname{Hom}_{\db{D}{N}}(j,y) = 1$ and the composition $1 \rightarrow 2 \rightarrow \cdots \rightarrow n \rightarrow y$ is non-zero (as can be seen from the case $j=1$ in figure~\ref{figure: Hom-hammocks}). Hence the quiver $\mathcal{Q}_{\mathcal{R}_{\mathcal{D}_{n,t}}}$ is isomorphic to the quiver $\mathcal{Q}_n$ depicted in figure \ref{figure: quiver}. \end{proof} As for type $A$, we obtain the following as a special case of the computations necessary for the proof of Proposition \ref{proposition: max rigid D}. \begin{center} \begin{figure} \begin{tikzpicture}[scale=0.4, vertex/.style={fill=white, scale=0.85}] \begin{scope}[y=-1cm] \coordinate (1) at (0,0) ; \coordinate (2) at (1,-1) ; \coordinate (j) at (4,-4) ; \coordinate (n) at (6,-6) ; \coordinate (N-2) at (12,-12) ; \coordinate (N-1) at (13,-13) ; \coordinate (N) at (13,-12) ; \coordinate (tau1) at (10,0) ; \coordinate (x) at (16,-8) ; \coordinate (y) at (20,-6) ; \coordinate (u) at (8,0) ; \coordinate (a) at (18,-8) ; \coordinate (b) at (26,0) ; \coordinate (c) at (30,-4) ; \coordinate (d) at (22,-12) ; \draw (1) --(2) --(j) ; \begin{scope} [rotate=45] \draw (j) rectangle (x) ; \end{scope} \draw (N-2) --(N-1) ; \draw (N-2) --(N) ; \draw (N) --(14,-12) ; \draw (N-1) --(14,-12) ; \draw (13,-11) --(14,-12) ; \draw[loosely dashed] (14,-12) --(a) ; \begin{scope} [rotate=45] \draw (a) rectangle (c) ; \end{scope} \draw (20,-12) --(d) ; \draw (20,-12) --(21,-11) ; \draw (20,-12) --(21,-13) ; \draw (21,-13) --(d) ; \draw[loosely dotted] (j) --(c) ; \draw[loosely dotted] (n) --(y) ; \draw[blue] (17,0) -- (17,-14) ; \draw (1) node {$\bullet$} node[above left] {$\scriptstyle{1}$} ; \draw (2) node {$\bullet$} node[above left] {$\scriptstyle{2}$} ; \draw (j) node[vertex] {$j$} ; \draw (n) node[vertex] {$n$} ; \draw (N-2) node {$\bullet$} node[above left] {$\scriptstyle{N-2}$} ; \draw (N-1) node {$\bullet$} node[above left] {$\scriptstyle{N-1}$} ; \draw (x) node[vertex] {$x$} ; \draw (u) node[vertex] {$u$} ; \draw (N) node {$\bullet$} ; \draw (13,-11) node {$\bullet$} ; \draw (14,-12) node {$\bullet$} ; \draw (a) node[vertex] {$a$} ; \draw (b) node[vertex] {$b$} ; \draw (c) node[vertex] {$c$} ; \draw (d) node {$\bullet$} ; \draw (20,-12) node {$\bullet$} ; \draw (21,-11) node {$\bullet$} ; \draw (21,-13) node {$\bullet$} ; \draw (21,-12) node {$\bullet$} ; \draw (y) node[vertex] {y} ; \draw (4,-1) node {\huge{0}} ; \draw (4,-9) node {\huge{0}} ; \draw (10,-6) node {\huge{1}} ; \draw (14,-3) node {\huge{0}} ; \draw (15.5,-12) node[scale=2] {$\ast$} ; \draw (18.5,-12) node[scale=2] {$\ast$} ; \draw (20,-3) node {\huge{0}} ; \draw (24,-6) node {\huge{1}} ; \draw (30,-1) node {\huge{0}} ; \draw (30,-9) node {\huge{0}} ; \end{scope} \end{tikzpicture} \caption{Hom-hammocks in the derived category $\db{D}{N}$, $N$ even.} \label{figure: Hom-hammocks} \end{figure} \end{center} \begin{corollary}\label{endo:D} Let $n,t\in\mathbb{N}$ and let $T$ be the maximal rigid object of the orbit category $\db{D}{2t(n+1)}/\tau^{n+1}\varphi^n$ corresponding to the collection of arcs generated by $[1\;3],[1\;4],\ldots,[1\;(n+2)]$ (see Lemma~\ref{lemma: rigid D}).
Then the endomorphism algebra of $T$ is given by the quiver \[ \xymatrix{ 1 \ar@{->}[r] & 2 \ar@{->}[r] & 3 \ar@{..}[r] & n-1 \ar@{->}[r] & n \ar@(ur,dr)^{\alpha} }, \] with ideal of relations generated by $\alpha^2$. \end{corollary} \begin{proof} The computation of the Gabriel quiver is essentially included in the proof of Proposition \ref{proposition: max rigid D}. It is easy to verify that the only relation is $\alpha^2$. \end{proof} Let $\Lambda_n$ denote the algebra appearing in Corollaries \ref{corollary: max rigid A} and \ref{endo:D}. We will need some properties of the module category $\operatorname{mod} \Lambda_n$. Recall that a module $M$ is called $\tau$-rigid if $\operatorname{Hom}(M, \tau M) =0$, see \cite{air}. Now let $\mathcal{R}_{n}$ denote the full additive subcategory generated by the indecomposable $\tau$-rigid modules in $\operatorname{mod} \Lambda_n$. It follows from Proposition \ref{proposition: list CTO}, with $t=1$, that in particular $\Lambda_n$ is a 2-CY-tilted algebra, and so by \cite{air}, a module is $\tau$-rigid if and only if it is of the form $\operatorname{Hom}_{\mathcal C}(T,X)$, where $X$ is a rigid object in $\mathcal C= \db{A}{3n}/\tau^n[1]$. It is easy to check that the quiver $\mathcal{Q}_{\mathcal{R}_n}$ can be obtained by deleting the vertices labeled by $(n+1) (n+3), \dots , (n+1) (2n+2)$ in the quiver $\mathcal{Q}_n$ of figure \ref{figure: quiver}. \subsubsection{Type $\rm{E}$}\label{subsubsection: max rigid E} In this section we investigate the rigid (and maximal rigid) objects in the orbit categories $\db{E}{7}/\tau^2$ and $\db{E}{7}/\tau^5$, appearing in Proposition~\ref{proposition: list max rigid}. There is also geometric machinery available in type $\rm{E}$, see \cite{la}; however, our description instead relies on simple brute-force computations, and we leave out almost all details. For $\db{E}{7}/\tau^5$, the Auslander-Reiten quiver is given in figure \ref{figure: e7/5}. There are 5 indecomposable rigid objects, all in the bottom $\tau$-orbit in the figure. Let $x$ be any of these five. Then $x \oplus \tau^2 x$ is maximal rigid, and all maximal rigid objects are obtained this way. In particular, they all have the same endomorphism ring. \begin{proposition}\label{e7-alg} The endomorphism algebra of any maximal rigid object in the orbit category $\db{E}{7}/\tau^5$ is given by the quiver \[\xymatrix{\bullet \ar@(ul,dl)_{\alpha} \ar[r]^{\beta} & \bullet \ar@(ur,dr)^{\gamma}},\] with ideal of relations generated by $\beta \alpha - \gamma \beta$, $\alpha^2$, $\gamma^2$. \end{proposition} \begin{remark} This latter 2-endorigid algebra is shown not to be $2$-CY-tilted in section~\ref{subsection: not 2CY-tilted}.
\end{remark} \begin{center} \begin{figure} \begin{tikzpicture}[scale=0.65, fl/.style={->,shorten <=6pt, shorten >=6pt,>=latex}] \foreach \x in {0,1,...,9} { \draw (2*\x + 1, 2.2) node[scale=0.5] {$\bullet$}; \draw[fl] (2*\x, 2) -- (2*\x + 1,2.2) ; \draw[fl] (2*\x+1, 2.2) -- (2*\x + 2,2) ; \foreach \y in {0,1,2} { \draw (2*\x +1, 2*\y +1) node[scale=0.5] {$\bullet$}; \draw[fl] (2*\x,2*\y) -- (2*\x + 1,2*\y+1) ; \draw[fl] (2*\x +1,2*\y +1) -- (2*\x+2,2*\y) ; }; }; \foreach \x in {0,1,...,9} { \foreach \y in {0,1} { \draw (2*\x, 2*\y+ 2) node[scale=0.5] {$\bullet$}; \draw[fl] (2*\x,2*\y+2) -- (2*\x + 1,2*\y+1) ; \draw[fl] (2*\x+1,2*\y+1) -- (2*\x + 2,2*\y+2) ; }; }; \foreach \x in {0,1} { \draw (10*\x, 0) node[scale=0.7] {$a$}; \draw (10*\x+2, 0) node[scale=0.7] {$b$}; \draw (10*\x+4, 0) node[scale=0.7] {$c$}; \draw (10*\x+6, 0) node[scale=0.7] {$d$}; \draw (10*\x+8, 0) node[scale=0.7] {$e$}; }; \begin{scope}[xshift=0.0cm, yshift=0.0cm, rotate=0, xscale=-1] \draw[thick, dashed, blue] (0.6,-0.3) -- ++ (-8.7,0) -- ++(-3.5, 2.8) -- ++ (-2.2, 2.9) -- ++(8.7,0) -- cycle ; \end{scope} \end{tikzpicture} \caption{The orbit category $\db{E}{7}/\tau^5$.} \label{figure: e7/5} \end{figure} \end{center} Let us now consider $\db{E}{7}/\tau^2$. Its Auslander-Reiten quiver is given in figure \ref{figure: e7/2}. There are only two indecomposable rigid objects, both in the top $\tau$-orbit in the figure. \begin{center} \begin{figure} \begin{tikzpicture}[scale=0.65, fl/.style={->,shorten <=6pt, shorten >=6pt,>=latex}] \foreach \x in {0,1,...,9} { \draw (2*\x + 1, 2.2) node[scale=0.5] {$\bullet$}; \draw[fl] (2*\x, 2) -- (2*\x + 1,2.2) ; \draw[fl] (2*\x+1, 2.2) -- (2*\x + 2,2) ; \foreach \y in {0,1,2} { \draw[fl] (2*\x,2*\y) -- (2*\x + 1,2*\y+1) ; \draw[fl] (2*\x +1,2*\y +1) -- (2*\x+2,2*\y) ; }; }; \foreach \x in {0,1,...,9} { \foreach \y in {0,1} { \draw (2*\x +1, 2*\y +1) node[scale=0.5] {$\bullet$}; }; }; \foreach \x in {0,1,...,9} { \foreach \y in {0,1} { \draw (2*\x, 2*\y+ 2) node[scale=0.5] {$\bullet$}; \draw[fl] (2*\x,2*\y+2) -- (2*\x + 1,2*\y+1) ; \draw[fl] (2*\x+1,2*\y+1) -- (2*\x + 2,2*\y+2) ; }; }; \foreach \x in {0,1,...,4} { \draw (4*\x+1, 5) node[scale=0.7] {$a$}; \draw (4*\x+3, 5) node[scale=0.7] {$b$}; }; \foreach \x in {0,1,...,4} { \draw (4*\x, 0) node[scale=0.5] {$\bullet$}; \draw (4*\x+2, 0) node[scale=0.5] {$\bullet$}; }; \begin{scope}[xshift=0.0cm, yshift=0.0cm, rotate=0, xscale=-1] \draw[thick, dashed, blue] (0.6,-0.3) -- ++ (-2.7,0) -- ++(-3.5, 2.8) -- ++ (-2.2, 2.9) -- ++(2.9,0) -- cycle ; \end{scope} \begin{scope}[xshift=4.05cm, yshift=0.0cm, rotate=0, xscale=-1] \draw[thick, dashed, blue] (0.6,-0.3) -- ++ (-2.7,0) -- ++(-3.5, 2.8) -- ++ (-2.2, 2.9) -- ++(2.9,0) -- cycle ; \end{scope} \end{tikzpicture} \caption{The orbit category $\db{E}{7}/\tau^2$.} \label{figure: e7/2} \end{figure} \end{center} Now the full subcategory $\mathcal{R}_{\db{E}{7}/\tau^2}$ generated by the rigids only contains two indecomposable objects with no maps between them. In particular, we have the following. \begin{proposition}\label{prop:e7/2} Any maximal rigid object in the orbit category $\db{E}{7}/\tau ^2$ is indecomposable and its endomorphism algebra is given by a loop $\alpha$ with relation $\alpha^3$. \end{proposition} We can compare this to the case $\db{D}{4}/\tau \varphi$, which appears in Proposition \ref{proposition: list CTO}. 
The AR-quiver of $\db{D}{4}/\tau \varphi$ is given by \[\xymatrix@-1pc{ & b \ar@{->}[dr] & & c \ar@{->}[dr] & & b \\ a \ar@{->}[r]\ar@{->}[ur]\ar@{->}[dr] & c \ar@{->}[r] & \Sigma a \ar@{->}[r]\ar@{->}[ur]\ar@{->}[dr] & \Sigma b \ar@{->}[r] & a \ar@{->}[r]\ar@{->}[ur]\ar@{->}[dr] & c \\ & d \ar@{->}[ur] & & d \ar@{->}[ur] & & d }\] and the only indecomposable rigid objects are $b$ and $c$. So we also have that $\mathcal{R}_{\db{D}{4}/\tau\varphi}$ contains exactly two indecomposable objects, with no maps between them. Moreover, it is easily verified that each of these indecomposables is maximal rigid, and that the endomorphism rings are the same as in Proposition \ref{prop:e7/2}. \subsection{Tables}\label{subsection: tables} In a first table, we summarize some known results on orbit categories with cluster tilting objects, which can be found in \cite{a,bikr,bo}. A second table summarizes results from \cite{a,bikr} and from the current section. For each orbit category, we give the number of isomorphism classes of indecomposable objects, the number of summands of any basic maximal rigid object (or equivalently, the rank of the Grothendieck group of its endomorphism algebra), the number of isomorphism classes of indecomposable rigid objects, and the quiver with relations of the endomorphism algebra of some maximal rigid object. Recall that $\varphi$ denotes an automorphism of the derived category of type ${\rm D}$ induced by an automorphism of order two of a Dynkin diagram of type ${\rm D}$. \begin{remark} In the second row of Table 1, the following conventions are used: \begin{itemize} \item If $n=1$, then $a=0$ and $b=0$; \item if $k=2$, then there is no loop $\alpha$, and in the relations, $\alpha$ should be replaced by $ab$. \end{itemize} \end{remark} \begin{remark} Let $\mathcal{C}$ be the orbit category appearing in the last row of the first table. Because of the shape of the quiver in the last column, one might be tempted to think that $\mathcal{C}$ should categorify a cluster algebra of type $\rm{F}_4$. However, $\mathcal{C}$ has only 24 indecomposable rigid objects, while there are 28 almost positive roots in type $\rm{F}_4$. \end{remark} \begin{landscape}\thispagestyle{empty} \begin{figure} \begin{tabular}{|c|c|c|c|c|c|} \multicolumn{6}{c}{Table 1: Orbit categories with cluster tilting objects, which are not acyclic cluster categories.}\\ \multicolumn{6}{c}{}\\ \hline \text{Orbit category} & \text{Indecomposables} & \text{Rank} & \text{Indec.
rigids} & \text{Quiver} & \text{Relations} \\ \hline $\db{A}{3n}/\tau^n[1] ^{\phantom{\text{\huge{A}}}} _{\phantom{\text{\huge{A}}}}$ & $\frac{3n(n+1)}{2}$ & $n$ & $n(n+1)$ & $\xymatrix@C=1em@R=1em{ 1 \ar@{->}[r] & 2 \ar@{->}[r] & 3 \ar@{..}[r] & n-1 \ar@{->}[r] & n \ar@(ur,dr)^{\alpha} }$ & $\alpha^2$ \\ \hline $\db{D}{kn}/\tau^n\varphi^n$, $kn\geq 4$, $k>1^{\phantom{\text{\huge{A}}}}$ & $kn^2$ & $n$ & $n(n+1)$ & $\xymatrix@C=1em@R=1em{ 1 \ar@{->}[r] & 2 \ar@{->}[r] & 3 \ar@{..}[r] & n-1 \ar@{->}[r]^a & n \ar@/^/[l]^b \ar@(ur,dr)^{\alpha}}$ & $\alpha^{k-1}-ab$, $\alpha a$, $b\alpha$ \\ \hline $\db{E}{8}/\tau^4 \phantom{\text{\huge{A}}} _{\phantom{\text{\huge{A}}}}$ & 32 & 2 & 8 & $\xymatrix@C=1em@R=1em{ 1 \ar@{->}[r] & 2 \ar@(ur,dr)^{\alpha}}$ & $\alpha^3$ \\ \hline $\db{E}{8}/\tau^8 \phantom{\text{\huge{A}}} _{\phantom{\text{\huge{A}}}}$ & 64 & 4 & 24 & $\xymatrix@C=1em@R=1em{ 1 \ar@{->}[r] & 2 \ar@{->}[r]^a & 3 \ar@/^/[l]^b \ar@{->}[r] & 4}$ & $aba$, $bab$ \\ \hline \end{tabular} \vspace{2cm} \begin{tabular}{|c|c|c|c|c|c|} \multicolumn{6}{c}{Table 2: Orbit categories with non-cluster tilting, maximal rigid objects.}\\ \multicolumn{6}{c}{}\\ \hline \text{Orbit category} & \text{Indecomposables} & \text{Rank} & \text{Indec. rigids} & \text{Quiver} & \text{Relations} \\ \hline $\db{A}{(2t+1)(n+1)-3}/\tau^{t(n+1)-1}[1]\phantom{^\text{\huge{A}}}$ & $\scriptstyle{\frac{1}{2}[(2t+1)(n+1)-3](n+1)}$ & $n$ & $n(n+1)$ & $\xymatrix@C=1em@R=1em{ 1 \ar@{->}[r] & 2 \ar@{->}[r] & 3 \ar@{..}[r] & n-1 \ar@{->}[r] & n \ar@(ur,dr)^{\alpha} }$ & $\alpha^2$ \\ $t>1$ & & & & & \\ \hline $\db{D}{2t(n+1)}/\tau^{n+1}\varphi^n \phantom{\text{\huge{A}}} _{\phantom{\text{\huge{A}}}}$ & $2t(n+1)^2$ & $n$ & $n(n+1)$ & $\xymatrix@C=1em@R=1em{ 1 \ar@{->}[r] & 2 \ar@{->}[r] & 3 \ar@{..}[r] & n-1 \ar@{->}[r] & n \ar@(ur,dr)^{\alpha} }$ & $\alpha^2$ \\ \hline $\db{E}{7}/\tau^2 \phantom{\text{\huge{A}}} _{\phantom{\text{\huge{A}}}}$ & 14 & 1 & 2 & $\xymatrix{1 \ar@(ur,dr)^{\alpha}}$ & $\alpha^3$ \\ \hline $\db{E}{7}/\tau^5 \phantom{\text{\huge{A}}} _{\phantom{\text{\huge{A}}}}$ & 35 & 2 & 5 & $\xymatrix@C=1em@R=1em{1 \ar@(ul,dl)_{\alpha} \ar[r]^{\beta} & 2 \ar@(ur,dr)^{\gamma}}$ & $\beta \alpha - \gamma \beta$, $\alpha^2$, $\gamma^2$ \\ \hline \end{tabular} \end{figure} \end{landscape} \section{Comparing subcategories generated by rigid objects}\label{section: comparisons} Our aim, in this section, is to compare the full subcategories of rigid objects of the triangulated categories listed in Table 2. In order to do so, we will follow a strategy we now describe: Let $\mathcal{C}$ and $\mathcal{D}$ be $\mathbb{K}$-linear, Krull--Schmidt, Hom-finite, 2-Calabi--Yau, triangulated categories. We assume that $T\in\mathcal{C}$ is a cluster tilting object and $U\in\mathcal{D}$ a maximal rigid object. Let $\mathcal{R}_\mathcal{C}$, resp. $\mathcal{R}_\mathcal{D}$, be the full subcategory of $\mathcal{C}$, resp. $\mathcal{D}$, generated by the rigid objects. Let $\qc_{\mathcal{R}_{\cat}}$ be a quiver whose vertices are the (isoclasses of) indecomposable rigid objects of $\mathcal{C}$, and whose arrows form a basis for the irreducible morphisms in $\mathcal{R}_\mathcal{C}$. Define $\qc_{\mathcal{R}_{\dc}}$ similarly. Finally, let $\mathcal{Q}^{\tau-\text{rig}}_\mathcal{C}$ be the quiver similarly given by the irreducible morphisms of the image of $\mathcal{C}(T,-)|_{\mathcal{R}_\mathcal{C}}$ in $\operatorname{mod}\operatorname{End}_\mathcal{C}(T)$. Define $\mathcal{Q}^{\tau-\text{rig}}_\mathcal{D}$ similarly.
\vspace{5pt} Assume that the following hold: \begin{itemize} \item[(a)] The indecomposable rigid objects of $\mathcal{C}$ are all shifts of indecomposable summands of $T$; and similarly for $\mathcal{D}$. \item[(b)] There is some isomorphism of quivers $\sigma: \qc_{\mathcal{R}_{\cat}} \rightarrow \qc_{\mathcal{R}_{\dc}}$ satisfying the following properties: \begin{itemize} \item[(b1)] The map $\sigma$ commutes with shifts on objects and on irreducible morphisms; \item[(b2)] It sends $T$ to $U$; \item[(b3)] It induces an isomorphism between $\operatorname{End}_\mathcal{C}(T)$ and $\operatorname{End}_\mathcal{D}(U)$. \end{itemize} \item[(c)] The finite dimensional algebra $\operatorname{End}_\mathcal{C}(T)$ is generalised standard, i.e. the morphisms in the module category are given by linear combinations of paths in its Auslander--Reiten quiver \cite{sko}. \item[(d)] The quiver $\mathcal{Q}^{\tau-\text{rig}}_\mathcal{C}$ is isomorphic to the full subquiver of $\qc_{\mathcal{R}_{\cat}}$ whose vertices are not in $\operatorname{add} \Sigma T$; and similarly for $\mathcal{D}$. \end{itemize} \begin{lemma}\label{lemma: comparing subcategories of rigids} Under the assumptions listed above, any morphism in $\mathcal{R}_\mathcal{C}$ is a linear combination of paths in $\qc_{\mathcal{R}_{\cat}}$ and $\sigma$ induces an equivalence of categories $\mathcal{R}_\mathcal{C}\rightarrow\mathcal{R}_\mathcal{D}$. \end{lemma} \begin{proof} Assume that $T=T_1\oplus\cdots\oplus T_n$ is basic, and $T_i$ is indecomposable for each $i$. We prove the statement in three steps: \begin{enumerate} \item Any morphism in $\mathcal{R}_\mathcal{C}$ (resp. $\mathcal{R}_\mathcal{D}$) is a linear combination of paths in $\qc_{\mathcal{R}_{\cat}}$ (resp. $\qc_{\mathcal{R}_{\dc}}$). \item The map $\sigma$ induces a well-defined functor $\mathcal{R}_\mathcal{C} \rightarrow \mathcal{R}_\mathcal{D}$, which is faithful. \item The induced functor is dense and full. \end{enumerate} (1) Let $f$ be a morphism in $\mathcal{R}_\mathcal{C}$. By assumption (a), we may assume that it is of the form $\Sigma^a T_i \rightarrow \Sigma^b T_j$ for some $a,b\in\mathbb{Z}$, and $i,j\in\{1,\ldots,n\}$. By assumption (c), the morphism $\mathcal{C}(T,\Sigma^{-a}f)$ is a linear combination of paths in $\mathcal{Q}^{\tau-\text{rig}}_\mathcal{C}$. Let $g\in\mathcal{R}_\mathcal{C}$ be the corresponding linear combination of paths in $\qc_{\mathcal{R}_{\cat}}$. Such a morphism exists by assumption (d). We then have $\mathcal{C}(T, \Sigma^{-a}f-g) = 0$ so that $\Sigma^{-a}f-g$ belongs to the ideal $(\Sigma T)$. Since the domain of $\Sigma^{-a}f$ lies in $\operatorname{add} T$, and $T$ is rigid, we have $\Sigma^{-a}f=g$ and $f$ is a linear combination of paths in $\qc_{\mathcal{R}_{\cat}}$. (2) Let $f$ be a linear combination of paths in $\qc_{\mathcal{R}_{\cat}}$. We claim that $f=0$ in $\mathcal{R}_\mathcal{C}$ if and only if $\sigma f = 0$ in $\mathcal{R}_\mathcal{D}$.
Indeed: \begin{eqnarray*} f = 0 \text{ in } \mathcal{R}_\mathcal{C} & \Leftrightarrow & \Sigma^{-a} f = 0 \text{ in } \mathcal{R}_\mathcal{C} \\ & \Leftrightarrow & \mathcal{C}(T,\Sigma^{-a} f) = 0 \text{ in } \operatorname{mod}\operatorname{End}_\mathcal{C}(T) \\ & \Leftrightarrow & \mathcal{D}(\sigma T, \sigma \Sigma^{-a}f) = 0 \text{ in } \operatorname{mod}\operatorname{End}_\mathcal{D}(U) \\ & \Leftrightarrow & \mathcal{D}(U,\Sigma^{-a}\sigma f) = 0 \text{ in } \operatorname{mod}\operatorname{End}_\mathcal{D}(U) \\ & \Leftrightarrow & \Sigma^{-a}\sigma f = 0 \text{ in } \mathcal{R}_\mathcal{D} \\ & \Leftrightarrow & \sigma f = 0 \text{ in } \mathcal{R}_\mathcal{D}. \end{eqnarray*} The second equivalence uses the fact that the domain of $\Sigma^{-a}f$ belongs to $\operatorname{add} T$; the fourth equivalence follows from assumptions (b1) and (b2). The third equivalence follows from assumption (d) as follows: This assumption implies that $\sigma$ induces an isomorphism from $\mathcal{Q}^{\tau-\text{rig}}_\mathcal{C}$ to $\mathcal{Q}^{\tau-\text{rig}}_\mathcal{D}$ which commutes with the inclusions into $\qc_{\mathcal{R}_{\cat}}$ and $\qc_{\mathcal{R}_{\dc}}$. (3) By construction, the functor $\mathcal{R}_\mathcal{C}\rightarrow\mathcal{R}_\mathcal{D}$ induced by $\sigma$ is dense. For all $i=1,\ldots, n$, let $U_i$ be $\sigma T_i$. Let $g$ be a morphism in $\mathcal{R}_\mathcal{D}$. As above, we may assume that it is of the form $U_i \rightarrow \Sigma^k U_j$, for some $k\in\mathbb{Z}$ and some $i,j\in\{1,\ldots,n\}$. There is some $f\in\mathcal{C}(T_i,\Sigma^k T_j)$ whose image in $\operatorname{mod}\operatorname{End}_\mathcal{C}(T)$ corresponds to $\mathcal{D}(U,g)$ in $\operatorname{mod}\operatorname{End}_\mathcal{D}(U)$ under the isomorphism induced by $\sigma$. We thus have $\mathcal{D}(U, g- \sigma f) = 0$; since the domain of $g-\sigma f$ lies in $\operatorname{add} U$ and $U$ is rigid, this implies $\sigma f = g$. The functor induced by $\sigma$ is full. \end{proof} \begin{proposition}\label{prop: equivalences} For all $t\geq 1$, there are equivalences of additive categories: \begin{enumerate} \item $\mathcal{R}_{\mathcal{A}_{n,t}} \simeq \mathcal{R}_{\mathcal{A}_{n,1}}$; \item $\mathcal{R}_{\mathcal{D}_{n,t}} \simeq \mathcal{R}_{\mathcal{A}_{n,1}}$; \item $\mathcal{R}_{E_{7,2}} \simeq \mathcal{R}_{\mathcal{D}_{4, \tau \varphi}}$. \end{enumerate} \end{proposition} \begin{proof} For each case, we need to check the assumptions of Lemma~\ref{lemma: comparing subcategories of rigids}. This is done in Sections \ref{subsubsection: max rigid A}, \ref{subsubsection: max rigid D} and \ref{subsubsection: max rigid E}, respectively. \end{proof} \section{2-endorigid algebras of finite type}\label{section: endoalg} \subsection{A 2-endorigid algebra which is not 2-CY tilted}\label{subsection: not 2CY-tilted} Consider the algebra $\Gamma =\mathbb{K} Q/I$, where $Q$ is the quiver \bigskip $$\xymatrix@C=0.3cm@R=0.1cm{ 1 \ar@(ul,dl)_{\alpha} \ar[rrr]^{\beta} & && 2 \ar@(ur,dr)^{\gamma}} $$ \bigskip \bigskip and the relations are $\beta \alpha - \gamma\beta, \alpha^2, \gamma^2 $. The indecomposable projectives in $\operatorname{mod} \Gamma$ are given by $$P_1 = \begin{pmatrix} & 1 & \\ 1& & 2 \\ &2& \end{pmatrix} \text{ and } P_2 = \begin{pmatrix} 2 \\ 2 \end{pmatrix}, $$ while the indecomposable injectives are $$I_1 = \begin{pmatrix} 1 \\ 1 \end{pmatrix} \text{ and } I_2 = P_1. $$ We have a minimal injective coresolution of $\Gamma = P_1 \amalg P_2$ given by $$ 0 \to P_1 \amalg P_2 \to I_2 \amalg I_2 \to I_1 \to 0 $$ and hence ${\operatorname{id}} \Gamma = 1$, that is, $\Gamma$ is Gorenstein of dimension 1.
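For concreteness, we record the following elementary verification (ours, and not needed in what follows). The relations give $\gamma\beta\alpha=\gamma^{2}\beta=0$ and $\beta\alpha^{2}=0$, so every path of length at least three vanishes, while $\beta\alpha=\gamma\beta$ is the only nonzero path of length two up to the stated identification. Hence
\[
\Gamma=\operatorname{span}_{\mathbb{K}}\{e_{1},\;e_{2},\;\alpha,\;\beta,\;\gamma,\;\beta\alpha\},
\qquad
\dim_{\mathbb{K}}\Gamma=6,
\]
in agreement with $\dim_{\mathbb{K}} P_{1}+\dim_{\mathbb{K}} P_{2}=4+2$ read off from the projectives above.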
Since $\Gamma$ is Gorenstein of dimension one, we have, see \cite{kr}, that $\operatorname{Sub} \Gamma$ is a Frobenius category with projective (=injective) objects $\operatorname{add} \Gamma$, and $\underline{\operatorname{Sub}} \Gamma$ is a triangulated category, with suspension functor isomorphic to $\Omega_{\operatorname{Sub} \Gamma}^{-1}$. We claim that $\Gamma$ is not a 2-CY-tilted algebra. To see this, consider the simple $S_2$ and the module $X= \begin{pmatrix} 1 \\ 2 \end{pmatrix}$. The exact sequence $$0 \to S_2 \to P_2 \to S_2 \to 0$$ in $\operatorname{Sub} \Gamma$ shows that $\Omega^{-1} (S_2) \simeq S_2 \simeq \Omega^{1}(S_2)$. Hence also $\Omega^{-3} (S_2) \simeq S_2$. We then have that $\underline{\operatorname{Hom}}(S_2, X) \neq 0$, while clearly $\underline{\operatorname{Hom}}(X,S_2) = 0$. If $\underline{\operatorname{Sub}} \Gamma$ were 3-Calabi--Yau, Serre duality would give $\underline{\operatorname{Hom}}(S_2, X) \simeq D\underline{\operatorname{Hom}}(X, \Omega^{-3}S_2) \simeq D\underline{\operatorname{Hom}}(X, S_2) = 0$, a contradiction. Therefore $\underline{\operatorname{Sub}} \Gamma$ is not 3-Calabi--Yau, and this implies that $\Gamma$ is not a 2-CY-tilted algebra, by \cite{kr}. The same argument shows that $\Gamma$ is not $d$-CY-tilted for $d\geq 2$. \subsection{Standard 2-Calabi--Yau categories} Recall that our base field $\mathbb{K}$ is assumed to be algebraically closed and of characteristic 0. In that setup, it is known from \cite{bgrs} that all finite-dimensional algebras of finite representation type are standard: Their module categories are the path categories on their Auslander--Reiten quivers modulo all mesh relations. In this section, we address the following related question: Let $\mathcal{C}$ be a triangulated category of finite type. If $\mathcal{C}$ is 2-CY with cluster tilting objects, is it standard? So far, we have not been able to answer this question. However, we prove here that $\mathcal{C}$ is generalised standard \cite{sko} in the following sense. \begin{definition} \rm A $\mathbb{K}$-linear, Krull--Schmidt, Hom-finite, triangulated category with a Serre functor is called \emph{generalised standard} if all of its morphisms are given by linear combinations of paths in its Auslander--Reiten quiver. \end{definition} \begin{proposition}\label{proposition: generalised standard} Let $\mathcal{C}$ be a $\mathbb{K}$-linear, Krull--Schmidt, 2-Calabi--Yau, triangulated category. Assume that $T\in\mathcal{C}$ is a cluster tilting object whose endomorphism algebra is generalised standard. Then $\mathcal{C}$ is generalised standard. \end{proposition} \begin{proof} Let $\Gamma$ be the Auslander--Reiten quiver of $\mathcal{C}$, and $\overline{\Gamma}$ be the one of $\End_{\cat}(T)^\text{op}$. By~\cite[Proposition 3.2]{bmr} the AR-sequences in $\module\Endt$ are induced by the AR-triangles in $\mathcal{C}$. It follows that $\overline{\Gamma}$ is naturally a full subquiver of $\Gamma$ and that we can pick a basis $(e_\alpha)_{\alpha\in\Gamma_1}$ of irreducible morphisms in $\mathcal{C}$ adapted to $\Gamma$ (i.e. satisfying the mesh relations) such that $(\mathcal{C}(T,e_\alpha))_{\alpha\in\overline{\Gamma}_1}$ is a basis of irreducible morphisms in $\module\Endt$ adapted to $\overline{\Gamma}$. In what follows, we will use the following notation: if $p=\sum_i \lambda_i \alpha^i_{k_i}\cdots\alpha^i_1$ is a linear combination of paths in $\Gamma$, we write $e_p$ for the morphism $\sum_i\lambda_i e_{\alpha^i_{k_i}}\circ\cdots\circ e_{\alpha^i_1}$. We note that the statement of the proposition is an immediate consequence of the two claims below. \emph{Claim} 1: Any morphism $f$ in $\mathcal{C}$ is of the form $f = e_p+g$, where $p$ is a linear combination of paths in $\Gamma$ and $g$ belongs to the ideal $(\Sigma T)$.
\emph{Proof of Claim} 1: Since $\End_{\cat}(T)^\text{op}$ is generalised standard, $\mathcal{C}(T,f)$ is of the form $\mathcal{C}(T,e_p)$ where $p$ is a linear combination of paths in $\overline{\Gamma}$, viewed as a subquiver of $\Gamma$. We thus have $f = e_p + g$ for some $g\in(\Sigma T)$. \emph{Claim} 2: Any morphism $g\in(\Sigma T)$ is of the form $e_p$, for some linear combination $p$ of paths in $\Gamma$. \emph{Proof of Claim} 2: Let $X\stackrel{g}{\longrightarrow}Y$ belong to $(\Sigma T)$. Then there are some $U\in\operatorname{add} T$, $\Sigma U\stackrel{a}{\longrightarrow}Y$ and $X\stackrel{b}{\longrightarrow}\Sigma U$ such that $g = ab$. Applying Claim 1 to $\Sigma b$ gives a linear combination $q$ of paths in $\Gamma$ and a morphism $h$ in $(\Sigma T)$ such that $\Sigma b = e_q + h$. Since $T$ is rigid and $\Sigma b$ has codomain in $\operatorname{add}\Sigma^2T$, the morphism $h$ is zero. A similar argument shows that $\shift^{-1} a$ is of the form $e_r$. The claim follows. \end{proof} \begin{corollary}\label{corollary: generalised standard} Let $\mathcal{C}$ be a $\mathbb{K}$-linear, Krull--Schmidt, 2-Calabi--Yau, triangulated category. Assume that $T\in\mathcal{C}$ is a cluster tilting object whose endomorphism algebra is of finite representation type. Then $\mathcal{C}$ is generalised standard. \end{corollary} \subsection{The standard 2-endorigid algebras of finite representation type} We call a finite dimensional $\mathbb{K}$-algebra \emph{standard 2-endorigid} if it is isomorphic to the endomorphism algebra of a maximal rigid object in a standard ($\mathbb{K}$-linear, Krull--Schmidt) $2$-Calabi--Yau triangulated category. The standard 2-CY-tilted algebras of finite representation type were classified by Bertani--Oppermann in \cite{bo}, where a quiver with potential is given for each isomorphism class. Ladkani noticed, see \cite{l}, that a 2-CY category with cluster tilting objects was missing in the list given in \cite[Appendix]{bikr}. For a comprehensive classification of all standard 2-CY-tilted algebras of finite representation type one thus has to take the algebra appearing in \cite{l} into account. \begin{theorem} The connected, standard 2-endorigid algebras of finite representation type are exactly the standard 2-CY-tilted algebras of finite representation type listed in~\cite{bo} (see also~\cite{l}) and the non-Jacobian 2-endorigid algebra of Section~\ref{subsection: not 2CY-tilted}. \end{theorem} \begin{proof} The theorem follows from the classification \cite{a,bikr} of all standard 2-Calabi--Yau triangulated categories with maximal rigid objects (see Table 1 and Table 2) and from the equivalences of categories in Proposition~\ref{prop: equivalences}. \end{proof} \begin{remark} Since the conclusion of Corollary~\ref{corollary: generalised standard} is weaker than one would like, we do not know whether the list discussed above contains all 2-endorigid algebras of finite representation type. \end{remark}
\section{Introduction} Investigation of species abundance is a topic of\break widespread interest in ecology. To estimate and model variation in species abundance, predetermined survey points are visited at each sampling occasion and the number of animals detected is recorded. This results in spatially referenced point count data. Such a sampling protocol is easier to implement than the traditional capture--recapture experiment [e.g., see \citet{williams2002analysis} and the references therein], since each animal encountered does not have to be distinctly tagged. Nevertheless, these spatially referenced data can be utilized to estimate the abundance of animals, for which individual tagging might be difficult or even infeasible due to the amount of effort involved, for example, in some avian ecology surveys. Therefore, to estimate abundance, the development of binomial mixture models has drawn significant attention over the past few decades [e.g., \citet{carroll1985note,royle2004n,kery2005modeling,kery2008estimating,webster2008bayesian}]. In developing statistical models for count data, the choice of the distribution function frequently depends on the dispersion associated with the data. For equidispersed data (i.e., equal mean and variance), the Poisson distribution is frequently used due to its explicit assumption of equidispersion. However, to model overdispersed data (i.e., the variance is greater than the mean), a different choice of distribution function is required [e.g., see \citet{ver2007quasi}]. Often, the negative binomial (NB) distribution [\citet{cameron1998regression}] is employed, due to a dispersion parameter that conveniently controls the level of overdispersion. Alternatively, the Poisson distribution can also be used with a random effect included to relax the restrictive assumption of equidispersion. Although the Poisson and NB distributions have become the \textit{de facto} options for count data, neither of them accounts for underdispersion (i.e., the variance is less than the mean). Admittedly, overdispersion is more common for data arising from ecological monitoring studies, while underdispersion is often present for rare event data [e.g., \citet{herbers1989community,ridout2004empirical,oh2006accident}]. Nevertheless, cases can arise in ecological monitoring studies where the species of interest is less prevalent (due to rare occurrences). In principle, these situations would manifest themselves as underdispersion. The Conway--Maxwell Poisson (CMP) distribution [\citet{conway1962queuing}] is an ideal candidate for modeling count data with different types of dispersion, as it has an extra dispersion parameter that flexibly allows for equi-, over-, and underdispersion. Moreover, the CMP distribution is closely related to many other discrete distributions. For example, the CMP distribution contains the Poisson distribution as a special case and generalizes Bernoulli and geometric distributions in the limiting cases [\citet{shmueli2005useful}]. Owing to its versatility, the CMP distribution has become increasingly popular among many subject-matter disciplines. For example, in the context of breeding bird surveys, \citet{wuCMP2013} develop a Bayesian hierarchical spatio-temporal CMP model for complex and high-dimensional count data. A unique aspect of this research is that it allows for dynamic spatial dispersion (i.e., the dispersion over the spatial domain evolves over time).
A comprehensive overview regarding the CMP model is provided by \citet{sellers2011poisson} and the references therein. Binomial mixture models have become increasingly popular for analyzing spatial point referenced count data in the context of estimating and modeling variation in species abundance. As a result, various models have been developed with this application in mind. For example, \citet{carroll1985note} consider a Binomial-Beta mixture model to study the problem of estimating an unknown population, $N$, that follows a discrete uniform distribution, in which efficient estimators were obtained through the use of an integrated likelihood method. To improve the estimator proposed by \citet{carroll1985note}, \citet{royle2004n} develops a Binomial--Poisson (Bin--Pois) mixture model, in which $N$ is considered to be an independent random variable from a Poisson distribution. Subsequently, \citet {royle2006hierarchical} propose a more general hierarchical modeling framework with the goal of addressing animal abundance in the case of imperfect detection, wherein the variation associated with the observed data was partitioned into that of abundance and that of detectability. In the context of avian ecology studies, \citet{kery2005modeling} and \citet{kery2008estimating} apply the Bin--Pois models to the estimation of bird abundance. \citet{webster2008bayesian} propose a Bin--Pois model, in which a conditional autoregressive (CAR) model was used to address spatial dependence found in the bird density. \citet{wenger2008estimating} develop zero-inflated Bin--Pois and zero-inflated Binomial--negative binomial (Bin--NB) models for the estimation of species abundance. \citet{kery2010hierarchical} develop a Bin--Pois model with a site-specific random effect to allow for overdispersion and, thus, the equidispersion assumption of the Poisson distribution is relaxed. \citet{graves2011linking} apply the Bin--Pois model to estimate abundance for a grizzly bear population using multiple detection methods, in which covariates are introduced to explain variation in both the detection and intensity process. Under the frequentist framework, \citet{dail2011models} propose a general Bin--Pois model to allow for a formal statistical test regarding the assumption of population closure. However, none of the aforementioned models simultaneously allows for data with different levels of dispersion (over- and underdispersion) and Bayesian model selection (e.g., using the Conway--Maxwell Poisson distribution and reversible jump Markov chain Monte Carlo). Some experiments in ecological studies can be viewed as a robust design [e.g., see \citet{pollock1982capture}], that is, there are secondary, and possibly subsequent, sampling periods nested within each primary sampling occasion. For example, the American Robin (\textit {Turdus migratorius}) data we consider from the Baltimore Ecosystem Study (BES) falls into this category. This nested sampling design contains the design with one primary sampling occasion as a special case. Motivated by American Robin data from BES (Section~\ref {secApp}), we develop a Binomial Conway--Maxwell Poisson (Bin-CMP) mixture model that accommodates both overdispersed and underdispersed data under a nested/unbalanced data structure. The Bin-CMP models we propose are cast in a general Bayesian hierarchical binomial \mbox{mixture} model framework that can accommodate mixtures using distributions other than the CMP. 
Compared with the existing models in the literature, our contribution can be seen as follows. First, we develop a flexible class of binomial mixture models to account for replicated count data with different types of dispersion, which is achieved by choosing a suitable model for the abundance parameter (e.g., using the CMP distribution). In the case of overdispersed data, our methodology is advantageous from an estimation perspective when compared to the general\vadjust{\goodbreak} modeling strategy that includes a random effect to account for extra dispersion [e.g., see \citet{kery2010hierarchical}], as our model has fewer parameters to estimate. Although each parameter update may be more computationally expensive than under the random-effect strategy, this computational burden can be alleviated through the use of a lower-level programming language and parallel computation. More importantly, our model provides an explicit quantification of dispersion and can also be used in the context of underdispersed data. Additionally, the models we consider can flexibly account for spatial dependence in species abundance by adding a low-rank spatial component to the model for the intensity process. In contrast to the CAR models used by \citet{webster2008bayesian}, our methodology does not require us to define a neighborhood structure for the point count data, which can be difficult in many cases. In the setting of our motivating example, where the bird counts themselves are modeled at the point level rather than on areal units, a geostatistical approach may be more appropriate. Further, through reversible jump Markov chain Monte Carlo (RJMCMC), we introduce automated variable selection for covariates and grouping of dispersion parameters into the binomial mixture modeling framework and, to the best of our knowledge, our approach constitutes the first successful RJMCMC implemented on the CMP dispersion parameters. Last, the variable selection allows us to identify important predictors related to high detectability and abundance for a given species of interest. This paper is organized as follows. Section~\ref{secdataprelim} introduces our motivating data from the BES and provides preliminary background information on the CMP distribution. Section~\ref{secModeldev} describes our proposed Bayesian hierarchical binomial mixture models, including the Bin-CMP model. Section~\ref{secModelsel} provides relevant information on Bayesian variable selection and grouping using RJMCMC. Simulated examples are presented in Section~\ref{secSim}, illustrating the effectiveness of our modeling approach. Section~\ref{secApp} contains an analysis of our motivating data, estimating abundance of the American Robin from the BES, and demonstrates the utility of our methodology. Discussion is provided in Section~\ref{secDiscu}. For convenience of exposition, specific details surrounding our Markov chain Monte Carlo (MCMC) algorithm and full conditional distributions are left to a supplemental article [\citet{wuSupp2015}]. \section{Data and preliminary background}\label{secdataprelim} \subsection{Baltimore Ecosystem Study survey data} As a long-term ecological monitoring study, the BES considers the City of Baltimore, Maryland as a study area, with the objective of understanding how the City of Baltimore evolves as an ecosystem over time [\citet{pickett2011urban}].
Collected as a part of the BES, the American Robin (\textit{Turdus migratorius}) data we consider constitutes spatially replicated point count data on 132 bird census points in the City of Baltimore, which are randomly selected from a set of urban forest effect (UFORE or I-Tree Eco) model points (Section~\ref{secApp}). Considered the most widespread North American thrush, the American Robin has become common in many North American cities [\citet{sallabanks1999american}]. Despite its abundance, conservation measures, which are enforced by the Migratory Bird Treaty Act of 2004, have been taken to protect the American Robin throughout its geographical range in the United States. Although BES data have been collected across bird survey points since 2005, as an illustration, we consider a subset of data over five years from 2005 to 2009, due to incomplete data in later years. In each year, three surveys were scheduled for each of the survey points throughout May and August, each of which consisted of a five-minute survey conducted between 5 am and 10 am on days without rain. During each survey, the recorded count represents the combination of birds that were seen, heard, or flew over each survey point. In the current context, the secondary sampling period consists of the five-minute daily survey, while the primary sampling periods are the time frames determined by the dates on which three daily surveys are conducted. As a result, the nested sampling design provides a maximum of 15 spatially referenced counts for each bird census point. Although several species are available in the BES, as an illustration we consider American Robin counts in our analysis, due to their higher abundance relative to other species. Among the 132 bird census points, 131 of them have American Robin detections (Figure~\ref{figeustobsdata}). \begin{figure} \includegraphics{801f01.eps} \caption{Plot of 131 bird census points for American Robin in the City of Baltimore, Maryland (using R package ``RgoogleMaps''). The solid circles are bird census points.}\label{figeustobsdata} \end{figure} \subsection{The Conway--Maxwell Poisson distribution} Let $X$ denote a CMP-distributed random variable, that is, $X \sim \operatorname{CMP}(\lambda,\nu)$, where $\lambda>0$ and $\nu\geq0$ are the CMP intensity and dispersion parameters, respectively. The probability mass function (pmf) of $X$ is given by \begin{equation}\label{eqcmppdf} P(X=x)=\frac{\lambda^{x}}{(x!)^\nu}\frac{1}{Z(\lambda,\nu)},\qquad x=0,1,2,\ldots, \end{equation} where \begin{equation} Z(\lambda,\nu)=\sum_{j=0}^{\infty}{ \frac{\lambda^{j}}{(j!)^\nu}} \label{eqcmpzfun} \end{equation} is a normalizing constant (often referred to as the ``$Z$-function''). With the additional parameter $\nu$, the CMP distribution conveniently accommodates equidispersion, overdispersion, and underdispersion. Specifically, $\nu=1$ corresponds to the Poisson distribution, whereas $\nu<1$ and $\nu>1$ represent overdispersion and underdispersion, respectively. In addition, the CMP distribution generalizes to the geometric and Bernoulli distributions in the limiting cases [\citet{shmueli2005useful}]. For the calculation of (\ref{eqcmppdf}), the $Z$-function needs to be computed numerically due to the summation of an infinite series. For certain combinations of $\lambda$ and $\nu$, many terms will be needed in order to truncate the infinite summation with sufficient accuracy, which leads to intensive computation.
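For concreteness, the following minimal R sketch (ours; the function names, the truncation bound, and the log-sum-exp evaluation are illustrative choices, not the approach used in our implementation) evaluates $\log Z(\lambda,\nu)$ by naive truncation of (\ref{eqcmpzfun}) and uses it to compute the pmf (\ref{eqcmppdf}):

\begin{verbatim}
## Minimal sketch: log Z(lambda, nu) by naive truncation of the series.
## `max_terms` is an illustrative bound; production code should verify
## that the omitted tail is negligible for the (lambda, nu) at hand.
cmp_logz <- function(lambda, nu, max_terms = 1000) {
  j <- 0:max_terms
  log_terms <- j * log(lambda) - nu * lgamma(j + 1)   # lgamma(j+1) = log(j!)
  m <- max(log_terms)
  m + log(sum(exp(log_terms - m)))   # log-sum-exp for numerical stability
}

## CMP pmf, evaluated on the log scale and exponentiated at the end
dcmp <- function(x, lambda, nu, max_terms = 1000) {
  exp(x * log(lambda) - nu * lgamma(x + 1) - cmp_logz(lambda, nu, max_terms))
}

dcmp(0:3, lambda = 2, nu = 1)   # nu = 1 should recover dpois(0:3, lambda = 2)
\end{verbatim}

Setting $\nu=1$ recovers the Poisson pmf, which provides a convenient numerical check of the truncation.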
For these cases, \citet{minka2003computing} derived an asymptotic approximation to the $Z$-function, which is accurate when $\lambda> 10^{\nu}$. \citet{wuCMP2013} discuss further improvements in computation by taking advantage of parallel computing through Open Multiprocessing (OpenMP) and Compute Unified Device Architecture (CUDA), that is, graphics processing unit (GPU). \section{Hierarchical Binomial mixture models}\label{secModeldev} \subsection{Model development} Let $\{\mathbf{s}_{i}\}_{i=1}^{G},\mathbf{s}_{i}\in D\subset\mbb{R}^{2}$ denote a set of sampling locations. We consider an experimental design in which animals are surveyed at each sampling location $\mathbf{s}_{i}$ for a total of $J$ primary sampling occasions, in which there are potentially $K$ nested secondary sampling periods. In principle, the primary sampling occasions can be over any arbitrary time interval, for example, in weeks or months. In addition, we assume a closed population within each primary sampling occasion so that the species abundance at each location varies across primary sampling occasions but not within. Relative to the primary sampling occasion, the secondary sampling period might be over a shorter time interval, for example, daily surveys within the three-month-long primary sampling occasions. To allow for an unbalanced data structure, due to missing observations, we assume $n_{ij}\leq K$ successful visits to site $\mathbf{s}_{i}$ during the $j$th primary sampling period with the number of animals detected recorded. Therefore, it follows that $0\leq n_{ij} \leq K$, $i=1,2,\ldots,G$; $j=1,2,\ldots,J$. We note that ``missing'' values are not uncommon and can occur for many reasons. For example, some scheduled visits might not be made due to illness of the observer, and as a result no data will be recorded. In the current context, we assume that any missing data are missing completely at random (MCAR) [\citet{little2002statistical}]. For $i=1,2,\ldots,G$, $j=1,2,\ldots,J$, and $k=1,2,\ldots,n_{ij}$, let $y_{ijk}$ be the number of animals observed at location $\mathbf{s}_{i}$ during the $k$th secondary sampling within the $j$th primary sampling occasion. The observed data can be denoted by $\mathbf{Y}=\{\mathbf{y}_{ij}\dvtx i=1,2,\ldots,G; j=1,2,\ldots,J\}$, where $\mathbf{y}_{ij}=(y_{ij1},y_{ij2},\ldots,y_{ijn_{ij}})'$ and $1 \leq n_{ij} \leq K$. Note that $n_{ij}=0$ corresponds to the case in which no successful visits are made to site $i$ and, thus, the vector $\mathbf{y}_{ij}$ does not have any elements. Further, let $p_{ijk}$ be the probability of detecting an animal during the $k$th ($k=1,2,\ldots,n_{ij}$) secondary sampling within the $j$th primary sampling occasion ($j=1,2,\ldots,J$) at location $\mathbf{s}_{i}$ and denote $N_{ij}$ as the unknown animal abundance at location $\mathbf{s}_{i}$ during the $j$th primary sampling occasion. In other words, $N_{ij}$ represents the total number of animals available for sampling during the $j$th primary sampling occasion at location $\mathbf{s}_{i}$. Due to the closed population assumption, $N_{ij}$ does not vary among secondary sampling periods within each primary sampling occasion. The nested design we consider is more general than many of the designs previously investigated [e.g., \citet{royle2004n,kery2005modeling,royle2005general,royle2006hierarchical,kery2008estimating,webster2008bayesian}], all of which can be seen as a special case of ours by setting $K=1$.
In contrast, our study design is more similar to those found in \citet{chandler2011inference} and \citet{dail2011models}. Additionally, for the sake of flexibility, it is not necessary that $n_{ij} \equiv K$ (for all $i=1,2,\ldots,G$ and $j=1,2,\ldots,J$). Importantly, the replicated data collected in the secondary sampling provide additional information that could alleviate potential issues caused by missing values as well as improve the accuracy of parameter estimation over the nonnested design. The primary objective of our analysis is to estimate abundance and draw inference about detectability. To achieve these goals, we propose a class of hierarchical binomial mixture models that includes the Bin-CMP model. The class of binomial mixture models naturally fits into the hierarchical framework [e.g., \citet{royle2008hierarchical,cressiewikle2011}]. In this framework, we define the \textit{observation model} as \begin{equation} y_{ijk}|N_{ij},p_{ijk}\sim\operatorname{Bin}(N_{ij},p_{ijk}), \label{eqdatamodel} \end{equation} for $i=1,2,\ldots,G$; $j=1,2,\ldots,J$; $k=1,2,\ldots,n_{ij}$, where the probability $p_{ijk}$ corresponds to the $k$th secondary sampling within the $j$th primary sampling occasion at location $\mathbf{s}_i$. For the design we consider, (\ref{eqdatamodel}) allows us to estimate abundance parameters $N_{ij}$, which are both location- and time-specific. Also, since the abundance $N_{ij}$ at each site $\mathbf{s}_{i}$ varies over time, we are able to describe the temporal changes in species abundance for all spatial locations, which is often vital in the context of long-term ecological monitoring studies. Another benefit of the design we consider is the potentially sharper estimates of the detection probability. Using a single probabilistically coherent model, we are able to provide spatial maps that illustrate the changes in abundance over time as well as the spatial variation [e.g., see Figures~2 and 3 in the supplementary article, \citet{wuSupp2015}]. More importantly, (\ref{eqdatamodel}) also suggests how over- and underdispersion can be explicitly accounted for in the subsequent model development through the choice of an appropriate count model for the abundance parameter, $N_{ij}$. Specifically, under the assumption of independence between $N_{ij}$ and $p_{ijk}$, it follows that \begin{eqnarray*} E(y_{ijk})&=&E(p_{ijk})E(N_{ij}), \\ \operatorname{Var}(y_{ijk})&=&E(p_{ijk})E(N_{ij})+E\bigl(p_{ijk}^{2}\bigr)\bigl\{\operatorname{Var}(N_{ij})-E(N_{ij})\bigr\}. \end{eqnarray*} Hence, the mean and variance relationship in the data can be addressed through that of $N_{ij}$. For example, for data with over- and underdispersion, we can choose a model for $N_{ij}$ such that $\operatorname{Var}(N_{ij})>E(N_{ij})$ or $\operatorname{Var}(N_{ij})<E(N_{ij})$, respectively. As such, our approach addresses over- and underdispersed count data through the choice of an appropriate model for the abundance parameter, $N_{ij}$. For $i=1,2,\ldots,G$ and $j=1,2,\ldots,J$, the \textit{process model} we consider for the abundance, $N_{ij}$, is given by \begin{equation} N_{ij}|\lambda_{ij},\nu_{j} \sim f(\lambda_{ij},\nu_{j}), \label{eqprocmodel} \end{equation} where $f(\cdot)$ is used to generically denote an appropriate count distribution with intensity parameter $\lambda_{ij}$ and primary sampling period-varying dispersion parameters $\nu_{j}$.
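To make the two levels of the hierarchy concrete, the following R sketch (ours, with arbitrary placeholder parameter values) takes $f(\cdot)$ in (\ref{eqprocmodel}) to be the CMP distribution, draws the latent abundance by inverse-CDF sampling over a truncated support, and then generates replicated counts from the observation model (\ref{eqdatamodel}):

\begin{verbatim}
## Sketch: one site/occasion of the hierarchical model.
## Truncating the CMP support at `max_n` (an illustrative bound) makes
## the normalizing constant implicit in the renormalized probabilities.
rcmp <- function(n, lambda, nu, max_n = 500) {
  support <- 0:max_n
  log_pmf <- support * log(lambda) - nu * lgamma(support + 1)
  pmf     <- exp(log_pmf - max(log_pmf))        # unnormalized; Z cancels
  sample(support, n, replace = TRUE, prob = pmf / sum(pmf))
}

set.seed(1)
N_ij <- rcmp(1, lambda = 3, nu = 0.5)           # process model: latent abundance
y_ij <- rbinom(3, size = N_ij, prob = 0.4)      # observation model: K = 3 visits
\end{verbatim}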
There are many possible choices for the distribution function $f(\cdot)$ in the process model (\ref{eqprocmodel}), including the Pois, NB, and CMP distributions, among others. We focus on the case where $f(\cdot)$ is chosen to be the CMP distribution, resulting in a flexible Bin-CMP mixture model that allows for equi-, over-, and/or underdispersion. Alternatively, if $f(\cdot)$ is chosen to be the NB distribution, the resulting Bin--NB mixture model provides a suitable candidate for modeling overdispersed data. Finally, it is important to note that, although we focus on the CMP distribution, in our framework, $f(\cdot)$ can be chosen to be any valid count distribution. Specification of the \textit{parameter model} is usually problem-specific and often depends on the research questions under consideration. In long-term ecological monitoring studies, it is often of interest to understand which factors might be important constituents in the probability of detection, so that an efficient sampling protocol can be designed. To achieve this goal, we relate the detection probability, $p_{ijk}$, to the covariates $x_{ijk,1},\ldots,x_{ijk,P}$ through a logistic link function, that is, \begin{equation} \operatorname{logit}(p_{ijk})=\beta_{1}x_{ijk,1}+\cdots+\beta_{P}x_{ijk,P}, \label{eqdetectmodel} \end{equation} where $\operatorname{logit}(r)=\log\{r/(1-r) \}$, $i=1,2,\ldots,G$, $j=1,2,\ldots,J$, and $k=1,2,\break \ldots,n_{ij}$. Note that (\ref{eqdetectmodel}) allows for an intercept, by setting $x_{ijk,1}\equiv1$ for all $i$, $j$, and $k$. By incorporating covariates into the model, we aim to identify and draw statistical inference on important factors governing the probability of detection. Another aim in long-term ecological studies is to gain a deeper understanding of the intensity $\lambda_{ij}$, which influences species abundance. The second part of the \textit{parameter model} defines a model for the intensity, $\lambda_{ij}$, as \begin{eqnarray}\label{eqlocmumodel} \log\lambda_{ij}=\mathbf{w}_{ij}'\bolds{\gamma}=w_{ij,1}\gamma_{1}+\cdots+w_{ij,M}\gamma_{M}, \nonumber\\[-8pt]\\[-8pt] \eqntext{i=1,\ldots,G; j=1,\ldots,J.} \end{eqnarray} Here, $\mathbf{w}_{ij}=(w_{ij,1},\ldots,w_{ij,M})'$ denotes a set of covariates and $\bolds{\gamma}=(\gamma_{1},\ldots,\gamma_{M})'$ denotes the associated coefficients.
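The two link functions can be illustrated with the short R sketch below; the covariate and coefficient values are placeholders of ours, not the BES covariates analyzed in Section~\ref{secApp}:

\begin{verbatim}
## Sketch of the two parameter-model links; all values are placeholders.
x_ijk <- c(1, 0.2, -1.1, 0.5)          # detection covariates (x_1 = 1: intercept)
beta  <- c(-2.3, -0.4, 0.0, -0.4)      # detection coefficients
w_ij  <- c(1, 0.7, -0.3)               # intensity covariates
gamma <- c(0.3, 0.1, 0.05)             # intensity coefficients

p_ijk     <- plogis(sum(x_ijk * beta)) # inverse logit: detection probability
lambda_ij <- exp(sum(w_ij * gamma))    # log link: CMP intensity
\end{verbatim}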
Under this scenario, we can extend (\ref{eqlocmumodel}) to explicitly incorporate spatial dependence by adding a spatial component in the model for the intensity, that is, \begin{equation} \log\lambda_{ij}=\mathbf{w}_{ij}'\bolds{\gamma}+\bolds{\phi}_{i}^{\prime}\bolds{\alpha}_{j},\qquad i=1,\ldots,G; j=1,\ldots,J, \label{eqintensitymodel} \end{equation} or \[ \log\bolds{\lambda}=\mathbf{w}'\bolds{\gamma}+\bigl(\bolds{\Phi} \otimes\bolds{\alpha}'\bigr) \operatorname{vec}( \mathbf{I}_{\tau\times\tau}), \] where $\bolds{\alpha}_{j}=(\alpha_{j1},\ldots,\alpha_{j\tau})'$; $\bolds{\alpha}=(\bolds{\alpha}_{1}, \bolds{\alpha}_{2},\ldots,\bolds{\alpha}_{J})$; $\bolds{\lambda}=(\lambda_{11},\ldots,\lambda_{1J},\ldots,\lambda_{G1},\ldots,\lambda_{GJ})'$; $\mathbf{w}=(\mathbf{w}_{11},\ldots,\mathbf{w}_{1J},\ldots,\mathbf{w}_{G1},\ldots,\mathbf{w}_{GJ})$; $\bolds{\Phi}$ denotes a $G \times\tau$ matrix of spatial basis functions $\bolds{\Phi}=[\phi_{1}^{\prime}; \ldots; \phi_{G}^{\prime}]$; $\bolds{\phi}_{i}^{\prime}=(\phi_{i1},\ldots,\phi_{i\tau})$ is a row vector denoting the $i$th row of $\bolds{\Phi}$; $\mathbf{I}_{\tau\times\tau}$ is a $\tau\times\tau$ identity matrix; $\tau$ is the number of basis functions and $\bolds{\alpha}\sim N(\mathbf{0},\bolds{\Sigma}_\alpha)$. There are several advantages to incorporating spatial effects when modeling the intensity function. Most importantly, capturing spatial dependence in the intensity function among neighboring locations will allow us to borrow strength from correlated observations, potentially improving parameter estimation, statistical inference, and prediction. The choice of basis functions is typically problem-specific, with advantages arising from specific choices. Popular choices include empirical orthogonal functions (EOFs), Fourier basis functions, splines, wavelets, bi-square and predictive process bases [e.g., see \citet{royle2005efficient,cressie2008fixed,cressiewikle2011} and the references therein]. In spatial statistical modeling, low-rank representations are often considered [\citet{wikle2010}]. Following \citet{ruppert2003semiparametric}, we use the thin plate spline basis functions, where \[ \bolds{\Phi}=\mathop{\bigl[C(\mathbf{s}_{i}-\bolds{\kappa}_{l}) \bigr]}_{1\leq l \leq\tau}{}_{1 \leq i \leq G} \quad\mbox{and}\quad C(\mathbf{r})=\llVert\mathbf{r} \rrVert^{2v-2}\log\llVert\mathbf{r}\rrVert,\qquad v>1, \] where $\bolds{\kappa}_{l}$ ($l=1,2,\ldots,\tau$) denote fixed knot points in $\mbb{R}^{2}$ and $v$ is a smoothness parameter [see \citet{holan2008semiparametric} for further discussion]. Here, we choose $v=2$ [cf. \citet{ruppert2003semiparametric}, page 257] and assume $\operatorname{cov}(\bolds{\alpha}_{j})=\sigma_{\alpha_{j}}^{2}\bolds{\Omega}$, where \[ \bolds{\Omega}=\mathop{\bigl[C(\bolds{\kappa}_{l}-\bolds{\kappa}_{l'})\bigr]}_{1 \leq l,l' \leq\tau}. \] The selection of knot points can be facilitated through space-filling designs, as implemented in the {\tt{fields}} package [\citet{furrer2009fields}] in R [\citet{Rsoftware}]. The number of knots $\tau$ can be chosen based on computational considerations followed by sensitivity analysis. Alternatively, the number of knots can be chosen according to $\tau=\max\{20, \min(G/4, 150)\}$ [\citet{ruppert2003semiparametric}, page 257]. Following\vspace*{2pt} \citet{ruppert2003semiparametric}, we define $\bolds{\Phi}^{*}=\bolds{\Phi}\bolds{\Omega}^{-1/2}$ and $\bolds{\alpha}^{*}=\bolds{\Omega}^{1/2}\bolds{\alpha}$.
Then, for $i=1,2,\ldots,G$ and $j=1,2,\ldots,J$, we can rewrite (\ref{eqintensitymodel}) as \begin{equation} \log\lambda_{ij}=\mathbf{w}_{ij}'\bolds{\gamma}+\bolds{\phi}_{i}^{*\prime}\bolds{\alpha}_{j}^{*}= \mathbf{g}_{ij}' \widetilde{\bolds{\gamma}}_{j}, \label{eqintensitymodelbasis} \end{equation} where $\bolds{\phi}_{i}^{*\prime}$ is the $i$th row of the matrix $\bolds{\Phi}^{*}$ and $\operatorname{cov}(\bolds{\alpha}_{j}^{*})=\sigma_{\alpha_{j}}^{2}\mathbf{I}_{\tau\times\tau}$. Further, $\mathbf{g}'_{ij}=(\mathbf{w}_{ij}' \bolds{\phi}_{i}^{*\prime})$ and $\widetilde{\bolds{\gamma}}_{j}=(\gamma_{1},\ldots,\gamma_{M},\alpha_{j1}^{*},\ldots,\alpha_{j \tau}^{*})'$. \subsection{The likelihood} To\vspace*{1.5pt} account for spatial dependence, we require that $\bolds{\alpha}_{j}^{*}$, $j=1,2,\ldots,J$ in (\ref{eqintensitymodelbasis}) are in the model with probability one. Since (\ref{eqlocmumodel}) and (\ref{eqintensitymodelbasis}) are essentially of the same form, we will use the former in the subsequent discussion. We now derive the likelihood function for the model defined by (\ref{eqdatamodel}), (\ref{eqprocmodel}), (\ref{eqdetectmodel}), and~(\ref{eqlocmumodel}). Let $\mcal{M}=\{\mcal{M}_{\bolds{\beta}},\mcal{M}_{\bolds{\gamma}},\mcal{M}_{\bolds{\nu}}\}$, where $\mcal{M}_{\bolds{\beta}}$, $\mcal{M}_{\bolds{\gamma}}$, and $\mcal{M}_{\bolds{\nu}}$ denote the model structures for the covariates $\mathbf{x}$, the covariates $\mathbf{w}$, and the dispersion parameters $\bolds{\nu}=\{\nu_{1},\ldots,\nu_{J}\}$, respectively. For example, in the case of $P=6, M=6,J=5$, $\mcal{M}_{\bolds{\beta}}=\{x_{1},x_{3}\}$ indicates that only $x_{1}$ and $x_{3}$ are included in the model for detection probability or, equivalently, $\beta_{2}=\beta_{4}=\beta_{5}=\beta_{6}=0$; $\mcal{M}_{\bolds{\gamma}}=\{w_{1},w_{2}\}$ indicates that only $w_{1}$ and $w_{2}$ are included in the model for intensity; $\mcal{M}_{\bolds{\nu}}=\{1,2,\ldots,J\}$ indicates that there is only one grouping for dispersion parameters, meaning $\nu_{j}\equiv\nu$ for $j=1,2,\ldots,J$. Under the assumption of conditional independence, the likelihood function for the binomial mixture models we propose is given by \begin{equation} \mcal{L}(\mathbf{Y}|\mcal{M},\bolds{\beta},\bolds{\gamma},\bolds{\nu})=\prod_{i=1}^{G}\prod_{j=1}^{J} {[N_{ij}|\mcal{M}_{\bolds{\gamma}},\bolds{\gamma},\mcal{M}_{\bolds{\nu}},\nu_{j}]} \prod_{k=1}^{n_{ij}}{[y_{ijk}|N_{ij},\bolds{\beta},\mcal{M}_{\bolds{\beta}}]},\hspace*{-15pt} \label{eqllikeyij} \end{equation} where, generically, $[\xi|\bolds{\theta}]$ denotes the conditional distribution of $\xi$ given the parameters $\bolds{\theta}$. Integrating out $N_{ij}$ in (\ref{eqllikeyij}) yields the marginal distribution of observing $\mathbf{y}_{ij}$ as \begin{eqnarray}\label{eqyijmarginal} && P(\mathbf{y}_{ij}|\mcal{M},\bolds{\beta},\bolds{\gamma},\nu_{j})\nonumber \\ &&\qquad = \sum_{N_{ij}\geq y_{ij}^{\max}}^{\infty} \Biggl\{\prod_{k=1}^{n_{ij}}{\frac{N_{ij}!}{y_{ijk}!(N_{ij}-y_{ijk})!}p_{ijk}^{y_{ijk}}(1-p_{ijk})^{N_{ij}-y_{ijk}}} \Biggr\} \\ &&\hspace*{66pt}{}\times f(N_{ij}|\mcal{M}_{\bolds{\gamma}},\bolds{\gamma},\mcal{M}_{\bolds{\nu}},\nu_{j}), \nonumber \end{eqnarray} where $y_{ij}^{\max}=\max\{\mathbf{y}_{ij}\}$.
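As a concrete illustration of this summation, the following R sketch (ours) evaluates a truncated version of the marginal log-likelihood for a single $(i,j)$; it reuses \texttt{cmp\_logz} from the sketch in Section~\ref{secdataprelim}, and the truncation bound \texttt{N\_max} is again an illustrative choice rather than the approach of our implementation:

\begin{verbatim}
## Sketch: truncated version of the marginal likelihood above for one (i, j),
## with f(.) taken to be the CMP pmf. Requires cmp_logz() defined earlier.
marg_loglik_ij <- function(y, p, lambda, nu, N_max = 300) {
  N       <- max(y):N_max
  log_f   <- N * log(lambda) - nu * lgamma(N + 1) - cmp_logz(lambda, nu)
  log_bin <- vapply(N, function(n)
               sum(dbinom(y, size = n, prob = p, log = TRUE)), numeric(1))
  m <- max(log_bin + log_f)
  m + log(sum(exp(log_bin + log_f - m)))        # log-sum-exp
}

marg_loglik_ij(y = c(2, 0, 1), p = 0.4, lambda = 3, nu = 0.5)
\end{verbatim}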
Consequently, we can derive the joint posterior distribution function $\pi(\mcal{M},\bolds{\beta},\bolds{\gamma},\bolds{\nu}|\mathbf{Y})$ based on (\ref{eqyijmarginal}) as \begin{eqnarray}\label{eqintegratedllike} && \pi(\mcal{M},\bolds{\beta},\bolds{\gamma},\bolds{\nu}|\mathbf{Y})\nonumber \\ &&\qquad \propto \Biggl\{\prod_{i=1}^{G}\prod_{j=1}^{J}{P(\mathbf{y}_{ij}|\mcal{M},\bolds{\beta},\bolds{\gamma},\nu_{j})} \Biggr\} \\ &&\hspace*{31pt}{} \times[\bolds{\beta}|\mcal{M}_{\bolds{\beta}}] [\bolds{\gamma}|\mcal{M}_{\bolds{\gamma}}] [\bolds{\nu}|\mcal{M}_{\bolds{\nu}}] [\mcal{M}_{\bolds{\beta}}] [\mcal{M}_{\bolds{\gamma}}] [\mcal{M}_{\bolds{\nu}}]. \nonumber \end{eqnarray} Here $[\bolds{\theta}]$ denotes the joint prior distribution function of the parameters $\bolds{\theta}$. Examination of (\ref{eqintegratedllike}) raises several computational concerns. First, the calculation of $P(\mathbf{y}_{ij}|\mcal{M},\bolds{\beta},\bolds{\gamma},\nu_{j})$ can be computationally prohibitive, since a multiple integral (here, an infinite summation over $N_{ij}$) is involved. This computational issue becomes exacerbated when the domain of $N_{ij}$ covers a wide range of values and/or if $G$ and $J$ are large. In addition to calculating a multiple integral, in the case where $f(\cdot)$ denotes the CMP distribution, evaluating (\ref{eqyijmarginal}) requires computing the $Z$-function, which involves the summation of infinite series. Specifically, for the Bin-CMP model, it is worth pointing out that within each MCMC iteration, sampling elements in $\bolds{\gamma}$ or $\bolds{\nu}$ from their full conditionals requires both the computation of the multiple integral and the approximation of the $Z$-function. Therefore, implementation of our proposed model can be computationally intensive in some cases. We resolve these computational issues through the use of low-level programming in C and parallel computing with OpenMP. Finally, we assume the following prior distributions for the model parameters: $\bolds{\beta}\sim\operatorname{Gau}(\bolds{\mu}_{\beta},\bolds{\Sigma}_{\beta})$; $\bolds{\gamma}\sim\operatorname{Gau}(\bolds{\mu}_{\gamma},\bolds{\Sigma}_{\gamma})$. For the dispersion parameters, we assume $\nu_{j}\sim\operatorname{Unif}(a_{j},b_{j})$, $j=1,2,\ldots,J$, where $a_{j}$ and $b_{j}$ are chosen appropriately to allow for different levels of dispersion in the data (e.g., for overdispersed data, one may set $a_{j}\equiv0.02$ and $b_{j}\equiv1.0$). In our case, we assign vague prior distributions that are noninformative relative to the scale of the data. \section{Automated Bayesian model selection}\label{secModelsel} For the binomial mixture models we propose, there are several ecological objectives. First, there is a clear need to identify important covariates among a set of candidate covariates in order to gain an understanding of the factors affecting the detectability for a given species of interest. In addition, the selection of influential covariates is vital for studying which factors influence species abundance. Last, the grouping of dispersion parameters will provide us with further information about the level of dispersion associated with the data across different years in the study. In such cases, grouping is desired since some years may exhibit a similar level of dispersion due to environmental changes or other exogenous factors.
For example, in our setting, specific neighborhoods may experience slow growth in terms of the number of buildings established and/or certain climate conditions may be more (or less) similar from year to year. Thus, it is conceivable that some years may experience a similar dispersion parameter. As such, we allow for data-driven grouping of the dispersion parameters. To achieve these goals, we first discuss variable selection and grouping in the context of the models we propose. \subsection{Bayesian variable selection and grouping} The literature on Bayesian variable selection is fairly extensive [e.g., see \citet{o2009review,hooten2014guide} for a comprehensive review]. Among the many available choices, the two most commonly used techniques are stochastic search variable selection [\citeauthor{george1993variable} (\citeyear{george1993variable,george1997approaches})] and reversible jump MCMC \mbox{(RJMCMC)} [\citet{green1995reversible}]. For grouping, however, \mbox{RJMCMC} is typically considered more appropriate and, thus, we utilize it for both model selection and grouping. Although one could consider model selection through various model selection criteria [e.g., Deviance Information Criterion---\citet {spiegelhalter2002bayesian}], this would be less advantageous when the goal is both simultaneous variable selection and grouping. For convenience of exposition, we explain our algorithm in the context of the Bin-CMP model and note that the migration to other binomial mixture models is analogous. The implementation of variable selection for $\mathbf{x}$ and $\mathbf{w}$ involves two types of moves: BIRTH (B) and DEATH (D) defined as follows: \begin{enumerate}[B:] \item[B:] propose to add a covariate ($x_m$ or $w_m$) to the current model with probability $p_{m}^{b}$, \item[D:] propose to remove a covariate ($x_m$ or $w_m$) from the current model with probability $p_{m}^{d}$. \end{enumerate} As an example, we consider a D move for $\mathbf{x}$. In general, only a subset of covariates are subject to variable selection, while others are forced to remain in the model with probability one. For notational simplification, let $\mathbf{A}_{x}$ denote the set of indices corresponding to covariates $\mathbf{x}$ that are available for variable selection. For example, if there are three covariates $x_{1}$, $x_{2}$, and $x_{3}$ available and only $x_{1}$ and $x_{3}$ are subject to variable selection (i.e., $x_{2}$ is in the model with the probability 1), then we have $\mathbf{A}_{x}=\{1,3\}$. Moreover, let $|\mathbf{A}_{x}|$ denote the cardinality of the set $\mathbf{A}_{x}$. For each covariate in $\mathbf{A}_{x}$, we assume an equal probability of a B or D move, that is, \[ p_{m}^{b}=p_{m}^{d}=1/2\qquad\mbox{for }m \in\mathbf{A}_{x}. \] Suppose at the current iteration $t$, the model structure is given by $\mcal{M}^{t}=\{\mcal{M}_{\bolds{\beta}}^{t},\mcal{M}_{\bolds{\gamma }}^{t},\mcal{M}_{\bolds{\nu}}^{t}\}$. The\vspace*{1pt} RJMCMC algorithm for variable selection on $\mathbf{x}$ can be outlined as follows: \begin{enumerate}[\textit{Step} 2:] \item[\textit{Step} 1:] Start with the model structure $\mcal {M}^{t}=\{\mcal{M}_{\bolds{\beta}}^{t},\mcal{M}_{\bolds{\gamma }}^{t},\mcal{M}_{\bolds{\nu}}^{t}\}$, where $\mcal{M}_{\bolds{\beta }}^{t}=\{x_{i_{1}},\ldots,x_{i_{m}}\}$ with $\bolds{\beta}^{t}=\{\beta _{i_1},\ldots,\beta_{i_m}\}$. \item[\textit{Step} 2:] Randomly draw an index from $\mathbf{A}_{x}$ with an equal probability $1/|\mathbf{A}_{x}|$. 
Assume $i_{s} \in\mathbf{A}_{x}$ is chosen: \begin{itemize} \item[--] if $i_{s} \in\mcal{M}_{\bolds{\beta}}^{t}$, then propose a D move and obtain $\mcal{M}_{\bolds{\beta}}^{\prime}=\mcal{M}_{\bolds{\beta}}^{t} \setminus\{x_{i_{s}}\}$ and $\mcal{M}^{\prime}=\{\mcal{M}_{\bolds{\beta}}^{\prime},\mcal{M}_{\bolds{\gamma}}^{t},\mcal{M}_{\bolds{\nu}}^{t}\}$ and $\bolds{\beta}^{\prime}=\{\beta_{i_1},\ldots,\beta_{i_s}=0,\ldots,\beta_{i_m}\}$; \item[--] otherwise propose a B move and obtain $\mcal{M}_{\bolds{\beta}}^{\prime}=\mcal{M}_{\bolds{\beta}}^{t} \cup\{x_{i_{s}}\}$ and $\mcal{M}^{\prime}=\{\mcal{M}_{\bolds{\beta}}^{\prime},\mcal{M}_{\bolds{\gamma}}^{t},\mcal{M}_{\bolds{\nu}}^{t}\}$ and $\bolds{\beta}^{\prime}=\{\beta_{i_1},\ldots,\beta_{i_m},\beta_{i_s}\}$. \end{itemize} \item[\textit{Step} 3:] Adjust the coefficient $\beta_{i_{s}}$ corresponding to the covariate $x_{i_{s}}$: \begin{itemize} \item[--] if a D move, set $\beta_{i_{s}}=0$; \item[--] otherwise generate $\beta_{i_{s}}\sim q(\cdot)$. \end{itemize} \item[\textit{Step} 4:] Generate $u\sim\operatorname{Unif}(0,1)$: \begin{itemize} \item[--] if $u< \min\{1,\operatorname{BF}(\mcal{M}_{\bolds{\beta}}^{\prime},\mcal{M}_{\bolds{\beta}}^{t})\times R\}$, then set $\mcal{M}_{\bolds{\beta}}^{t+1}=\mcal{M}_{\bolds{\beta}}^{\prime}$ and\break $\mcal{M}^{t+1}=\mcal{M}^{\prime}$; \item[--] otherwise $\mcal{M}_{\bolds{\beta}}^{t+1}=\mcal{M}_{\bolds{\beta}}^{t}$ and $\mcal{M}^{t+1}=\mcal{M}^{t}$. \end{itemize} \item[\textit{Step} 5:] Repeat. \end{enumerate} In terms of the proposal distribution $q(\cdot)$, we use a $\operatorname{Gau}(0,\zeta)$ distribution with $\zeta$ being a user-defined tuning parameter. Moreover, \[ R= \cases{ \displaystyle\frac{p_{i_{s}}^{b}}{p_{i_{s}}^{d}}\times q(\beta_{i_{s}}), &\quad if D move, \vspace*{5pt}\cr \displaystyle\frac{p_{i_{s}}^{d}}{p_{i_{s}}^{b}}\times\frac{1}{q(\beta_{i_{s}})}, &\quad if B move,} \] and \[ \operatorname{BF}\bigl(\mcal{M}_{\bolds{\beta}}^{\prime}, \mcal{M}_{\bolds{\beta}}^{t}\bigr)=\frac{P(\mcal{M}_{\bolds{\beta}}',\bolds{\beta}'|\mathbf{Y},\mcal{M}_{\bolds{\gamma}}^{t},\bolds{\gamma},\mcal{M}_{\bolds{\nu}}^{t},\bolds{\nu})}{P(\mcal{M}_{\bolds{\beta}}^{t},\bolds{\beta}^{t}|\mathbf{Y},\mcal{M}_{\bolds{\gamma}}^{t},\bolds{\gamma},\mcal{M}_{\bolds{\nu}}^{t},\bolds{\nu})}. \] We now discuss the grouping algorithm for the dispersion parameters $\bolds{\nu}$. Assume there are $n_{t}$ different arrangements $T_{1},T_{2},\ldots,T_{n_{t}}$ for $\bolds{\nu}$ at the $t$th iteration of the MCMC, that is, $\mcal{M}_{\bolds{\nu}}^{t}=\{T_{1}, T_{2}, \ldots, T_{m}, \ldots,T_{n_{t}}\}$. For each grouping $T_{m}$, $m=1,2,\ldots,n_{t}$, the corresponding elements are subscripts for the dispersion parameter group membership. For example, if $n_{t}=1$, we have $T_{1}=\{1,2,\ldots,J\}$, that is, $\nu_{j} \equiv\nu$, for $j=1,2,\ldots,J$. Similar to the variable selection previously described, we allow for two types of moves as follows: \begin{enumerate}[C:] \item[C:] propose to combine two different arrangements into one arrangement with probability $p_{c}$, \item[S:] propose to split an arrangement into two arrangements with probability $p_{s}$. \end{enumerate} Without loss of generality, assume an equal probability of proposing a C or S move, that is, $p_{c}=p_{s}=1/2$. As an illustration, we describe only the S move. Suppose there are $n_{t}^{s}$ out of $n_{t}$ arrangements in $\mcal{M}_{\bolds{\nu}}^{t}$ that have more than one element.
We now discuss the grouping algorithm for the dispersion parameters $\bolds{\nu}$. Assume there are $n_{t}$ different arrangements $T_{1},T_{2},\ldots,T_{n_{t}}$ for $\bolds{\nu}$ at the $t$th iteration of the MCMC, that is, $\mcal{M}_{\bolds{\nu}}^{t}=\{T_{1}, T_{2}, \ldots, T_{m}, \ldots,T_{n_{t}}\}$. For each grouping $T_{m}$, $m=1,2,\ldots,n_{t}$, the corresponding elements are the subscripts defining the dispersion parameter group membership. For example, if $n_{t}=1$, we have $T_{1}=\{1,2,\ldots,J\}$, that is, $\nu_{j} \equiv\nu$, for $j=1,2,\ldots,J$. Similar to the variable selection previously described, we allow for two types of moves as follows:
\begin{enumerate}[C:]
\item[C:] propose to combine two different arrangements into one arrangement with probability $p_{c}$,
\item[S:] propose to split an arrangement into two arrangements with probability $p_{s}$.
\end{enumerate}
Without loss of generality, assume an equal probability of proposing a C or S move, that is, $p_{c}=p_{s}=1/2$. As an illustration, we describe only the S move; see the sketch following the algorithm below. Suppose there are $n_{t}^{s}$ out of $n_{t}$ arrangements in $\mcal{M}_{\bolds{\nu}}^{t}$ that have more than a single element. We randomly choose one of these $n_{t}^{s}$ arrangements, each with equal probability. Assume that group $T_{m}$ is chosen, where $m \in\{1,\ldots,n_{t}^{s}\}$ and $|T_{m}|>1$. Assuming we split $T_{m}$ into two nonempty sets $T_{m_{1}}$ and $T_{m_{2}}$, we denote the resulting model structure as $\mcal{M}_{\bolds{\nu}}'=\{T_{1}, T_{2}, \ldots, T_{m_{1}}, T_{m_{2}}, \ldots,T_{n_{t}}\}$. The RJMCMC algorithm for grouping of $\bolds{\nu}$ can be outlined as follows:
\begin{enumerate}[\textit{Step} 3:]
\item[\textit{Step} 1:] Calculate the proposal probabilities $P(\mcal{M}_{\bolds{\nu}}'|\mcal{M}_{\bolds{\nu}})$ and $P(\mcal{M}_{\bolds{\nu}}|\mcal{M}_{\bolds{\nu}}')$ as
\begin{eqnarray*}
P\bigl(\mcal{M}_{\bolds{\nu}}'|\mcal{M}_{\bolds{\nu}}\bigr)&=& \frac{1}{2}\frac{1}{n_{t}^{s}}\frac{1}{2^{(|T_{m}|-1)}-1},
\\
P\bigl(\mcal{M}_{\bolds{\nu}}|\mcal{M}_{\bolds{\nu}}'\bigr)&=& \frac{1}{2}\frac{1}{{n_{t}^{s}+1 \choose2}}
\end{eqnarray*}
[\citet{king2002bayesian}].
\item[\textit{Step} 2:] Let $\nu_{m}$ denote the value common to all dispersion parameters in $T_{m}$ and $\nu_{m_{1}}$ and $\nu_{m_{2}}$ be the values of the dispersion parameters in $T_{m_{1}}$ and $T_{m_{2}}$, respectively. Define a bijective mapping between $\nu_{m}$ and $(\nu_{m_{1}},\nu_{m_{2}})$ as
\[
\nu_{m_{1}}=\nu_{m}+\varepsilon\quad\mbox{and}\quad \nu_{m_{2}}=\nu_{m}-\varepsilon,
\]
where $\varepsilon\sim h(\cdot)$.
\item[\textit{Step} 3:] Generate $\xi\sim\operatorname{Unif}(0,1)$:
\begin{itemize}
\item[--] if $\xi< \min\{1,\mathrm{BF}(\mcal{M}_{\bolds{\nu}}^{\prime},\mcal{M}_{\bolds{\nu}}^{t})\times R_{s}\}$, then set $\mcal{M}_{\bolds{\nu}}^{t+1}=\mcal{M}_{\bolds{\nu}}^{\prime}$ and\break $\mcal{M}^{t+1}=\mcal{M}^{\prime}$;
\item[--] otherwise $\mcal{M}_{\bolds{\nu}}^{t+1}=\mcal{M}_{\bolds{\nu}}^{t}$ and $\mcal{M}^{t+1}=\mcal{M}^{t}$.
\end{itemize}
\end{enumerate}
In terms of the proposal distribution $h(\cdot)$, we take $h(\cdot)$ to be the density of $\operatorname{Unif}(-\eta,\eta)$, where $\eta$ is chosen through pilot tuning. Moreover,
\begin{eqnarray*}
\mathrm{BF}\bigl(\mcal{M}_{\bolds{\nu}}^{\prime},\mcal{M}_{\bolds{\nu}}^{t}\bigr)&=&\frac{P(\mcal{M}_{\bolds{\nu}}',\nu_{m_1},\nu_{m_2}|\mathbf{Y},\mcal{M}_{\bolds{\gamma}}^{t},\bolds{\gamma},\mcal{M}_{\bolds{\beta}}^{t},\bolds{\beta}^{t})}{P(\mcal{M}_{\bolds{\nu}}^{t},\nu_{m}|\mathbf{Y},\mcal{M}_{\bolds{\gamma}}^{t},\bolds{\gamma},\mcal{M}_{\bolds{\beta}}^{t},\bolds{\beta}^{t})},
\\
R_{s}&=& \frac{P(\mcal{M}_{\bolds{\nu}}|\mcal{M}_{\bolds{\nu}}')}{P(\mcal{M}_{\bolds{\nu}}'|\mcal{M}_{\bolds{\nu}})} \times\frac{1}{h(\varepsilon)}\times\biggl\llvert{\frac{\partial{(\nu_{m_{1}},\nu_{m_{2}})}}{\partial{(\nu_{m},\varepsilon)}}} \biggr\rrvert.
\end{eqnarray*}
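Analogously, a minimal sketch of the S (split) proposal is given below; as before, \texttt{log\_post} and the tuning parameter \texttt{eta} are model-specific placeholders, and the random bipartition is drawn uniformly over the $2^{(|T_{m}|-1)}-1$ possible splits. Note that the Jacobian of the mapping $(\nu_{m},\varepsilon)\mapsto(\nu_{m}+\varepsilon,\nu_{m}-\varepsilon)$ equals $2$ and $h(\varepsilon)=1/(2\eta)$.
\begin{verbatim}
import numpy as np
from math import comb, log

rng = np.random.default_rng(0)

def split_move(groups, nu, log_post, eta=0.1):
    # One RJMCMC split (S) proposal for the grouping M_nu (Steps 1--3).
    #   groups   : list of lists; groups[m] holds the indices j sharing nu[m]
    #   nu       : list of group-level dispersion values
    #   log_post : (groups, nu) -> log conditional posterior, model specific
    splittable = [m for m, T in enumerate(groups) if len(T) > 1]
    if not splittable:
        return groups, nu
    n_s = len(splittable)
    m = int(rng.choice(splittable))
    T = groups[m]
    while True:  # uniform random bipartition of T into nonempty T1, T2
        mask = rng.integers(0, 2, size=len(T)).astype(bool)
        if 0 < mask.sum() < len(T):
            break
    T1 = [t for t, s in zip(T, mask) if s]
    T2 = [t for t, s in zip(T, mask) if not s]
    eps = rng.uniform(-eta, eta)                    # epsilon ~ h(.)
    new_groups = groups[:m] + groups[m + 1:] + [T1, T2]
    new_nu = nu[:m] + nu[m + 1:] + [nu[m] + eps, nu[m] - eps]
    # R_s = [P(M|M') / P(M'|M)] * [1 / h(eps)] * |Jacobian|, Jacobian = 2
    log_p_split = log(0.5) - log(n_s) - log(2 ** (len(T) - 1) - 1)
    log_p_merge = log(0.5) - log(comb(n_s + 1, 2))
    log_Rs = (log_p_merge - log_p_split) + log(2 * eta) + log(2.0)
    log_BF = log_post(new_groups, new_nu) - log_post(groups, nu)
    if np.log(rng.uniform()) < min(0.0, log_BF + log_Rs):
        return new_groups, new_nu
    return groups, nu
\end{verbatim}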
\section{Simulated examples}\label{secSim}

To evaluate the performance of the binomial mixture models we propose, we considered two simulated examples using the Bin-CMP model, which differ only in whether or not a spatial component is included in the intensity model. For both simulations, we choose $G=131$, \mbox{$J=5$}, and $K=3$ to be the same as in the American Robin data presented in Section~\ref{secApp}. For both examples, we simulate data as $y_{ijk}|N_{ij},p_{ijk} \sim \operatorname{Bin}(N_{ij},p_{ijk})$. For the probability of detection, we consider
\[
\operatorname{logit}(p_{ijk})=\beta_{1}x_{ijk,1}+ \beta_{2}x_{ijk,2}+\cdots+\beta_{P}x_{ijk,P},
\]
where the values for the covariates $\mathbf{x}$ are set to be the same as in the American Robin data for $i=1,2,\ldots,G$, $j=1,2,\ldots,J$, $k=1,2,\ldots,K$, and $l=1,2,\ldots,P$, with $P=4$. In addition, we set $\bolds{\beta}=(-2.31,-0.4,0.0,-0.4)'$ with $ \{x_{1},x_{2},x_{4} \}$ being the important covariates. For the true abundance parameters $N_{ij}$, we simulated from $N_{ij}\sim\operatorname{CMP}(\lambda_{ij},\nu_{j})$, with $\nu_{1}=\nu_{3}=\nu_{5}=0.15$, $\nu_{2}=\nu_{4}=0.06$ and $\bolds{\gamma}_{0}=(0.31,0.13,0.44,0.16,0.35)'$, as estimated from the American Robin data presented in Section~\ref{secApp}. For $i=1,2,\ldots,G$ and $j=1,2,\ldots,J$, the intensity $\lambda_{ij}$ is simulated according to
\begin{eqnarray*}
\mbox{\textbf{S1}:}\quad \log\lambda_{ij}&=&\mathbf{w}_{i}^{\prime} \bolds{\gamma}+\gamma_{0j},
\\
\mbox{\textbf{S2}:}\quad \log\lambda_{ij}&=&\mathbf{w}_{i}^{\prime} \bolds{\gamma}+\bolds{\phi}_{i}^{*\prime}\bolds{\alpha}+ \gamma_{0j},
\end{eqnarray*}
where $\bolds{\phi}_{i}^{*\prime}$ for $i=1,2,\ldots,G$ and $\bolds{\gamma}_{0}=(\gamma_{01},\ldots,\gamma_{05})'$ are determined according to the American Robin data with $\tau=10$. In each of the two models, $\mathbf{w}_{i}$ are set to be the same as in the American Robin data presented in Section~\ref{secApp}. Further, we set $M=11$ and $\bolds{\gamma}=(0.0,0.0,0.0,0.0,0.0,0.06,0.0,0.0,0.0,0.03,0.0)'$, that is, with $ \{w_{6},w_{10} \}$ being the important covariates. In particular, for \textbf{S2}, the coefficients of the spatial components, $\bolds{\alpha}$, are randomly sampled from $\operatorname{Unif}(0,1)$ to avoid $y_{ijk}$ being too large. For the two simulations, we apply RJMCMC to perform variable selection and grouping. Similar to the analysis presented in Section~\ref{secApp}, we require $\bolds{\alpha}$ to be included in the model with probability one for \textbf{S2} and set $a_{j}\equiv0.02$ and $b_{j} \equiv2.0$ to allow for both over- and underdispersion. In addition, we set $\bolds{\mu}_{\beta}=\bolds{\mu}_{\gamma}\equiv\mathbf{0}$, $\bolds{\Sigma}_{\beta}=10^{2}\mathbf{I}_{P}$, and $\bolds{\Sigma}_{\gamma}=10^{2}\mathbf{I}_{M}$.
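To make the data-generating mechanism explicit, the following is a minimal sketch of simulating one site-year from model \textbf{S1}; the CMP draw uses a simple normalize-and-sample scheme on a truncated support, and the truncation bound \texttt{N\_max} is an illustrative choice rather than part of the model.
\begin{verbatim}
import numpy as np
from math import lgamma

rng = np.random.default_rng(0)

def rcmp(lam, nu, N_max=500):
    # Draw from CMP(lam, nu): pmf proportional to lam^n / (n!)^nu,
    # normalized over the truncated support {0, 1, ..., N_max}.
    n = np.arange(N_max + 1)
    logw = n * np.log(lam) - nu * np.array([lgamma(k + 1.0) for k in n])
    w = np.exp(logw - logw.max())
    return int(rng.choice(n, p=w / w.sum()))

def simulate_site_year(w_i, gamma, gamma0j, x_ij, beta, nu_j):
    # One (i, j) cell of model S1: N_ij ~ CMP(lambda_ij, nu_j) with
    # log lambda_ij = w_i' gamma + gamma_0j, and y_ijk ~ Bin(N_ij, p_ijk)
    # with logit p_ijk = x_ijk' beta; x_ij has one row per visit k.
    lam = np.exp(w_i @ gamma + gamma0j)            # intensity, log link
    N = rcmp(lam, nu_j)                            # latent true abundance
    p = 1.0 / (1.0 + np.exp(-(x_ij @ beta)))       # detection, logit link
    return rng.binomial(N, p)                      # observed counts over visits
\end{verbatim}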
\begin{table}
\tabcolsep=0pt
\tablewidth=250pt
\caption{Posterior marginal probabilities of the most probable model for $\mathbf{x}$, $\mathbf{w}$, and $\bolds{\nu}$ in the Bin-CMP mixture model simulated examples \textup{\textbf{S1}} and \textup{\textbf{S2}} (Section~\protect\ref{secSim}) using RJMCMC. Note that \textup{\textbf{S1}} contains only the covariates in the intensity model, whereas \textup{\textbf{S2}} contains both covariates and spatial components in the intensity model, and that the posterior probabilities for $\mathbf{x}$ under both \textup{\textbf{S1}} and \textup{\textbf{S2}} are slightly less than 1.00 and are reported as 1.00 as a result of rounding}\vspace*{8pt}\label{tabBinMixSimRJMCMC}

(a) Variable selection and grouping for \textbf{S1}

\begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}@{}llcc@{}}
\hline
\textbf{Para-}& & & \textbf{Posterior}\\
\textbf{meter} & \textbf{Model} & \textbf{Frequency} & \textbf{probability}\\
\hline
$\mathbf{x}$ & $ \{x_{1},x_{2},x_{4} \}$ & 59,838 & 1.00 \\[3pt]
$\mathbf{w}$ & $ \{w_{6},w_{10} \}$ & 53,951 & 0.90 \\
& $ \{w_{2},w_{6},w_{10} \}$ & \phantom{0,}4386 & 0.07 \\[3pt]
$\bolds{\nu}$ & $T_{1}=\{2,4\},T_{2}=\{1,3,5\}$ & 43,507 & 0.73 \\
& $T_{1}=\{1,3\},T_{2}=\{2,4\}, T_{3}=\{5\}$ & \phantom{0,}7801 & 0.13 \\
& $T_{1}=\{1\},T_{2}=\{2,4\}, T_{3}=\{3,5\}$ & \phantom{0,}3918 & 0.07 \\
\hline
\end{tabular*}\vspace*{18pt}

(b) Variable selection and grouping for \textbf{S2}

\begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}@{}llcc@{}}
\hline
\textbf{Para-}& & & \textbf{Posterior}\\
\textbf{meter} & \textbf{Model} & \textbf{Frequency} & \textbf{probability}\\
\hline
$\mathbf{x}$ & $ \{x_{1},x_{2},x_{4} \}$ & 59,741 & 1.00 \\[3pt]
$\mathbf{w}$ & $ \{w_{6},w_{10} \}$ & 56,139 & 0.94 \\[3pt]
$\bolds{\nu}$ & $T_{1}=\{2,4\},T_{2}=\{1,3,5\}$ & 37,071 & 0.76 \\
& $T_{1}=\{3\}, T_{2}=\{2,4\}, T_{3}=\{1,5\}$ & \phantom{0,}7573 & 0.13 \\
\hline
\end{tabular*}
\end{table}

Table~\ref{tabBinMixSimRJMCMC} provides the posterior marginal probabilities for the most probable model for $\mathbf{x}$, $\mathbf{w}$, and $\bolds{\nu}$ in the Bin-CMP models \textbf{S1} and \textbf{S2}. For model \textbf{S1}, the most frequent detection probability model was given by $ \{x_{1},x_{2},x_{4} \}$ and appeared with a frequency of 99.73\%. The most frequent intensity model was defined by $ \{w_{6},w_{10} \}$ and had a frequency of 89.92\%. In addition, the most frequent grouping for the dispersion parameters was $\mcal{M}_{\bolds{\nu}}=\{\{2,4\},\{1,3,5\}\}$, which appeared with a frequency of 72.51\%. In all cases, the RJMCMC correctly identified the set of important covariates as well as the grouping for the dispersion parameters, with posterior marginal probability greater than or equal to 72.51\%. In terms of parameter estimation, in most cases the 95\% credible intervals (CIs), averaged over the different models, contain the true values---providing further indication that the correct model is selected with high probability. For model \textbf{S2}, the most frequent set of covariates for the detection probability model was given by $ \{x_{1},x_{2},x_{4} \}$ and appeared with a frequency of 99.57\%. The most frequent set of covariates for the intensity model, $ \{w_{6},w_{10} \}$, had a frequency of 93.57\%. In addition, the most frequent grouping for the dispersion parameters was $\mcal{M}_{\bolds{\nu}}=\{\{2,4\},\{1,3,5\}\}$, which appeared with a frequency of 76.00\%. In summary, the two simulations suggest that we are able to correctly identify important covariates and the grouping for the dispersion parameters with high posterior probability. Finally, for the estimation of abundance in the two simulations, our approach performs satisfactorily, as measured by coverage of the 95\% CIs.
In the presence of spatial components, however, we note that the model-averaged estimates of the dispersion parameters can be adversely affected by missing data.

\section{Application: The Baltimore Ecosystem Study}\label{secApp}

In the urban ecosystems literature, bird communities are often used as surrogates for studying urban biodiversity or species responses to urbanization [\citet{shochat2010birds,aronson2014global}]. Within urban areas, the bird community is shaped by local-scale factors, such as habitat features that vary among neighborhoods, landscape pattern, and socioeconomic characteristics of residents that may influence land management decisions [\citet{pickett2012bio}]. The American Community Survey (ACS) is an ongoing survey that is able to provide timely economic, social, and demographic information on small geographies such as census tracts. Thus, to examine the effects of certain demographic characteristics on abundance, we consider several ACS variables. Additionally, environmental features of different neighborhoods can be described by many factors, such as vegetation diversity, and are therefore also considered in our analysis.

Substantial research has been undertaken to investigate how socioeconomic status and environmental variables influence the abundance and diversity of various avian species [see \citet{loss2009relationships,smallbone2011anuran,denison2010effects} and the references therein]. Using socioeconomic variables from the 2000 decennial census associated with each census tract block group as covariates, \citet{denison2010effects} considered a simple negative binomial (NB) regression with no spatial components under the frequentist paradigm to estimate the relative abundance of the European Starling in the City of Baltimore, Maryland, using a portion of the data collected from 2005 to 2007. In contrast, we consider American Robin data from the BES collected from 2005 to 2009 and apply various Bin-CMP models in order to select important covariates for estimating the detection probability and abundance of the American Robin, as well as to identify the grouping of the dispersion parameters. Due to missing values, the data we consider have an unbalanced structure. In particular, the percentage of secondary sampling occasions with at least one missing observation for each of the five primary sampling occasions is 6.87\%, 6.87\%, 3.05\%, 77.1\%, and 50.38\%, respectively. Moreover, the overall percentage of missing observations in the American Robin data set is 9.62\%.

For the American Robin data, a total of 131 bird survey points were visited during three secondary daily surveys within each of the five primary sampling occasions from 2005 to 2009. With three covariates available, we considered a full model for the detection probability as
\begin{equation}
\operatorname{logit}(p_{ijk})=\beta_{1}+\beta_{2} \texttt{time}_{ijk}+\beta_{3}\texttt{airtemp}_{ijk}+ \beta_{4}\texttt{cloudcover}_{ijk},\hspace*{-20pt}
\label{eqndetectprobeust}
\end{equation}
for $i=1,\ldots, 131$, $j=1,\ldots, 5$, and $k=1,\ldots, n_{ij} \leq K=3$. Regarding the covariates in (\ref{eqndetectprobeust}), \texttt{time}, \texttt{airtemp}, and \texttt{cloudcover} correspond to the start time, air temperature, and cloud cover (i.e., the fraction of the sky obscured by clouds) recorded on each visit to the bird survey points, respectively.
In terms of full models for the intensity, we considered the following three models:
\begin{eqnarray*}
\mbox{\textbf{M1}:}\quad \log\lambda_{ij}&=&\mathbf{w}_{i}^{\prime} \bolds{\gamma}+\widetilde{\bolds{\phi}}{}_{i}^{*\prime}\bolds{\alpha}+\gamma_{0j},
\\
\mbox{\textbf{M2}:}\quad \log\lambda_{ij}&=&\mathbf{w}_{i}^{\prime} \bolds{\gamma}+\gamma_{0j},
\\
\mbox{\textbf{M3}:}\quad \log\lambda_{ij}&=&\bolds{\phi}_{i}^{*\prime} \bolds{\alpha}+\gamma_{0j},
\label{eqnintensityM2}
\end{eqnarray*}
where, for $j=1,\ldots,J$, $\gamma_{0j}$ is a year-specific intercept and $\bolds{\phi}_{i}^{*\prime}$ is the $i$th row of the matrix $\bolds{\Phi}^{*}$ as discussed in Section~\ref{secModeldev}. Moreover, the covariates in the intensity model are given by $\mathbf{w}_{i}^{\prime}=(\texttt{uftree}_{i}$, $\texttt{ufbldg}_{i}$, $\texttt{ufmgrass}_{i}$, $\texttt{bld200m}_{i}$, $\texttt{for200m}_{i}$, $\texttt{veg200m}_{i}$, $\texttt{African}_{i}$, $\texttt{bachelor}_{i}$, $\texttt{fmkds}_{i}$, $\texttt{pubassit}_{i}$, $\texttt{houseyr}_{i})$. These covariates are specific to each survey location and do not vary with the primary sampling occasions. Among the environmental variables, \texttt{uftree}, \texttt{ufbldg}, and \texttt{ufmgrass} are the UFORE plot variables that indicate tree cover, ground cover by buildings, and maintained grass, respectively. Further, \texttt{bld200m}, \texttt{for200m}, and \texttt{veg200m} are variables that measure cover by buildings, tree cover, and other vegetation cover in the 200-meter-radius plot, respectively [see Figure~1 in the supplemental article, \citet{wuSupp2015}]. For the ACS variables specific to each census tract block group, \texttt{African} is the percentage of African American residents; \texttt{bachelor} is the percentage of the population with a Bachelor's degree or higher; \texttt{fmkds} is the percentage of housing units occupied by a female householder with children under 18 years; \texttt{pubassit} is the percentage of households on government public income assistance; \texttt{houseyr} is the median year that a housing unit was built. We used the five-year period estimates from 2005 to 2009 for these ACS variables, which can be obtained at the U.S. Census Bureau website (\surl{http://www.census.gov/acs/www/}). Our specific choice of ACS variables was facilitated by a social areas analysis approach [\citet{denison2010effects,maloney1974social,muller2013patterns}]. Note that we standardize the covariates in (\ref{eqndetectprobeust}) and in the intensity model for numerical stability. Further, based on exploratory analysis involving various collinearity diagnostics (e.g., condition numbers) of the site covariates (not shown) and subject matter knowledge, we expect any effects of collinearity between the site covariates to have a minimal effect on the variable selection algorithm. Finally, for model \textbf{M1}, we orthogonalize the matrix of spatial basis functions with respect to the covariates, to alleviate potential confounding with the covariate effects [\citet{hodges2010adding}]. As a result, $\widetilde{\bolds{\phi}}{}_{i}^{*\prime}$ is the $i$th row of the matrix $\widetilde{\bolds{\Phi}}{}^{*}$ after the orthogonalization.
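One standard way to carry out this orthogonalization step is sketched below (our illustration; the variable names are ours): the basis matrix is projected onto the orthogonal complement of the column space of the covariate matrix.
\begin{verbatim}
import numpy as np

def orthogonalize(Phi, W):
    # Replace Phi by (I - P_W) Phi, where P_W = W (W'W)^{-1} W'
    # projects onto the column space of the covariate matrix W.
    P_W = W @ np.linalg.solve(W.T @ W, W.T)
    return (np.eye(W.shape[0]) - P_W) @ Phi
\end{verbatim}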
It is worth pointing out that the choice among the models above depends on the goal of the ecological study. For example, \textbf{M3} can be used if no covariates are available for modeling the intensity. For other cases where covariates are available, but there is no spatial dependence (or the spatial dependence is negligible after accounting for covariates), model \textbf{M2} can be utilized. When covariates are available and spatial dependence is present, \textbf{M1} represents a potential model.

When implementing the RJMCMC algorithm, we require the ``intercept'' term~$\beta_{1}$ in (\ref{eqndetectprobeust}) and $\bolds{\gamma}_{0}$, in the model for the intensity, to be included with probability one. In addition, in the presence of spatial components, we require $\bolds{\alpha}$ to be in the model for the intensity with probability one. For the choice of knot points, when using low-rank thin plate basis functions, we conducted a sensitivity analysis to choose the number of knots and used a space-filling design for their placement. Specifically, for three different choices of the number of knot points, $\tau=10$, 15, and 32 in \textbf{M1}, similar results are obtained in terms of abundance estimation, although parameter estimation becomes more difficult as $\tau$ gets large. Equally important, the results of the sensitivity analysis indicate that the variable selection and the grouping for the dispersion parameters seem robust to the number of knot points. Hence, we choose $\tau=10$ for both \textbf{M1} and \textbf{M3}. We used a Metropolis--Hastings within Gibbs sampler consisting of a total of 120,000 MCMC iterations, with the first 60,000 discarded as burn-in. Our inference is based on every third sample after burn-in, which results in a total of 20,000 samples used.

In terms of posterior marginal probability, the model having \texttt{time} and \texttt{cloudcover} has the highest probability of being selected in the model for detectability. Similarly, for the intensity model, \texttt{ufbldg}, \texttt{veg200m}, and \texttt{pubassit} are selected with higher probability relative to the other covariates. However, the grouping of the dispersion parameters varies across models depending on whether spatial components are included. This is not unexpected, as there is a trade-off between the dispersion parameter and the inclusion of spatial components. The three models we considered all produce similar results in terms of the selection of important covariates and abundance estimates (results not shown). However, since the goal of our analysis is to identify and draw inference on important covariates relating to detectability and abundance, we present results from the more parsimonious model~\textbf{M2}.

From Table~\ref{tabBinCMPamroM2}, it can be seen that \texttt{time} and \texttt{cloudcover} are identified as important predictors for the detectability of the American Robin. For the covariates in the intensity model, \texttt{ufbldg}, \texttt{veg200m}, and \texttt{pubassit} are selected as the important factors in all cases. For the dispersion parameters, the results suggest the most probable model has the grouping $T_{1}=\{2,4\},T_{2}=\{1,3,5\}$ (with posterior probability 0.6496), indicating that the data in 2005, 2007, and 2009 exhibit a similar amount of dispersion, whereas the data for 2006 and 2008 show a similar amount of dispersion.

\begin{table}
\tabcolsep=0pt
\tablewidth=250pt
\caption{Posterior probabilities of the most probable model for \textup{\textbf{M2}} and the posterior summary statistics in the Bin-CMP model assuming the posterior mode model for \textup{\textbf{M2}}.
Note that \textup{\textbf{M2}} only contains covariates in the intensity model, and $\widehat{R}$ refers to the Gelman--Rubin diagnostic}\vspace*{8pt}\label{tabBinCMPamroM2}

(a) Variable selection and grouping

\begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}@{}llcc@{}}
\hline
& & & \textbf{Posterior} \\
\textbf{Variable} & \textbf{Model} & \textbf{Frequency} & \textbf{probability} \\
\hline
$\mathbf{x}$ &\{cloudcover\} & 31,992 & 0.53 \\
&\{time, cloudcover\} & 27,587 & 0.46 \\[3pt]
$\mathbf{w}$ &\{veg200m, pubassit\} & 51,343 & 0.86 \\
&\{ufbldg, veg200m, pubassit\} & \phantom{0,}7234 & 0.12 \\[3pt]
$\bolds{\nu}$ & $T_{1}=\{2,4\},T_{2}=\{1,3,5\}$ & 38,973 & 0.65 \\
& $T_{1}=\{2\},T_{2}=\{1,3,4,5\}$ & \phantom{0,}7445 & 0.12 \\
&$T_{1}=\{2\},T_{2}=\{4\},T_{3}=\{1,3,5\}$& \phantom{0,}3745 & 0.06 \\
\hline
\end{tabular*}\vspace*{18pt}

(b) Parameter estimation

\begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}@{}ld{2.2}d{2.2}d{2.2}d{2.2}c@{}}
\hline
\textbf{Parameter} & \multicolumn{1}{c}{$\bolds{\mu_{\mathrm{post}}}$} & \multicolumn{1}{c}{$\bolds{\sigma_{\mathrm{post}}}$} & \multicolumn{1}{c}{$\bolds{Q_{0.025}}$} & \multicolumn{1}{c}{$\bolds{Q_{0.975}}$} & $\bolds{\widehat{R}}$ \\
\hline
intercept & -2.31 & 0.07 & -2.45 & -2.17 & 1.00\\
time & -0.10 & 0.03 & -0.15 & -0.04 & 1.00\\
cloudcover & -0.04 & 0.03 & -0.09 & 0.01 & 1.00\\
ufbldg & -0.02 & 0.01 & -0.03 & -0.01 & 1.00 \\
veg200m & 0.06 & 0.01 & 0.05 & 0.09 & 1.00\\
pubassit & 0.02 & 0.01 & 0.01 & 0.04 & 1.00\\
$\gamma_{01}$ & 0.35 & 0.07 & 0.23 & 0.51 & 1.01 \\
$\gamma_{02}$ & 0.16 & 0.05 & 0.07 & 0.26 & 1.01 \\
$\gamma_{03}$ & 0.48 & 0.08 & 0.33 & 0.67 & 1.01 \\
$\gamma_{04}$ & 0.14 & 0.05 & 0.05 & 0.24 & 1.01\\
$\gamma_{05}$ & 0.38 & 0.07 & 0.25 & 0.55 & 1.01 \\
$\nu_{24}$ & 0.08 & 0.02 & 0.05 & 0.11 & 1.01 \\
$\nu_{135}$ & 0.17 & 0.03 & 0.12& 0.23 & 1.01 \\
\hline
\end{tabular*}
\end{table}

Last, we consider the posterior mode model (i.e., the model with the highest posterior probability) for the Bin-CMP mixture model \textbf{M2} in order to draw inference about how the different covariates affect the detectability and abundance of the American Robin within the study domain. We conclude that an important covariate is a positively (or negatively) significant factor if the lower (or upper) end of its 95\% CI is greater (or smaller) than 0, respectively. For the posterior mode model, we include only the intercept, \texttt{time}, and \texttt{cloudcover} in (\ref{eqndetectprobeust}), whereas for the covariates in the intensity model, only \texttt{ufbldg}, \texttt{veg200m}, and \texttt{pubassit} are included. For the dispersion parameters, we consider the case where $\nu_{2}=\nu_{4}=\nu_{24}$ and $\nu_{1}=\nu_{3}=\nu_{5}=\nu_{135}$. Table~\ref{tabBinCMPamroM2} presents the posterior summary statistics and Gelman--Rubin diagnostics [\citet{brooks1998general}] for the model parameters. In all cases $\widehat{R}$ is close to 1, indicating that convergence has been reached. Moreover, \texttt{time} is negatively correlated with the detectability of the American Robin; that is, the earlier the survey is conducted, the more likely it is that we can detect the American Robin. In terms of the intensity, \texttt{ufbldg} is negatively related to the abundance of the American Robin, whereas \texttt{veg200m} and \texttt{pubassit} are positively related.
As a result, for bird survey points near more buildings, the abundance of the American Robin is lower, while for survey points with a higher percentage of vegetation and residents of lower socioeconomic status, the abundance of the American Robin is higher.

\begin{figure}
\includegraphics{801f02.eps}
\caption{Plots of the posterior mean and standard deviation of the abundance estimates for 2009 in the Bin-CMP model assuming the posterior mode model for \textup{\textbf{M2}}. Note that \textup{\textbf{M2}} only contains covariates in the intensity model. \textup{(a)}~Posterior mean, \textup{(b)}~posterior sd.}\label{figBinCMPamroM2N2009}\label{figY2009mu}\label{figY2009sd}
\end{figure}

As an example, Figure~\ref{figBinCMPamroM2N2009} provides a spatial map of the posterior mean and standard deviation of the abundance estimate (from \textbf{M2}) for 2009, whereas Figures~2 and 3 of the supplemental article [\citet{wuSupp2015}] illustrate how the abundance estimates and their standard errors change over the duration of the period studied (2005--2009). Last, our results suggest that American Robin counts are overdispersed within the study domain over all of the years considered.

\section{Discussion}\label{secDiscu}

Motivated by the American Robin data from the BES, we developed a class of Bayesian hierarchical binomial mixture models that allow for automated variable selection and grouping in the presence of an unbalanced nested design. In addition, we demonstrate that over- and underdispersion in the data can be accounted for by specifying an appropriate model for the abundance parameter, namely, a Bin-CMP model. More importantly, we allow for large-scale spatial dependence to be accounted for by adding a spatial component to the intensity model (i.e., through a spatial basis function expansion). Under the binomial mixture modeling framework, the use of a low-rank spatial representation proves to be a computationally advantageous approach to building in spatial dependence. Although we have presented a model (\textbf{M2}) that accounts for covariate information, spatial maps that predict abundance at unobserved locations could be obtained using model \textbf{M3}, thereby taking advantage of the spline formulation. In contrast, both models \textbf{M1} and \textbf{M2} would require imputation of covariates at unobserved locations (i.e., additional data models) to predict abundance at unobserved locations. Consequently, since our goal is primarily inferential, this direction has not been pursued here.

The class of binomial mixture models we consider assumes population closure within each primary sampling period. Such an assumption is often justified based on biological and/or ecological considerations, when the primary sampling period covers a relatively short time frame. In our case, the justification of the closed population assumption is based on ecological considerations. However, it may also be possible to extend our model to verify the assumption of population closure following the framework of \citet{dail2011models} by decomposing the true abundance into the sum of two independent components, that is, the total number of survivors from the previous sampling period (by introducing a survival rate parameter in the model) and new additions prior to the current sampling period (by introducing a birth parameter in the model). This is a subject of future research.
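For concreteness, one such open-population specification (a sketch on our part, not a model fitted here) might take, for primary periods $j>1$, the form
\begin{displaymath}
N_{ij} = S_{ij} + G_{ij},\qquad S_{ij}|N_{i,j-1} \sim \operatorname{Bin}(N_{i,j-1},\omega),\qquad G_{ij} \sim \operatorname{Pois}\bigl(\gamma^{*}\bigr),
\end{displaymath}
where the hypothetical parameters $\omega$ and $\gamma^{*}$ denote a survival probability and a recruitment (birth) intensity, respectively; the closed-population case then corresponds to $\omega=1$ and $\gamma^{*}=0$.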
Although the binomial mixture models we propose can accommodate unbalanced data structures, the amount of missing data can impact model selection and parameter estimation. As discussed in the second simulated example, the model-averaged estimates of the dispersion parameters are positively biased when the simulated data exhibit the same missingness pattern as the American Robin data and spatial components are included to account for spatial dependence in the intensity model. Nevertheless, we note that grouping of the dispersion parameters leads to a ``borrowing of strength,'' since data collected over different years are pooled together if the corresponding dispersion parameters fall into the same group. In other words, this pooling of data helps mitigate the negative impacts of missing values. In general, a comprehensive assessment of the effect of missing data is problem specific and depends on both the pattern of missingness and the underlying spatial dependence (e.g., the effective sample size). In practice, we advocate evaluating these effects through simulated data examples, similar to those conducted here.

It is important to note that all of the models we considered for the American Robin data provide similar results regarding the identification of important covariates for detectability and intensity, as well as the grouping of dispersion \mbox{parameters}. First, \texttt{time} and \texttt{cloudcover} are identified to be important covariates for the detectability of the American Robin, with the former being negatively related to observing the American Robin. However, one should be careful when interpreting \texttt{cloudcover} due to the difficulty in estimating it \mbox{objectively} [\citet{vignola2012solar}]. On the other hand, \texttt{ufbldg}, \texttt{veg200m}, and \texttt{pubassit} are found to be important predictors for the abundance of the American Robin. In terms of dispersion, the American Robin data demonstrate overdispersion. Importantly, the class of binomial mixture models we propose is of \mbox{independent} interest and, when coupled with the CMP distribution, can be used in cases where the type of dispersion (i.e., over- and underdispersion) varies over time. In this sense, the Bin-CMP mixture model is extremely versatile, as it can be used for modeling equi-, over-, and underdispersed data (e.g., for modeling the abundance of less prevalent species, such as the Eastern Wood Pewee or Wood Thrush in the BES).

\section*{Acknowledgments}

The authors would like to thank the Editor Tilmann Gneiting, the Associate Editor, and three anonymous referees for providing valuable comments that have helped strengthen this manuscript.

\begin{supplement}[id-suppA]
\stitle{Supplement to ``Bayesian binomial mixture models for estimating abundance in ecological monitoring studies''}
\slink[doi]{10.1214/14-AOAS801SUPP}
\sdatatype{.pdf}
\sfilename{aoas801\_supp.pdf}
\sdescription{The supplementary material contains the MCMC sampling algorithm, details regarding computation times for the models implemented, and additional figures.}
\end{supplement}
\section*{Abstract}
We introduce a new approach to obtaining pointwise estimates for solutions of elliptic boundary value problems when the operator being considered satisfies a certain type of weighted integral inequality. The method is illustrated on several examples, including a scalar second-order elliptic equation, the 3D Lam\'{e} system, and a scalar higher-order elliptic equation. The techniques can be extended to other elliptic boundary value problems provided that the corresponding weighted integral inequalities are satisfied.

\section{Introduction}\label{sec_intro}
An important open problem in the mathematical theory of linear elasticity is whether solutions of the elasticity system, when supplemented with homogeneous Dirichlet boundary conditions and sufficiently smooth right-hand side data, are uniformly bounded in \emph{arbitrary} domains. A similar question stands in the theory of hydrostatics, where the uniform boundedness of solutions of the Stokes system in general domains remains unknown. For bounded domains $\Omega$ with smooth boundaries $\partial \Omega$, inequalities of the form
\begin{equation}
\norm{u}_{L^{\infty}(\Omega)} \leq C_{\Omega} \norm{D^{k} u}_{L^{q}(\Omega)}^{a} \norm{Lu}_{W^{l,p}(\Omega)}^{b}
\label{eqn_pt_bd}
\end{equation}
can often be obtained by combining appropriate \emph{a priori} estimates with Sobolev inequalities. The drawback of such inequalities is that the constant $C_{\Omega}$ generally depends on the smoothness of the domain $\Omega$, and can blow up if the boundary of $\Omega$ contains geometric singularities. Efforts have been devoted to the study of inequalities of the type \eqref{eqn_pt_bd} with constants \emph{independent} of the domain $\Omega$. In \citet{xie1991}, a sharp pointwise bound
\begin{equation}
\norm{u}_{L^{\infty}(\Omega)}^{2} \leq \frac{1}{2\pi}\, \norm{Du}_{L^{2}(\Omega)} \norm{\Delta u}_{L^{2}(\Omega)}
\label{eqn_pt_bd_lap}
\end{equation}
was obtained for functions $u$ with zero-trace and with $L^{2}$-integrable gradient $Du$ and Laplacian $\Delta u$ on arbitrary three-dimensional domains $\Omega$. The constant $(2\pi)^{-1}$ was shown to be the best possible. In \citet{xie1992}, a similar inequality with a slightly different best constant ($(3\pi)^{-1}$ instead of $(2\pi)^{-1}$) was conjectured for the Stokes system on arbitrary domains $\Omega \subseteq \mathbb{R}^{3}$, and was proved in the special case $\Omega = \mathbb{R}^{3}$. Further development and results along these lines can be found in \citet{heywood2001} and the references therein. Regarding the 3D Lam\'{e} system, estimates of the form \eqref{eqn_pt_bd_lap} seem to be less well studied, and we are not aware of any results or conjectures similar to \eqref{eqn_pt_bd_lap}.

In this paper, we introduce a new approach to obtaining pointwise inequalities of the form \eqref{eqn_pt_bd} with constants independent of the domain $\Omega$. The method works for elliptic operators $L$ satisfying a weighted integral inequality
\begin{equation}
\int_{\Omega} Lu \cdot \Phi u\,dx \geq 0,
\label{eqn_wpd}
\end{equation}
where the weight $\Phi$ is either the fundamental solution or Green's function of $L$. Weighted inequalities of the form \eqref{eqn_wpd} were first established for second-order scalar operators (see Section \ref{ssec_pt_bd_1} below for a prototypical derivation), and were later generalized to certain higher-order scalar operators \citep{mazya2002} and second-order systems \citep{lm2007,lm2010}.
They have important applications in the regularity theory of boundary points, and have been studied extensively in \citet{mazya1977,mazya1979,mazya1999,mazya2002,mazya1991,eilertsen2000}. By utilizing a slightly modified version of \eqref{eqn_wpd} (see \eqref{eqn_wpd_sx} below), we shall show that the pointwise estimate \eqref{eqn_pt_bd} follows almost immediately from the weighted positivity of $L$, and the constant thus obtained is independent of the domain $\Omega$. The method is illustrated on several concrete problems, including a scalar second-order elliptic equation, the 3D Lam\'{e} system, and a scalar higher-order elliptic equation. The techniques can be extended to other elliptic boundary value problems provided that the corresponding weighted integral inequalities are satisfied. In what follows, $W_{0}^{k,q}(\Omega)$ denotes the usual Sobolev space of zero-trace functions, equipped with the norm \begin{displaymath} \norm{u}_{W_{0}^{k,q}(\Omega)} := \biggl( \sum_{\abs{\beta} \leq k} \norm{D^{\beta} u}_{L^{q}(\Omega)}^{q} \biggr)^{1/q}. \end{displaymath} \begin{theorem} Let $L$ be a second-order elliptic operator, \begin{displaymath} Lu = -D_{i} (a_{ij}(x) D_{j} u),\qquad D_{i} = \frac{\partial}{\partial x_{i}}, \end{displaymath} defined in a bounded domain $\Omega \subset \mathbb{R}^{n}\ (n \geq 3)$ with real, measurable coefficients $a_{ij}(x)$. Suppose $L$ satisfies the strong ellipticity condition \begin{displaymath} \lambda \abs{\xi}^{2} \leq a_{ij}(x) \xi_{i} \xi_{j} \leq \Lambda \abs{\xi}^{2},\qquad \Lambda \geq \lambda > 0, \end{displaymath} for almost all $x \in \Omega$ and all $\xi = (\xi_{1},\dotsc,\xi_{n}) \in \mathbb{R}^{n}$. Let $s < n/(n-2),\ p = s/(s-1),\ q = (n-2)s$ and let $u \in W_{0}^{1,q}(\Omega)$ be such that $Lu \in L^{p}(\Omega)$. Then \begin{displaymath} \norm{u}_{L^{\infty}(\Omega)} \leq C \norm{Lu}_{L^{p}(\Omega)}^{a} \norm{Du}_{L^{q}(\Omega)}^{1-a},\qquad a = \frac{1}{n-1}, \end{displaymath} where $C$ is an absolute constant depending only on $\lambda,\ \Lambda,\ n$, and $s$. \label{thm_pt_bd_1} \end{theorem} \begin{theorem} Let $L$ be the 3D Lam\'{e} system, \begin{equation} Lu = -\Delta u - \alpha \mathop{\rm grad}\nolimits \mathop{\rm div}\nolimits u = -D_{kk} u_{i} - \alpha D_{ki} u_{k},\qquad i = 1,2,3, \label{eqn_lame} \end{equation} where $\alpha = 1/(1-2\nu) > -1$ and $\nu$ is Poisson's ratio. Let $\alpha \in (\alpha_{-},\alpha_{+}) \approx (-0.194,1.524),\ q < 3,\ p = q/(q-1)$ and let $u \in W_{0}^{1,q}(\Omega)$ be such that $Lu \in L^{p}(\Omega)$, where $\Omega$ is an arbitrary bounded domain in $\mathbb{R}^{3}$. Then \begin{displaymath} \norm{u}_{L^{\infty}(\Omega)} \leq C \norm{Lu}_{L^{p}(\Omega)}^{1/2} \norm{Du}_{L^{q}(\Omega)}^{1/2}, \end{displaymath} where $C$ is an absolute constant depending only on $\alpha$ and $q$. \label{thm_pt_bd_lame} \end{theorem} \begin{theorem} Let $L$ be an elliptic operator of order $2m$, \begin{displaymath} L = (-1)^{m} \sum_{\abs{\alpha} = \abs{\beta} = m} a_{\alpha \beta} D^{\alpha+\beta},\qquad D^{\alpha} = \frac{\partial^{\abs{\alpha}}}{\partial x_{1}^{\alpha_{1}} \partial x_{2}^{\alpha_{2}} \dotsb \partial x_{n}^{\alpha_{n}}}, \end{displaymath} defined in $\mathbb{R}^{n}\ (n > 2m)$ with real, constant coefficients $a_{\alpha\beta}$. 
Suppose $L$ satisfies the strong ellipticity condition
\begin{displaymath}
\lambda \abs{\xi}^{2m} \leq a_{\alpha\beta} \xi^{\alpha} \xi^{\beta} \leq \Lambda \abs{\xi}^{2m},\qquad \Lambda \geq \lambda > 0,
\end{displaymath}
for all $\xi = (\xi_{1},\dotsc,\xi_{n}) \in \mathbb{R}^{n}$. Let $F$ be the fundamental solution of $L$. Let $q < n/(n-2m),\ q' = q/(q-1),\ k = n-2m$ and let $u \in W_{0}^{k,q}(\Omega)$ be such that $Lu \in L^{q'}(\Omega)$, where $\Omega$ is an arbitrary bounded domain in $\mathbb{R}^{n}$. If $F$ is homogeneous of order $2m-n$ and $L$ is weighted positive with the weight $F$, then
\begin{displaymath}
\norm{u}_{L^{\infty}(\Omega)}^{2} \leq C \norm{D^{k} u}_{L^{q}(\Omega)} \norm{Lu}_{L^{q'}(\Omega)},
\end{displaymath}
where $C$ is an absolute constant depending only on $\lambda,\ \Lambda,\ m,\ n$ and $q$.
\label{thm_pt_bd_m}
\end{theorem}

The proofs of Theorems \ref{thm_pt_bd_1}--\ref{thm_pt_bd_m} are given in Section \ref{sec_pt_bd}.

\section{The Notion of Weighted Positivity}\label{sec_wpd}
Let $\Omega$ be a domain in $\mathbb{R}^{n}$ and let $L = (L_{i})_{i=1}^{N}$ be a scalar elliptic operator ($N = 1$) or an elliptic system ($N > 1$) defined on $\Omega$. Without making further structural assumptions on $L$, we first recall the abstract notion of weighted positivity.
\begin{definition}[Weighted positivity]
Assume that $0 \in \Omega$ and that $\Psi(x) = (\Psi_{ij}(x))_{i,j=1}^{N}$ is a given (matrix) function that is sufficiently regular except possibly at $x = 0$. The operator $L = (L_{i})_{i=1}^{N}$ is said to be weighted positive ($N = 1$) or weighted positive definite ($N > 1$) with the weight $\Psi$ if
\begin{equation}
\int_{\Omega} Lu \cdot \Psi u\,dx = \int_{\Omega} (Lu)_{i} \Psi_{ij} u_{j}\,dx \geq 0,
\label{eqn_wpd_w}
\end{equation}
for all real-valued, smooth vector functions $u = (u_{i})_{i=1}^{N} \in C_{0}^{\infty}(\Omega \setminus \{0\})$.
\end{definition}
The concept of weighted positivity is often more useful when the integral $\int_{\Omega} Lu \cdot \Psi u\,dx$ in \eqref{eqn_wpd_w} has a \emph{positive} lower bound. This motivates the following definition.
\begin{definition}[Strong weighted positivity]
Assume that $0 \in \Omega,\ L = (L_{i})_{i=1}^{N}$ is of order $2m,\ m \geq 1$, and that $\Psi(x) = (\Psi_{ij}(x))_{i,j=1}^{N}$ is a given (matrix) function that is sufficiently regular except possibly at $x = 0$. The operator $L$ is said to be strongly weighted positive ($N = 1$) or strongly weighted positive definite ($N > 1$) with the weight $\Psi$ if, for some $c > 0$,
\begin{equation}
\int_{\Omega} Lu \cdot \Psi u\,dx \geq c \sum_{k=1}^{m} \int_{\Omega} \abs{D^{k} u}^{2} \abs{x}^{2k-2m} \abs{\Psi}\,dx,
\label{eqn_wpd_s}
\end{equation}
for all real-valued, smooth vector functions $u = (u_{i})_{i=1}^{N} \in C_{0}^{\infty}(\Omega \setminus \{0\})$. Here $D^{k}$ denotes the gradient operator of order $k$, i.e. $D^{k} = \{D^{\alpha}\}$ with $\abs{\alpha} = k$, and $\abs{\Psi}$ stands for the Frobenius norm of $\Psi$, i.e. $\abs{\Psi}^{2} = \sum_{i,j=1}^{N} \abs{\Psi_{ij}}^{2}$.
\end{definition}
Among all possible candidates for the weight function $\Psi$, the special choice $\Psi = \Phi$, where $\Phi$ is the fundamental solution or Green's function of $L$, is of greatest interest.
In particular, if $L$ has constant coefficients and is strongly weighted positive in the sense of \eqref{eqn_wpd_s}, with the weight $\Psi = \Phi$, then it can be shown that, for all $x \in \Omega$ and for the same constant $c$ as given in \eqref{eqn_wpd_s},
\begin{equation}
\int_{\Omega} Lu \cdot \Phi(x-y) u\,dy \geq \frac{1}{2}\, \abs{u(x)}^{2} + c \sum_{k=1}^{m} \int_{\Omega} \frac{\abs{D^{k} u}^{2}}{\abs{x-y}^{2m-2k}}\, \abs{\Phi(x-y)}\,dy,
\label{eqn_wpd_sx}
\end{equation}
for all real-valued, smooth vector functions $u = (u_{i})_{i=1}^{N} \in C_{0}^{\infty}(\Omega)$ \citep[see, for example,][]{mazya2002}. Estimate \eqref{eqn_wpd_sx} is significant because it provides a pointwise bound for the test function $u$, and it serves as the basis of the estimates to be derived below in Section \ref{sec_pt_bd}.

\section{Proofs of the Main Theorems}\label{sec_pt_bd}
\subsection{Proof of Theorem \ref{thm_pt_bd_1}}\label{ssec_pt_bd_1}
We shall show that the multiplicative inequality
\begin{equation}
\norm{u}_{L^{\infty}(\Omega)}^{n-1} \leq C \norm{Lu}_{L^{p}(\Omega)} \norm{Du}_{L^{q}(\Omega)}^{n-2},\qquad p = \frac{s}{s-1},\ q = (n-2)s,
\label{eqn_pt_bd_1}
\end{equation}
holds for all $s < n/(n-2)$ with a constant $C = C(\lambda,\Lambda,n,s)$ independent of the domain $\Omega$, if $u \in W_{0}^{1,q}(\Omega)$ and $Lu \in L^{p}(\Omega)$. Inequality \eqref{eqn_pt_bd_1} will be obtained by a modification of the weighted positivity of the operator $L$ with zero Dirichlet boundary conditions. More precisely, let $G(x,y)$ denote Green's function of $L$. Its existence and uniqueness are classical facts, as are the estimates
\begin{equation}
c_{1} \abs{x-y}^{2-n} \leq G(x,y) \leq c_{2} \abs{x-y}^{2-n},
\label{eqn_G_1}
\end{equation}
where $c_{1}$ and $c_{2}$ are positive constants depending on $\lambda$ and $\Lambda$ \citep[see][]{royden1962,lsw1963}. By definition of a weak solution of the Dirichlet problem and by a standard approximation argument, we obtain, for almost all $x \in \Omega$,
\begin{multline*}
\int_{\Omega} Lu \cdot G(x,y) u \abs{u}^{n-3}\,dy \\
= (n-2) \int_{\Omega} a_{ij} D_{i} u D_{j} u \cdot G(x,y) \abs{u}^{n-3}\,dy + \int_{\Omega} a_{ij} D_{y_{i}} G(x,y) D_{j} u \cdot u \abs{u}^{n-3}\,dy \\
\geq \frac{1}{n-1} \int_{\Omega} a_{ij} D_{y_{i}} G(x,y) D_{j} \abs{u}^{n-1}\,dy = \frac{1}{n-1}\, \abs{u(x)}^{n-1}.
\end{multline*}
Hence for $s < n/(n-2)$,
\begin{displaymath}
\abs{u(x)}^{n-1} \leq c_{2} (n-1) \norm{Lu}_{L^{p}(\Omega)} \biggl( \int_{\Omega} \frac{\abs{u}^{q}}{\abs{x-y}^{q}}\,dy \biggr)^{1/s},\qquad p = \frac{s}{s-1},\ q = (n-2)s,
\end{displaymath}
where $c_{2}$ is the constant in \eqref{eqn_G_1}. By Hardy's inequality, the last integral does not exceed
\begin{displaymath}
\biggl( \frac{q}{n-q} \biggr)^{n-2} \biggl( \int_{\Omega} \abs{Du}^{q}\,dy \biggr)^{1/s},
\end{displaymath}
thus \eqref{eqn_pt_bd_1} follows with
\begin{displaymath}
C = c_{2} (n-1) \biggl( \frac{q}{n-q} \biggr)^{n-2}.
\end{displaymath}
This completes the proof of Theorem \ref{thm_pt_bd_1}.
\begin{remark}
Note that for $L = -\Delta$, we have $c_{2} = [(n-2) \omega_{n}]^{-1}$ where $\omega_{n}$ is the measure of the unit $(n-1)$-sphere $S^{n-1}$. In particular, for $n = 3,\ s = 2$, and $L = -\Delta$, we have $p = 2,\ q = 2$, and $c_{2} = (4\pi)^{-1}$, so that $C = c_{2}(n-1)(q/(n-q))^{n-2} = (4\pi)^{-1} \cdot 2 \cdot 2 = 1/\pi$. Thus Theorem \ref{thm_pt_bd_1} implies that
\begin{displaymath}
\pi \norm{u}_{L^{\infty}(\Omega)}^{2} \leq \norm{\Delta u}_{L^{2}(\Omega)} \norm{Du}_{L^{2}(\Omega)}.
\end{displaymath} This is similar to the pointwise bound \eqref{eqn_pt_bd_lap} obtained by \citet{xie1991}, but with a slightly worse constant ($\pi$ instead of $2\pi$). \end{remark} \begin{remark} Note that the inequality \eqref{eqn_pt_bd_1} fails for $s = n/(n-2)$. Indeed, when $s = n/(n-2)$, it is easily checked that $p = n/2$ and $q = n$. Let $\zeta \in C_{0}^{\infty}[0,\infty)$ be a smooth cutoff function on $\mathbb{R}$ such that $0 \leq \zeta \leq 1,\ \zeta(x) = 1$ for $x \leq 1/2$, and $\zeta = 0$ for $x \geq 1$. Set \begin{displaymath} u(x) = \zeta(\abs{x}) \log \abs{\log\abs{x}}. \end{displaymath} It is easily verified that $u$ is unbounded in any neighborhood of $x = 0$ while \begin{displaymath} \abs{Du(x)} \leq C \abs{x}^{-1} \abs{\log\abs{x}}^{-1},\qquad \abs{\Delta u(x)} \leq C \abs{x}^{-2} \abs{\log\abs{x}}^{-1}, \end{displaymath} for small $\abs{x}$, indicating that $Du \in L^{n}(\mathbb{R}^{n})$ and $\Delta u \in L^{n/2}(\mathbb{R}^{n})$. This violates \eqref{eqn_pt_bd_1}. \end{remark} \subsection{Proof of Theorem \ref{thm_pt_bd_lame}}\label{ssec_pt_bd_lame} Let $\Phi$ be the fundamental matrix of the 3D Lam\'{e} operator $L$, \begin{subequations}\label{eqn_lame_fs} \begin{align} \Phi_{ij}(x) & = c_{\alpha} r^{-1} \biggl( \delta_{ij} + \frac{\alpha}{\alpha+2}\, \omega_{i} \omega_{j} \biggr),\qquad i,j = 1,2,3, \label{eqn_lame_fs_phi} \\ c_{\alpha} & = \frac{\alpha+2}{8\pi(\alpha+1)} > 0, \label{eqn_lame_fs_c} \end{align} \end{subequations} where $\delta_{ij}$ is the Kronecker delta, $r = \abs{x}$, and $\omega_{i} = x_{i}/\abs{x}$. The weighted positive definiteness of $L$ with the weight $\Phi$ has been established in \citet{lm2010} for certain ranges of the parameter $\alpha$. \begin{theorem}[\citealp{lm2010}] The 3D Lam\'{e} system $L$ is weighted positive definite with the weight $\Phi$ when $\alpha_{-} < \alpha < \alpha_{+}$, where $\alpha_{-} \approx -0.194$ and $\alpha_{+} \approx 1.524$. It is not weighted positive definite with the weight $\Phi$ when $\alpha < \alpha_{-}^{(c)} \approx -0.902$ or $\alpha > \alpha_{+}^{(c)} \approx 39.450$. \label{thm_lame_wpd} \end{theorem} By examining the proof of Theorem \ref{thm_lame_wpd}, it can be observed that $L$ is in fact \emph{strongly} weighted positive definite with the weight $\Phi$, i.e. \begin{equation} \int_{\mathbb{R}^{3}} Lu \cdot \Phi u\,dx \geq \frac{1}{2}\, \abs{u(0)}^{2} + c \int_{\mathbb{R}^{3}} \abs{Du}^{2} \abs{x}^{-1}\,dx,\qquad \text{for some $c > 0$}, \label{eqn_wpd_lame} \end{equation} for all $u = (u_{i})_{i=1}^{3} \in C_{0}^{\infty}(\mathbb{R}^{3})$ when $\alpha_{-} < \alpha < \alpha_{+}$. Using Theorem \ref{thm_lame_wpd}, we shall show that the multiplicative inequality \begin{equation} \norm{u}_{L^{\infty}(\Omega)}^{2} \leq C \norm{Lu}_{L^{p}(\Omega)} \norm{Du}_{L^{q}(\Omega)},\qquad p = \frac{q}{q-1}, \label{eqn_pt_bd_lame} \end{equation} holds on arbitrary bounded domains $\Omega \subset \mathbb{R}^{3}$ with a constant $C = C(\alpha,q)$ independent of the domain, provided that $\alpha_{-} < \alpha < \alpha_{+},\ q < 3,\ u \in W_{0}^{1,q}(\Omega)$ and $Lu \in L^{p}(\Omega)$. Inequality \eqref{eqn_pt_bd_lame} will be obtained as an immediate corollary of the weighted positive definiteness of $L$. More precisely, the weighted inequality \eqref{eqn_wpd_lame} implies that \begin{displaymath} \int_{\mathbb{R}^{3}} Lu \cdot \Phi(x-y) u\,dy \geq \frac{1}{2}\, \abs{u(x)}^{2}, \end{displaymath} for all real-valued $u = (u_{i})_{i=1}^{3} \in C_{0}^{\infty}(\mathbb{R}^{3})$. 
By definition of a weak solution and by a standard approximation argument, the same inequality can be proved for $u \in W_{0}^{1,q}(\Omega)$ for which $Lu \in L^{p}(\Omega)$. Hence for all such $u$ and all $q < 3$, we have
\begin{displaymath}
\abs{u(x)}^{2} \leq 2c_{3} \norm{Lu}_{L^{p}(\Omega)} \biggl( \int_{\Omega} \frac{\abs{u}^{q}}{\abs{x-y}^{q}}\,dy \biggr)^{1/q},\qquad c_{3} = c_{\alpha} \biggl( 1 + \frac{\abs{\alpha}}{\alpha+2} \biggr),\ p = \frac{q}{q-1},
\end{displaymath}
where $c_{\alpha}$ is the constant in \eqref{eqn_lame_fs_c}. By Hardy's inequality, the last integral does not exceed
\begin{displaymath}
\frac{q}{3-q} \biggl( \int_{\Omega} \abs{Du}^{q}\,dy \biggr)^{1/q},
\end{displaymath}
thus \eqref{eqn_pt_bd_lame} follows with
\begin{displaymath}
C = 2 c_{\alpha} \biggl( 1 + \frac{\abs{\alpha}}{\alpha+2} \biggr) \biggl( \frac{q}{3-q} \biggr).
\end{displaymath}
This completes the proof of Theorem \ref{thm_pt_bd_lame}.
\begin{remark}
The hydrostatic limit $\alpha \to \infty$ of the 3D Lam\'{e} system unfortunately lies outside the regime of weighted positive definiteness, hence the uniform estimates of solutions of the 3D Stokes system, if they exist, cannot be deduced from the weighted inequality \eqref{eqn_wpd_lame}.
\end{remark}
\subsection{Proof of Theorem \ref{thm_pt_bd_m}}\label{ssec_pt_bd_m}
Let $F(x)$ denote the fundamental solution of $L$. It is well known that $F$ exists for all $n > 2m$ and is homogeneous of order $2m-n$,
\begin{equation}
F(x) = \abs{x}^{2m-n} F \biggl( \frac{x}{\abs{x}} \biggr),\qquad x \in \mathbb{R}^{n} \setminus \{0\},
\label{eqn_F_m}
\end{equation}
when $n$ is odd \citep{john1982}. When $n$ is even, \eqref{eqn_F_m} may not be valid since terms of the order $\abs{x}^{2m-n} \log\abs{x}$ may occur in $F$. Under the assumptions that $L$ is weighted positive with the weight $F$, i.e.
\begin{equation}
\int_{\mathbb{R}^{n}} Lu \cdot F u\,dx \geq 0,
\label{eqn_wpd_m}
\end{equation}
for all real-valued $u \in C_{0}^{\infty}(\mathbb{R}^{n} \setminus \{0\})$, and that $F$ satisfies \eqref{eqn_F_m}, we shall show that the multiplicative inequality
\begin{equation}
\norm{u}_{L^{\infty}(\Omega)}^{2} \leq C \norm{D^{k} u}_{L^{q}(\Omega)} \norm{Lu}_{L^{q'}(\Omega)},\qquad k = n-2m,\ q' = \frac{q}{q-1},
\label{eqn_pt_bd_m}
\end{equation}
holds on arbitrary bounded domains $\Omega \subset \mathbb{R}^{n}$ with a constant $C = C(\lambda,\Lambda,n,m,q)$ independent of the domain, provided that $q < n/(n-2m),\ u \in W_{0}^{k,q}(\Omega)$ and $Lu \in L^{q'}(\Omega)$. As in the previous two theorems, inequality \eqref{eqn_pt_bd_m} will be obtained as an immediate corollary of the weighted positivity of $L$. More precisely, the weighted inequality \eqref{eqn_wpd_m} implies that
\begin{displaymath}
\int_{\mathbb{R}^{n}} Lu \cdot F(x-y) u\,dy \geq \frac{1}{2}\, \abs{u(x)}^{2},
\end{displaymath}
for all real-valued $u \in C_{0}^{\infty}(\mathbb{R}^{n})$ \citep[see][Proposition 3]{mazya2002}. By definition of a weak solution and by a standard approximation argument, the same inequality can be proved for $u \in W_{0}^{k,q}(\Omega)$ for which $Lu \in L^{q'}(\Omega)$. Hence for all such $u$ and all $q < n/(n-2m)$, we have
\begin{displaymath}
\abs{u(x)}^{2} \leq 2c_{4} \norm{Lu}_{L^{q'}(\Omega)} \biggl( \int_{\Omega} \frac{\abs{u}^{q}}{\abs{x-y}^{kq}}\,dy \biggr)^{1/q},\qquad k = n-2m,\ q' = \frac{q}{q-1},
\end{displaymath}
where $c_{4} = \max_{\omega \in S^{n-1}} \abs{F(\omega)}$ is a positive constant depending on $\lambda$ and $\Lambda$.
By repeated applications of Hardy's inequality, the last integral does not exceed
\begin{displaymath}
\biggl( \frac{1}{r-k} \biggr) \biggl( \frac{1}{r-k+1} \biggr) \dotsb \biggl( \frac{1}{r-1} \biggr) \biggl( \int_{\Omega} \abs{D^{k} u}^{q}\,dy \biggr)^{1/q},\qquad r = n/q,
\end{displaymath}
thus \eqref{eqn_pt_bd_m} follows with
\begin{displaymath}
C = 2c_{4} \biggl( \frac{1}{r-k} \biggr) \biggl( \frac{1}{r-k+1} \biggr) \dotsb \biggl( \frac{1}{r-1} \biggr).
\end{displaymath}
This completes the proof of Theorem \ref{thm_pt_bd_m}.
\begin{remark}
Note that for $L = (-\Delta)^{m}$ with $2m < n$, \eqref{eqn_wpd_m} is satisfied if and only if $n = 5,\ 6,\ 7$ for $m = 2$ \citep{mazya1979} and $n = 2m+1,\ 2m+2$ for $m > 2$ \citep{mazya1997}. In particular, for $m = 2$ and $q = 2$, we have $q' = 2$ and $c_{4} = [2(n-2)(n-4)\omega_{n}]^{-1}$ where
\begin{displaymath}
\omega_{n} = \frac{n \pi^{n/2}}{\Gamma(\frac{1}{2} n+1)}
\end{displaymath}
is the measure of the unit $(n-1)$-sphere $S^{n-1}$. Since $\omega_{n} = 2\pi^{n/2}/\Gamma(\frac{1}{2} n)$ and, with $r = n/2$ and $k = n-4$, the product $(r-k)(r-k+1) \dotsb (r-1)$ equals $\Gamma(\frac{1}{2} n)/\Gamma(4-\frac{1}{2} n)$, we obtain
\begin{displaymath}
\norm{u}_{L^{\infty}(\Omega)}^{2} \leq \frac{\Gamma(4-\frac{1}{2} n)}{2\pi^{n/2} (n-2)(n-4)}\, \norm{D^{n-4} u}_{L^{2}(\Omega)} \norm{\Delta^{2} u}_{L^{2}(\Omega)},\qquad n = 5,\ 6,\ 7.
\end{displaymath}
To the best of our knowledge, inequalities of this type have not been known before.
\end{remark}
\section{Introduction}
\vspace*{-.15cm}
Many mathematical proofs involve a change of representation from a domain in which it is difficult to reason about the entities in question to one in which some aspects essential to the proof become evident and the proof falls out naturally. Many times the transformation explicitly makes it into the written proof, but sometimes it remains hidden as part of the esoteric process of coming up with the proof in the mathematician's mind. For a formal, mechanical proof, this can be problematic, not only because we need to account for the logical validity of the transformation, but because if we want a computational system to find a proof like a mathematician would, we need to be able to incorporate something like the esoteric transformations going on inside the mathematician's mind into the mechanical search.

The importance of representational changes in mathematics is evidenced in historically notable works like Kurt Gödel's incompleteness theorems, where the proof involves matching (or encoding) meta-theoretical concepts like `sentence' and `proof' as natural numbers, or more recently Andrew Wiles' proof of Fermat's Last Theorem, which involves matching the Galois representations of elliptic curves with modular forms. This phenomenon is also seen in refinement-based formal methods (e.g. VDM and B): one starts with a highly abstract representation that is easy to reason with, and then it is stepwise refined to a very concrete representation that can be implemented as a computer program. All of these transformations are justified by a general notion of morphism.

In this paper we give an overview of a general mathematical framework suitable for reasoning about representational changes in type-theoretic higher-order logics (these are transformations/morphisms between structures that land us in different theories). We see that the operation of Isabelle's \textit{transfer} methods \cite{huffman2013lifting} fits into this notion of transformation. It is a way of mechanising inference between two domains, if the system is provided with a transformation by the user. We present a set of transformations we have identified as essential for reasoning in discrete mathematics, and show how we have used the \textit{transfer} tool to implement mechanical proofs in Isabelle that use these transformations. We show our work towards automating the search for representation as a tactic for use within proofs in discrete mathematics in Isabelle.
\vspace*{-.15cm}
\section{Background}
\vspace*{-.2cm}
Isabelle/HOL is a theorem proving framework based on a simple type-theoretical higher-order logic \cite{isabelle2013}. It is one of the most widely used proof assistants for the mechanisation of proofs. Apart from ensuring the correctness of proofs written in its formal language, Isabelle has powerful automatic tactics like \texttt{simp} and \texttt{auto}, and over time it has been enriched with some internally-verified theorem provers like \texttt{metis} \cite{Hurd2005} and \texttt{smt} \cite{Weber2011}, along with a connection from the internal provers to some very powerful external provers like E, SPASS, Vampire, CVC3 and Z3 through the Sledgehammer tool \cite{PaulsonLawrenceC.Blanchette2010}.

The \textit{Transfer} package was first released for Isabelle 2013-1 as a general mechanism for defining quotient types and transferring knowledge from the old `representation' type into the new `abstract' type \cite{huffman2013lifting}.
However, this generalisation is not restricted to the definition of new quotient types: it allows the user to relate any two types by theorems of a specific shape called \textit{transfer rules}. Some of these rules can be defined automatically when the user defines a new quotient type, but the user is free to add them manually, provided that they prove a preservation theorem. Central to this package, the \textit{transfer} and \textit{transfer}$^{\prime}$ tactics try to automatically match the goal sentence to a new one related by either equivalence or implication, inferring this relation from the transfer rules.

We have taken full advantage of the generality of the transfer package as a means of automating the translation between sentences across domains which are related by what we consider an appropriate and general notion of \textit{structural transformation}. In Section \ref{transthy} we give an overview of our notion of transformation and how the tactics of the transfer package are useful mechanisms for exploiting the knowledge of a structural transformation.
\vspace*{-.15cm}
\section{Overall vision}\label{overall}
\vspace*{-.15cm}
The worlds of mathematical entities are interconnected. Numbers can be represented as sets, pairs of sets, lists of digits, bags of primes, etc. Some representations are only \textit{foundational} and the reasoner often finds it more useful to discard the representation for practical use (e.g., natural number 3 is represented by $\{ \emptyset, \{\emptyset\}, \{\emptyset, \{\emptyset\}\} \}$ in the typical ZF foundations, but this representation is rarely used in practice), and some are \textit{emergent}; they only come about after a fair amount of accumulated knowledge about the objects themselves, but are more helpful as reasoning tools (e.g., natural numbers as bags of primes). Overall, we think that there is no obvious notion of `better representation', and it's up to the reasoner to choose, depending on the task at hand. Thus, we envision a system where the representation of entities can be fluidly transformed. We have looked at problems in discrete mathematics and the transformations commonly used for solving them. Below, we give one motivating example and show how we have mechanised the transformation in question inside Isabelle/HOL. Other motivating examples are briefly mentioned.

\subsection*{Numbers as bags of primes}\label{firstexample}
Let us start with an example of the role of representation in number theory. Consider the following problem:
\begin{problem}\label{prob1}
Let $n$ be a positive integer. Assume that, for every prime $p$, if $p$ divides $n$ then $p^2$ also divides $n$. Prove that $n$ is the product of a square and a cube.
\end{problem}
A standard solution to this problem is to take a set of primes $p_i$ such that $n = p_1^{a_1} p_2^{a_2} \cdots p_k^{a_k}$. Then we notice that the condition ``if $p$ divides $n$ then $p^2$ also divides $n$'' means that $a_i \neq 1$, for each $a_i$. Then, we need to find $x_1, x_2, \ldots , x_k$ and $y_1, y_2, \ldots, y_k$ where
\[(p_1^{x_1} p_2^{x_2} \cdots p_k^{x_k})^2 (p_1^{y_1} p_2^{y_2} \cdots p_k^{y_k})^3 = p_1^{a_1} p_2^{a_2} \cdots p_k^{a_k}\]
or simply
\[2(x_1, x_2, \ldots , x_k) + 3(y_1, y_2, \ldots, y_k) = (a_1, a_2, \ldots, a_k).\]
Thus, we only need to prove that for every $a_i \neq 1$ there is a pair $x_i$, $y_i$ such that $2x_i + 3y_i = a_i$. The proof of this is routine (for even $a_i$ take $x_i = a_i/2$ and $y_i = 0$; for odd $a_i \geq 3$ take $x_i = (a_i - 3)/2$ and $y_i = 1$). The kind of reasoning used for this problem is considered standard by mathematicians.
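To make the intended correspondence concrete, the following is a small illustrative Python sketch (ours, and not part of the Isabelle development described in this paper) of the numbers-as-bags-of-primes view, using a \texttt{Counter} as a multiset:
\begin{verbatim}
from collections import Counter

def factor(n):
    # The multiset of prime factors of n, e.g. 12 |-> {2: 2, 3: 1}.
    bag, p = Counter(), 2
    while p * p <= n:
        while n % p == 0:
            bag[p] += 1
            n //= p
        p += 1
    if n > 1:
        bag[n] += 1
    return bag

def divides(a, b):
    # 'a divides b' corresponds to 'factor(a) is a sub-multiset of factor(b)'.
    fa, fb = factor(a), factor(b)
    return all(fa[p] <= fb[p] for p in fa)

def square_times_cube(n):
    # Split each exponent a as 2x + 3y; possible exactly when a != 1.
    s, c = 1, 1
    for p, a in factor(n).items():
        assert a != 1, "hypothesis of Problem 1"
        y = a % 2                  # y = 1 if a is odd (needs a >= 3)
        x = (a - 3 * y) // 2
        s, c = s * p ** x, c * p ** y
    return s, c                    # n == s**2 * c**3

s, c = square_times_cube(2**4 * 3**5)   # every exponent != 1
assert s**2 * c**3 == 2**4 * 3**5
\end{verbatim}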
However, it is not so simple in current systems for automated theorem proving. The non-standard step is the `translation' from an expression containing various applications of the exponential function into a simpler form in a linear arithmetic of lists, validated by the fundamental theorem of arithmetic. The informal nature of the argument, in the usual mathematical presentation, leaves it open whether the reasoning is best thought of as happening in an arithmetic of lists where the elements are the exponents of the primes, or perhaps in a theory of bags (multisets) where the elements are prime numbers. The reader might find it very easy to fluidly understand how these representations match each other and how they are really just different aspects of the same thing. Such ease supports our overall argument and vision: that to automate mathematical reasoning, we require a framework in which data structures are linked robustly by logically valid translations, where the translation from one to another is easily conjured up. The \textit{numbers-as-bags-of-primes} transformation that links each positive integer to the bag of its prime factors is valid because there are operations on each side (numbers and multisets) that correspond to one another. For example, `divides' corresponds to `sub(multi)set', `least common multiple' corresponds to `union', `product' corresponds to `multiset addition', etc. Furthermore, all the predicates used in the statement of problem \ref{prob1} have correspondences with well-known predicates regarding bags of primes. Thus, the problem can be translated as a whole. Other representations may not be very productive, e.g., try thinking about exponentiation in terms of lists of digits. Table \ref{tab1} shows more examples of number theory problems with their corresponding problems about multisets. \vspace*{-.5cm} \begin{table} \setlength{\abovecaptionskip}{10pt plus 2pt minus 2pt} \renewcommand{\arraystretch}{1.5} \renewcommand{\tabcolsep}{6pt} \begin{tabular}{|p{0.47\linewidth}|p{0.47\linewidth}|}\hline {\bf Problem in $\mathbb{N}$} & {\bf Problem in multisets} \\ \hline {Prove that there is a unique set $\{x,y,z\}$ with different $x$, $y$, $z$ greater than 1, such that $xyz = 100$.} & {Prove that there is a unique way to partition $\{2,2,5,5\}$ into three different non-empty parts.} \\ \hline {Prove that in a set of 9 natural numbers, where none is divisible by a prime larger than 6, there is a pair whose product is a perfect square.} & {Take 9 multisets whose only elements are $2$, $3$ and $5$. Prove that two of the multisets agree, for every element, on the parity of its multiplicity.} \\ \hline \end{tabular} \caption{Number theory problems and their multiset counterparts.} \label{tab1} \end{table} \vspace*{-1cm} \subsection*{Numbers as sets} Many numerical problems have \textit{combinatorial proofs}. These are proofs where numbers are interpreted as cardinalities of sets, and the whole problem can be converted into a problem about sets. \textit{Enumerative combinatorics} studies how sets relate to their cardinalities. As such, its theorems provide the link that allows us to translate numerical problems into finite set-theoretical problems. Table \ref{tab2} shows examples of arithmetic problems with their corresponding finite set theory problems. While the proofs of the numerical versions are not obvious at all (indeed, some of them are important results in basic combinatorics), the proofs of their finite set versions can be considered routine.
\vspace*{-.5cm} \begin{table}[ht] \setlength{\abovecaptionskip}{10pt plus 2pt minus 2pt} \renewcommand{\arraystretch}{1.2} \renewcommand{\tabcolsep}{6pt} \begin{tabular}{|p{0.38\linewidth}|p{.56\linewidth}|} \hline {\bf Problem in $\mathbb{N}$} & {\bf Problem in sets} \\ \hline \centering \multirow{3}{*}{$\displaystyle{n+1 \choose k+1} = {n \choose k} + {n \choose k+1}$} & \multirow{3}{\linewidth}{The set $\{x \subseteq \{0,1,\ldots,n\} : |x| = k+1\}$ can be partitioned into $2$ parts: those that contain element $n$ and those that don't.}\\ & \\ & \\ \hline \centering \multirow{3}{*}{$\frac{n(n+1)}{2} = \displaystyle 1+ 2+ \cdots + n$} & \multirow{3}{\linewidth}{The set $\{x \subseteq \{0,1,\ldots,n\} : |x| = 2\}$ can be partitioned into $n$ parts $X_1,X_2,\ldots,X_n$ where the largest element of each $x \in X_i$ is $i$.}\\ & \\ & \\ \hline \centering \multirow{4}{*}{$2^{n+1} - 1 = \displaystyle\sum_{i=0}^n 2^i$} & \multirow{4}{\linewidth}{The power set of $\{0,1,\ldots,n\}$, excluding the empty set, can be partitioned into $n+1$ parts $X_0,X_1,\ldots,X_n$ where the largest element of each $x \in X_i$ is $i$.}\\ & \\ & \\ & \\ \hline \centering \multirow{3}{*}{$2^n = \displaystyle\sum_{i=0}^n {n \choose i}$} & \multirow{3}{\linewidth}{The power set of $\{1,\ldots,n\}$ can be partitioned into $n+1$ parts $X_0,X_1,\ldots,X_{n}$ where $|x| = i$ for every $x \in X_i$.}\\ & \\ & \\ \hline \end{tabular} \caption{Numerical problems and their set counterparts.} \label{tab2} \end{table} \vspace*{-1cm} \subsection*{Interconnectedness} We want to stress the importance of having fluidity of representations. For example, we talked about the ease with which we could think of the \textit{numbers-as-bags-of-primes} transformation as actually a transformation of numbers to a theory of lists, where the elements of the list are the exponents of the ordered prime factors. Inspired by this, we have mechanised many other simple transformations whose compositions allow us to translate fluently from one representation to another. Our global vision of transformations useful in discrete mathematics, which we have mechanised\footnote{These can be found in http://homepages.inf.ed.ac.uk/s1052074/AutoTransfer/. They are updated regularly.}, is represented in Figure \ref{figure}. It is worth mentioning that the diagram is not commutative and that it abstracts logical relations (information may be lost, so some paths can only be traversed in one direction).
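Composition here is ordinary composition of relations: a path through the diagram corresponds to a relation of the form \[(\mathcal{R}_2 \circ \mathcal{R}_1)\; x\; z \;\equiv\; \exists\, y.\; \mathcal{R}_1\, x\, y \,\wedge\, \mathcal{R}_2\, y\, z.\] As an illustrative instance of our own, composing numbers-as-bags-of-primes with multisets-as-$\mathbb{N}$-functions (both mechanised below) relates $12$ first to the multiset $\{2,2,3\}$ and then to the multiplicity function sending $2 \mapsto 2$, $3 \mapsto 1$ and every other number to $0$.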
\begin{figure}[htb]\small\centering \begin{tikzpicture}[node distance=1.5cm, auto] \node (list) {$\mathbb{N}$ list}; \node (nat) [right of=list] {$\mathbb{N}$}; \node (mset) [below of=nat] {$\mathbb{N}$ multiset}; \node (set) [below of=list] {$\mathbb{N}$ set}; \node (int) [right of=nat] {$\mathbb{Z}$}; \node (rat) [right of=int] {$\mathbb{Q}$}; \node (int2) [below of=rat] {$\mathbb{Z}/2\mathbb{Z}$}; \node (bool) [below of=int] {$\mathbb{B}$}; \node (fset) [below of=set] {$(\mathbb{N} \to \mathbb{B})$}; \node (fmset) [below right of=mset] {$(\mathbb{N} \to \mathbb{N})$}; \draw[-] (mset) to node {} (nat); \draw[-] (nat) to node {} (set); \draw[-] (list) to node {} (mset); \draw[-] (mset) to node {} (fmset); \draw[-] (set) to node {} (fset); \draw[-] (fset) to node {} (fmset); \draw[-] (nat) to node {} (int); \draw[-] (nat) to node {} (bool); \draw[-] (int) to node {} (rat); \draw[-] (int) to node {} (int2); \draw[-] (bool) to node {} (int2); \draw[-, loop below, out = 315, in = 225, distance=1cm] (rat) to node {} (rat); \end{tikzpicture} \caption{\footnotesize{Nodes stand for theories connected by transformations useful in discrete mathematics. Apart from the aforementioned transformations, the diagram includes other simpler ones. Some of these transformations, such as the one connecting $\mathbb{N}$ list and $\mathbb{N}$ set, are actually polymorphic, but are presented in the diagram as relating only to type $\mathbb{N}$, because the numbers-as-bags-of-primes transformation is not polymorphic.}} \label{figure} \end{figure} In the next section we show how a notion of transformation that accounts for this kind of correspondence between structures can be applied in formal proofs using Isabelle's Transfer tool. \vspace*{-.15cm} \section{On Transformations and the Transfer tool}\label{transthy} \vspace*{-.2cm} In this section we give a brief overview of a very general theory of transformations. We do not claim originality of the essence of this theory. However, we believe that the presentation we give brings clarity to the problem. We explain how Isabelle's Transfer tool relates to it. Consider the following definitions: \begin{definition} A \textbf{domain} is a class of entities and a set of types, where each entity of the domain corresponds to exactly one type. \end{definition} \begin{definition} A \textbf{transformation} from a domain $\mathscr{X}$ to a domain $\mathscr{Y}$ is a collection $\mathscr{R}$ where every $R \in \mathscr{R}$ is a relation $R : X \to Y \to \mathbb{B}$ between a type $X$ of domain $\mathscr{X}$ and a type $Y$ of domain $\mathscr{Y}$.\footnote{$\mathbb{B}$ stands for the type of booleans.}\end{definition} This relational notion of a transformation makes it possible to account for partial and multivalued mappings in a logic like Isabelle's HOL. We consider a \textit{structure} to be the class formed by all the entities of the closure of a domain under a set of type constructors. In this work, we focus on structures containing type $\mathbb{B}$, generated with the \textit{function type constructor} $\to$, because the basis of a higher-order logic can be fully expressed under such a structure. Then, if our domain has entities of types $A$ and $B$, its structure under $\to$ has all the entities of types $A \to B$, $A \to B \to A$, etc. Preservation of structure is captured with the use of structural \textit{relators}, which can be thought of as rules for extending relations (transformations) to the structures of their domains.
In particular, given that our work is based on Isabelle/HOL and on the Transfer package, we focus on one relator. \begin{definition} The \textbf{standard functional extension} of two relations $R_A : A \to A' \to \mathbb{B}$ and $R_B : B \to B' \to \mathbb{B}$ (written $R_A \Mapsto R_B$) is a relation that relates two functions $f:A \to B$ and $f':A' \to B'$ whenever they satisfy the following property: \[\forall\, a : A.\;\, \forall\, a' : A'.\;\, \left[R_A \, a \, a' \,\longrightarrow\, R_B \, (f \, a) \, (f' \, a') \right]\] \end{definition} We call the operator $\Mapsto$ the \textbf{standard function relator}. Intuitively, $(R_A \Mapsto R_B)\, f\, g$ means that $f$ and $g$ send arguments related by $R_A$ to values related by $R_B$. This relator allows us to express how functions (and by extension relations) map to each other in a way that the structure of the domain is preserved. For the numbers-as-bags-of-primes transformation, consider the relation $\mathcal{F} : \mathbb{N} \to \mathbb{N}\; \texttt{multiset} \to \mathbb{B}$, which relates every positive integer to the multiset of its prime factors. \begin{example} Let $\ast : \mathbb{N} \to \mathbb{N} \to \mathbb{N}$ be the usual multiplication and \linebreak $\uplus : \mathbb{N} \; \texttt{multiset} \to \mathbb{N} \; \texttt{multiset} \to \mathbb{N} \; \texttt{multiset}$ the `addition' of multisets (in which the multiplicities are added per element). Then we have $(\mathcal{F} \Mapsto (\mathcal{F} \Mapsto \mathcal{F}))\, \ast \, \uplus$ (also written $(\mathcal{F} \Mapsto \mathcal{F} \Mapsto \mathcal{F})\, \ast \, \uplus$). \noindent Note that, by expanding the definition of\, $\Mapsto$ in $(\mathcal{F} \Mapsto (\mathcal{F} \Mapsto \mathcal{F}))\, \ast \, \uplus$ we get \[\forall\,n_1.\, \forall\,N_1.\; \mathcal{F}\, n_1\, N_1 \,\longrightarrow\, \left(\forall\,n_2.\, \forall\, N_2.\; \mathcal{F}\, n_2\, N_2 \,\longrightarrow\, \mathcal{F}\, (n_1 \ast n_2)\, (N_1 \uplus N_2)\right)\] which is equivalent to \[\forall\,n_1, n_2.\; \forall\,N_1, N_2.\; \left(\mathcal{F}\, n_1\, N_1 \wedge \mathcal{F}\, n_2\, N_2\right) \,\longrightarrow\, \mathcal{F}\, (n_1 \ast n_2)\, (N_1 \uplus N_2)\] This demonstrates how nesting the operator\, $\Mapsto$ preserves its intuitive definition: `related arguments map to related values'. In this particular case, this holds due to the law of exponents $p^a p^b = p^{a+b}$. \end{example} Furthermore, the matching of relations can also be expressed with the help of $\Mapsto$, using a boolean relation, as demonstrated by the example below with equivalence (boolean equality) $\texttt{eq} : \mathbb{B} \to \mathbb{B} \to \mathbb{B}$. \begin{example} Let $\texttt{div}: \mathbb{N} \to \mathbb{N} \to \mathbb{B}$ be the relation such that $\texttt{div} \; n \, m$ whenever $n$ divides $m$ (also written $n|m$), and $\subseteq: \mathbb{N} \; \texttt{multiset} \to \mathbb{N} \; \texttt{multiset} \to \mathbb{B}$ the relation such that $a \subseteq b$ whenever the multiplicity of each element of $a$ is less than or equal to its multiplicity in $b$. Then, we have $(\mathcal{F} \Mapsto \mathcal{F} \Mapsto \texttt{eq}) \;\, \texttt{div} \, \subseteq$, because $n$ divides $m$ if and only if every prime is contained at least as many times in the multiset-factorisation of $m$ as it is in $n$.
\end{example} Logical matches (preservation of truth values) can also be expressed across structures, e.g.,\, $(\texttt{eq} \Mapsto \texttt{eq} \Mapsto \texttt{eq}) \; \texttt{imp} \; \texttt{imp}$\, represents that implication $\texttt{imp}$ preserves truth if its arguments are replaced by equivalent ones. Other interesting logical matches can be expressed as well. The general notion of transformation above tells us how theories relate to one another. Isabelle's Transfer method is an algorithm for transforming a sentence using knowledge about one of these transformations. The simple standard function relator is at the basis of the method. We give a short introduction next. \subsection{Transforming sentences with the Transfer tool}\label{transformingproblems} \vspace*{-.1cm} When trying to prove a sentence $\beta$ we want to find another sentence $\alpha$ such that $\alpha \,\longrightarrow\, \beta$, along with a proof for $\alpha$. In particular, if $\beta$ talks about a domain $B$ and we know a structural transformation from a domain $A$ to $B$, we might be able to find an $\alpha$ about $A$ such that $\alpha \,\longrightarrow\, \beta$. Isabelle's \textit{Transfer} tool provides a method for finding such an $\alpha$. The user has to provide theorems of the forms $R_1\, a\, b$ or $(R_1 \Mapsto R_2)\, f\, g$ (and their proofs), i.e., instances of a structural transformation, and the tactics $\texttt{transfer}$ and $\texttt{transfer}'$ will try to automatically infer a sentence $\alpha$ such that $\alpha \,\longleftrightarrow\, \beta$ (in the case of $\texttt{transfer}$), or a stronger one such that $\alpha \,\longrightarrow\, \beta$ (in the case of $\texttt{transfer}'$). Recall that the intuitive interpretation of $(R_1 \Mapsto R_2)\, f \, g$ is `arguments related by $R_1$ are mapped to values related by $R_2$ by $f$ and $g$'. Thus, the first step of the transfer method is to search for a theorem of the structural transformation with the shape $(R_1 \Mapsto \texttt{eq})\, p \, q$ in the case of $\texttt{transfer}$ and $(R_1 \Mapsto \texttt{imp})\, p \, q$ in the case of $\texttt{transfer}'$, where $q$ is the property wrapping the sentence we want to prove. Finding it would imply that we can replace $q$ by $p$, provided that we can show that their arguments are related by $R_1$. Thus, the method searches recursively for rules in the structural transformation to prove this. The algorithm is analogous to type inference. It is based on the following derivation rules: \begin{prooftree} \AxiomC{$\mathscr{A}^{\ast}_{\mathscr{C}} \vdash (R_1 \Mapsto R_2) \, f\, g$} \AxiomC{$\mathscr{A}^{\ast}_{\mathscr{C}} \vdash R_1\, x \, y$}\RightLabel{elim} \BinaryInfC{$\mathscr{A}^{\ast}_{\mathscr{C}} \vdash R_2\, (f\, x) \, (g\, y)$} \end{prooftree} \begin{prooftree} \AxiomC{$\mathscr{A}^{\ast}_{\mathscr{C}},\, R_1 \, x \, y \vdash R_2\, (f \, x) \, (g \, y)$}\RightLabel{intro} \UnaryInfC{$\mathscr{A}^{\ast}_{\mathscr{C}} \vdash (R_1 \Mapsto R_2) \; (\lambda x.\, f\, x)\; (\lambda y.\, g\, y)$} \end{prooftree} where $\mathscr{A}^{\ast}_{\mathscr{C}}$ represents knowledge about the structural transformation. Practically, the user provides knowledge specific to this transformation (a set of theorems called \textit{transfer rules}), and the algorithm includes in the search other general transfer rules such as $(\texttt{eq} \Mapsto \texttt{eq} \Mapsto \texttt{eq})\; \texttt{imp} \; \texttt{imp}$.
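As a schematic illustration of this user-provided knowledge, the following Isabelle-style fragment declares a base relation and two transfer rules for the numbers-as-bags-of-primes transformation of Section \ref{firstexample}; the identifiers and the exact library operations (e.g. \texttt{msetprod}) are illustrative rather than verbatim from our theory files, and the proofs, which follow from unique prime factorisation, are elided.
\begin{verbatim}
(* The base relation: a positive n is related to the multiset
   of its prime factors (identifiers are schematic). *)
definition F :: "nat => nat multiset => bool" where
  "F n M = ((ALL p. p :# M --> prime p) & (n = msetprod M))"

(* Multiplication of numbers corresponds to multiset sum ... *)
lemma mult_transfer [transfer_rule]:
  "(F ===> F ===> F) (op * ) (op +)"
  sorry

(* ... and divisibility corresponds to multiset inclusion. *)
lemma dvd_transfer [transfer_rule]:
  "(F ===> F ===> op =) (op dvd) (op <=)"
  sorry
\end{verbatim}
With such rules declared, invoking \texttt{transfer} on a goal about positive naturals yields the corresponding goal about multisets of primes.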
For more details of the actual implementation of the algorithm, see \cite{huffman2013lifting}. \vspace*{-.15cm} \section{Mechanising transformations in Isabelle's HOL} \vspace*{-.15cm} In Section \ref{firstexample} we presented some problems in discrete mathematics which involve structural transformation. We have mechanised the transformations by proving the necessary transfer rules. The transfer tool allows us to use the transformations in proofs. In this section we present a couple of examples from a larger catalogue of the transformations we have mechanised in Isabelle. The transformations we have formalised, as suggested in Figure \ref{figure}, are the following: \begin{enumerate} \item \textbf{numbers-as-bags-of-primes}, where each natural number is related to the multiset of its prime factors. \item \textbf{numbers-as-sets}, where numbers are related to sets by the cardinality function. \item \textbf{sets-as-$\mathbb{B}$-functions}, where sets are seen as boolean-valued functions. \item \textbf{multisets-as-$\mathbb{N}$-functions}, where multisets are seen as natural-valued functions\footnote{This one holds by construction, using \texttt{typedef} and the Lifting package, which automatically declares transfer rules for definitions lifted by the user from an old type to the newly declared type.}. \item \textbf{sets-as-lists}, where sets are related to lists of their elements. \item \textbf{bits-from-integers}, where type \texttt{bit} is created as an abstract type from the integers. \item \textbf{bits-as-booleans}, where bits are matched to booleans. \item \textbf{$\mathbb{Q}$-automorphisms}, where rational numbers are stretched and contracted, parametrised by a factor. \item \textbf{zero-or-some}, where natural 0 is related to bit 0 and positive natural numbers are related to bit 1. \item \textbf{multisets-as-lists}, where multisets are related to lists of their elements. \item \textbf{set-to-multiset}, where the functional representations of multisets and sets are related (this one we get for free from the zero-or-some transformation). \item \textbf{naturals-as-integers}, where naturals are matched to integers (this one was built by the developers of the transfer package, not us). \item \textbf{integers-as-rationals}, where integers are matched to rational numbers. Notice that composition of transformations leads to other natural transformations, like the simple relation between sets and multisets.\footnote{The mechanisation of these transformations has been submitted to the Archive of Formal Proofs, along with some examples of their use.} \end{enumerate} Every transformation starts with a declaration and proof of \textit{transfer rules}, which are sentences satisfied by the structural transformation. \subsection{Numbers as bags of primes} \vspace*{-.1cm} The relation at the centre of this transformation is $\mathcal{F} : \mathbb{N} \to \mathbb{N}\;\texttt{multiset} \to \mathbb{B}$, which relates every positive number to the multiset of its prime factors. It is defined as follows: $\mathcal{F}\, n\, M$ holds if and only if \[(\forall\, x.\; \texttt{count}\, M \, x > 0 \,\longrightarrow\, \texttt{prime} \, x) \land n = \prod_{x \in M} x^{\texttt{count}\; M \, x}\] The most basic transfer rules (instances of the structural transformation) are theorems such as $\mathcal{F}\; 6\; \{2,3\}$, whose proofs are trivial calculations. Moreover, from the Unique Prime Factorisation theorem we know that $\mathcal{F}$ is bi-unique.
Thus, we know that \[(\mathcal{F} \Mapsto \mathcal{F} \Mapsto \texttt{eq})\, \texttt{eq}\, \texttt{eq}\] i.e., that equality is preserved by the transformation. From the fact that every positive number has a factorisation we have \begin{align*} &((\mathcal{F} \Mapsto \texttt{revimp}) \Mapsto \texttt{revimp})\; \forall_{>0} \; \forall \hspace*{1cm} &((\mathcal{F} \Mapsto \texttt{revimp}) \Mapsto \texttt{revimp})\; \exists \; \exists_p \\ &((\mathcal{F} \Mapsto \texttt{eq}) \Mapsto \texttt{eq})\; \forall_{>0} \; \forall_p \hspace*{1cm} &((\mathcal{F} \Mapsto \texttt{eq}) \Mapsto \texttt{eq})\; \exists_{>0} \; \exists_p \end{align*} where $\texttt{revimp}$ is reverse implication, $\forall_p$ is the bounded quantifier representing `for every multiset where all its elements are primes', $\forall_{>0}$ is the bounded quantifier representing `for every positive number', and similarly for $\exists_p$ and $\exists_{>0}$. The mechanised proofs of these sentences follow relatively straightforwardly from the Unique Prime Factorisation theorem, which is already part of Isabelle's number theory library. Furthermore, we proved the following correspondences of structure: \begin{align*} &(\mathcal{F} \Mapsto \mathcal{F} \Mapsto \mathcal{F}) \, \ast\; \uplus \hspace*{2cm} &(\mathcal{F} \Mapsto \mathcal{F} \Mapsto \mathcal{F}) \, \texttt{lcm}\; \cup \\ &(\mathcal{F} \Mapsto \mathcal{F} \Mapsto \mathcal{F}) \, \texttt{gcd}\; \cap \hspace*{2cm} &(\mathcal{F} \Mapsto \mathcal{F} \Mapsto \texttt{eq}) \, \texttt{div} \subseteq \\ &(\mathcal{F} \Mapsto \texttt{eq} \Mapsto \mathcal{F})\; \texttt{exp} \; \texttt{smult} \hspace*{2cm} &(\mathcal{F} \Mapsto \texttt{eq})\; \texttt{prime} \; \texttt{sing} \end{align*} \subsubsection*{Application in proofs}\mbox{} \vspace*{.2cm} \newline We formalised the proof of problem \ref{prob1}: \begin{center} \emph{ Let $n$ be a positive integer. Assume that, for every prime $p$, if $p$ divides $n$ \break then $p^2$ also divides it. Prove that $n$ is the product of a square and a cube. } \end{center} \noindent Formally, we state this as \begin{align*}\forall\, n>0.\; (\forall\, p>0.\; \texttt{prime}\; p \,\land\, p\; \texttt{div}\; n \,\longrightarrow\, p^2 \;\texttt{div}\; n&) \\ \,\longrightarrow\, (\exists\, a>0.\; \exists\, b>0.\; a^2 \ast b^3 = n&) \end{align*} Notice that the quantifiers of $n$, $p$, $a$ and $b$ are bounded (greater than 0). This is not necessary (e.g., $p$ is prime, so it is redundant to say that it is positive), but it is convenient for the proof. If we want a proof for the unbounded version (which is also a theorem) we can divide into cases $n=0$ and $n>0$. The case $n = 0$ is trivial because then $a = 0$ and $b = 0$ are solutions. Thus, we prove directly the case $n > 0$. When we apply the transfer method to the sentence we get the following sentence about multisets: \[ \forall_p\, n.\; (\forall_p \, p.\; \texttt{sing}\; p \land p \subseteq n \,\longrightarrow\, 2 \cdot p \subseteq n) \,\longrightarrow\, (\exists_p\, a.\; \exists_p\, b.\; 2 \cdot a + 3 \cdot b = n) \] where $\forall_p$ is, as above, the universal quantifier bounded to multisets of primes, and the operator $\cdot$ represents the symmetric version of the multiplication previously referred to as $\texttt{smult}$ (we present it this way for ease of reading). The premise $(\forall_p \, p.\; \texttt{sing}\; p \land p \subseteq n \,\longrightarrow\, 2 \cdot p \subseteq n)$ is easily proved to be equivalent to $\forall\, q. \; \texttt{count}\, n\, q \neq 1$.
Then it is sufficient to show \[ \forall_p\, n.\; (\forall\, q. \; \texttt{count}\, n\, q \neq 1) \,\longrightarrow\, (\exists_p\, a.\; \exists_p\, b.\; 2 \cdot a + 3 \cdot b = n) \] With a bit of human interaction, this can further be reduced to proving that, for every element of $n$, its multiplicity $n_i$ (which the premise says is different from 1) can be written as $2 a_i + 3 b_i$, or formally: \[ \forall\,n_i :\mathbb{N}.\; n_i \neq 1 \,\longrightarrow\, \exists\,a_i.\;\exists\,b_i.\; 2 a_i + 3 b_i = n_i\] This problem can actually be solved in a decidable part of number theory (Presburger arithmetic), for which there is a method implemented in Isabelle. \subsection{Numbers as sets} \vspace*{-.1cm} At the centre of this transformation is the relation $\mathcal{C}$ where $\mathcal{C}\, A\, n $ holds if and only if\, $\texttt{finite} \,A \wedge \texttt{card}\, A = n$. We first prove trivial cardinality properties like\, $\mathcal{C} \, \{1 \cdots n\}\, n$, which allow us to consider standard representatives of numbers. This relation is right-total but not left-total, so we have the following two rules: \begin{align*} ((\mathcal{C} \Mapsto \texttt{imp}) \Mapsto \texttt{imp})\; \forall \; \forall \hspace*{1.5cm} ((\mathcal{C} \Mapsto \texttt{eq}) \Mapsto \texttt{eq})\; \forall_{\texttt{fin}} \; \forall \end{align*} where $\forall_{\texttt{fin}}$ is the universal quantifier restricted to finite sets. Furthermore, the relation is right-unique but not left-unique (equal finite sets have equal cardinalities, but equal cardinalities only guarantee a bijection), so we have \begin{align*} (\mathcal{C} \Mapsto \mathcal{C} \Mapsto \texttt{imp})\; \texttt{eq} \; \texttt{eq} \hspace*{1.5cm} (\mathcal{C} \Mapsto \mathcal{C} \Mapsto \texttt{eq})\; \texttt{eqp} \; \texttt{eq} \end{align*} where $\texttt{eqp}$ is the relation of being equipotent, or bijectable. Then, we have the following rules for the structural correspondence: \begin{align*} (\mathcal{C} \Mapsto \mathcal{C})&\; \texttt{Pow}\; (\lambda x.\, 2^x) \\ (\mathcal{C} \Mapsto \texttt{eq} \Mapsto \mathcal{C})&\; \texttt{n-Pow}\; \left(\lambda\, n\, m.\, \textstyle{n \choose m}\right) \\ (\mathcal{C} \Mapsto \mathcal{C} \Mapsto \texttt{imp})&\, \subseteq\; \leq \\ (\mathcal{C} \Mapsto \mathcal{C} \Mapsto \mathcal{C} \Mapsto \texttt{imp})&\; \texttt{disjU} \; \texttt{plus} \end{align*} where $\texttt{n-Pow}\, S\, n$ is the operator that takes the set of subsets of $S$ that have cardinality $n$. Also, $\texttt{disjU}\; a\; b\; c$ means $\texttt{disjoint} \; a\; b \wedge a \cup b = c$ and $\texttt{plus}$ is the predicative form of operator $+$. Using this transformation we have mechanised combinatorial proofs of theorems, like the ones for the problems given in Table \ref{tab2}. \vspace*{-.15cm} \section{Automated change of representation} \vspace*{-.15cm} We have built a tactic that, given a set of transformations, searches within the space of representations. Then it tries to reason about each representation. Our goal is for it to embody our vision presented in Section \ref{overall}. This is work in progress, but we describe some simple requirements that we have already implemented, and present our observations. \vspace*{-.15cm} \subsection{Transformations as sets of transfer rules} \vspace*{-.1cm} As described in Section \ref{transthy}, we consider a transformation as a set of `base' relations, and a structural extension of them. Then, \textit{knowing} a transformation means knowing instances where the relations and their extensions (with respect to relators such as $\Mapsto$) hold.
These instances of knowledge are what the Transfer package calls \textit{transfer rules}. They are theorems that the user has to prove and, with enough of them provided, the transfer method will transform the goal into an equivalent or stronger sentence in another domain. In the traditional use of the Transfer method, there is a single attribute that encompasses all transfer rules. Given a goal, the Transfer method will try to derive an equivalent or stronger subgoal using all the rules with that attribute, with a simple inference mechanism (described briefly in Section \ref{transformingproblems} and in more detail in \cite{huffman2013lifting}). We have packaged each of the transformations described in Figure \ref{figure} as a set of transfer rules. Then, our tactic applies the transfer method one transformation at a time. \vspace*{-.15cm} \subsubsection*{Transformation-specific language.} Each transformation has a set of definitions that are linked by the transfer package. Some of them are defined only for the use of the transformation, like \texttt{disjU} and \texttt{plus} (the predicative versions of disjoint union of sets and addition of natural numbers, respectively), or bounded quantifiers. These are necessary for the transfer method to find matches, but theorems will not generally be stated in such terms. Our tactic normalises the language of the goal to suit the specific transformation that is going to be applied. \vspace*{-.15cm} \subsection{Reversing transformations} \vspace*{-.1cm} We have implemented a tool to automatically \textit{reverse} transformations. Let us explain this. If we want to transform a sentence $p\, a$ to an equivalent one, the Transfer method will search for transfer rules $(R \Mapsto \texttt{eq})\, q \, p$ and $R\, b\, a$ for some $R$, $q$ and $b$. If found, it will transform the sentence to the equivalent one $q\, b$. The fact that the sentences are equivalent means that if we had started with $q \, b$ as a goal, it would have been valid to transform it to $p\, a$. This means that, in theory, the same transfer rules can be used to do inference in one direction or the other, at least when the rules concern equivalence. The Transfer method does not do so: if one wants to use a transformation in both directions one has to define two distinct transformations, i.e., two distinct sets of transfer rules (in our example above one needs transfer rules $(R^{\prime} \Mapsto \texttt{eq})\, p \, q$ and $R^{\prime}\, a\, b$, where $R^{\prime}$ is the reverse of $R$). A transfer rule always has a `reverse' version (although only equivalence rules retain full information), so we should be able to get these automatically. We have built a conversion tool that, given a set of transfer rules, will generate all their reverse rules (in a logically valid way, i.e., the reverse version is always equivalent to the original). Our program uses the following rewrite rules: \begin{align*} R\, a\, b &\Rightarrow (\texttt{swap}\, R) \, b\, a \\ \texttt{swap}\, (R_1 \Mapsto R_2) &\Rightarrow (\texttt{swap}\, R_1 \Mapsto \texttt{swap}\, R_2) \end{align*} where $\texttt{swap}$ simply swaps the arguments of a function. It is easy to see that these rules are valid. Moreover, $\texttt{swap}\, R$ equals $R$ when $R$ is symmetric, which means that for some relations we can drop the $\texttt{swap}$ function.
Thus, our program drops \texttt{swap} from $\texttt{eq}$ and turns $\texttt{swap}\; \texttt{imp}$ and $\texttt{swap}\; \texttt{revimp}$ into $\texttt{revimp}$ and $\texttt{imp}$, respectively. For instance, reversing the rule $(\mathcal{F} \Mapsto \texttt{eq})\; \texttt{prime} \; \texttt{sing}$ yields $(\texttt{swap}\,\mathcal{F} \Mapsto \texttt{swap}\,\texttt{eq})\; \texttt{sing} \; \texttt{prime}$, which simplifies to $(\texttt{swap}\,\mathcal{F} \Mapsto \texttt{eq})\; \texttt{sing} \; \texttt{prime}$. By reversing every transformation we can traverse every path in Figure \ref{figure} in any direction (which does not mean that every sentence has a transformation to an equivalent one). \vspace*{-.15cm} \subsection{Search between representations} \vspace*{-.1cm} Our tactic searches the space of representations by applying each transformation, then reasoning within the theory it has arrived at and, if there are still open subgoals, repeating the process iteratively. Recall that transformations are relational. As such, the process is non-deterministic for each transformation, so there will be many branches per transformation. Apart from being non-deterministic, the transfer method will allow transformations of a sentence where some matches are left open, i.e., in the place of some constant we get a \textit{schematic variable} that the user can instantiate manually, proving its validity with the new instantiation. This can be handy, but our tactic prefers branches with the lowest number of open subgoals, thus favouring complete matches, i.e., matches that do not leave any proof obligations open. We have also noticed that the order in which the transformations are searched is crucial, and have set an ad hoc order that favours the transformations we consider more interesting. Heuristics deserve further work, but that remains a task for the future. \vspace*{-.15cm} \subsubsection*{Discarding false representations.} \vspace*{-.1cm} Recall that our transformations do not necessarily yield equivalent sentences when applying the transfer algorithm (unless we restrict it to do so). Actually, the \textit{numbers-as-sets} transformation can only be applied in useful ways if we allow the reduction of the goal to a strictly stronger subgoal (because, e.g., $A \subseteq B$ implies that $|A| \leq |B|$, but not the other way around, meaning that we can prove $|A| \leq |B|$ by first showing $A \subseteq B$, but we cannot prove $A \subseteq B$ by showing $|A| \leq |B|$). This can lead to false subgoals. Thus, our tactic calls the counterexample checker Nitpick \cite{blanchette2010nitpick} and discards branches where a counterexample is found for one of its goals. \subsection{Overview} \vspace*{-.1cm} In a single step in the search, our tactic does the following: \begin{enumerate} \item Normalise to transformation-specific language. \item Apply a transformation. \item If working with a transformation that generates a stronger subgoal, search for counterexamples and discard the branch if one is found. \item Apply the \texttt{auto} tactic to the transformed sentence. \end{enumerate} The tactic can be applied recursively to search for a transformation to a domain more than one step away. When searching, the obvious stop condition is that the theorem has been proved, although there can be other good reasons to stop in a domain, e.g., to allow the user to reason interactively. Each of the 4 steps mentioned can have plenty of branches, so there is search involved. Branches with the least number of subgoals are favoured, and the order in which the transformations are applied matters, but there are no clever heuristics involved.
Even though our observations about the trace of the search have led us to the current design and implementation of the tactic, the design is not yet complete and its implementation (although functional) is very much subject to change. There are still open questions regarding which search strategies, \textit{stop} conditions, and reasoning tactics (between transformations) are best, because the answers depend on the evaluation criterion used. In Section \ref{concl} we discuss why this is problematic and how we are confronting it. \vspace*{-.15cm} \section{Related Work} \vspace*{-.2cm} Although representation is widely recognised as a crucial aspect of reasoning, to our knowledge there has been no attempt to incorporate the \textit{automatic} search for representation into reasoning tools. \vspace*{-.2cm} \subsection*{Institutions and HETS} \vspace*{-.1cm} The concept of Institution was introduced as a general notion of logical system \cite{goguen1992institutions}. The Heterogeneous Tool Set (HETS) \cite{hets07} was developed mainly to manage and integrate heterogeneous specifications. Based on the theory of Institutions, it links various logics, including Isabelle's HOL and FOL, and provides a way of translating between them. HETS has mainly been used to bring together various aspects of complex systems where different programming languages and reasoning tools are used for different parts of the system. We do not know of any uses of HETS where heterogeneity is taken advantage of as a means of finding proofs in one representation where other representations fail. \vspace*{-.2cm} \subsection*{Little Theories and IMPS} \vspace*{-.1cm} ``Little Theories'' is the notion that reasoning is best done when it is modular \cite{farmer}. IMPS is an interactive proof system implemented based on the principles of Little Theories \cite{farmerimp}. The modules, or `little theories', of IMPS are small axiomatic theories connected by \textit{theory interpretation}. Thus, it concerns different levels of abstraction of a theory, and not directly the representation of the entities of the theory. \vspace*{-.2cm} \subsection*{Uses of the Transfer package} \vspace*{-.1cm} The use of the Transfer package has changed how new quotient types and subtypes are defined. This is what the \textit{Lifting} package does \cite{huffman2013lifting}. As part of the Lifting package, there is a way of automatically transferring definitions from an old type to a new type (e.g., multisets are defined as an abstract type from the type of $\mathbb{N}$-valued functions). The Lifting package has been the main application of the Transfer package, although the generality of their approach is acknowledged by the developers. Embodying this generality, they have built an Isabelle theory of transfer from integers to natural numbers, very much in the spirit of the various transformations we have built ourselves. \vspace*{-.15cm} \section{Evaluation, Future Work and Conclusion}\label{concl} \vspace*{-.2cm} The main contributions presented in this paper are: \begin{itemize} \item We have mechanised various useful transformations observed in proofs of discrete mathematics. \item We have proved example theorems using these transformations. \item We have identified some requirements for search over the space of representations, and implemented both a tool (for reversing transformations) and a tactic fulfilling the requirements. \end{itemize} Our tactic has yet to be evaluated properly.
Below we examine some of the difficulties associated with this task. What makes one proof better than another? There is no definite answer to this question. Simple measures, such as length, are important, but unsatisfactory as a whole. At the very least, we can agree that some proof is better than no proof. Thus, the simplest scenario for evaluation would be one in which our tactic, which reasons within many representations, finds proofs that cannot be found otherwise. Unfortunately, the current state of automatic theorem provers does not seem to be conducive to this. All the examples on which we have tested our techniques belong to one of the following classes: \begin{enumerate} \item They are so simple that they can be proved automatically\footnote{using Isabelle tactics like \texttt{auto}} without the need for a transformation. \item They are too complicated and require an intervention from the user to complete the proof, even after automatically applying a transformation.\footnote{The examples of this second (more interesting) class have been selected from either maths textbooks for undergraduate students, or from training material for contests such as the Mathematical Olympiads.} \end{enumerate} Thus, the proof-or-no-proof criterion is not applicable. Instead, it is necessary to work on a close analysis of interactive proofs with and without transformations. A venture for future research is the potential application of this framework for the transformation of geometric problems into algebraic representations, e.g., Gröbner bases \cite{buchberger1998grobner}, where there has been plenty of success in automated reasoning, or into SAT/SMT, which has also been an area of success in automation.\footnote{We thank the anonymous referees of this paper for suggesting these possibilities. They remain as future work.} Interestingly, we have an example (Pascal's rule) that belongs to the class of problems where Isabelle's automatic tactics can find a proof, but where its proof using a transformation deserves attention. It is provable automatically (from the definition of the \texttt{choose} operator included in Isabelle's combinatorics library by its developers), but it can also be transformed using the numbers-as-sets transformation and proved interactively there. Arguably, a combinatorial proof could be highly valued by mathematicians (or a scientist who analyses proofs), making this an example where the interactive proof deserves equal or even greater attention than the automatic proof. Furthermore, even in cases where we have automatic proofs using the usual tactics (like Pascal's rule, mentioned above), we have to consider that these tactics depend on background knowledge (in our case, this amounts to Isabelle's libraries, which have been vastly populated by users). This raises the question: are there ways in which we can measure success independently of the background theories? We think that this is partially achievable by building simpler theories, with a comparable, measurable level of simplicity, and testing tactics that incorporate representational change there. Even if impractical by itself, this might bring some scientific insight that could lead to better reasoning tactics and theorem provers in the future.
\section{Introduction} Coherent quantum control \cite{Rice-Zhao-2000,Shapiro-Brumer-2003} has been extensively studied for a wide variety of systems and proven to be a useful approach to controlling properties of atomic and molecular systems. For example, in bound systems it has been used to suppress spontaneous emission from a manifold of states \cite{Frishman-Shapiro-2001}, and to control radiationless transitions in collinear carbonyl sulfide OCS \cite{Collinear-OCS} and in pyrazine C$_4$H$_4$N$_2$ \cite{Christopher-Pyrazine-1,Christopher-Pyrazine-2,Christopher-Pyrazine-3}. Christopher et al. examined \cite{Christopher-Pyrazine-1,Christopher-Pyrazine-3} radiationless transitions in pyrazine from the $S_2$ to the $S_1$ electronic state and controlled the process by optimizing the superposition states belonging to $S_2$. The problem was first studied \cite{Christopher-Pyrazine-1} using a simplified four-mode model for the pyrazine vibrational motion \cite{Borrelli-Peluso-2003}. The optimization technique used showed the possibility of performing active phase control of $S_2 \leftrightarrow S_1$ interconversion, and that this control is directly related to the presence of overlapping resonances \cite{Levine-1969,Shapiro-1972} in the $S_2$ manifold. Subsequently \cite{Christopher-Pyrazine-2,Christopher-Pyrazine-3}, the full 24-dimensional vibrational motion of pyrazine \cite{Raab-1999} was considered, and the dynamical problem solved using an efficient L\"owdin-Feshbach QP-partitioning approach. Previous control results were fully confirmed and refined, proving the high controllability of $S_2 \leftrightarrow S_1$ internal conversion by actively exploiting the effect of quantum interference, which was shown to rely on the presence of overlapping resonances. In Refs. \cite{Christopher-Pyrazine-1,Christopher-Pyrazine-3} coherent control was implemented for pyrazine that was already prepared in the excited $S_2$ state. The $S_0 \to S_2$ excitation process was not considered; instead, the excited states in $S_2$ were assumed to be already populated. Recently, we showed the possibility of performing effective coherent control in a simple IBr diatomic model, where we explicitly included the exciting laser in an approach that simultaneously considered excitation and decay to a continuum \cite{IBr-model,Shapiro-1998}. In that case we introduced optimization schemes different from the simple one used in Refs. \cite{Christopher-Pyrazine-1,Christopher-Pyrazine-3} and demonstrated the reliance of control on overlapping resonances. Below we considerably generalize this study to pyrazine, explicitly introducing the laser to excite the 24-dimensional $S_1 + S_2$ vibronic pyrazine model \cite{Christopher-Pyrazine-2}, and using the same control and optimization schemes as for IBr \cite{IBr-model}. Significantly, we confirm the dependence of controllability on the properties of the $S_2$ resonances in pyrazine. This paper is organized as follows. Section \ref{Theory-General} reviews the theory, explicitly accounting for the exciting laser in the weak field limit. Section \ref{Theory-Control} introduces the coherent control approach for the $S_2$ population, points out its connection with the properties of $S_2$ resonances, and provides additional details of the approach. Section \ref{Comp-Results} provides computational results for control of pyrazine internal conversion. Section \ref{Summary} provides a summary and conclusions.
\section{$S_0 \to S_2$ Excitation and $S_2 \leftrightarrow S_1$ Internal Conversion} \label{Theory-General} Below, $| \kappa \rangle$ denotes vibrational states belonging to the $S_2$ electronic state, with corresponding projection operator $Q = \sum_\kappa | \kappa \rangle \langle \kappa |$. Since the $| \kappa \rangle$ states are not eigenstates of the full pyrazine Hamiltonian, the system, if prepared in one of these states, evolves in time. Hence, such states are termed resonances. The states $| \beta \rangle$ denote vibrational states belonging to the $S_1$ electronic state, with $P = \sum_\beta | \beta \rangle \langle \beta |$ being the associated projection operator. The full vibronic states, which are eigenstates of the full pyrazine system, are denoted $| \gamma \rangle$, so that $P + Q = I = \sum_\gamma | \gamma \rangle \langle \gamma |$. \subsection{Time Evolution of the System Assumed Already Excited} \label{Theory-General-Already-Excited} In Refs. \cite{Christopher-Pyrazine-1,Christopher-Pyrazine-2,Christopher-Pyrazine-3} $S_0 \to S_2$ laser excitation is assumed to allow preparation of a superposition of $| \kappa \rangle$ resonances: \begin{equation} | \Psi(0) \rangle = \sum_{\kappa'} c_{\kappa'} | \kappa' \rangle. \label{Psi_0_C_controlled} \end{equation} The dynamics of internal conversion was then described by the action of the propagator $U(t)$ on $| \Psi(0) \rangle$: $| \Psi(t) \rangle = U(t) | \Psi(0) \rangle$. Because the $| \gamma \rangle$ are exact eigenstates of the system Hamiltonian, the spectral resolution of the evolution operator $U(t) = \exp(-i H t/\hbar)$ is $U(t) = \sum_\gamma \exp(-i E_\gamma t/\hbar) | \gamma \rangle \langle \gamma |$. This gives \begin{equation} | \Psi(t) \rangle = \sum_{\kappa'} c_{\kappa'} \sum_\gamma \exp(-i E_\gamma t/\hbar) \langle \gamma | \kappa' \rangle | \gamma \rangle = \sum_\gamma a_\gamma \exp(-i E_\gamma t/\hbar) | \gamma \rangle, \label{Psi_t_a_expansion} \end{equation} where $a_\gamma \equiv \sum_{\kappa'} c_{\kappa'} \langle \gamma | \kappa' \rangle$. The $S_2$ electronic state population $P_{S_2}$ at time $t$ is the expectation value of the projection operator $Q$ in the state $| \Psi(t) \rangle$: \begin{equation} P_{S_2} (t) = \langle \Psi (t) | Q | \Psi (t) \rangle = \sum_{\gamma',\gamma''} \tilde{a}_{\gamma'}^* (t) \, \tilde{a}_{\gamma''} (t) Q_{\gamma',\gamma''}, \label{P_S_2_t_a_scalar_expansion} \end{equation} where $\tilde{a}_{\gamma} (t) \equiv a_{\gamma} \exp(-i E_{\gamma} t /\hbar)$ and $Q_{\gamma',\gamma''} \equiv \langle \gamma' | Q | \gamma'' \rangle$. Equation (\ref{P_S_2_t_a_scalar_expansion}) can be rewritten in matrix form as: \begin{equation} P_{S_2} (t) = \mathbf{a}^\dagger \underline{\underline{\mathbf{e}}}^{i E t/\hbar} \mathbf{Q} \, \underline{\underline{\mathbf{e}}}^{-i E t/\hbar} \mathbf{a}, \label{P_S_2_t_a_matrix_expansion-1} \end{equation} where $\mathbf{a}$ is a vector with $a_\gamma$ components, $\underline{\underline{\mathbf{e}}}^{\pm i E t/\hbar}$ are square diagonal matrices composed of $\exp(\pm i E_\gamma t/\hbar)$ values, and $\mathbf{Q}$ is a square matrix with $Q_{\gamma',\gamma''}$ matrix elements. Since $Q = \sum_\kappa | \kappa \rangle \langle \kappa |$, the matrix elements $Q_{\gamma',\gamma''} = \langle \gamma' | Q | \gamma'' \rangle = \sum_\kappa \langle \gamma' | \kappa \rangle \langle \gamma'' | \kappa \rangle^*$.
Introducing the matrix $\mathbf{R}$ with elements $R_{\gamma,\kappa} = \langle \gamma | \kappa \rangle$, we have $Q_{\gamma',\gamma''} = \sum_\kappa R_{\gamma',\kappa} R_{\gamma'',\kappa}^* = \sum_\kappa R_{\gamma',\kappa} R_{\kappa,\gamma''}^\dagger = [R R^\dagger]_{\gamma',\gamma''}$, giving \begin{equation} \mathbf{Q} = \mathbf{R} \mathbf{R}^\dagger. \label{Q_eq_R_R_dagger} \end{equation} In turn, according to Eq. (\ref{Psi_t_a_expansion}), the vector $\mathbf{a}$ can be written as \begin{equation} \mathbf{a} = \mathbf{R} \mathbf{c}, \label{a_eq_R_c} \end{equation} where $\mathbf{c}$ is a vector composed of the $c_{\kappa'}$ coefficients. Inserting Eqs. (\ref{Q_eq_R_R_dagger}) and (\ref{a_eq_R_c}) into Eq. (\ref{P_S_2_t_a_matrix_expansion-1}) gives: \begin{eqnarray} P_{S_2} (t) & = & \mathbf{c}^\dagger \mathbf{R}^\dagger \underline{\underline{\mathbf{e}}}^{i E t/\hbar} \mathbf{R} \mathbf{R}^\dagger \underline{\underline{\mathbf{e}}}^{-i E t/\hbar} \mathbf{R} \mathbf{c} \equiv \mathbf{c}^\dagger \mathbf{M}^{c \dagger} (t) \mathbf{M}^c (t) \mathbf{c} \equiv \mathbf{c}^\dagger \mathbf{K}^c (t) \mathbf{c} \nonumber \\ & = & \sum_{\kappa',\kappa''} c^*_{\kappa'} c_{\kappa''} K^c_{\kappa',\kappa''} (t) = \sum_{\kappa'} | c_{\kappa'} |^2 K^c_{\kappa',\kappa'} (t) + \sum_{\kappa' \ne \kappa''} c^*_{\kappa'} c_{\kappa''} K^c_{\kappa',\kappa''} (t), \label{P_S_2_t_a_matrix_expansion-2} \end{eqnarray} where the matrices $\mathbf{M}^c (t)$ and $\mathbf{K}^c (t)$ are defined as \begin{equation} \mathbf{M}^c (t) \equiv \mathbf{R}^\dagger \underline{\underline{\mathbf{e}}}^{-i E t/\hbar} \mathbf{R}, \label{M_c_t_definition} \qquad \mathbf{K}^c (t) \equiv \mathbf{M}^{c \dagger} (t) \mathbf{M}^c (t) = \mathbf{R}^\dagger \underline{\underline{\mathbf{e}}}^{i E t/\hbar} \mathbf{R} \mathbf{R}^\dagger \underline{\underline{\mathbf{e}}}^{-i E t/\hbar} \mathbf{R}. \label{K_c_t_definition} \end{equation} The matrix elements of $\mathbf{M}^c(t)$ have the form \begin{equation} M^c_{\kappa,\kappa'} (t) = \sum_\gamma \langle \kappa | \gamma \rangle \langle \gamma | \kappa' \rangle \exp (-i E_\gamma t/\hbar) = \langle \kappa | U(t) | \kappa' \rangle, \label{M_c_t_matrix_element} \end{equation} i.e., they are matrix elements of the propagator $U(t)$ between the resonances $| \kappa \rangle$ and $| \kappa' \rangle$. According to Eq. (\ref{M_c_t_matrix_element}), $M^c_{\kappa,\kappa'} (t) \neq 0$ for $\kappa \neq \kappa'$ only if there is at least one state $| \gamma \rangle$ such that $\langle \kappa | \gamma \rangle \neq 0$ and $\langle \gamma | \kappa' \rangle \neq 0$. If so, the resonances $| \kappa \rangle$ and $| \kappa' \rangle$ are said to be overlapping. This resonance overlap property is crucial for the nondiagonality of $\mathbf{M}^c (t)$, which in turn provides the nondiagonality of $\mathbf{K}^c (t)$, which allows efficient phase control of $P_{S_2} (t)$ in Eq. (\ref{P_S_2_t_a_matrix_expansion-2}) by means of the phases $\varphi_{\kappa'}$ of the complex coefficients $c_{\kappa'} = |c_{\kappa'}| \exp(i \varphi_{\kappa'})$ \cite{Christopher-Pyrazine-1,Christopher-Pyrazine-3}. Such phase control is termed active control, in contrast to passive control, which is control via the $|c_{\kappa'}|$ amplitudes only. In the case of pyrazine, which has 24 vibrational degrees of freedom, there is a large number of $| \gamma \rangle$ states \cite{Christopher-Pyrazine-2,Raab-1999}. To make the computations feasible, instead of exact states, a set of approximate coarse-grained states is used to compute the time evolution.
Specifically, the energy axis is divided into small bins $I_\alpha$, with size $\Delta_\alpha$, center energy $E_\alpha$ and density of states $\rho_\alpha$. The projector onto the coarse-grained state $| \alpha \rangle$ is then defined as: $$| \alpha \rangle \langle \alpha | = (1/(\rho_\alpha \Delta_\alpha)) \sum_{\gamma \in I_\alpha} | \gamma \rangle \langle \gamma |,~~{\rm hence}~~ \sqrt{\rho_\alpha \Delta_\alpha} | \alpha \rangle \langle \alpha | \sqrt{\rho_\alpha \Delta_\alpha} = \sum_{\gamma \in I_\alpha} | \gamma \rangle \langle \gamma |.$$ Thus, the coarse-grained state $| \alpha \rangle$ effectively replaces all the $| \gamma \rangle$ states in the bin $I_\alpha$. Numerically, the weighted states $ | \overline{\alpha} \rangle \equiv \sqrt{\rho_\alpha \Delta_\alpha} | \alpha \rangle $ and their overlaps with the resonances $| \kappa \rangle$ are available through our iterative solution method for pyrazine, based on the QP-partitioning algorithm (described in detail in Ref. \cite{Christopher-Pyrazine-2}), giving \begin{equation} | \overline{\alpha} \rangle \langle \overline{\alpha} | = \sum_{\gamma \in I_\alpha} | \gamma \rangle \langle \gamma |. \label{alpha_bar_projector_definition} \end{equation} All the $| \gamma \rangle$ states belonging to the same bin $I_\alpha$ are treated as one effective state $| \alpha \rangle$, so that \begin{eqnarray} M^c_{\kappa,\kappa'}(t)& = &\sum_\gamma \langle \kappa | \gamma \rangle \langle \gamma | \kappa' \rangle \exp (-i E_\gamma t/\hbar) = \sum_\alpha \sum_{\gamma \in I_\alpha} \langle \kappa | \gamma \rangle \langle \gamma | \kappa' \rangle \exp (-i E_\gamma t/\hbar) \nonumber \\ &\approx & \sum_\alpha \langle \kappa | \alpha \rangle \langle \alpha | \kappa' \rangle \rho_\alpha \Delta_\alpha \cdot \frac{1}{\Delta_\alpha} \sum_{\gamma \in I_\alpha} \frac{1}{\rho_\alpha} \exp( -i E_\gamma t/\hbar). \label{LaserLessCoarseGraining} \end{eqnarray} The remaining inner sum over $\gamma \in I_\alpha$ in Eq. (\ref{LaserLessCoarseGraining}) is approximated by a corresponding integral: \begin{eqnarray} \frac{1}{\Delta_\alpha} \sum_{\gamma \in I_\alpha} \frac{1}{\rho_\alpha} \exp (-i E_\gamma t/\hbar) & \approx & \frac{1}{\Delta_\alpha} \int^{E_\alpha + \Delta_\alpha/2}_{E_\alpha - \Delta_\alpha/2} dE_\gamma \exp (-i E_\gamma t/\hbar) \nonumber \\ & = &\exp (-i E_\alpha t/\hbar) \frac{\sin(\Delta_\alpha t/(2\hbar))}{\Delta_\alpha t/(2\hbar)} \equiv \tau_\alpha(t), \label{tauIntegral} \end{eqnarray} giving the final coarse-grained expression for $M^c_{\kappa,\kappa'}(t)$: \begin{equation} M^c_{\kappa,\kappa'}(t) \approx \sum_\alpha \langle \kappa | \overline{\alpha} \rangle \langle \overline{\alpha} | \kappa' \rangle \tau_\alpha (t) = \langle \kappa | \left[ \sum_\alpha \tau_\alpha (t) | \overline{\alpha} \rangle \langle \overline{\alpha} | \right] | \kappa' \rangle. \label{M_c_t_coarse_grained_definition} \end{equation} The quantity in the square brackets is the coarse-grained approximation to the $U(t)$ propagator, and the sum is over all available $| \overline{\alpha} \rangle$ states. Equation (\ref{M_c_t_coarse_grained_definition}) is accurate for evolution times that are not too large, i.e., when $| \tau_\alpha (t) | = |\sin (\Delta_\alpha t/(2 \hbar))/(\Delta_\alpha t/(2 \hbar))| \approx 1$, implying that $|t| \ll 2\hbar/\Delta_\alpha$.
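As a rough order-of-magnitude illustration (the bin width here is hypothetical, chosen only to exercise the bound, and is not the value used in our computations): for $\Delta_\alpha = 1$ meV, with $\hbar \approx 0.658$ meV$\,\cdot\,$ps, \[|t| \ll \frac{2\hbar}{\Delta_\alpha} \approx \frac{2 \times 0.658 \; \mathrm{meV \cdot ps}}{1 \; \mathrm{meV}} \approx 1.3 \; \mathrm{ps},\] so the coarse-grained propagator is reliable only for times short compared to this bound.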
The resonance overlap phenomenon and the need for nonzero coarse-grained off-diagonal $M^c_{\kappa,\kappa'}(t)$ discussed above remain the same, except that the $| \gamma \rangle$ states are replaced by the $|\overline{\alpha} \rangle$ states. \subsection{Time Evolution Due to Laser Excitation} \label{Theory-General-Driven-By-Exciting-Laser} Consider now the result of single-photon excitation from the ground electronic state $S_0$, which produces an excited time-dependent wavepacket as a superposition of $| \gamma \rangle$ states (here the subscript $p$ denotes pulse): \begin{equation} | \Psi_p (t) \rangle = \sum_\gamma b_{\gamma} (t) \exp (-i E_\gamma t/\hbar) | \gamma \rangle, \label{Wavepacket_general} \end{equation} where the $b_{\gamma} (t)$ coefficients are, in general, time-dependent. The $S_2$ electronic state population at time $t$ is given by: \begin{equation} P_{S_2} (t) = \langle \Psi_p (t) | Q | \Psi_p (t) \rangle = \sum_{\gamma',\gamma''} \tilde{b}_{\gamma'}^* (t) \, \tilde{b}_{\gamma''} (t) Q_{\gamma',\gamma''}, \label{P_S_2_t_b_scalar_expansion} \end{equation} where $\tilde{b}_{\gamma} (t) \equiv b_{\gamma} (t) \exp(-i E_{\gamma} t /\hbar)$. Equation (\ref{P_S_2_t_b_scalar_expansion}) can be written in matrix form as \begin{equation} P_{S_2} (t) = \mathbf{b}^\dagger (t) \underline{\underline{\mathbf{e}}}^{i E t/\hbar} \mathbf{Q} \, \underline{\underline{\mathbf{e}}}^{-i E t/\hbar} \mathbf{b} (t), \label{P_S_2_t_b_matrix_expansion-1} \end{equation} where $\mathbf{b} (t)$ is a vector composed of the $b_\gamma (t)$ components. If the exciting laser pulse is weak, first-order time-dependent perturbation theory is applicable, and the $b_{\gamma}(t)$ expansion coefficients in Eq. (\ref{Wavepacket_general}) can be written as \begin{equation} b_{\gamma}(t) = (i/\hbar) \langle \gamma | \mu | g \rangle \varepsilon_p(\omega_{\gamma,g},t), \label{b_FOTDPT} \end{equation} where $\mu$ is the dipole operator, $| g \rangle$ is the ground vibrational state on $S_0$, $\omega_{\gamma,g} \equiv (E_\gamma - E_g)/\hbar$, and $\varepsilon_p(\omega_{\gamma,g},t)$ is the finite-time Fourier transform of the laser field $\varepsilon_p(t)$: \begin{equation} \varepsilon_p(\omega_{\gamma,g},t) \equiv \int^t_{-\infty} dt' \varepsilon_p(t') \exp (i \omega_{\gamma,g} t'). \label{epsilon_FTFT} \end{equation} Eq. (\ref{b_FOTDPT}) can be written in matrix-vector form as \begin{equation} \mathbf{b} (t) = \underline{\underline{\mu}} \; \underline{\varepsilon} (t), \label{b_FOTDPT_matrix-vector} \end{equation} where $\underline{\underline{\mu}}$ is a square diagonal matrix composed of the $(i/\hbar) \langle \gamma | \mu | g \rangle$ values, and $\underline{\varepsilon} (t)$ is a vector composed of the $\varepsilon_p(\omega_{\gamma,g},t)$ components. Inserting Eqs. (\ref{Q_eq_R_R_dagger}) and (\ref{b_FOTDPT_matrix-vector}) into Eq.
(\ref{P_S_2_t_b_matrix_expansion-1}) gives, for the $P_{S_2} (t)$ population, \begin{eqnarray} & & P_{S_2} (t) = \underline{\varepsilon}^\dagger (t) \underline{\underline{\mu}}^\dagger \underline{\underline{\mathbf{e}}}^{i E t/\hbar} \mathbf{R} \mathbf{R}^\dagger \underline{\underline{\mathbf{e}}}^{-i E t/\hbar} \underline{\underline{\mu}} \; \underline{\varepsilon} (t) \equiv \underline{\varepsilon}^\dagger (t) \mathbf{M}^{\varepsilon \dagger} (t) \mathbf{M}^\varepsilon (t) \underline{\varepsilon} (t) \nonumber \equiv \underline{\varepsilon}^\dagger (t) \mathbf{K}^\varepsilon (t) \underline{\varepsilon} (t) \nonumber \\ & = & \sum_{\gamma'} | \varepsilon_p (\omega_{\gamma',g},t) |^2 K^\varepsilon_{\gamma',\gamma'} (t) + \sum_{\gamma'\ne \gamma''} \! \! \varepsilon_p^* (\omega_{\gamma',g},t) \varepsilon_p (\omega_{\gamma'',g},t) K^\varepsilon_{\gamma',\gamma''} (t), \label{P_S_2_t_b_matrix_expansion-2} \end{eqnarray} where the $\mathbf{M}^\varepsilon (t)$ and $\mathbf{K}^\varepsilon (t)$ matrices are defined as \begin{equation} \mathbf{M}^\varepsilon (t) \equiv \mathbf{R}^\dagger \underline{\underline{\mathbf{e}}}^{-i E t/\hbar} \underline{\underline{\mu}}, \label{M_varepsilon_t_definition} \qquad \mathbf{K}^\varepsilon (t) \equiv \mathbf{M}^{\varepsilon \dagger} (t) \mathbf{M}^\varepsilon (t) . \label{K_varepsilon_t_definition} \end{equation} Since $\underline{\underline{\mu}}$ and $\underline{\underline{\mathbf{e}}}^{\pm i E t/\hbar}$ are diagonal, the only source of nondiagonality in Eqs. (\ref{P_S_2_t_b_matrix_expansion-2}) and (\ref{K_varepsilon_t_definition}) for $\mathbf{K}^\varepsilon (t)$ is $\mathbf{Q} = \mathbf{R} \mathbf{R}^\dagger$. Thus, phase control via the phases $\phi_\gamma (t)$ of the complex $\varepsilon_p (\omega_{\gamma,g},t) = |\varepsilon_p (\omega_{\gamma,g},t)| \exp(i \phi_\gamma(t))$ depends solely on the properties of $\mathbf{Q}$. A few comments are in order. First, $\mathbf{R}$ is a rectangular matrix, with each $\kappa^{th}$ column composed of overlaps $R_{\gamma,\kappa} = \langle \gamma | \kappa \rangle$ of the resonance $| \kappa \rangle$ with all $| \gamma \rangle$ states. On the one hand, each resonance, being broadened in energy, has more than one nonzero $\langle \gamma | \kappa \rangle$ term in its own $\kappa^{th}$ column. On the other hand, if resonances $| \kappa \rangle$ and $| \kappa' \rangle$ overlap, then they have at least one common $| \gamma \rangle$ such that, for this $| \gamma \rangle$, both $R_{\gamma,\kappa} \neq 0$ and $R_{\gamma,\kappa'} \neq 0$ simultaneously. Second, all nonzero $\langle \gamma | \kappa \rangle$ components of each column in the $\mathbf{R}$ matrix that are related to one particular resonance $| \kappa \rangle$ form a square block centered along the main diagonal in the resulting $\mathbf{Q}= \mathbf{R} \mathbf{R}^\dagger$ matrix, filled by terms $Q_{\gamma',\gamma''} = \langle \gamma' | \kappa \rangle \langle \kappa | \gamma'' \rangle$. Thus, $\mathbf{Q}$ displays a block-diagonal structure. Since each block dimensionality is larger than one due to resonance energy broadening, nondiagonal matrix elements in these blocks are generally nonzero, contributing to $\mathbf{K}^\varepsilon (t)$ nondiagonality, and thereby providing $P_{S_2} (t)$ phase control \textit{associated with the energy broadening of each particular resonance}. This kind of control will be discussed below.
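To make the block structure concrete, the following minimal sketch (all numbers hypothetical) builds $\mathbf{K}^\varepsilon (t)$ of Eq. (\ref{K_varepsilon_t_definition}) and a toy $\mathbf{Q} = \mathbf{R} \mathbf{R}^\dagger$ for two resonances sharing one $| \gamma \rangle$ state:
\begin{verbatim}
import numpy as np

HBAR = 0.6582  # eV*fs (assumed unit convention)

def K_eps(t, R, E, mu_g):
    # K^eps(t) = M^dag M, with M^eps(t) = R^dag e^{-iEt/hbar} mu;
    # R[g, k] = <gamma_g|kappa_k>, mu_g[g] = (i/HBAR) <gamma_g|mu|g>
    M = R.conj().T @ np.diag(np.exp(-1j * E * t / HBAR) * mu_g)
    return M.conj().T @ M

# Two resonances; the middle |gamma> is shared (overlapping resonances)
R = np.array([[0.9, 0.0],
              [0.4, 0.5],
              [0.0, 0.8]])
Q = R @ R.T                 # overlapping blocks -> non-block-diagonal Q
print(np.round(Q, 2))
E = np.array([4.8, 4.9, 5.0])                  # hypothetical energies, eV
mu_g = (1j / HBAR) * np.array([1.0, 0.8, 0.6])
print(np.round(K_eps(5.0, R, E, mu_g), 3))
\end{verbatim}
In this toy $\mathbf{Q}$, the diagonal element for the shared $| \gamma \rangle$ is a sum of two terms, one from each overlapping block, exactly as in the $N$-term sum discussed below.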
Furthermore, if resonances $| \kappa \rangle$ and $| \kappa' \rangle$ overlap, then the corresponding blocks overlap, so that the $\mathbf{Q}$ matrix acquires a non-block-diagonal structure. In this case $Q_{\gamma',\gamma''}$ matrix elements belonging to two blocks simultaneously are a sum of terms borrowed from each block (produced by its corresponding resonance): $Q_{\gamma',\gamma''} = \langle \gamma' | \kappa \rangle \langle \kappa | \gamma'' \rangle + \langle \gamma' | \kappa' \rangle \langle \kappa' | \gamma'' \rangle$. Similarly, in the case of overlap of $N$ blocks, the sum contains $N$ terms: $Q_{\gamma',\gamma''} = \sum_{\kappa = \kappa_1}^{\kappa_N} \langle \gamma' | \kappa \rangle \langle \kappa | \gamma'' \rangle$. As will be discussed below, \textit{the resonance overlap effect greatly increases the overall phase controllability in comparison with a pure resonance energy broadening effect}. The nondiagonality in this section (see above) is very different from that discussed in Sect. \ref{Theory-General-Already-Excited}. Specifically, in Eq. (\ref{P_S_2_t_a_matrix_expansion-2}), for the case when the system is already assumed to be excited, control is performed by means of the $c_{\kappa'}$ coefficients, so that $\mathbf{a} = \mathbf{R} \, \mathbf{c}$, giving $\mathbf{K}^c (t) = \mathbf{R}^\dagger \underline{\underline{\mathbf{e}}}^{i E t/\hbar} \mathbf{R} \mathbf{R}^\dagger \underline{\underline{\mathbf{e}}}^{-i E t/\hbar} \mathbf{R}$ [Eq. (\ref{K_c_t_definition})]. This greatly simplifies the $\mathbf{K}^c (t)$ nondiagonality dependence, effectively removing the resonance broadening effect and leaving only resonance overlap as the crucial effect that provides nondiagonality, \textit{i.e.}, phase control. By contrast, in this section, $\mathbf{K}^\varepsilon (t) = \underline{\underline{\mu}}^\dagger \underline{\underline{\mathbf{e}}}^{i E t/\hbar} \mathbf{R} \mathbf{R}^\dagger \underline{\underline{\mathbf{e}}}^{-i E t/\hbar} \underline{\underline{\mu}}$ [Eq. (\ref{K_varepsilon_t_definition})] and nondiagonality is provided only by the $\mathbf{Q} = \mathbf{R} \mathbf{R}^\dagger$ matrix itself, whose nondiagonality, responsible for phase control, depends on \textit{both} resonance broadening and resonance overlap effects. It can be noted that $\mathbf{M}^\varepsilon (t) \underline{\varepsilon} (t)$ in Eq. (\ref{P_S_2_t_b_matrix_expansion-2}) is a vector composed of components \begin{equation} \langle \kappa | \Psi_p (t) \rangle = \sum_\gamma \varepsilon_p (\omega_{\gamma,g},t) M^\varepsilon_{\kappa,\gamma} (t) = \sum_\gamma \varepsilon_p (\omega_{\gamma,g},t) \left[ \langle \kappa | \gamma \rangle \exp (-i E_\gamma t/\hbar) \frac{i}{\hbar} \langle \gamma | \mu | g \rangle \right]. \label{kappa_Psi_p_OverlapInitial} \end{equation} In the case of pyrazine, transition dipole matrix elements for the $S_0 \to S_1$ excitation are an order of magnitude smaller than for the $S_0 \to S_2$ excitation \cite{Christopher-Pyrazine-1,Christopher-Pyrazine-2,Christopher-Pyrazine-3,ChemPhysLett-2009}, thus allowing the following ``doorway" approximation: \begin{equation} \langle \gamma | \mu | g \rangle = \langle \gamma | (P + Q) \mu | g \rangle = \sum_\beta \langle \gamma | \beta \rangle \langle \beta | \mu | g \rangle + \sum_\kappa \langle \gamma | \kappa \rangle \langle \kappa | \mu | g \rangle \approx \sum_\kappa \langle \gamma | \kappa \rangle \langle \kappa | \mu | g \rangle.
\label{S0ToS2Transition} \end{equation} Equation (\ref{S0ToS2Transition}) indicates that the excitation to a full vibronic state $| \gamma \rangle$ takes place by means of an intermediate transition to a manifold of $| \kappa \rangle$ resonances. Inserting Eq. (\ref{S0ToS2Transition}) into Eq. (\ref{kappa_Psi_p_OverlapInitial}) gives \begin{equation} \langle \kappa | \Psi_p (t) \rangle = \sum_\gamma \varepsilon_p (\omega_{\gamma,g},t) \left[ \langle \kappa | \gamma \rangle \exp (-i E_\gamma t/\hbar) \frac{i}{\hbar} \sum_{\kappa'} \langle \gamma | \kappa' \rangle \langle \kappa' | \mu | g \rangle \right], \label{kappa_Psi_p_OverlapExtended} \end{equation} which can be rewritten as \begin{equation} \langle \kappa | \Psi_p (t) \rangle = \sum_{\kappa'} \frac{i}{\hbar} \langle \kappa' | \mu | g \rangle \left[ \sum_\gamma \varepsilon_p(\omega_{\gamma,g},t) \langle \kappa | \gamma \rangle \langle \gamma | \kappa' \rangle \exp (-i E_\gamma t/\hbar) \right]. \label{kappa_Psi_p_OverlapIntermediate} \end{equation} In order to make the computations below feasible, we introduce here a coarse-graining procedure for the quantity in square brackets in Eq. (\ref{kappa_Psi_p_OverlapIntermediate}). This procedure is similar to that used in Ref. \cite{Christopher-Pyrazine-2}, taking into account Eqs. (\ref{LaserLessCoarseGraining}) and (\ref{tauIntegral}). Namely, $\sum_\gamma$ is written as $\sum_\alpha \sum_{\gamma \in I_\alpha}$: \begin{eqnarray} \sum_\alpha \sum_{\gamma \in I_\alpha} \varepsilon_p(\omega_{\gamma,g},t) \langle \kappa | \gamma \rangle \langle \gamma | \kappa' \rangle \exp (-i E_\gamma t/\hbar) & \approx & \sum_\alpha \varepsilon_p (\omega_{\alpha,g},t) \langle \kappa | \alpha \rangle \langle \alpha | \kappa' \rangle \rho_\alpha \Delta_\alpha \cdot \frac{1}{\Delta_\alpha} \sum_{\gamma \in I_\alpha} \frac{1}{\rho_\alpha} \exp (-i E_\gamma t/\hbar) \nonumber \\ &\approx & \sum_\alpha \varepsilon_p (\omega_{\alpha,g},t) \langle \kappa | \overline{\alpha} \rangle \langle \overline{\alpha} | \kappa' \rangle \tau_\alpha(t), \label{LaserPresentCoarseGraining} \end{eqnarray} where $\omega_{\alpha,g} \equiv (E_\alpha - E_g)/\hbar$. Inserting Eq. (\ref{LaserPresentCoarseGraining}) into Eq. (\ref{kappa_Psi_p_OverlapIntermediate}) gives: \begin{equation} \langle \kappa | \Psi_p (t) \rangle \approx \sum_\alpha \varepsilon_p (\omega_{\alpha,g},t) \left[ \langle \kappa | \overline{\alpha} \rangle \, \tau_\alpha(t) \, \frac{i}{\hbar} \sum_{\kappa'} \langle \overline{\alpha} | \kappa' \rangle \langle \kappa' | \mu | g \rangle \right] \equiv \sum_\alpha \varepsilon_p (\omega_{\alpha,g},t) M^{\varepsilon, \alpha}_{\kappa,\alpha} (t). \label{kappa_Psi_p_Overlap} \end{equation} Below, a superscript $\alpha$ indicates the coarse-grained nature of the corresponding values. Here, the quantity \begin{equation} M^{\varepsilon, \alpha}_{\kappa,\alpha} (t) \equiv \langle \kappa | \overline{\alpha} \rangle \, \tau_\alpha(t) \, \frac{i}{\hbar} \sum_{\kappa'} \langle \overline{\alpha} | \kappa' \rangle \langle \kappa' | \mu | g \rangle \equiv \langle \kappa | \left[ \, \tau_\alpha(t) | \overline{\alpha} \rangle \langle \overline{\alpha} | \, \right] \left[ \sum_{\kappa'} \frac{i}{\hbar} \langle \kappa' | \mu | g \rangle | \kappa' \rangle \right] \label{M_varepsilon_kappa_alpha} \end{equation} is a coarse-grained version of $M^\varepsilon_{\kappa,\gamma} (t)$ [Eq. (\ref{kappa_Psi_p_OverlapInitial})] and depends only on the material system properties.
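As a concrete illustration, a minimal sketch (hypothetical arrays, same assumed units as above) evaluating $\langle \kappa | \Psi_p (t) \rangle$ of Eq. (\ref{kappa_Psi_p_Overlap}) follows; \texttt{R\_bar}, \texttt{mu\_kap}, and the frozen pulse spectrum \texttt{eps} stand in for the actual pyrazine inputs:
\begin{verbatim}
import numpy as np

HBAR = 0.6582  # eV*fs (assumed unit convention)

def kappa_psi_p(t, R_bar, mu_kap, eps, E_bin, D_bin):
    # <kappa|Psi_p(t)> of Eq. (kappa_Psi_p_Overlap).
    # R_bar[a, k] = <alpha_bar_a|kappa_k>; mu_kap[k] = <kappa_k|mu|g>;
    # eps[a] = eps_p(omega_{alpha,g}, t), frozen after the pulse.
    x = D_bin * t / (2.0 * HBAR)
    tau = np.exp(-1j * E_bin * t / HBAR) * np.sinc(x / np.pi)
    # doorway approximation: (i/hbar) <alpha_bar|mu|g>
    doorway = (1j / HBAR) * (R_bar @ mu_kap)
    return R_bar.conj().T @ (eps * tau * doorway)

rng = np.random.default_rng(1)
A, K = 200, 8                      # hypothetical toy dimensions
out = kappa_psi_p(50.0, rng.normal(size=(A, K)), rng.normal(size=K),
                  np.ones(A), np.linspace(4.5, 5.0, A), np.full(A, 0.5 / A))
print(out.shape)                   # (8,)
\end{verbatim}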
If one defines \begin{eqnarray} \mu^\alpha_\alpha & \equiv & \frac{i}{\hbar} \sum_{\kappa'} \langle \overline{\alpha} | \kappa' \rangle \langle \kappa' | \mu | g \rangle = \frac{i}{\hbar} \langle \overline{\alpha} | \mu | g \rangle, \label{mu_alpha_def} \qquad R^\alpha_{\alpha,\kappa} \equiv \langle \overline{\alpha} | \kappa \rangle, \end{eqnarray} then \begin{equation} \mathbf{M}^{\varepsilon, \alpha} (t) = \mathbf{R}^{\alpha \dagger} \, \underline{\underline{\tau}}^\alpha (t) \, \underline{\underline{\mu}}^\alpha, \label{M_alpha_varepsilon_t_definition} \qquad \mathbf{K}^{\varepsilon, \alpha} (t) = \mathbf{M}^{\varepsilon, \alpha \dagger} (t) \mathbf{M}^{\varepsilon, \alpha} (t) , \label{K_alpha_varepsilon_t_definition} \end{equation} where $\underline{\underline{\tau}}^\alpha (t)$ is a square diagonal matrix composed of $\tau_\alpha(t)$ values, and $\underline{\underline{\mu}}^\alpha$ is a square diagonal matrix composed of $(i/\hbar) \langle \overline{\alpha} | \mu | g \rangle$ values. Then the $P_{S_2} (t)$ population in terms of coarse-grained values becomes \begin{eqnarray} P_{S_2} (t)& = &\underline{\varepsilon}^{\alpha \dagger} (t) \mathbf{M}^{\varepsilon, \alpha \dagger} (t) \mathbf{M}^{\varepsilon, \alpha} (t) \, \underline{\varepsilon}^\alpha (t) \equiv \underline{\varepsilon}^{\alpha \dagger} (t) \mathbf{K}^{\varepsilon, \alpha} (t) \, \underline{\varepsilon}^\alpha (t) \nonumber \\ & = & \sum_{\alpha'} | \varepsilon_p (\omega_{\alpha',g},t) |^2 K^{\varepsilon, \alpha}_{\alpha',\alpha'} (t) + \sum_{\alpha' \ne \alpha''} \! \! \varepsilon_p^* (\omega_{\alpha',g},t) \varepsilon_p (\omega_{\alpha'',g},t) K^{\varepsilon, \alpha}_{\alpha',\alpha''} (t), \label{P_S_2_t_alpha_matrix_expansion} \end{eqnarray} where $\underline{\varepsilon}^\alpha (t)$ is a vector composed of $\varepsilon_p (\omega_{\alpha,g},t)$ components. The quantities $\underline{\underline{\mu}}^\alpha$ and $\underline{\underline{\tau}}^\alpha (t)$ are diagonal matrices, so the only origin of nondiagonality in Eq. (\ref{K_alpha_varepsilon_t_definition}) for $\mathbf{K}^{\varepsilon, \alpha} (t)$ and Eq. (\ref{P_S_2_t_alpha_matrix_expansion}) is via the $\mathbf{Q}^\alpha = \mathbf{R}^\alpha \mathbf{R}^{\alpha \dagger}$ matrix, composed of $Q^\alpha_{\alpha',\alpha''} = \langle \overline{\alpha}' | Q | \overline{\alpha}'' \rangle$ matrix elements. Hence, all the $P_{S_2} (t)$ phase control considerations from above remain the same, except that $| \gamma \rangle$ states are replaced by $|\overline{\alpha} \rangle$ states. Namely, \textit{phase control is driven both by resonance energy broadening and resonance overlap. The resonance overlap effect, providing a non-block-diagonal structure of} $\mathbf{Q}^\alpha$ \textit{and} $\mathbf{K}^{\varepsilon, \alpha} (t)$, \textit{strongly enhances the effect of resonance broadening}. \section{Coherent Control of Pyrazine Internal Conversion} \label{Theory-Control} Section \ref{Theory-General-Driven-By-Exciting-Laser} above describes resonance broadening and resonance overlap, two effects related to $\mathbf{Q}$ ($\mathbf{Q}^\alpha$) and $\mathbf{K}^\varepsilon (t)$ ($\mathbf{K}^{\varepsilon, \alpha} (t)$) nondiagonality. Here, a control scheme based on resonance broadening is discussed in Sect. \ref{Theory-Control-Single-Resonance}. Section \ref{Theory-Control-Overlapping-Resonances} discusses a control scheme relying on the presence of resonance overlap.
\subsection{Control Associated with Single Resonance} \label{Theory-Control-Single-Resonance} In the case of pure resonance broadening without resonance overlap, one particular resonance $| \kappa \rangle$ has nonzero $R_{\gamma,\kappa} = \langle \gamma | \kappa \rangle$ terms for some specific set $\{ \gamma \}_{\kappa}$ of $| \gamma \rangle$ states. This results in simplified expressions for the $\mathbf{K}^\varepsilon (t)$ matrix elements within this $\{ \gamma \}_{\kappa}$ set, with the summation over $\kappa$ reduced to a single term: \begin{equation} K^\varepsilon_{\gamma',\gamma'} (t) = |M^\varepsilon_{\kappa,\gamma'} (t)|^2, \qquad K^\varepsilon_{\gamma',\gamma''} (t) = M^{\varepsilon *}_{\kappa,\gamma'} (t) M^\varepsilon_{\kappa,\gamma''} (t) \label{K_gamma_single_resonance} \end{equation} for the diagonal and nondiagonal matrix elements, respectively. The probability $P_{S_2} (t)$ [Eq. (\ref{P_S_2_t_b_matrix_expansion-2})] is a quadratic form in the complex time-dependent variables $\varepsilon_p(\omega_{\gamma,g},t)$. Once the pulse is over (at $t = T_{over}$), these values become infinite-time Fourier transforms of the laser pulse at different frequencies, $\varepsilon_p(\omega_{\gamma,g})$; they are no longer time-dependent for $t \ge T_{over}$. Here we use the so-called absolute control scheme for $P_{S_2} (t)$ optimization, with the $\mathbf{K}^\varepsilon (t)$ matrix given in Eq. (\ref{K_gamma_single_resonance}). Namely, $P_{S_2} (t)$ is optimized at a desired optimization time $t = T$, while keeping the total energy of the pulse at $2\pi E_0$: \begin{equation} \sum_{\{ \gamma \}_{\kappa}} | \varepsilon_p (\omega_{\gamma,g}) |^2 = \underline{\varepsilon}^\dagger \underline{\varepsilon} = 2 \pi E_0. \label{Absolute-Control-Energy-Constraint-gamma} \end{equation} This is done by introducing a Lagrange multiplier $\lambda^A$ (superscript $A$ denotes absolute), with the optimization function at time $T$ defined as: \begin{equation} P^{\lambda; A}_{S_2} (T, \underline{\varepsilon}) = \underline{\varepsilon}^\dagger \mathbf{K}^\varepsilon (T) \underline{\varepsilon} - \lambda^A (\underline{\varepsilon}^\dagger \underline{\varepsilon} - 2 \pi E_0). \label{P_lambda_A_function} \end{equation} We then search for $P^{\lambda; A}_{S_2} (T, \underline{\varepsilon})$ extrema with respect to $\underline{\varepsilon}$: \begin{equation} \left\{ \begin{array}{l} \displaystyle \frac{\partial P^{\lambda; A}_{S_2} (T, \underline{\varepsilon})}{\partial \, \mathrm{Re} \, [ \varepsilon_p(\omega_{\gamma,g}) ] } = 0, \nonumber \\ \displaystyle \frac{\partial P^{\lambda; A}_{S_2} (T, \underline{\varepsilon})}{\partial \, \mathrm{Im} \, [ \varepsilon_p(\omega_{\gamma,g}) ] } = 0, \qquad \gamma = 1, \ldots, N_{{\{ \gamma \}}_\kappa}, \label{partial_P_A} \end{array} \right. \end{equation} where $N_{\{ \gamma \}_\kappa}$ is the number of $| \gamma \rangle$ states in the set ${\{ \gamma \}}_\kappa$. The conditions in Eq. (\ref{partial_P_A}), applied to Eq. (\ref{P_lambda_A_function}), lead directly to the eigenvalue problem \begin{equation} \mathbf{K}^\varepsilon (T) \underline{\varepsilon} = \lambda^A \underline{\varepsilon}, \label{AbsoluteEigenvalueProblem-gamma} \end{equation} which provides a set of eigenvalues $\lambda^A$ and corresponding eigenvectors $\underline{\varepsilon}$ with unit norm ($\underline{\varepsilon}^\dagger \underline{\varepsilon} = 1$).
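Note that for a single resonance, $\mathbf{K}^\varepsilon (T)$ of Eq. (\ref{K_gamma_single_resonance}) is a rank-one outer product, a structure easily checked numerically; a minimal sketch with hypothetical numbers is:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical single-resonance row M^eps_{kappa,gamma}(T) over {gamma}_kappa
m = rng.normal(size=5) + 1j * rng.normal(size=5)
K = np.outer(m.conj(), m)       # Eq. (K_gamma_single_resonance): rank one

lam, vec = np.linalg.eigh(K)    # K is Hermitian; eigenvalues ascending
print(np.round(lam, 12))        # four zeros plus sum_gamma |M|^2
print(np.isclose(lam[-1], np.sum(np.abs(m)**2)))   # True

eps0 = np.sqrt(2 * np.pi) * vec[:, 0]   # a zero-eigenvalue field (E_0 = 1)
print(abs(eps0.conj() @ K @ eps0))      # P_S2(T) ~ 0
\end{verbatim}
This directly illustrates the analytic eigenstructure discussed next.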
Multiplication of these $\underline{\varepsilon}$ eigenvectors by $\sqrt{2 \pi E_0}$ provides the required optimized solutions. The $\mathbf{K}^\varepsilon (t)$ matrix is such that all but one of its $N_{\{ \gamma \}_{\kappa}}$ eigenvalues are exactly zero, while the remaining eigenvalue equals the sum of its diagonal elements: \begin{equation} \lambda^A_n = 0, \qquad n = 1, \ldots, N_{\{ \gamma \}_{\kappa}} - 1; \qquad \lambda^A_{N_{\{ \gamma \}_{\kappa}}} = \sum_{\{ \gamma \}_{\kappa}} K^\varepsilon_{\gamma,\gamma} (t) = \sum_{\{ \gamma \}_{\kappa}} |M^\varepsilon_{\kappa,\gamma} (t)|^2. \label{Eigenvalues-Absolute-gamma} \end{equation} This is an analytical property of the $\mathbf{K}^\varepsilon (t)$ matrix in Eq. (\ref{K_gamma_single_resonance}), so that a numerical solution of the eigenproblem in Eq. (\ref{AbsoluteEigenvalueProblem-gamma}) is not required. Specifically, {\it for any time $T$, $P_{S_2} (T)$ can be set to zero, using the eigenvector corresponding to a zero eigenvalue}. In terms of the coarse-grained $| \overline{\alpha} \rangle$ states, the results are the same with the $\{\gamma\}_\kappa$ set replaced by $\{\overline{\alpha}\}_\kappa$. Given the simple analytic nature of this solution, numerical results are neither necessary nor provided below. Note, however, that this type of control is possible only if the system displays isolated resonances. This can be the case in small molecules; large molecules such as pyrazine, however, display overlapping resonances throughout the spectrum, with regions of isolated resonances being highly unlikely. Such systems can be controlled via an alternate mechanism, discussed below. \subsection{Control Associated with Overlapping Resonances} \label{Theory-Control-Overlapping-Resonances} Here we consider a second, different control scheme, termed relative control. Namely, we optimize the \textit{ratio} of $P_{S_2} (t)$ populations at times $T_2$ and $T_1$, where $T_2 > T_1 \ge T_{over}$: \begin{equation} \lambda^R = \frac{P_{S_2} (T_2)}{P_{S_2} (T_1)} \to \max, \min \label{RelativeControlObjective} \end{equation} (where superscript $R$ denotes relative). One can optimize the value of $P_{S_2} (T_2)$, keeping the value of $P_{S_2} (T_1)$ constant \cite{Preisig} and equal to some predefined value $P_0$. Here the fixed $P_{S_2}(T_1) = P_0$ assures that enhanced (or diminished) $P_{S_2}$ at the target final time $T_2$ does not simply result from a stronger (or weaker) field that achieves control merely by affecting the amount of $S_2$ excited. To do so, we consider the optimization function \begin{equation} P^{\lambda; R}_{S_2} (T_2, T_1, \underline{\varepsilon}) = \underline{\varepsilon}^{\dagger} \mathbf{K}^\varepsilon (T_2) \underline{\varepsilon} - \lambda^R (\underline{\varepsilon}^{\dagger} \mathbf{K}^\varepsilon (T_1) \underline{\varepsilon} - P_0), \label{P_lambda_R_function} \end{equation} where $\lambda^R$ is an as yet unknown Lagrange multiplier. We then find $P^{\lambda; R}_{S_2} (T_2, T_1, \underline{\varepsilon})$ extrema with respect to $\underline{\varepsilon}$, leading directly to a generalized eigenvalue problem: \begin{equation} \mathbf{K}^\varepsilon (T_2) \underline{\varepsilon} = \lambda^R \mathbf{K}^\varepsilon (T_1) \underline{\varepsilon}. \label{GeneralizedEigenvalueProblem} \end{equation} Multiplying Eq.
(\ref{GeneralizedEigenvalueProblem}) by $\underline{\varepsilon}^{\dagger}$ from the left gives \begin{equation} \underline{\varepsilon}^{\dagger} \mathbf{K}^\varepsilon (T_2) \underline{\varepsilon} = \lambda^R \underline{\varepsilon}^{\dagger} \mathbf{K}^\varepsilon (T_1) \underline{\varepsilon} \label{P_T_2_eq_lambda_R_P_T_1}. \end{equation} $\lambda^R$ is real and positive because $\underline{\varepsilon}^{\dagger} \mathbf{K}^\varepsilon (T_2) \underline{\varepsilon} = P_{S_2}(T_2)$ and $\underline{\varepsilon}^{\dagger} \mathbf{K}^\varepsilon (T_1) \underline{\varepsilon} = P_{S_2}(T_1)$ are real and positive. Dividing Eq. (\ref{P_T_2_eq_lambda_R_P_T_1}) by $\underline{\varepsilon}^{\dagger} \mathbf{K}^\varepsilon (T_1) \underline{\varepsilon}$ yields $\lambda^R = P_{S_2}(T_2)/P_{S_2}(T_1)$, i.e., $\lambda^R$ is the optimized ratio of the populations of interest [Eq. (\ref{RelativeControlObjective})]. The $\mathbf{K}^\varepsilon (t)$ matrix determinant is generally nonzero at every time $t$, so that $\mathbf{K}^\varepsilon (t)$ always has an inverse $[ \mathbf{K}^\varepsilon (t) ]^{-1}$. This allows transformation of the generalized eigenvalue problem in Eq. (\ref{GeneralizedEigenvalueProblem}) into an ordinary eigenvalue problem. To do this, we multiply the left and right sides of Eq. (\ref{GeneralizedEigenvalueProblem}) by $[\mathbf{K}^\varepsilon (T_1)]^{-1}$ from the left: \begin{eqnarray} \mathbf{R}^\varepsilon (T_2, T_1) \underline{\varepsilon} & = & \lambda^R \underline{\varepsilon}, \label{RelativeControlProblem} \\ \mathbf{R}^\varepsilon (T_2, T_1) & \equiv & [\mathbf{K}^\varepsilon (T_1)]^{-1} \mathbf{K}^\varepsilon (T_2). \label{RelativeControlMatrix} \end{eqnarray} The solution to the eigenproblem in Eq. (\ref{RelativeControlProblem}) for times $T_2 > T_1 \ge T_{over}$ is dependent only on the properties of the material system. Moreover, this solution is the best possible in the weak field case, i.e., it is optimal \cite{Kosloff-1989}. Specifically, the maximal and minimal eigenvalues $\lambda^R$ provide the entire achievable range of $P_{S_2}(T_2)/P_{S_2}(T_1)$ for a given $T_2$ and $T_1$, obtained using the corresponding eigenvectors $\underline{\varepsilon}$. In terms of coarse-grained states $| \overline{\alpha} \rangle$, $\underline{\varepsilon}$ is replaced by $\underline{\varepsilon}^\alpha$, and $\mathbf{K}^\varepsilon (t)$ is replaced by $\mathbf{K}^{\varepsilon, \alpha} (t)$, giving the following coarse-grained version of the optimization problem: \begin{eqnarray} \mathbf{R}^{\varepsilon, \alpha} (T_2, T_1) \underline{\varepsilon}^\alpha & = & \lambda^{R, \alpha} \underline{\varepsilon}^\alpha, \label{RelativeControlProblem-alpha} \\ \mathbf{R}^{\varepsilon, \alpha} (T_2, T_1) & \equiv & [\mathbf{K}^{\varepsilon, \alpha} (T_1)]^{-1} \mathbf{K}^{\varepsilon, \alpha} (T_2). \label{RelativeControlMatrix-alpha} \end{eqnarray} In addressing this problem computationally, we encountered numerical instability in Eq. (\ref{RelativeControlProblem-alpha}) if the number of $| \overline{\alpha} \rangle$ states exceeds roughly 150--180. Namely, the condition number of $\mathbf{K}^{\varepsilon, \alpha} (t)$ tends to become very large, resulting in an ill-conditioned matrix, preventing accurate numerical construction of $\mathbf{R}^{\varepsilon, \alpha} (T_2,T_1)$ [Eq. (\ref{RelativeControlMatrix-alpha})] and its subsequent diagonalization. To overcome this problem, we partitioned the energy axis into a limited number of $N_A$ bins in Eq.
(\ref{kappa_Psi_p_Overlap}), as discussed in the Appendix, giving further broadened $|\textbf{A}\rangle$ states. Using these states allows us to reformulate the eigenproblem in Eq. (\ref{RelativeControlProblem-alpha}) as \begin{eqnarray} \mathbf{R}^{\varepsilon, \mathbf{A}} (T_2, T_1) \underline{\varepsilon}^\mathbf{A} & = & \lambda^{R, \mathbf{A}} \underline{\varepsilon}^\mathbf{A}, \label{RelativeControlProblem-A} \\ \mathbf{R}^{\varepsilon, \mathbf{A}} (T_2, T_1) & \equiv & [\mathbf{K}^{\varepsilon, \mathbf{A}} (T_1)]^{-1} \mathbf{K}^{\varepsilon, \mathbf{A}} (T_2), \label{RelativeControlMatrix-A} \end{eqnarray} where the states $|\bar\alpha\rangle$ in Eqs. (\ref{RelativeControlProblem-alpha}) and (\ref{RelativeControlMatrix-alpha}) are replaced by the further broadened states $|\textbf{A}\rangle$, as described in the Appendix. \subsection{Numerical Correlation between Controllability and Resonance Overlap} \label{Theory-Control-Numerical-Correlation} In general, effects of resonance energy broadening and resonance overlap are mixed together in the structure of the $\mathbf{Q}^\mathbf{A}$ and $\mathbf{K}^{\varepsilon,\mathbf{A}} (t)$ matrices. To quantitatively estimate the $\mathbf{K}^{\varepsilon,\mathbf{A}} (t)$ nondiagonality that provides phase control, we utilize the Hadamard measure: \begin{equation} H \left( \mathbf{K}^{\varepsilon,\mathbf{A}} (t) \right) = \mathrm{det} \left( \mathbf{K}^{\varepsilon,\mathbf{A}} (t) \right) / \mathrm{det} \left( \mathrm{diag} \left( \mathbf{K}^{\varepsilon,\mathbf{A}} (t) \right) \right), \label{H_R_K_def} \end{equation} where $\mathrm{det}$ denotes the determinant and $\mathrm{diag}$ the diagonal part of a matrix. Thus, $\mathrm{det} \left( \mathrm{diag} \left( \mathbf{K}^{\varepsilon,\mathbf{A}} (t) \right) \right) = \prod^{N_A}_{A=1} K^{\varepsilon,\mathbf{A}}_{A,A} (t)$. Since $\mathbf{K}^{\varepsilon,\mathbf{A}} (t)$ is a Hermitian positive-definite matrix, both $\mathrm{det} \left( \mathbf{K}^{\varepsilon,\mathbf{A}} (t) \right)$ and $\mathrm{det} \left( \mathrm{diag} \left( \mathbf{K}^{\varepsilon,\mathbf{A}} (t) \right) \right)$ are real and positive. Furthermore, $\mathrm{det} \left( \mathbf{K}^{\varepsilon,\mathbf{A}} (t) \right) \le \mathrm{det} \left( \mathrm{diag} \left( \mathbf{K}^{\varepsilon,\mathbf{A}} (t) \right) \right)$, giving \begin{equation} 0 < H \left( \mathbf{K}^{\varepsilon,\mathbf{A}} (t) \right) \le 1, \label{H_R_margins} \end{equation} where the equality applies if and only if $\mathbf{K}^{\varepsilon,\mathbf{A}} (t)$ is strictly diagonal. The determinant of $\mathbf{R}^{\varepsilon,\mathbf{A}} (T_2,T_1)$ can be expressed as \begin{equation} \mathrm{det} \left( \mathbf{R}^{\varepsilon,\mathbf{A}} (T_2,T_1) \right) = \mathrm{det}\left[ \left[\mathbf{K}^{\varepsilon,\mathbf{A}} (T_1) \right]^{-1} \mathbf{K}^{\varepsilon,\mathbf{A}} (T_2) \right] = \mathrm{det} \left( \mathbf{K}^{\varepsilon,\mathbf{A}} (T_2) \right) / \mathrm{det} \left( \mathbf{K}^{\varepsilon,\mathbf{A}} (T_1) \right).
\label{R_det_expression} \end{equation} Hadamard-like measures of the non-diagonality of $\mathbf{R}^{\varepsilon,\mathbf{A}} (T_2,T_1)$ are introduced in a similar manner: \begin{eqnarray} H_R \left( \mathbf{R}^{\varepsilon,\mathbf{A}} (T_2,T_1) \right) & = & \frac{\mathrm{det} \left( \mathbf{R}^{\varepsilon,\mathbf{A}} (T_2,T_1) \right)}{\mathrm{det} \left[ \mathrm{diag} \left( [\mathbf{K}^{\varepsilon,\mathbf{A}} (T_1)]^{-1} \right) \, \mathrm{diag} \left( \mathbf{K}^{\varepsilon,\mathbf{A}} (T_2) \right) \right]} = \frac{H \left( \mathbf{K}^{\varepsilon,\mathbf{A}} (T_2) \right)}{\mathrm{det} \left[ \mathrm{diag} \left( [\mathbf{K}^{\varepsilon,\mathbf{A}} (T_1)]^{-1} \right) \right] \cdot \mathrm{det} \left( \mathbf{K}^{\varepsilon,\mathbf{A}} (T_1) \right)}, \label{H_R_R_def} \\ H_C \left( \mathbf{R}^{\varepsilon,\mathbf{A}} (T_2,T_1) \right) & = & \frac{\mathrm{det}(\mathbf{R}^{\varepsilon,\mathbf{A}} (T_2,T_1))}{\mathrm{det}(\mathrm{diag}(\mathbf{R}^{\varepsilon,\mathbf{A}} (T_2,T_1)))} = \frac{\mathrm{det}(\mathbf{K}^{\varepsilon,\mathbf{A}} (T_2))}{\mathrm{det}(\mathrm{diag}(\mathbf{R}^{\varepsilon,\mathbf{A}} (T_2,T_1))) \cdot \mathrm{det}(\mathbf{K}^{\varepsilon,\mathbf{A}} (T_1))}, \label{H_C_R_def} \end{eqnarray} where Eq. (\ref{R_det_expression}) is used. The subscript $R$ denotes real, and subscript $C$ denotes complex. $H_R \left( \mathbf{R}^{\varepsilon,\mathbf{A}} (T_2,T_1) \right)$ is real because both its numerator and denominator are real. In order to quantitatively estimate the extent of resonance overlap, we use the same overlap matrix as in Ref. \cite{Christopher-Pyrazine-3}, but include only the $| \overline{\alpha} \rangle$ states that are populated by the exciting laser, spanning the energy range $[ E_L, E_H ]$: \begin{equation} \Omega^\alpha_{\kappa,\kappa'} = \sum_{\alpha:\, E_\alpha \in [ E_L, E_H ]} \left| \langle \kappa | \overline{\alpha} \rangle \right| \cdot \left| \langle \overline{\alpha} | \kappa' \rangle \right|.\end{equation} The Hadamard non-diagonality measure for the $\mathbf{\Omega}^\alpha$ matrix of size $N_Q \times N_Q$, composed of $\Omega^\alpha_{\kappa,\kappa'}$ values, is introduced as \begin{equation} H \left( \mathbf{\Omega}^\alpha \right) = \mathrm{det} \left( \mathbf{\Omega}^\alpha \right) / \mathrm{det} \left( \mathrm{diag} \left( \mathbf{\Omega}^\alpha \right) \right). \label{H_Omega_def} \end{equation} The numerator in Eq. (\ref{H_Omega_def}) is shown numerically to be always real and positive, and the denominator is equal to $\prod^{N_Q}_{\kappa=1} \Omega^\alpha_{\kappa,\kappa}$, and thus also real and positive. The same inequality as in Eq. (\ref{H_R_margins}) is valid for $H \left( \mathbf{\Omega}^\alpha \right)$. \subsection{Implementation of the Shaped Laser as a Linear Combination of Gaussian Laser Pulses} \label{Theory-Control-Gaussian-Implementation} The eigenvector $\underline{\varepsilon}^\mathbf{A}$ providing the desired optimized value $\lambda^{R, \mathbf{A}}$ after the pulse is over [Eq. (\ref{RelativeControlProblem-A})] is a finite discrete set of complex values of laser amplitudes $\varepsilon_p (\omega_{A,g})$, at different frequencies. These values can be reached in multiple ways.
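A compact numerical sketch (hypothetical positive-definite matrices standing in for $\mathbf{K}^{\varepsilon,\mathbf{A}} (T_1)$ and $\mathbf{K}^{\varepsilon,\mathbf{A}} (T_2)$) of the relative-control eigenproblem [Eqs. (\ref{RelativeControlProblem})--(\ref{RelativeControlMatrix})] and of the Hadamard measure of Eq. (\ref{H_R_K_def}) is:
\begin{verbatim}
import numpy as np

def hadamard(K):
    # H(K) = det(K) / det(diag(K)), Eq. (H_R_K_def)
    return np.linalg.det(K) / np.prod(np.diag(K))

def relative_control(K1, K2):
    # Eigenpairs of R = K(T1)^{-1} K(T2), Eq. (RelativeControlProblem);
    # the extreme eigenvalues bound the achievable P_S2(T2)/P_S2(T1)
    lam, vec = np.linalg.eig(np.linalg.solve(K1, K2))
    return lam.real, vec

rng = np.random.default_rng(3)
A = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
B = rng.normal(size=(8, 8)) + 1j * rng.normal(size=(8, 8))
K1 = A @ A.conj().T + np.eye(8)   # Hermitian positive-definite stand-ins
K2 = B @ B.conj().T + np.eye(8)
lam, vec = relative_control(K1, K2)
print(lam.min(), lam.max())                   # control extents
print(abs(hadamard(K1)), abs(hadamard(K2)))   # 0 < H <= 1
\end{verbatim}
Since both matrices are Hermitian and positive definite, the eigenvalues of $\mathbf{R}^\varepsilon$ are real and positive, as stated above.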
The approach used for the IBr model \cite{IBr-model} is also used here: namely, to obtain the desired set of $\varepsilon_p(\omega_{A,g})$ values, $A = 1, \ldots, N_A$, it is sufficient to take the same number of linearly independent functions $\varepsilon_a(\omega)$, and expand the components of $\underline{\varepsilon}^\mathbf{A}$ in terms of $\varepsilon_a(\omega)$ at all $\omega_{A,g}$ frequencies with the (as yet unknown) time-independent complex coefficients $d_a$: \begin{equation} \varepsilon_p(\omega_{A,g}) = \sum_a d_a \varepsilon_a(\omega_{A,g}), \qquad a, A = 1, \ldots, N_A, \qquad t \ge T_{over}. \label{d_inf_time_expansion} \end{equation} or, as a matrix equation: \begin{equation} \underline{\varepsilon}^\mathbf{A} = \mathbf{B} \, \mathbf{d}, \qquad B_{A,a} = \varepsilon_a(\omega_{A,g}), \qquad \mathbf{d} = (d_1, \ldots, d_{N_A})^T, \qquad t \ge T_{over}. \label{epsilon_p_eq_B_d_TI} \end{equation} Since the set of $\varepsilon_a(\omega)$ functions is linearly independent, the determinant of $\mathbf{B}$ is nonzero, and a unique nonzero vector $\mathbf{d}$ exists as the solution of Eq. (\ref{epsilon_p_eq_B_d_TI}), found as \begin{equation} \mathbf{d} = [\mathbf{B}]^{-1} \underline{\varepsilon}^\mathbf{A}. \label{d_solution} \end{equation} The basis functions $\varepsilon_a(\omega)$ in the frequency domain can be assumed to be \textit{infinite-time} Fourier transforms of the corresponding basis functions $\varepsilon_a(t)$ in the time domain (the latter all vanish for $t \ge T_{over}$). In turn, \textit{finite-time} Fourier transforms of $\varepsilon_a(t)$ can be written as $\varepsilon_a(\omega,t)$, and at finite times Eq. (\ref{d_inf_time_expansion}) takes the form: \begin{equation} \varepsilon_p(\omega_{A,g},t) = \sum_a d_a \varepsilon_a(\omega_{A,g},t), \qquad a, A = 1, \ldots, N_A, \label{FiniteTmeFourierTransformExpansion} \end{equation} i.e., \begin{equation} \underline{\varepsilon}^\mathbf{A} (t) = \mathbf{B}(t) \mathbf{d}, \qquad B_{A,a} (t) = \varepsilon_a(\omega_{A,g},t), \qquad \mathbf{d} = (d_1, \ldots, d_{N_A})^T. \label{epsilon_p_eq_B_d_TD} \end{equation} Using Eq. (\ref{epsilon_p_eq_B_d_TD}), $P_{S_2}(t)$, Eq. (\ref{P_S_2_t_A_matrix_expansion}), can be expressed in terms of the $\mathbf{d}$ vector: \begin{equation} P_{S_2} (t) = \underline{\varepsilon}^{\mathbf{A} \dagger} (t) \mathbf{K}^{\varepsilon,\mathbf{A}} (t) \underline{\varepsilon}^\mathbf{A} (t) = \mathbf{d}^\dagger \mathbf{B}^\dagger(t) \mathbf{K}^{\varepsilon,\mathbf{A}} (t) \mathbf{B}(t) \mathbf{d} . \label{P_S_2_through_d} \end{equation} Thus, the $\mathbf{d}$ vector in Eq. (\ref{d_solution}) can be used for time propagation of $P_{S_2} (t)$ [Eq. (\ref{P_S_2_through_d})] at all times: before the laser is turned on, while the laser is on, and after the laser is off. Optimized populations always satisfy the condition $P_{S_2}(t = T_2) = \lambda^{R, \mathbf{A}} P_{S_2}(t = T_1)$. To perform numerical computations, we select a set of Gaussian laser pulses $\varepsilon_a(t)$, centered at different frequencies $\omega_a$: \begin{equation} \varepsilon_a (t) = \epsilon_a/(2 \sqrt{\pi} \alpha_a) \exp \left( -\left( t/(2\alpha_a) \right)^2 -i \omega_a t \right). \label{Gaussian_varepsilon_a_t} \end{equation} The finite-time Fourier transform of this Gaussian pulse, $\varepsilon_a (\omega,t)$, [Eq.
(\ref{epsilon_FTFT})], can be expressed analytically \cite{Shapiro-1993,Shapiro-Femtosecond,Wolfram-Erf} as: \begin{eqnarray} \varepsilon_a (\omega,t) & = & (\epsilon_a/2) \exp \left( -\alpha_a^2 \left( \omega - \omega_a \right)^2 \right) \! \left\{ 2 \! - \! \exp \! \left[ \left( \alpha_a (\omega - \omega_a) + i t/(2\alpha_a) \right)^2 \right] \! W \! \left( \alpha_a (\omega - \omega_a) + i t/(2\alpha_a) \right) \right\}, \label{epsilon_omega_t_Gaussian} \end{eqnarray} where $W(z)$ is the complex error function \cite{Wolfram-Erf,Abramowitz-Stegun}. At times $t > T_{over} = 4 \sqrt{2 \ln 2} \, \alpha_a$ this becomes \begin{equation} \varepsilon_a (\omega) = \epsilon_a \exp \left( -\alpha_a^2 \left( \omega - \omega_a \right)^2 \right). \end{equation} Using Eq. (\ref{Gaussian_varepsilon_a_t}), the control pulse $\varepsilon_p(t)$ in the time domain is \begin{equation} \varepsilon_p (t) = \sum_{a = 1}^{N_A} d_a \varepsilon_a (t) = \sum_{a=1}^{N_A} d_a \epsilon_a/(2 \sqrt{\pi} \alpha_a) \exp \left( -\left( t/(2\alpha_a) \right)^2 -i \omega_a t \right), \label{epsilon_p_t} \end{equation} with infinite-time Fourier transform \begin{equation} \varepsilon_p(\omega) = \sum_{a = 1}^{N_A} d_a \varepsilon_a (\omega) = \sum_{a = 1}^{N_A} d_a \epsilon_a \exp \left( -\alpha_a^2 \left( \omega - \omega_a \right)^2 \right). \label{ControlledPulseSpectrum} \end{equation} By construction, the $\varepsilon_p(\omega_{A,g})$ value should be constant inside the corresponding $I_A$ bin [Eq. (\ref{kappa_Psi_p_BinnedOverlap})]. The $\varepsilon_p(\omega)$ function [Eq. (\ref{ControlledPulseSpectrum})] is smooth and does not satisfy this requirement exactly. Nevertheless, if $N_A$ is large enough, each $I_A$ bin becomes relatively small, and the smooth function in Eq. (\ref{ControlledPulseSpectrum}) in each bin can be approximately treated as constant. \section{Computational Results} \label{Comp-Results} Consider $S_0 \to S_2$ excitation, using weak light in the perturbative regime, to coherently control the $S_2 \leftrightarrow S_1$ internal conversion dynamics of pyrazine. We use the pyrazine vibronic structure of Refs. \cite{Christopher-Pyrazine-2} and \cite{Christopher-Pyrazine-3}, and partition the energy axis into 2000 bins in the range 4.06--6.06 eV, where energies are measured relative to the ground vibrational state of $S_0$. Here, 4.06 eV is the $S_1$ energy at the $S_0$ nuclear equilibrium configuration \cite{Raab-1999,Batista-2006}. The $Q$ space consists of the 176 brightest (most optically accessible) $| \kappa \rangle$ resonances, having the largest values of $\langle \kappa | \mu | g \rangle$. In this case the QP-partitioning approach gives 76775 coarse-grained vibronic states $ | \overline{\alpha} \rangle$, with energies ranging from 4.06 to 6.06 eV. Thus, there are 76775$\times$176 = 13512400 $R^\alpha_{\alpha,\kappa} = \langle \overline{\alpha} | \kappa \rangle$ values. These are used together with 176 $\langle \kappa | \mu | g \rangle$ values to compute the dynamics of interest. \subsection{Uncontrolled Excitation and Decay Dynamics} Figure \ref{Figure-P_u_t_Diff_alpha_a} shows characteristic examples of $P_{S_2}(t)$ populations produced by single Gaussian laser pulses of differing time durations, where the subscript $u$ denotes ``uncontrolled". These examples are computed with the laser center frequency corresponding to 4.84 eV. It is notable that the uppermost population curve in Fig.
\ref{Figure-P_u_t_Diff_alpha_a}, produced by the pulse with a time duration $\sim$1 fs ($\alpha_a = 0.1$ fs) is, at times $t > 0.5$ fs, similar in shape to the zero-zero curve in Fig. 5 of Ref. \cite{Christopher-Pyrazine-3}. This is the case because the ultrafast laser pulse behaves like $\epsilon_a \delta(t)$ on the femtosecond timescale, and its finite-time Fourier transform is nearly constant, $\approx \epsilon_a$. As a consequence, in this specific case, after the pulse is over, $P_{S_2}(t)$ in Eq. (\ref{P_S_2_t_alpha_matrix_expansion}) is the same up to a constant scaling factor as the zero-zero $P_{S_2}(t)$ in Eq. (\ref{P_S_2_t_a_matrix_expansion-2}), with $c_{\kappa'} \propto (i/\hbar) \langle \kappa' | \mu | g \rangle \epsilon_a$. Figure \ref{Figure-P_u_t_Short_Diff_omega_a} shows $P_{S_2}(t)$ populations produced by Gaussian lasers having the same short time duration $\approx$10 fs ($\alpha_a = 1.0$ fs), but different center frequencies. In this case all populations behave similarly on a short time scale, differing in overall magnitude due to differences in $\langle \kappa' | \mu | g \rangle$ values for different resonances $| \kappa' \rangle$. Figure \ref{Figure-P_u_t_Long_Diff_omega_a} shows $P_{S_2}(t)$ populations produced by Gaussian lasers with long time duration around 200 fs ($\alpha_a = 20.0$ fs), using different frequencies. In contrast with Fig. \ref{Figure-P_u_t_Short_Diff_omega_a}, there are significant differences in $S_2 \leftrightarrow S_1$ IC dynamics, depending on the frequency used. Figure \ref{Figure-P_u_t_Long_Diff_omega_a} shows that the laser with 4.84 eV photon energy produces a larger population, which also tends to decay more slowly than in the other cases, thus marking a region of relative stability in the pyrazine resonance structure. Both Figs. \ref{Figure-P_u_t_Short_Diff_omega_a} and \ref{Figure-P_u_t_Long_Diff_omega_a} qualitatively correlate well with the corresponding results for $S_0 \to S_2 \leftrightarrow S_1$ dynamics in Ref. \cite{Pyrazine-Ioannis-2}, obtained using a more general non-perturbative time-dependent dynamical approach \cite{Pyrazine-Ioannis-1}. \begin{figure}[htp] \begin{center} \includegraphics[height = 11cm, width = 12cm]{Figure1.eps} \caption{ $S_2$ populations $P_{S_2} (t)$, denoted $P_u (t)$ here, produced by Gaussian laser pulses of different time duration. Panel inset: The same data, shown on a shorter time scale.}\label{Figure-P_u_t_Diff_alpha_a} \end{center} \end{figure} \begin{figure}[htp] \begin{center} \includegraphics[height = 11cm, width = 12cm]{Figure2.eps} \caption{ $S_2$ populations, $P_{S_2} (t)$, denoted $P_u (t)$ here, produced by short Gaussian laser pulses with the same $\alpha_a = 1.0$ fs, but different center frequencies.} \label{Figure-P_u_t_Short_Diff_omega_a} \end{center} \end{figure} \begin{figure}[htp] \begin{center} \includegraphics[height = 11cm, width = 12cm]{Figure3.eps} \caption{$S_2$ populations, $P_{S_2} (t)$, denoted $P_u (t)$ here, produced by long Gaussian laser pulses with the same $\alpha_a = 20.0$ fs, but different center frequencies.} \label{Figure-P_u_t_Long_Diff_omega_a} \end{center} \end{figure} \subsection{Control Involving Multiple Overlapping Resonances} Consider first sample numerical results for $H \left( \mathbf{\Omega}^\alpha \right)$, the measure of the extent of resonance overlap [Eq. (\ref{H_Omega_def})], and the quantities associated with it. These quantities are $H \left( \mathbf{K}^{\varepsilon,\mathbf{A}} (t) \right)$ [Eq.
(\ref{H_R_K_def})], the $\mathbf{K}^{\varepsilon,\mathbf{A}} (t)$ non-diagonality measure, evaluated at $T_1 = 150$ fs and $T_2 = 250$ fs; and two measures of the non-diagonality of $\mathbf{R}^{\varepsilon,\mathbf{A}} (T_2,T_1)$: $H_R \left( \mathbf{R}^{\varepsilon,\mathbf{A}} (T_2,T_1) \right)$ [Eq. (\ref{H_R_R_def})] and $|H_C \left( \mathbf{R}^{\varepsilon,\mathbf{A}} (T_2,T_1) \right)|$ [Eq. (\ref{H_C_R_def})]. In addition, we tabulate $\lambda^{R, \mathbf{A}}_{\min}$ and $\lambda^{R, \mathbf{A}}_{\max}$, denoting the minimal and maximal eigenvalues of the eigenproblem in Eq. (\ref{RelativeControlProblem-A}), which we term ``control extents". Values for 128 $I_A$ bins (degrees of freedom of the laser) are listed in Table \ref{Table-H_Omega}. Note first the enormous range of control possible for the ratio $P_{S_2}(T_2)/P_{S_2}(T_1)$ as indicated by $\lambda^{R, \mathbf{A}}_{\min}$ and $\lambda^{R, \mathbf{A}}_{\max}$. For example, for the first energy interval, this ratio can range from $3.05 \times 10^{-6}$ to $3.90 \times 10^{+5}$, a range of over $10^{11}$. \begin{table}[htp] \begin{center} \caption{The (1/128) power of $H \left( \mathbf{\Omega}^\alpha \right)$, $H \left( \mathbf{K}^{\varepsilon,\mathbf{A}} (T_1) \right)$, $H \left( \mathbf{K}^{\varepsilon,\mathbf{A}} (T_2) \right)$, $H_R \left( \mathbf{R}^{\varepsilon,\mathbf{A}} (T_2,T_1) \right)$, $|H_C \left( \mathbf{R}^{\varepsilon,\mathbf{A}} (T_2,T_1) \right)|$, as well as $\lambda^{R, \mathbf{A}}_{\min}$ and $\lambda^{R, \mathbf{A}}_{\max}$ for different energy intervals $[E_L, E_H]$. Here, $T_1 = 150$ fs, $T_2 = 250$ fs.} \begin{tabular}{llllll} \hline \hline $[ E_L, E_H ]$, eV & $[ 4.46, 4.66 ]$ & $[ 4.66, 4.86 ]$ & $[ 4.86, 5.06 ]$ & $[ 5.06, 5.26 ]$ & $[ 5.26, 5.46 ]$ \\ \hline $H \left( \mathbf{\Omega}^\alpha \right)$ & 1.76$\times$10$^{-1}$ & 2.77$\times$10$^{-1}$ & 3.09$\times$10$^{-1}$ & 3.15$\times$10$^{-1}$ & 2.80$\times$10$^{-1}$ \\ $H \left( \mathbf{K}^{\varepsilon,\mathbf{A}} (T_1) \right)$ & 1.08$\times$10$^{-2}$ & 2.68$\times$10$^{-2}$ & 9.05$\times$10$^{-2}$ & 1.36$\times$10$^{-1}$ & 1.13$\times$10$^{-1}$ \\ $H \left( \mathbf{K}^{\varepsilon,\mathbf{A}} (T_2) \right)$ & 1.23$\times$10$^{-2}$ & 2.50$\times$10$^{-2}$ & 9.45$\times$10$^{-2}$ & 1.29$\times$10$^{-1}$ & 1.06$\times$10$^{-1}$ \\ $H_R \left( \mathbf{R}^{\varepsilon,\mathbf{A}} (T_2,T_1) \right)$ & 1.41$\times$10$^{-4}$ & 8.00$\times$10$^{-4}$ & 1.20$\times$10$^{-2}$ & 2.38$\times$10$^{-2}$ & 1.51$\times$10$^{-2}$ \\ $ \left| H_C \left( \mathbf{R}^{\varepsilon,\mathbf{A}} (T_2,T_1) \right) \right| $ & 1.36$\times$10$^{-4}$ & 9.05$\times$10$^{-4}$ & 1.75$\times$10$^{-2}$ & 3.41$\times$10$^{-2}$ & 1.79$\times$10$^{-2}$ \\ $\lambda^{R, \mathbf{A}}_{\min}$ & 3.05$\times$10$^{-6}$ & 3.36$\times$10$^{-5}$ & 5.54$\times$10$^{-4}$ & 1.29$\times$10$^{-3}$ & 7.30$\times$10$^{-4}$ \\ $\lambda^{R, \mathbf{A}}_{\max}$ & 3.90$\times$10$^{+5}$ & 4.32$\times$10$^{+4}$ & 1.89$\times$10$^{+3}$ & 6.67$\times$10$^{+2}$ & 1.92$\times$10$^{+3}$ \\ \hline \hline \end{tabular} \label{Table-H_Omega} \end{center} \end{table} The measures in Table \ref{Table-H_Omega} are built from determinants of $128 \times 128$ matrices, i.e., effectively from products of 128 factors. Since each of these values is small, we report the (1/128) power of these measures. From Table \ref{Table-H_Omega} one can see a well-defined correlation between $H \left( \mathbf{\Omega}^\alpha \right)$ and the other quantities.
Generally, when $H \left( \mathbf{\Omega}^\alpha \right)$ is small, so too are $H \left( \mathbf{K}^{\varepsilon,\mathbf{A}} (T_1) \right)$, $H \left( \mathbf{K}^{\varepsilon,\mathbf{A}} (T_2) \right)$, $H_R \left( \mathbf{R}^{\varepsilon,\mathbf{A}} (T_2,T_1) \right)$ and $| H_C \left( \mathbf{R}^{\varepsilon,\mathbf{A}} (T_2,T_1) \right) |$ (meaning a larger extent of non-diagonality in the corresponding matrices). The correlation is also good with $\lambda^{R, \mathbf{A}}_{\max} - \lambda^{R, \mathbf{A}}_{\min}$: when this difference is large, a greater extent of coherent control is possible, in agreement with the non-diagonality measures. Numerical implementation of controlled $P_{S_2}(t)$ dynamics proceeded as follows. First, the eigenvalue problem in Eq. (\ref{RelativeControlProblem-A}) is numerically solved for the particular number of bins $N_A$ in the desired energy range $[ E_L, E_H]$, providing the set of eigenvalues $\lambda^{R, \mathbf{A}}$ and corresponding eigenvectors $\underline{\varepsilon}^\mathbf{A}$, which yield the $P_{S_2}(T_2)/P_{S_2}(T_1)$ ratios equal to $\lambda^{R, \mathbf{A}}$ during the $P_{S_2}(t)$ time propagation ($T_2 > T_1 \ge T_{over}$). Then a set of $N_A$ linearly independent Gaussian lasers [Eqs. (\ref{Gaussian_varepsilon_a_t})--(\ref{ControlledPulseSpectrum})] is introduced (all with the same $\alpha_a$), contiguously and uniformly covering the desired energy range $[E_L,E_H]$. The obtained eigenvectors $\underline{\varepsilon}^\mathbf{A}$ are then expanded in terms of this Gaussian basis with the $\mathbf{d}$ coefficients given by Eq. (\ref{d_solution}). The dynamics are then propagated from $t \le -T_{over}$ to $t \ge T_2$ using the corresponding $\mathbf{d}$ coefficients for each $\underline{\varepsilon}^\mathbf{A}$ eigenvector; finite-time Fourier transforms of the pulses in Eq. (\ref{FiniteTmeFourierTransformExpansion}) are produced using Eq. (\ref{epsilon_omega_t_Gaussian}), and the pulse time profiles are given in Eq. (\ref{epsilon_p_t}). The perturbative nature of the dynamics makes it possible to scale $P_{S_2}(t)$ uniformly by multiplying the $\underline{\varepsilon}^\mathbf{A}$ eigenvector by a scalar constant. We utilized this scaling to allow both the maximization and minimization results to be shown on the same figure (upper panel of Fig. \ref{Figure-P_c_t_128_bins}) below. Specifically, the maximization curve is multiplied throughout by $8.3 \times 10^{-5}$. An experimental suggestion of R. J. Gordon (University of Illinois, Chicago) prompted us to use a controllable laser in the wavelength range 250--265 nm, with time duration $\sim$150--200 fs, to study the pyrazine $S_0 \to S_2 \leftrightarrow S_1$ excitation and IC dynamics. Using this as a guide, we computed control and dynamics in the corresponding energy range ($E_L$ = 4.68 eV, $E_H$ = 4.96 eV), using $T_1$ = 150 fs, $T_2$ = 250 fs, $N_A$ = 128, and all $\alpha_a$ = 21.0 fs. The resulting $S_2$ populations, together with the corresponding control fields in the time domain, are shown in Fig. \ref{Figure-P_c_t_128_bins}, where the subscript c denotes ``controlled". The corresponding control fields in the frequency domain are shown in Figs. \ref{Figure-varepsilon_min_128_bins} and \ref{Figure-varepsilon_max_128_bins}. The controlled $P_{S_2} (t)$ (Fig. \ref{Figure-P_c_t_128_bins}) differs in magnitude between the region where the pulse is acting and that after the pulse is over. To understand this difference, note that to obtain the controlled fields in Figs.
\ref{Figure-varepsilon_min_128_bins} and \ref{Figure-varepsilon_max_128_bins} using a set of Gaussians requires that some components of the $\mathbf{d}$ vector be large. After the pulse is over, these components are ``balanced'' by one another in the \textit{infinite-time} Fourier transform, to give the small desired population value $P_0$ at $t = T_1$ or $t = T_2$ and to yield the required controlled dynamics. However, while the pulse is acting, these components are ``unbalanced'', giving large transient $\varepsilon_p(\omega,t)$ values. For similar reasons the controlled pulses, being linear combinations of single Gaussians, are effectively longer than the single Gaussian pulse (see Fig. \ref{Figure-P_c_t_128_bins}, lower panel). To examine the complex structure of the control pulses in Figs. \ref{Figure-varepsilon_min_128_bins} and \ref{Figure-varepsilon_max_128_bins}, we apply several approaches to simplify the field while monitoring the control achieved. First, we attempted a local averaging of the controlled field, where the total field in $N_A$ bins is arithmetically averaged (amplitude and phase separately) using a smaller number $N_S$ of larger bins ($N_A$ being an integer multiple of $N_S$; for example, for $N_A = 64$, $N_S$ = 32, 16, 8, 4, 2). The resulting averaged field, however, showed virtually no control. Second, this averaged step-like field was expanded in $N_S$ Gaussians, and the resulting smoothed field was used for the propagation. Again, this led to nearly complete loss of control. \begin{figure}[htp] \begin{center} \includegraphics[height = 14cm, width = 12cm]{Figure4.eps} \caption{Upper panel: Two controlled $S_2$ populations, $P_{S_2}(t)$, denoted $P_c (t)$, which either minimize or maximize $\lambda^{R, \mathbf{A}}$, i.e., the $S_2$ population ratio at times $T_2$ = 250 fs and $T_1$ = 150 fs. The $P^{max}_c$ curve has been multiplied by $8.3 \times 10^{-5}$ in order to fit on this figure. Lower panel: Time envelopes of two corresponding controlled laser pulses, $|\varepsilon_p (t)|$, together with the time envelope of the single (uncontrolled) Gaussian laser pulse, $|\varepsilon_a (t)|$.} \label{Figure-P_c_t_128_bins} \end{center} \end{figure} \begin{figure}[htp] \begin{center} \includegraphics[height = 7cm, width = 10cm]{MinEigVecAmplitude_128_eV.eps} \includegraphics[height = 7cm, width = 10cm]{MinEigVecPhase_128_eV.eps} \caption{Amplitude and phase of the $\underline{\varepsilon}^\mathbf{A}_p$ eigenvector, which minimizes the $P_{S_2} (T_2) / P_{S_2} (T_1)$ ratio. $\lambda^{R, \mathbf{A}}_{\min}$ = 8.28$\times$10$^{-5}$.} \label{Figure-varepsilon_min_128_bins} \end{center} \end{figure} \begin{figure}[htp] \begin{center} \includegraphics[height = 7cm, width = 10cm]{MaxEigVecAmplitude_128_eV.eps} \includegraphics[height = 7cm, width = 10cm]{MaxEigVecPhase_128_eV.eps} \caption{Amplitude and phase of the $\underline{\varepsilon}^\mathbf{A}_p$ eigenvector, which maximizes the $P_{S_2} (T_2) / P_{S_2} (T_1)$ ratio. $\lambda^{R, \mathbf{A}}_{\max}$ = 8.38$\times$10$^{+3}$.} \label{Figure-varepsilon_max_128_bins} \end{center} \end{figure} An alternative simplifying approach was, however, successful. Specifically, we retained only the $N_R$ largest field amplitudes out of the total $N_A$ (with all the smaller amplitudes set to zero), keeping the phase profile intact, and monitored the changes in control ratios. Sample results for a total of $N_A = 64$ amplitudes are shown in Fig. \ref{Figure-N_R_64_bins} (results with $N_A = 128$ are qualitatively the same).
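A minimal sketch of this truncation procedure (hypothetical field array) is:
\begin{verbatim}
import numpy as np

def truncate_amplitudes(eps, n_keep):
    # Retain the n_keep largest |eps_A|; zero the rest. The retained
    # components keep their original complex phases.
    out = np.array(eps, dtype=complex)
    order = np.argsort(np.abs(out))      # ascending by amplitude
    out[order[:-n_keep]] = 0.0
    return out

rng = np.random.default_rng(4)
eps = rng.normal(size=64) * np.exp(1j * rng.uniform(0, 2 * np.pi, 64))
eps_16 = truncate_amplitudes(eps, 16)    # N_R = 16 out of N_A = 64
print(np.count_nonzero(eps_16))          # 16
\end{verbatim}
The truncated field is then propagated exactly as before, and the resulting $\lambda^{R, \mathbf{A}}$ ratios are compared with the untruncated ones.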
It is clear from Fig. \ref{Figure-N_R_64_bins} that this approach, retaining only the largest amplitudes, works better than the previous two, since it tends to partially maintain important dynamical information. Generally, $\lambda^{R, \mathbf{A}}_{\min}$ is more robust with respect to this amplitude truncation than is $\lambda^{R, \mathbf{A}}_{\max}$. Additionally, we found that the extent of control achieved using only $N_R$ amplitudes out of $N_A$ is similar in magnitude to the control extent obtained without truncation but with $N_R$ taken as the original $N_A$. That is, the same number of degrees of freedom in both cases provides similar extents of control. \begin{figure}[t] \begin{center} \includegraphics[height = 9cm, width = 12cm]{Figure7.eps} \caption{Upper panel: Dependence of $\lambda^{R, \mathbf{A}}_{\min}$ on the number of retained amplitudes $N_R$. Lower panel: The same, but for $\lambda^{R, \mathbf{A}}_{\max}$. Total number of amplitudes $N_A$ = 64.} \label{Figure-N_R_64_bins} \end{center} \end{figure} Theoretically, maximum and minimum control limits via this approach can be reached using all coarse-grained $| \overline{\alpha} \rangle$ states accessible to the laser, \textit{i.e.}, those belonging to the interval of interest $[E_L, E_H]$. For the case presented in Figs. \ref{Figure-P_c_t_128_bins}, \ref{Figure-varepsilon_min_128_bins} and \ref{Figure-varepsilon_max_128_bins}, the number of $| \overline{\alpha} \rangle$ states, using our pyrazine description, is 11885. However, as mentioned in the Appendix, the optimization problem for $| \overline{\alpha} \rangle$ states in Eq. (\ref{RelativeControlProblem-alpha}) is numerically stable only up to dimensionality 150--180, and the control range $\lambda^{R, \alpha}_{\max} - \lambda^{R, \alpha}_{\min}$ continues to increase when the dimensionality increases from 128 to 180, reaching $\sim 10^5$. We anticipate a theoretical control range limit of $\sim 10^{9}$--$10^{10}$, which, however, is not achieved due to the numerical limitations discussed in the Appendix (see below). \section{Summary and Conclusions} \label{Summary} Coherent control of internal conversion (IC) between the first and second singlet excited electronic states of pyrazine ($S_1$ and $S_2$) is examined, using two different control objectives. The control is performed by means of shaping the laser, which excites the system from the ground electronic state $S_0$ to the second excited electronic state $S_2$. Resonance energy broadening and resonance overlap are shown to be responsible for phase control efficiency, and a correlation between resonance overlap and controllability is established. A huge range of control was obtained for the relative population of $S_2$ at long times as compared to times just after the pulse is over. Different ways to simplify the controlled fields are described, and the behavior of the control as a consequence of these simplifications is investigated. Specifically, we have found that retaining the largest field amplitudes is the best approach to field simplification. \section{Acknowledgements} This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC). This manuscript summarizes one of the last joint efforts of the Brumer and Shapiro research groups. The topic, overlapping resonance effects, was beloved by Moshe. P.B. is grateful for the opportunity to have interacted with such an outstanding scientist for over 40 years, publishing over 120 joint papers and two books.
\section{Background} Prospects for understanding the evolution of the Galactic center region were boosted by the discovery and quantitative investigation \citep{schodel} of a star designated as S2 that is bound to the Galaxy's supermassive black hole (SMBH) in a 15.8 year orbit, with additional stars in the region under similar investigation -- see also \citet{ghez}. Many papers have since added observations and ideas on individual SMBH orbiters and on the statistics and collective properties of stars in the inner Galactic center region, e.g. \citet{ghezbeck,ghezduch,alex,blum,eisenhauer,gill09,davies,gill13,witzel} and an extensive review by \citet{genzel}. Issues abound regarding the evolutionary states of these stars and how they came to be in the inner Galactic center, e.g. \citet{davies,zhang,gill09,genzel}. Stars arriving on nearly parabolic or even hyperbolic orbits could suffer major stripping on initial pericenter passage \citep{davies}, with the lost material carrying away enough orbital energy to leave the remnant in an elliptical orbit, filling its limiting lobe at pericenter. Alternatively, a star that has been trapped into a tight orbit while well detached from its lobe could later undergo evolutionary expansion and attain lobe filling. Perhaps a lost binary companion may carry off the requisite orbital energy at first encounter with the SMBH and leave the remaining star bound. \citet{eisenhauer} found most of the brighter inner orbiters to be B0 to B9 main sequence stars, with S2 in the range O8 to B0, and with rotation velocities typical of B stars in the Galactic disk. \citet{davies} argued that they are actually tidally stripped remnants of AGB stars that now superficially resemble main sequence stars and estimated S2's mass at below a solar mass, specifically about 0.8 $M_\odot$. All in all, mass estimates for S2 that have been published or correspond to observed (main sequence) spectral types range from 0.8 to more than 20 $M_\odot$. Accordingly, we simply adopt 10 $M_\odot$ for exploratory lobe size computations. \section{Brightening of Star S2 Due to Lobe Overflow} \label{s2_sec} S2 has continued as the most thoroughly discussed SMBH orbiter, largely due to its having been observed over more than a full orbit, and having brightened by about $0.^m40$ in K band, coincident with pericenter passage \citep{gill09}. \citeauthor{gill09} offered seven ideas for explaining the brightening; however, they then ruled out four of the ideas and argued against the likelihood of the other three. Consequences of lobe overflow -- a very common player in a variety of close binary issues -- were not among the seven ideas. Lobe overflow might not be considered if one were thinking only of the huge orbital scale (of order 1000 AU) compared to the size of a main sequence star, but it turns out that S2's limiting lobe is actually similar in size to a main sequence star of, say, 10 $M_\odot$, as shown in \S\ref{loberad}. Lobe overflow is an attractive idea for S2 brightening -- one could postulate that S2 exceeded its limiting lobe at the 2002 pericenter passage and ejected a strong puff of material that quickly expanded in vacuum so as to appear as a rather large cloud of brightly emitting gas. The lobe size issue will now be addressed. \section{Lobe Size Essentials} \label{lobesize} To place the present work in context, we review the formal relations in estimates for (1) tidal radii and (2) limiting lobes.
Although these two terms are quite distinct, both often go by the name 'Roche limit' and some recent papers treat them as equivalent, thereby leading to considerable confusion and perhaps even wrong conclusions. Item 1, tidal radius, concerns the distance from a mass at which an idealized fluid mass (usually a small satellite) is disrupted when tidal stretching matches or exceeds the satellite's cohesion due to self-gravity. The simple tidal radius concept considers test particles on the surface of a self-gravitating sphere of mass $m$ and radius $r$ at a distance $d$ from an object of mass $M$. The test particles, located on opposite ends of a diameter of $m$ on the line of centers, suffer a stretching force (surface to center) per unit particle mass due to external mass $M$ of $2GMr/d^3$, assuming $r$ to be very small. The effective compressional force per unit particle mass between surface and center that can result in a static configuration is the object's surface gravity, $Gm/r^2$. Quantity $d$ is the tidal radius and marks the distance from $M$ at which a very small object is disrupted, so the final relation pertains to the test particles being arbitrarily close together. If $r$ is not small compared to $d$, then the simple relation may still give the tidal radius approximately, with only gravitation considered, although the full relation is then more complicated. A flaw in this picture with regard to stretched objects of finite size is that the satellite is presumed spherical, whereas tidally stretched stars lack front-to-back symmetry (i.e. have "teardrop" shapes). But more important for the case of star S2 and probably other SMBH orbiters is that rotation of $m$ is not considered in the traditional tidal limit development. In summation, the tidal limit relation between $d$ and $r$, $d=r(2M/m)^{1/3}$, can be inverted with $d$ set to the orbital separation to give roughly correct limiting size where the assumptions apply, namely \textit{for small, synchronously rotating\footnote{In this context, 'synchronous rotation' means that angular star rotation and mean orbital angular rotation are equal or, equivalently, that the star rotates once per orbit period. Note that there are other meanings of 'synchronous rotation', each valid in its own context.} satellites}, but may be wrong by orders of magnitude for fast rotating stars such as S2. Item 2 is commonly called a Roche lobe, although it was not originated or even considered by Roche, who did however consider a special case of the potential utilized today. As the idea is to specify a size limit set by the condition that material not be spilled from a star, the descriptive term 'limiting lobe' serves well. The insight that spawned the concept came from \citet{kuiper}, who realized that tidal force is an unnecessary complication with regard to the lobe size limit, as only one point, not two, need be considered, and only ordinary effective gravity at that point, not differences between two points, need be computed. The procedure is (step 1) to find the point along the line of centers where ordinary gravitational (not differential tide raising) forces due to $M$ and $m$, along with local rotational force, add to zero. Material that is stationary in a frame that co-rotates with the star is not bound to the star at this special point, so an ejection nozzle forms. 
Asynchronous examples are now treated via a factor $F^2$ (see \S\ref{loberad}) that alters the centrifugal term without affecting the basic idea of locating the effective gravity null point \citep{plavec58,limber}. A definite equipotential that defines the star surface passes through the special point of null effective gravity, so (step 2) numerically integrate the volume, $V_{lobe}$, enclosed by that equipotential and thereby find the equivalent-sphere mean radius as \begin{equation} R_{mean}=(3V_{lobe}/{4\pi})^{1/3}. \label{rlobe} \end{equation} Kuiper assumed synchronous rotation, which is the expected and observationally indicated case for very close binaries (due to tidal locking). Conditions that lead to small limiting lobes for SMBH orbiters are the enormous mass ratio, large orbital eccentricity, and -- not previously emphasized in the literature -- fast rotation. For fixed SMBH mass\footnote{We adopt $M_{SMBH}= 4.31\times 10^6$ $M_{\odot }$ \citep{gill09} and thus a mass ratio, $M_{SMBH}/M_{10}$, of $4.31\times 10^5$ for a $10$ $M_{\odot }$ star.}, lobe size decreases with decreasing star mass, decreasing orbit size, increasing eccentricity, and increasing star rotation. S2 is known to be a fast rotator, while any kind of tidal locking would produce exceedingly slow rotation in view of the 15.8 year \citep{ghezduch} orbit period, $P_{orb}$. The fastest locked rotations would be for locking to the pericenter orbital angular rate and give $P_{rot}$ around half a year for S2, whereas the \textit{measured} $V_{rot} \sin i$ is $220\pm40$ $km$ $sec^{-1}$ \citep{ghezduch}, so there is no tidal locking of any kind. Whether the star's equator is aligned with the orbit plane is not known, but the orbital $\sin i$ is $0.7040\pm0.0058$ \citep{gill09} so, under the assumption of alignment, $V_{rot} \approx 312\pm57$ $km$ $sec^{-1}$. The corresponding angular rotation, assuming $R_{eq}=5.0 R_\odot$, is $\approx 7100$ times the mean orbital angular rate, and rotational force becomes important in setting local effective gravity. It will be shown below that the problem of main sequence SMBH orbiters being too small to exceed their limiting lobes can disappear if they rotate at typical B star rates, as does S2. \subsection{The Eggleton Approximation} Approximation formulas are often used to estimate lobe size, most commonly one by \citet{egg83} that reproduces accurately computed mean lobe radii, based on the Kuiper logic, to better than 1 percent over the full range of mass ratio, from $0$ to $\infty$\footnote{Incidentally, we checked the 1 percent accuracy statement in \citet{egg83} at the request of Prof. Eggleton, finding no discrepancies as large as $0.8$ percent among 18 widely spread mass ratios, of which only two exceeded half a percent. Column 2 of Eggleton's Table 1 (mean lobe radii from integrated volumes) was reproduced to all printed digits except for two differences of 1 in the last place. Column 6, the Eggleton approximation, was reproduced exactly. The checks were done with the WD \citep{wd71} computer model.}. Although a rather small computer program can generate such lobe radii with negligible error, and some public binary star programs list lobe radii as incidental output, the Eggleton approximation has provided a one-line lobe calculation in many evolutionary programs where 1 percent accuracy may be sufficient. Note, however, that the Eggleton formula is specifically for synchronous rotation and not meant for stars that rotate faster or slower than synchronously.
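For reference, the \citet{egg83} fitting formula for the synchronous, circular-orbit case -- quoted here from the general close binary literature rather than re-derived -- is
\begin{equation}
\frac{R_{mean}}{a}=\frac{0.49\,q_E^{2/3}}{0.6\,q_E^{2/3}+\ln\left(1+q_E^{1/3}\right)},
\end{equation}
\noindent where $q_E$ is the mass of the lobe-filling star divided by that of its companion and $a$ is the separation of centers.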
It will give limiting lobe radii that are too large by orders of magnitude if applied to fast rotators such as star S2, and may be responsible for misleading conclusions where rotation rates are unknown. An algorithm that follows the Kuiper logic, enhanced to handle arbitrary rotation and eccentricity, is not difficult to program and avoids the approximations of fitted formulas. Most \textit{accurate} limiting lobe computations now adopt Kuiper's strategy, usually via one of the commonly used binary system light/velocity curve programs, although another option can be the collection of intricate approximation formulas by \citet{sepinski} that account for asynchronism and eccentricity. \section{Quantitative Estimate of S2's Lobe Size} \label{loberad} Computation of a binary component's limiting lobe geometry begins with solution for a point of null effective gravity along the line of star centers (x-axis), thereby locating the nozzle from which matter flows if the lobe is filled or slightly overfilled. The relevant equation for the S2 problem must account for orbital eccentricity and the star's rotation in addition to the gravity of both objects, as does eqn. 3 of \citet{wils79} for the derivative of potential\footnote{The potential is a modified version according to the convention in \citet{kopal}.} in the x-direction, which is zero at the null point. That equation is \begin{equation} \frac{d\Omega}{dx}=-\frac{x}{(x^2+y^2+z^2)^{3/2}} + \frac{q(D-x)}{([D-x]^2+y^2+z^2)^{3/2}} + F^2(1+q)x -q/D^2. \label{dodx} \end{equation} \noindent Rotation enters via a parameter $F$, the ratio of rotational angular velocity $\omega_{rot}$ to mean (i.e. time-averaged) orbital angular velocity $\omega_{orb}$. Other input quantities are the component mass ratio ($q=M_2/M_1$), momentary separation of star centers ($D$), and $x,y,z$ rectangular coordinates of a point at which $d\Omega/dx$, and subsequently $\Omega$, are to be computed. Here S2 is taken to be star 1 and the SMBH is object 2, so the mass ratio is a large number rather than its reciprocal. In computations with equation \ref{dodx}, the coordinates $x$, $y$, $z$ and the separation $D$ are all in units of $a$, the semi-major axis of the relative orbit, with $D=1-e$ at periastron or pericenter. The location of the null point along the line of centers is found by setting $y=z=0$ and $d\Omega/dx$ also to $0$, setting the dimensionless angular rotation $F$ and eccentricity $e$ to values of interest, and then solving for $x$ by numerical inversion (such as Newton-Raphson iteration). The potential at the null point then establishes the lobe surface's 3-dimensional form as an equipotential that includes the null point (see eqn. 1 of \citet{wils79} for the generalized defining potential). The equipotential's enclosed volume ($V$) can then be integrated numerically via the defining equation and a mean lobe radius found from eqn. \ref{rlobe}. A final step computes equatorial rotation velocity, $V_{eq}$, from angular velocity. That calculation is simplified by the star being almost axially symmetric and its equator circular at these fast rotation rates, so there is no issue of where along the equator the result applies. Accordingly \begin{equation} V_{eq}=R_{eq}\omega_{orb}F, \end{equation} \noindent with length in km, time in seconds, and mean orbital angular velocity, $\omega_{orb}=2\pi/P_{orb}$, in $radians/sec$.
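As a quick check on the $F \approx 7100$ figure quoted in \S\ref{lobesize}, the relation above can be inverted for $F$ from the measured rotation. The following minimal Python sketch (illustrative only, and no part of the WD program; $R_{eq}=5.0 R_\odot$ is the assumption already adopted there) reproduces the number:
\begin{verbatim}
# Invert V_eq = R_eq * omega_orb * F for F, using quantities quoted
# in the text; R_eq = 5 R_sun is the assumed equatorial radius.
import math

R_SUN_KM = 6.957e5                   # solar radius [km]
YEAR_S   = 3.156e7                   # year [s]

v_eq      = 220.0 / 0.7040           # V_rot*sin(i)/sin(i), ~312 km/s
omega_rot = v_eq / (5.0 * R_SUN_KM)  # stellar angular velocity [rad/s]
omega_orb = 2.0 * math.pi / (15.8 * YEAR_S)

print("F =", omega_rot / omega_orb)  # ~7.1e3
\end{verbatim}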
The binary star modeling and analysis program (WD program\footnote{The WD program's most recent public version, with documentation and sample input files, can be downloaded from anonymous FTP site ftp.astro.ufl.edu. Go to sub-directory pub/wilson/lcdc2013.}) applied here has refinements that allow reliable operation in difficult circumstances. For example, its Newton-Raphson iterations (for inversion of equation \ref{dodx} to find the effective gravity null point) evaluate several Taylor series terms beyond the usual first derivative term. This point is mentioned so that readers who may write their own inversion program to check our results are not disappointed by failed computations. A relatively simple inversion scheme can converge well for ordinary mass ratios but not for ultra-large mass ratios such as the $4.31\times 10^5$ of the present problem. Also important for fractionally tiny lobes (large $q$, large $F$) is to begin iterations already close to the null point, so as to avoid an initial jump \textit{beyond} the proper range between the star centers, from which recovery is difficult. Fortunately such a configuration admits particularly good starting estimates of the null point's location. To see this readily, write eqn. \ref{dodx} as it applies along the line of centers at the null point, \begin{equation} 0= -\frac{1}{x^2} + \frac{q}{(D-x)^2} +F^2 (1+q) x -\frac{q}{D^2}. \end{equation} \noindent This form is a quintic equation in $x$, soluble only iteratively, but with $x$ very small the second and fourth terms on the right side very nearly cancel so that the remaining terms (also replacing $1+q$ with the very large $q$) lead to a simple result, \begin{equation} x\approx F^{-2/3} q^{-1/3}. \end{equation} \noindent The approximation is reasonably accurate only for quite small $x$, although very accurate for SMBH orbiters and perhaps usefully accurate for $M_2/M_1$ of a few hundred or more. Inputs to the lobe size computation for S2 were $e=0.88$ \citep{gill09} and $M_{SMBH}/M_{S2}=4.31\times 10^5$ (mass ratio), along with a few well spaced $F$'s. One of the $F$'s is close to the nominal value of 7100 that goes with our rough estimate of $V_{eq}$ that assumed alignment of the equatorial and orbit planes in \S\ref{lobesize}. The resulting mean lobe radius is $6.5 R_\odot$, which is larger than a 10 $M_\odot$ main sequence star (about 3 $R_\odot$ on the ZAMS to 5 $R_\odot$ at the TAMS), although the spectral type estimate by \citet{eisenhauer} extends to O8, for which a main sequence radius can exceed $6.5 R_\odot$. A stripped highly evolved star that \textit{resembles} a main sequence star, as in \citet{davies}, remains a candidate. With either kind of star, the idea of lobe overflow at pericenter passage now becomes a real possibility. Table \ref{lobe} has mean lobe radii\footnote{Note that these are 'equivalent sphere' radii, not distances to the effective gravity null point.} for four assumed angular rotation velocities (F's) of the 10 $M_\odot$ model orbiter to give a sense of how steeply lobe size depends on rotation rate. A check to see if the program gives the right order of lobe size is provided by calculation of the equatorial radius of a 10 $M_\odot$ isolated star (no SMBH) that is marginally unbound at the equator while rotating at one of the table values, $307$ $km$ $sec^{-1}$. If the magnitudes of rotational and gravitational force are then equated, the equatorial radius will be given by $R_{eq}=GM/V^2$, which evaluates to $20.2 R_{\odot}$ for a 10 $M_\odot$ star.
The corresponding \textit{mean} radius will be smaller since $R_{pole}$ is smaller than $R_{eq}$, so rotation alone produces a limiting size only about three times greater than do the combined effects of rotation and the SMBH gravity. The purely gravitational lobe radius for a slowly rotating star, with $e=0.88$ and the present problem's adopted masses, is $\approx 100 R_{\odot}$, so the effect of fast rotation on lobe size is not small. \subsection{Why Such Large Scale Ejection?} A remaining issue is why a \textit{huge} puff would be ejected at pericenter passage. The ordinary context of lobe overflow is the synchronous-circular case that is commonly encountered in close binary systems, where gas leaks out quiescently and is usually difficult or impossible to detect photometrically. S2, being a very fast rotator, will not undergo the gentle process of the synchronous-circular case with its low ejection velocity. The supersynchronous case is very different, with an ejection velocity close to the star's equatorial velocity, which is of order 300 $km$ $sec^{-1}$ for our model of S2. And why would a large amount of gas be ejected? Suppose the \cite{davies} proposal, that the close-in orbiters are tidally stripped highly evolved stars, is correct, and that S2 is typical. Well known (e.g. \citet{plavec68}) is that radii of highly evolved (i.e. chemically stratified) stars increase with loss of envelope matter, in contrast with shrinkage for unevolved and modestly evolved stars. S2 has 15.8 years between pericenter passages to expand following each pass and could arrive at pericenter not just marginally filling its lobe but substantially overfilling it. Although a quantitative estimate of the overfilling will require reasonably good estimates of S2's internal structure that are not now in hand, the qualitative picture is that S2 may reach pericenter ready to send very fast moving gas through a large open nozzle, leading to a very large ejection event. One test of this idea, waiting for the next pericenter passage, is that emission lines should appear as the ejected gas expands and becomes optically thin. Naturally, some or all of these expectations may be previewed as other SMBH orbiters pass through their pericenters.
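As a rough cross-check on the numbers above, the following minimal Python sketch (emphatically not the WD program, whose iterations are far more robust) solves the quintic of \S\ref{loberad} for the null point, seeded with the approximation $x\approx F^{-2/3}q^{-1/3}$. The semi-major axis, which is not quoted explicitly above, is here derived from Kepler's third law using the adopted $M_{SMBH}$ and $P_{orb}$:
\begin{verbatim}
# Newton-Raphson solution for the effective-gravity null point.
# Assumption: a from Kepler's third law with M_SMBH = 4.31e6 M_sun
# and P_orb = 15.8 yr, giving a ~ 1.0e3 AU.
q, F, D = 4.31e5, 7100.0, 1.0 - 0.88   # mass ratio, rotation, 1-e

def g(x):   # d(Omega)/dx on the line of centers (y = z = 0)
    return -1.0/x**2 + q/(D - x)**2 + F**2*(1.0 + q)*x - q/D**2

def gp(x):  # derivative of g with respect to x
    return 2.0/x**3 + 2.0*q/(D - x)**3 + F**2*(1.0 + q)

x = F**(-2.0/3.0) * q**(-1.0/3.0)      # starting estimate, ~3.6e-5
for _ in range(20):
    x -= g(x)/gp(x)                    # converges in a few steps

a_AU = 4.31e6**(1.0/3.0) * 15.8**(2.0/3.0)   # Kepler's third law
print("null point: x = %.3e a = %.1f R_sun"
      % (x, x * a_AU * 1.496e8 / 6.957e5))

# Isolated-star check from the text: R_eq = G M / V^2 for 10 M_sun
# at V = 307 km/s, with G M_sun = 1.327e11 km^3 s^-2.
print("R_eq(isolated) = %.1f R_sun" % (1.327e11*10/307.0**2/6.957e5))
\end{verbatim}
\noindent The null-point distance found this way, roughly $8 R_\odot$, lies somewhat above the $6.5 R_\odot$ equivalent-sphere mean lobe radius quoted above, as it must, and the isolated-star check reproduces the $20.2 R_{\odot}$ figure.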
\section{Introduction} The state of the art in Agent-based Modelling (ABM) is to identify phenomena of interest around which small models are developed to support specific explanations. Examples include the emergence of cooperation in competitive environments, analysed by games of Iterated Prisoner's Dilemma (IPD), or the dynamics of opinion formation in networks, using variations on the Voter model. Often the design of such models is ad hoc, in particular when it comes to choosing and validating their ingredient features and calibrating their parameters. For IPD we may consider features such as network structure, decision making, learning, and network change. Opinion formation models can address the impact of media and external events, network structure, network change, cognitive dissonance, psychological factors, strength of conviction, etc. A scientific approach requires both \begin{enumerate} \item a systematic analysis of the choices faced when creating models, including a methodology to compare and assess their impact; \item a way to calibrate a model's parameters and validate its results against empirical observations. \end{enumerate} Without addressing these requirements, models will only be useful in answering isolated questions, making little contribution to a comprehensive understanding of the phenomena themselves or of their relation to reality. In this paper we consider an approach to answering the first question, the systematic analysis of the combinations of features that could be addressed in ABMs. We propose techniques from software engineering and the semantics of modelling and programming languages to support the design and assessment of such features, in particular \begin{itemize} \item graph transformation, as a semantic representation for agent-based models \item feature diagrams, to identify ingredients under consideration and state their interdependencies \item extension relations between graph transformation systems, to represent model fragments expressing features \end{itemize} \section{Background} Agent-Based Modelling~\cite{chattoe-brown2013}, hereafter ABM, is a technique for understanding social phenomena, distinct from both statistical approaches (like regression analysis) and those based on narrative accounts (like ethnography). Its distinctiveness arises from the use of a computer program explicitly representing the cognition and (inter)action of ``agents'' (simulated social actors) to explore the aggregate effects of these. This gives the usual advantages of formal approaches (when compared to the risk of incomplete, faultily reasoned or contradictory narratives) without the implausible simplifying assumptions often required of such approaches. The approach also gives rise to a distinctive methodology in which assumptions about human behaviour (based on qualitative interviews or experiments for example) can be calibrated independently of the match between real aggregate data and its simulated equivalents (validation). This approach is importantly different from the more common ``fitting'' of statistical models in which match is not achieved as a falsifiable hypothesis (as it is in ABM) but by deliberate adjustment of model parameters without a robust social interpretation. (The slope of a regression line tells us about a pattern in data. It is not clear if it tells us anything about individual behaviour.)
ABMs are also important because systems of interacting agents are often complex, leading to counter-intuitive outcomes in aggregate even when individual behaviours are understood. This makes causal inference from statistical regularities to individual behaviour and ``grossing up'' from individual behaviour narratives to aggregate outcomes potentially unreliable. Despite the compelling logic of this methodology, however, large numbers of non-empirical (not calibrated or validated) ABMs are still published \cite{angus2015}. These often select elements of a social phenomenon arbitrarily (for example social influence, social networks, geography and rationality), which makes them potentially non-comparable as well as non-empirical (see for example \cite{chiang2013,izquierdo2008,power} as typical examples from the huge ABM literature on the Prisoner's Dilemma). Obviously the most effective way to evaluate these non-empirical models would be using data \cite{chattoe-brown2014}. Unfortunately, for a variety of reasons (at least some of which are scientifically legitimate) this is not always feasible. Under these circumstances, new tools that allow us to systematically explore and evaluate the space of alternative models are a valuable alternative. \section{Example: The SIR Model} An SIR model is an epidemiological model of the spread of a disease in a population of agents, predicting the number infected with a contagious illness over time. Fig.~\ref{fig:SIRbasic} shows the most basic version of such a model. Individual agents change state from $S$usceptible to $I$nfected before becoming $R$esistant. Modelled as a graph transformation system, agents and their data are specified by a type graph and their actions by graph transformation rules, as shown in Fig.~\ref{fig:SIRbasic}. The type graph defines a single node type \emph{Agent} with an attribute \emph{s} for state that can assume values $S, I$ or $R$. Rule \emph{infect} describes how an agent can change state from $S$ to $I$ in the presence of another $I$nfected agent. Rule \emph{recover} states that an infected agent can become resistant. \begin{figure}[h] \centerline{\includegraphics[scale=0.3]{SIRbasic.pdf}} \caption{Basic SIR model\label{fig:SIRbasic}} \end{figure} The basic SIR model may be able to predict levels of infection in situations where agents have equal probability of interaction, but this is not in general the case. In order to get more accurate predictions we have to incorporate information about the conditions under which infections happen. Obvious factors are location and social connections. To account for these, the model can be extended as shown in Fig.~\ref{fig:SIRfeatures}. The \emph{location} model adds a location attribute $l$ to each agent and limits the \emph{infect} rule to cases where both agents are in the same location. In addition, rules are added for moving in different directions. Rule \emph{north} is shown as an example. The \emph{network} model extends the base model by allowing agents to be linked. There is no movement, but the \emph{infect} rule is restricted to agents that are connected. In addition to network structure, we can introduce a \emph{dynamics} feature, as shown in the last model. A rule is added to allow agent $a1$ to switch connections, changing its behaviour to avoid infection through $a2$. Apart from making predictions, agent-based models represent hypotheses about the mechanisms and factors affecting social processes.
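To make the rule set concrete, the following minimal Python sketch simulates the base model (rules \emph{infect} and \emph{recover}) for well-mixed agents. It is an illustrative toy rather than the graph transformation semantics itself, and the population size and per-step probabilities are arbitrary choices:
\begin{verbatim}
# Toy stochastic run of the basic SIR rules (well-mixed agents).
# Population size and probabilities are illustrative, not calibrated.
import random

random.seed(1)
N, P_INFECT, P_RECOVER, STEPS = 200, 0.2, 0.02, 300
state = ['S'] * N
state[0] = 'I'                       # one initially infected agent

for t in range(STEPS):
    for i in [j for j in range(N) if state[j] == 'I']:
        k = random.randrange(N)      # random contact
        if state[k] == 'S' and random.random() < P_INFECT:
            state[k] = 'I'           # rule 'infect'
        if random.random() < P_RECOVER:
            state[i] = 'R'           # rule 'recover'
    if t % 75 == 0:
        print(t, {s: state.count(s) for s in 'SIR'})
\end{verbatim}
\noindent Adding the \emph{location}, \emph{network} and \emph{dynamics} rules of Fig.~\ref{fig:SIRfeatures} would restrict or extend the contact step accordingly.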
For example, we can compare predictions of different versions of the model with the observed behaviour to understand which of the conditions (location, static or changeable connections) are significant to the spread of certain diseases. It is worth stressing that ABMs used for prediction or as hypotheses about real processes will be much more sophisticated than the minimal examples chosen for this paper. \begin{figure} \centerline{\includegraphics[scale=0.3]{SIRfeatures.pdf}} \caption{SIR model features\label{fig:SIRfeatures}} \end{figure} \section{Feature Modelling and Composition} To support the use of feature modelling in ABMs, features need to be identified, specified, composed and assessed for their relevance. We also consider the extraction of reusable features into model libraries. \paragraph{Feature Identification.} For a given phenomenon (e.g., opinion formation or disease propagation) we want to know \begin{itemize} \item What are the features worth considering? \item What configurations of features are meaningful? \end{itemize} These questions are best answered by studying existing literature or models, which is beyond the scope of this discussion. The result of such an analysis however can be expressed as a feature diagram $FD = (F, T)$ consisting of a set of features $F$ and a tree-like diagram $T$ with the features in $F$ as nodes, describing valid configurations $C \subseteq F$. \paragraph{Feature Specification and Composition.} A feature model $FM = (FD, M, m)$ is made up of a feature diagram $FD$, an underlying model $M$ incorporating all features, usually called the 150\% model to indicate that it is not in itself a meaningful model but one requiring further restrictions, and a mapping $m$ identifying features $F$ in $M$. From this we can derive a variant $M_C$ of $M$ for every valid configuration $C$. \begin{figure} \centerline{\includegraphics[scale=0.25]{feature-model.pdf}} \caption{SIR feature model\label{fig:feature-model}} \end{figure} The feature model for the SIR model and its extensions is shown in Fig.~\ref{fig:feature-model}. The tree-like diagram on the left shows that the base feature \emph{SIR} can be extended by optional features \emph{location} and \emph{network} such that the latter admits a further extension by \emph{dynamics}. The mapping of features to model elements (types and rules) is illustrated in the 150\% model shown on the right. All unlabelled (black) elements belong to the base model. Elements in light blue, such as the $l$ attribute and the movement rules, belong to the \emph{location} feature, etc. That means, $M$ provides a combination of all possible features of the model, some of which may be mutually exclusive or redundant. There is no consideration for complex interaction of features at this stage, i.e., separate features, when added jointly, are assumed to be orthogonal. The mapping of features to model elements provides an interpretation of the feature tree, where nodes represent graph transformation systems and edges are extension morphisms between systems. The semantic idea is that the behaviour of the extended model can be projected onto the smaller one, i.e., extension reflects behaviour. Graph transformation systems can be composed along suitable morphisms. A configuration (set of features) defines a sub-tree of the feature diagram. If it exists, the colimit of this sub-diagram represents a system combining all features from this configuration~\cite{DBLP:journals/ijseke/EngelsHTE97}.
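Before making this precise, a small sketch may help. The Python fragment below encodes the SIR feature tree and derives the rule set of a variant from a configuration; the encoding is illustrative only and does not reflect any existing tool's API (the rule name \emph{link\_infect} is a hypothetical stand-in for the network-restricted \emph{infect} rule):
\begin{verbatim}
# Illustrative encoding of the SIR feature model: each rule of the
# 150% model is tagged with its owning feature; a configuration that
# respects the feature tree selects a sub-model.
REQUIRES = {"SIR": set(), "location": {"SIR"},
            "network": {"SIR"}, "dynamics": {"network"}}

RULES_150 = {"infect": "SIR", "recover": "SIR",
             "north": "location",       # one of the movement rules
             "link_infect": "network",  # hypothetical rule name
             "desert": "dynamics"}

def valid(config):
    """Closed under the 'requires' edges of the feature tree."""
    return all(REQUIRES[f] <= config for f in config)

def variant(config):
    assert valid(config), "configuration violates the feature tree"
    return sorted(r for r, f in RULES_150.items() if f in config)

print(variant({"SIR", "network", "dynamics"}))
\end{verbatim}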
Formally, each configuration $C$ defines a graph transformation system $GTS_C$ such that $C \subseteq D$ implies $GTS_C \subseteq GTS_D$. The inclusion is \emph{conservative} if there are no new effects on old types. That means, if rules in the extended system are reduced to the base types, their effects coincide with those of the corresponding rules in the base system. In this case, behaviours in $GTS_D$ can be projected onto behaviours of $GTS_C$, i.e., $GTS_C$ is a view of $GTS_D$. This is important because it allows us to compare two models in order to assess the relevance of their distinguishing features: If two models differing in a given feature do not show significantly different behaviour (as assessed, for example, by simulation), then the feature is not considered relevant to the behaviour of the model. \begin{figure} \centerline{\includegraphics[scale=0.3]{merge.pdf}} \caption{Variant models merging\label{fig:merge}} \end{figure} Fig.~\ref{fig:merge} shows how the systems representing the different extensions of the base model are composed. Orthogonal extensions of the type graphs are merged by adding both extensions independently to the new model, while rules are simply copied so the new model inherits all rules of both given models. It turns out that the extensions in the first merge operation are all conservative, but the extension of $GTS_{network}$ by $GTS_{dynamics}$ is not. The reason is that the \emph{desert} rule creates a new effect on instances of the existing link type, so $GTS_{dynamics}$ behaviour cannot be mapped to that of $GTS_{network}$. However, $GTS_{dynamics}$ is a conservative extension of the basic $SIR$ model. \section{Discussion and Future Work} We have outlined an approach to using graph transformation and feature models to create and organise agent-based models in the social sciences. So far we have not addressed the implementation of ABMs in a simulation language, the calibration of a model's parameters and initial conditions, or the validation of its findings. In order to judge if a given feature is relevant in the context of a model, we can compare model versions with or without an optional feature or containing one or another alternative feature. By construction, the graph transformation systems corresponding to the relevant configurations are extensions of common base systems, and therefore their behaviours can be projected onto and compared from the perspective of this shared base. Combined with a suitable simulation approach this will allow us to compare results across different models and thus assess which feature configurations are relevant to understanding the phenomenon at hand. The feature-oriented composition of systems is supported by tools such as FeatureMapper\footnote{http://featuremapper.org} which obtain a model configuration by filtering a 150\% model. A solution specifically for graph transformation systems using the Henshin tool\footnote{https://www.eclipse.org/henshin/} is the work on variability-based graph transformations~\cite{DBLP:conf/se/0001RACTP17} supporting optional or alternative structures within rules over the same type graph. While we follow a composition-based approach in which each feature is mapped to a separate graph transformation system related by morphisms, the variability-based solution uses annotations on model elements to map features. The difference is mainly one of notation.
In both cases a composed model such as $GTS_{location, network}$ would contain one rule which propagates an infection if the agents are both at the same location and related in the network. Once features have been identified and assessed, we may want to reuse the components implementing them in other models. That means abstracting them from the 150\% model in which they were originally embedded, by specifying the assumptions under which the components can operate as parts of other models and parameterising the components to make them more widely reusable. Component concepts for graph transformation models have been studied and may be applicable here, if they can be extended to stochastic or probabilistic models. In order to address the calibration of parameters and validation of models needed to link models to real data, notions of stochastic equivalence or similarity of models could be investigated. A range of techniques exist to develop, analyse and maintain feature models in software engineering. For example, reverse engineering techniques can be used to support the extraction of feature models~\cite{DBLP:conf/icse/SheLBWC11}. These remain to be explored for their usefulness in the case of agent-based modelling. \nocite{*} \bibliographystyle{eptcs}
\section{Introduction} \label{sec:intro} The origin of the solar wind is a long-standing problem \citep{parker58} that continues to receive considerable attention. A leading model for the origin of the fast solar wind appeals to Alfv\'en waves (AWs) that are launched by photospheric motions. As these AWs propagate away from the Sun, they undergo partial reflection due to the radial variation of the Alfv\'en speed \citep{heinemann80}. Nonlinear interactions between counter-propagating AWs then cause AW energy to cascade to small scales and dissipate, heating the plasma \citep{velli89,zhou89,cranmer05,verdini12,perez13,vanballegooijen17}. This heating increases the plasma pressure, which, in conjunction with the wave pressure, accelerates the plasma to high speeds \citep{suzuki05,cranmer07,verdini10,chandran11,vanderholst14}. Although non-compressive AWs are the primary mechanism for energizing the solar wind in this model, a number of considerations indicate that compressive fluctuations have a significant impact on the dynamics of turbulence in the corona and solar wind. Observations of the tail of Comet Lovejoy reveal that the background plasma density~$\rho_0$ at $r = 1.2 R_{\odot}$ (where $R_{\odot}$ is the radius of the Sun) varies by a factor of~$\sim 6$ over distances of a few thousand~km measured perpendicular to the background magnetic field~$\bm{B}_0$ \citep{raymond14}. These density variations (denoted $\delta \rho$) lead to phase mixing of AWs, which transports AW energy to smaller scales measured perpendicular to~$\bm{B}_0$ \citep{heyvaerts83}. Farther from the Sun, where $\delta \rho/\rho_0$ is significantly smaller than $|\delta \bm{B}|/B_0$ \citep{tumarsch95,hollweg10}, AWs still couple to slow magnetosonic waves (``slow waves'') through the parametric instability, in which outward-propagating AWs decay into outward-propagating slow waves and inward-propagating AWs\footnote{The terms outward-propagating and inward-propagating refer to the propagation direction in the plasma rest frame. Beyond the Alfv\'en critical point, all AWs propagate outward in the rest frame of the Sun.} \citep{galeev63,sagdeev69,goldstein78,spangler86,spangler89,spangler90,hollweg94,dorfman16}. This instability and its nonlinear evolution are the focus of the present work. A number of studies have investigated the parametric instability in the solar wind within the framework of magnetohydrodynamics (MHD) \citep[e.g.,][]{malara00,delzanna01a,shi17}, while others have gone beyond MHD to account for temperature anisotropy \citep{tenerani17} or kinetic effects such as the Landau damping of slow waves \citep[e.g.][]{inhester90,vasquez95,araneda08,maneva13}. \cite{cohen74}, for example, derived the growth rate of the parametric instability in the presence of strong slow-wave damping and randomly phased, parallel-propagating AWs. \cite{terasawa86} carried out 1D hybrid simulations and found that Landau damping reduces the growth rate of the parametric instability and that the parametric instability leads to an inverse cascade of AWs to smaller frequencies. In this paper, weak turbulence theory is used to investigate the nonlinear evolution of the parametric instability assuming a randomly phased collection of AWs at wavelengths much greater than the proton inertial length~$d_{\rm i}$ in a low-$\beta$ plasma, where $\beta$ is the ratio of plasma pressure to magnetic pressure.
The fluctuating fields are taken to depend on all three spatial coordinates, but the wave kinetic equations are integrated over the perpendicular (to $\bm{B}_0$) wave-vector components, yielding equations for the 1D power spectra that depend only on the parallel wavenumber and time. The starting point of the analysis is the theory of weak compressible MHD turbulence. Collisionless damping of slow waves is incorporated in a very approximate manner analogous to the approach of \cite{cohen74}, by dropping terms containing the slow-wave energy density in the wave kinetic equations that describe the evolution of the AW power spectra. The remainder of the paper is organized as follows. Section~\ref{sec:WKE} reviews results from the theory of weak compressible MHD turbulence, and Section~\ref{sec:linear} uses the weak-turbulence wave kinetic equations to recover the results of \cite{cohen74} in the linear regime. Section~\ref{sec:inverse} shows how the wave kinetic equations imply that AW quanta undergo an inverse cascade towards smaller parallel wavenumbers, and Section~\ref{sec:exact} presents several exact solutions to the wave kinetic equations. The main results of the paper appear in Section~\ref{sec:SW}, which uses a numerical solution and an approximate analytic solution to the wave kinetic equations to investigate the parametric decay of an initial population of randomly phased AWs propagating in the same direction with negligible initial power in counter-propagating AWs. The numerical results are compared with observations from the {\em Helios} spacecraft at a heliocentric distance of 0.3~AU. Section~\ref{sec:applicability} critically revisits the main assumptions of the analysis and the relevance of the analysis to the solar wind. Section~\ref{sec:conclusion} summarizes the key findings of the paper, including predictions that will be tested by NASA's {\em Parker Solar Probe}. \section{The Wave Kinetic Equations for Alfv\'en Waves Undergoing Parametric Decay} \label{sec:WKE} In weak turbulence theory, the quantity $\omega_{\rm nl}/\omega_{\rm linear}$ is treated as a small parameter, where $\omega_{\rm nl}$ is the inverse of the timescale on which nonlinear interactions modify the fluctuations, and $\omega_{\rm linear} $ is the linear wave frequency. Because \begin{equation} \omega_{\rm nl} \ll \omega_{\rm linear}, \label{eq:wto} \end{equation} the fluctuations can be viewed as waves to a good approximation. The governing equations lead to a hierarchy of equations for the moments of various fluctuating quantities, in which the time derivatives of the second moments (or second-order correlation functions) depend upon the third moments, and the time derivatives of the third moments depend upon the fourth moments, and so on. This system of equations is closed via the random-phase approximation, which allows the fourth-order correlation functions to be expressed as products of second-order correlation functions \citep[see, e.g.,][]{galtier00}. The strongest nonlinear interactions in weak MHD turbulence are resonant three-wave interactions. These interactions occur when the frequency and wavenumber of the beat wave produced by two waves is identical to the frequency and wavenumber of some third wave, which enables the beat wave to drive the third wave coherently in time. 
If the three waves have wavenumbers $\bm{p}$, $\bm{q}$, and $\bm{k}$ and frequencies $\omega_p$, $\omega_q$, and $\omega_k$, respectively, then a three-wave resonance requires that \begin{equation} \bm{k} = \bm{p} + \bm{q} \label{eq:kres} \end{equation} and \begin{equation} \omega_k = \omega_p + \omega_q. \label{eq:omegares} \end{equation} An alternative interpretation of Equations~(\ref{eq:kres}) and (\ref{eq:omegares}) arises from viewing the wave fields as a collection of wave quanta at different wavenumbers and frequencies, restricting the frequencies to positive values, and assigning a wave quantum at wavenumber~$\bm{k}$ and frequency~$\omega_k$ the momentum~$\hbar \bm{k}$ and energy~$\hbar \omega_k$. Equations~(\ref{eq:kres}) and (\ref{eq:omegares}) then correspond to the momentum-conservation and energy-conservation relations that arise when either one wave quantum decays into two new wave quanta or two wave quanta merge to produce a new wave quantum. In the parametric instability in a low-$\beta$ plasma, a parent AW (or AW quantum) at wavenumber~$\bm{k}$ decays into a slow wave at wavenumber~$\bm{p}$ propagating in the same direction and an AW at wavenumber~$\bm{q}$ propagating in the opposite direction. Regardless of the direction of the wave vector, the group velocity of an AW is either parallel or anti-parallel to the background magnetic field \begin{equation} \bm{B}_0 = B_0 \bm{\hat{z}}, \label{eq:B0} \end{equation} and the same is true for slow waves when \begin{equation} \beta \ll 1, \label{eq:lowbeta} \end{equation} which is henceforth assumed. At low~$\beta$ slow waves travel along field lines at the sound speed~$c_{\rm s}$, which is roughly $\beta^{1/2}$ times the Alfv\'en speed~$v_{\rm A}$. Thus, regardless of the perpendicular components of~$\bm{k}$, $\bm{p}$, and~$\bm{q}$, the frequency-matching condition (Equation~(\ref{eq:omegares})) for the parametric instability is \begin{equation} k_z v_{\rm A} = p_z c_{\rm s} - q_z v_{\rm A}. \label{eq:omegares2} \end{equation} Combining the $z$ component of Equation~(\ref{eq:kres}) with Equation~(\ref{eq:omegares2}) and taking $c_{\rm s} \ll v_{\rm A}$ yields \begin{equation} p_{\rm z} \simeq 2 k_z \label{eq:pz1} \end{equation} and \begin{equation} q_z \simeq -k_z \left(1 - \frac{2 c_{\rm s}}{v_{\rm A}}\right). \label{eq:omegares3} \end{equation} Equation~(\ref{eq:omegares3}) implies that the frequency $|q_z v_{\rm A}|$ of the daughter AW is slightly smaller than the frequency $|k_z v_{\rm A}|$ of the parent~AW \citep{sagdeev69}. Thus, the energy of the daughter AW is slightly smaller than the energy of the parent AW. This reduction in AW energy is offset by an increase in slow-wave energy. \cite{chandran08b} derived the wave kinetic equations for weakly turbulent AWs, slow waves, and fast magnetosonic waves (``fast waves'') in the low-$\beta$ limit. The resulting equations were expanded in powers of~$\beta$, and only the first two orders in the expansion (proportional to $\beta^{-1}$ and $\beta^0$, respectively) were retained. Slow waves are strongly damped in collisionless low-$\beta$ plasmas~\citep{barnes66}. 
\cite{chandran08b} neglected collisionless damping during the derivation of the wave kinetic equations, but incorporated it afterward in an ad hoc manner by assuming that the slow-wave power spectrum~$S^\pm_k$ was small and discarding terms~$\propto S^\pm_k$ unless they were also proportional to~$\beta^{-1}$.\footnote{The one exception to this rule was that \cite{chandran08b} retained the term representing turbulent mixing of slow waves by AWs, since this term can dominate the evolution of slow waves at small~$k_z$ \citep[][]{lithwick01,schekochihin16}.} (The $\pm$ sign in $S^\pm_k$ indicates slow waves propagating parallel ($+$) or anti-parallel ($-$) to~$\bm{B}_0$.) In the present paper, the wave kinetic equations derived by \cite{chandran08b} are used to investigate the nonlinear evolution of the parametric instability. It is assumed that slow-wave damping is sufficiently strong that all terms $\propto S_k^\pm$, even those~$\propto \beta^{-1}$, can be safely discarded. All other types of nonlinear interactions are neglected, including resonant interactions between three AWs, phase mixing, and resonant interactions involving fast waves. Given these approximations, Equation~(8) of \cite{chandran08b} becomes \begin{equation} \frac{\partial A^\pm_k}{\partial t} = \frac{\pi }{v_{\rm A}} \int d^3p\,d^3q\, \delta(\bm{k} - \bm{p} - \bm{q}) \delta (q_z + k_z) k_z^2 A^\pm_k \frac{\partial}{\partial q_z} \left(q_z A^\mp_q\right), \label{eq:Aweak} \end{equation} where $A^+_k$ ($A^-_k$) is the 3D wavenumber spectrum of AWs propagating parallel (anti-parallel) to~$\bm{B}_0$, $\delta(x)$ is the Dirac delta function, and the integral over each Cartesian component of $\bm{p}$ and $\bm{q}$ extends from $-\infty$ to $+\infty$. The 3D AW power spectra depend upon all three wave-vector components and time. The $\delta (\bm{k} - \bm{p} - \bm{q})$ term enforces the wavenumber-resonance condition (Equation~(\ref{eq:kres})), and the $\delta (q_z + k_z)$ term enforces the frequency-resonance condition (Equation~(\ref{eq:omegares3})) to leading order in~$\beta$. The integral over the components of~$p$ in Equation~(\ref{eq:Aweak}) can be carried out immediately, thereby annihilating the first delta function. Equation~(\ref{eq:Aweak}) can be further simplified by introducing the 1D wavenumber spectra \begin{equation} E^\pm(k_z,t) = \int_{-\infty}^\infty dk_x \int_{-\infty}^\infty dk_y A^\pm_k \label{eq:defEpm} \end{equation} and integrating Equation~(\ref{eq:Aweak}) over~$k_x$ and~$k_y$, which yields \begin{equation} \frac{\partial E^\pm}{\partial t} = \frac{\pi }{v_{\rm A}} k_z^2 E^\pm \frac{\partial}{\partial k_z} \left( k_z E^\mp\right). \label{eq:dEpmdt} \end{equation} Equation~(\ref{eq:dEpmdt}) describes how the 1D (parallel) power spectra~$E^\pm$ evolve and forms the basis for much of the discussion to follow. Given the aforementioned assumptions, the evolution of the 1D power spectra $E^\pm$ is not influenced by the way that~$A^\pm$ depends on $k_x$ and~$k_y$. For future reference, the normalization of the power spectra is such that \begin{equation} \int_{-\infty}^\infty dk_z E^\pm = \frac{1}{2} \left\langle \left|\delta \bm{v}_{\rm AW} \mp \frac{\delta \bm{B}_{\rm AW}}{\sqrt{4 \pi \rho}} \right|^2\right\rangle, \label{eq:Epmnorm} \end{equation} where $\delta \bm{v}_{\rm AW}$ and $\delta \bm{B}_{\rm AW}$ are the velocity and magnetic-field fluctuations associated with AWs, and $\langle \dots \rangle$ indicates an average over space and time \citep[][]{chandran08b}. 
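As a simple illustration of the sign structure of Equation~(\ref{eq:dEpmdt}) -- a worked special case rather than a result quoted from elsewhere -- consider a power-law spectrum $E^\mp = C k_z^{-\alpha}$ with $C > 0$. Substituting into Equation~(\ref{eq:dEpmdt}) gives
\begin{equation}
\frac{\partial E^\pm}{\partial t} = \frac{\pi (1-\alpha) C}{v_{\rm A}}\, k_z^{2-\alpha} E^\pm,
\end{equation}
\noindent so that $E^\pm$ grows where $\alpha < 1$, is unchanged where $\alpha = 1$, and decays where $\alpha > 1$. The marginal $k_z^{-1}$ case and the decaying $k_z^{-2}$ case reappear as exact solutions in \S\ref{sec:exact}.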
\begin{figure} \centerline{ \includegraphics[width=12cm]{fig1.eps} } \caption{Physical interpretation of the wave kinetic equation for parametric decay when slow waves are strongly damped (Equation~(\ref{eq:dEpmdt})). The mathematical expressions next to the arrows represent the contributions to $\partial E^+(k_{z2})/\partial t$ from the parametric decay of AWs at~$k_{z3}$, which acts to increase $E^+(k_{z2})$, and the parametric decay of AWs at~$k_{z2}$, which acts to decrease~$E^+(k_{z2})$. In these expressions, $E^+_2 = E^+(k_{z2})$, $E^-_1 = E^-(k_{z1})$, and~$E^-_3 = E^-(k_{z3})$. \label{fig:color_spectrum}} \end{figure} \subsection{Physical Interpretation of the Wave Kinetic Equation} \label{sec:interpretation} Figure~\ref{fig:color_spectrum} offers a way of understanding Equation~(\ref{eq:dEpmdt}). The horizontal color bars in this figure represent the spectra of outward-propagating and inward-propagating AWs, with red representing longer-wavelength waves and violet representing shorter-wavelength waves. AWs propagating in the $-\bm{B}_0$ direction at $|k_z| = k_{z3}$ decay into slow waves propagating anti-parallel to~$\bm{B}_0$ at $|k_z| \simeq 2 k_{z3}$ and AWs propagating parallel to~$\bm{B}_0$ at $|k_z| = k_{z2}$. AWs propagating parallel to~$\bm{B}_0$ at $|k_z| = k_{z2}$ decay into slow waves propagating parallel to $\bm{B}_0$ at $|k_z| \simeq 2 k_{z2}$ and AWs propagating anti-parallel to $\bm{B}_0$ at $|k_z| = k_{z1}$. Equation~(\ref{eq:dEpmdt}) is approximately equivalent to the statement that the rate at which $E_2^+ = E^+(k_{z2})$ increases via the decay of AWs at $|k_z|= k_{z3}$ is \begin{equation} R_{3\rightarrow 2}\sim \frac{k_{z2} E^+_2 k_{z3} E^-_3}{\beta^{1/2} v_{\rm A}}, \label{eq:R32} \end{equation} where $E^-_3 = E^-(k_{z3})$, while the rate at which $E_2^+$ decreases via the decay of AWs at $|k_z|= k_{z2}$ is \begin{equation} R_{2\rightarrow 1}\sim \frac{k_{z2} E^+_2 k_{z1} E^-_1}{\beta^{1/2} v_{\rm A}}, \label{eq:R21} \end{equation} where $E^-_1 = E^-(k_{z1})$. The time derivative of $E_2^+$ is $R_{3\rightarrow 2} - R_{2 \rightarrow 1}$, or \begin{equation} \frac{\partial E^+_2}{\partial t} \sim \frac{k_{z2} E^+_2 (k_{z3}E^-_3 - k_{z1}E^-_1)}{\beta^{1/2} v_{\rm A}} . \label{eq:diff} \end{equation} Equation~(\ref{eq:omegares3}) implies that $k_{z3} - k_{z1} \sim k_{z2} c_{\rm s}/v_{\rm A} \sim \beta^{1/2} k_{z2}$. A Taylor expansion of $k_{z3} E^-_3$ and $k_{z1} E^-_1$ about $k_{z2}$ in Equation~(\ref{eq:diff}) thus allows this equation to be rewritten as \begin{equation} \frac{\partial E^+_2}{\partial t} \sim \frac{k_{z2}^2 E^+_2}{v_{\rm A}} \left.\frac{\partial}{\partial k_z} \left(k_z E^-\right)\right|_{k_z = k_{z2}}, \label{eq:approx} \end{equation} which is the same as Equation~(\ref{eq:dEpmdt}) to within a factor of order unity. To be clear, no independent derivation is being presented for Equations~(\ref{eq:R32}) and (\ref{eq:R21}). The foregoing discussion merely points out that Equations~(\ref{eq:R32}) and (\ref{eq:R21}) are equivalent (up to a factor of order unity) to Equation~(\ref{eq:dEpmdt}), which is derived on the basis of weak turbulence theory. It is worth pointing out, however, that several features of Equations~(\ref{eq:R32}) and (\ref{eq:R21}) make sense on a qualitative level. If either $E^+=0$ or $E^-=0$, then $R_{3\rightarrow 2} =R_{2\rightarrow 1} = 0$, because the parametric instability is a stimulated decay, which ceases if initially all the AWs travel in the same direction.
For fixed $E^+$ and~$E^-$, $R_{3\rightarrow 2}$ and $R_{2\rightarrow 1}$ vanish as $v_{\rm A} \rightarrow \infty$, since the fractional nonlinearities vanish in this limit. Also, $R_{3\rightarrow 2}$ and $R_{2 \rightarrow 1}$ are proportional to~$\beta^{-1/2}$ (when $S^\pm_k$ is negligibly small, as assumed) because the parametric-decay contribution to $\partial A_k^\pm /\partial t$ is an integral (over $\bm{p}$ and~$\bm{q}$) of third-order correlation functions such as $\langle \delta \bm{v}_k \cdot \delta \bm{B}_q \delta n_{p}\rangle$, where $\delta \bm{v}_k$ and $\delta \bm{B}_q$ are the velocity and magnetic-field fluctuations associated with AWs at wave vectors~$\bm{k}$ and~$\bm{q}$, and $\delta n_p$ is the density fluctuation associated with the slow waves at wave vector~$\bm{p}$ that are driven by the beating of the AWs at wave vectors~$\bm{k}$ and~$\bm{q}$. For fixed AW amplitudes and fixed~$B_0$ and~$v_{\rm A}$, this driven density fluctuation is proportional to $\beta^{-1/2}$, because as $\beta$ decreases the thermal pressure is less able to resist the compression along~$\bm{B}_0$ resulting from the Lorentz force that arises from the beating of the AWs. \section{Linear Growth of the Parametric Instability} \label{sec:linear} In the linear regime of the parametric instability, the spectrum of AWs propagating in one direction, say~$E^+$, is taken to be fixed, and $E^- \ll E^+$. Equation~(\ref{eq:dEpmdt}) then implies that $E^-$ increases exponentially in time with growth rate \begin{equation} \gamma^- = \frac{\pi k_z^2 }{v_{\rm A}} \frac{\partial }{\partial k_z} \left( k_z E^+\right). \label{eq:gammalin} \end{equation} Equation~(\ref{eq:gammalin}) is equivalent to Equation~(18) of \cite{cohen74} given the different normalizations of the AW power spectra in the two equations. For example, Equation~(\ref{eq:Epmnorm}) implies that $\int_{0}^\infty E^+ dk_z = (1/2)\int_{-\infty}^\infty E^+ dk_z = \langle |\delta \bm{B}|^2\rangle /4\pi \rho$ when $E^- \ll E^+$, which can be compared with the un-numbered but displayed equation under Equation~(9) of \cite{cohen74}. As in the present paper, \cite{cohen74} assumed that slow waves are strongly damped and that the AWs satisfy the random-phase approximation. The present paper builds upon the results of \cite{cohen74} by investigating the coupled nonlinear evolution of $E^+$ and $E^-$. Also, whereas \cite{cohen74} took the wave vectors to be parallel or anti-parallel to~$\bm{B}_0$, the derivation of Equation~(\ref{eq:dEpmdt}) in the present paper allows for obliquely propagating waves. \section{Conservation of Wave Quanta and Inverse Cascade} \label{sec:inverse} To simplify the presentation, it is assumed that \begin{equation} k_z > 0. \label{eq:kzpos} \end{equation} No generality is lost, because $E^\pm$ is an even function of~$k_z$, and thus it is sufficient to solve for the spectra at positive~$k_z$ values. 
Equation~(\ref{eq:dEpmdt}) can be rewritten as the two equations \begin{equation} \frac{\partial N}{\partial t} + \frac{\partial \Gamma}{\partial k_z}= 0 \label{eq:dNdt} \end{equation} and \begin{equation} \frac{\partial \Gamma}{\partial t} = \pi \hbar k_z^2 \Gamma \frac{\partial}{\partial k_z} \left(k_z^2 N \right), \label{eq:dGammadt} \end{equation} where \begin{equation} N = \frac{E^+ + E^-}{\hbar k_z v_{\rm A}} \label{eq:defN} \end{equation} is the number of wave quanta per unit $k_z$ per unit mass and \begin{equation} \Gamma = - \frac{\pi k_z^2 E^+ E^-}{\hbar v_{\rm A}^2} \label{eq:defGamma} \end{equation} is the flux of wave quanta in $k_z$-space. Equation~(\ref{eq:dNdt}) implies that the number of wave quanta per unit mass, \begin{equation} N_{\rm tot} = \int_{-\infty}^\infty N dk_z, \label{eq:Ntot} \end{equation} is conserved. The fact that $\Gamma$ is negative indicates that there is an inverse cascade of wave quanta from large~$k_z$ to small~$k_z$ \citep[c.f.][]{terasawa86}. The wavenumber drift velocity of the wave quanta, \begin{equation} \left \langle\frac{d k_z}{dt} \right \rangle \equiv \frac{\Gamma}{N} = - \frac{\pi k_z^3}{v_{\rm A}}\left(\frac{1}{E^+} + \frac{1}{E^-}\right)^{-1}, \label{eq:dkzdt} \end{equation} is determined primarily by the smaller of~$E^+$ and~$E^-$. \section{Exact Solutions to the Wave Kinetic Equations} \label{sec:exact} In this section, several exact solutions to Equation~(\ref{eq:dEpmdt}) are presented under the assumption that $k_z > 0$. The spectra at negative $k_z$ follow from the relation $E^\pm(-k_z,t) = E^\pm(k_z,t)$. \subsection{Decaying, Balanced Turbulence} \label{sec:Decaying} One family of exact solutions to Equation~(\ref{eq:dEpmdt}) follows from setting \begin{equation} E^\pm(k_z,t) = f^\pm(k_z, t) H\big(k_z - b(t)\big) \label{eq:Efb} \end{equation} in Equation~(\ref{eq:dEpmdt}), where \begin{equation} H(x) = \left\{\begin{array}{ll} 0 & \mbox{ if $x<0$} \\ 1 & \mbox{ if $x\geq 0$} \end{array} \right. \label{eq:heaviside} \end{equation} is the Heaviside function. When Equation~(\ref{eq:Efb}) is substituted into Equation~(\ref{eq:dEpmdt}), each side of Equation~(\ref{eq:dEpmdt}) becomes the sum of terms proportional to~$\delta(k_z-b)$ and terms that contain no delta function. By separately equating the two groups of terms, one can show that Equation~(\ref{eq:Efb}) is a solution to Equation~(\ref{eq:dEpmdt}) if \begin{equation} \frac{\partial }{\partial t}f^\pm(k_z,t) = \frac{\pi}{v_{\rm A}}k_z^2 f^\pm(k_z,t) \frac{\partial}{\partial k_z}\left[k_z f^\mp(k_z,t)\right] \label{eq:feq} \end{equation} and \begin{equation} \frac{1}{b^3} \frac{db}{dt} = -\frac{\pi f^+(b,t)}{2v_{\rm A}} = -\frac{\pi f^-(b,t)}{2v_{\rm A}} . \label{eq:dbdt0} \end{equation} Equation~(\ref{eq:dbdt0}) makes use of the relation $[H(x)]^2 = H(x)$ and its derivative, $2 H(x) \delta (x) = \delta(x)$. In Appendix~\ref{ap:BL} it is shown that Equation~(\ref{eq:dbdt0}) can be recovered by adding a small amount of nonlinear diffusion to Equation~(\ref{eq:dEpmdt}) and replacing the discontinuous jump in the spectrum at $k_z = b(t)$ with a boundary layer. Equation~(\ref{eq:dbdt0}) implies that, for solutions of the form given in Equation~(\ref{eq:Efb}), the mean-square amplitudes of forward and backward-propagating AWs must be equal just above the break wavenumber~$b$. 
An exact solution to Equations~(\ref{eq:feq}) and (\ref{eq:dbdt0}) corresponding to decaying turbulence is \begin{equation} f^+(k_z,t) = f^-(k_z,t) = \frac{a(t)}{k_z^2}, \label{eq:fpdecay} \end{equation} \begin{equation} a(t) = a_0 \left(1 + \frac{\pi a_0 t}{v_{\rm A}}\right)^{-1}, \label{eq:adecay} \end{equation} and \begin{equation} b(t) = b_0 \left(1 + \frac{\pi a_0 t}{v_{\rm A}}\right)^{-1/2}, \label{eq:bdecay} \end{equation} where $a_0$ and $b_0$ are the values of $a$ and~$b$ at $t=0$. This solution can be further truncated at large~$k_z$ by setting \begin{equation} E^+(k_z,t) = E^-(k_z,t) = \frac{a(t) H(k_z - b(t)) H(q(t) - k_z)}{k_z^2} \label{eq:trunc2} \end{equation} with \begin{equation} q(t) = q_0\left(1+ \frac{\pi a_0 t}{v_{\rm A}}\right)^{-1/2}, \label{eq:defq} \end{equation} where $q_0$ is the value of~$q$ at $t=0$, which is taken to exceed~$b_0$. Equations~(\ref{eq:fpdecay}) through (\ref{eq:defq}) can be recovered numerically by solving Equation~(\ref{eq:dEpmdt}) for freely decaying AWs. Whether the spectra satisfy Equations (\ref{eq:Efb}) and (\ref{eq:fpdecay}) through~(\ref{eq:bdecay}) or, alternatively, Equations~(\ref{eq:adecay}) through~(\ref{eq:defq}), the number of wave quanta $N_{\rm tot}$ defined in Equation~(\ref{eq:Ntot}) is finite and independent of time. \subsection{Forced, Balanced Turbulence} \label{sec:Forced} An exact solution to Equations~(\ref{eq:feq}) and (\ref{eq:dbdt0}) corresponding to forced turbulence is \begin{equation} f^+(k_z,t) = f^-(k_z,t) = \frac{c}{k_z} \label{eq:fsteady} \end{equation} and \begin{equation} b(t) = \left( \frac{\pi c t}{2v_{\rm A}} + \frac{1}{b_0}\right)^{-1}, \label{eq:bsteady} \end{equation} where $c$ is a constant and $b_0$ is the value of~$b$ at~$t=0$. In this solution, the number of wave quanta~$N_{\rm tot}$ is not constant, because there is a nonzero influx of wave quanta from infinity. A version of this solution can be realized in a numerical solution of Equation~(\ref{eq:dEpmdt}) by holding $E^\pm$ fixed at some wavenumber $k_{\rm f}$, which mimics the effects of energy input from external forcing. In this case, the numerical solution at $k_z < k_{\rm f}$ is described by Equations~(\ref{eq:Efb}), (\ref{eq:fsteady}), and~(\ref{eq:bsteady}), with $b(t) < k_{\rm f}$. The solution in Equations~(\ref{eq:fsteady}) and (\ref{eq:bsteady}) can be truncated at large~$k_z$ in a manner analogous to Equation~(\ref{eq:trunc2}), but with $q = [(\pi c t/2v_{\rm A}) + (1/q_0)]^{-1}$, where $q_0$ is the value of~$q$ at $t=0$. In this solution, $N_{\rm tot}$ is independent of time. Numerical solutions of Equation~(\ref{eq:dEpmdt}) show, however, that this solution is unstable. If the spectra initially satisfy $E^\pm = (c/k_z) H(k_z - b)H(q-k_z)$, then they evolve towards the solution described by Equations~(\ref{eq:fpdecay}) through (\ref{eq:defq}). \subsection{Exact Solutions Extending over All~$k_z$} \label{sec:Allkz} In addition to the truncated solutions described in Sections~\ref{sec:Decaying} and~\ref{sec:Forced}, Equation~(\ref{eq:dEpmdt}) possesses several exact solutions that extend over all~$k_z$. These solutions are unphysical, because they correspond to infinite AW energy and neglect dissipation (which becomes important at sufficiently large~$k_z$) and finite system size (which becomes important at sufficiently small~$k_z$). However, they illustrate several features of the nonlinear evolution of the parametric instability, which are summarized at the end of this section. 
The simplest solution to Equation~(\ref{eq:dEpmdt}) spanning all~$k_z$ is \begin{equation} E^\pm(k_z,t) = \frac{c^\pm}{k_z}, \label{eq:Epsteady} \end{equation} where $c^\pm$ is a constant. It follows from Equation~(\ref{eq:defGamma}) that Equation~(\ref{eq:Epsteady}) corresponds to a constant flux of AW quanta to smaller~$k_z$. In contrast to the truncated $E^\pm \propto k_z^{-1}$ forced-turbulence solution in Section~\ref{sec:Forced}, $E^+$ and $E^-$ need not be equal in Equation~(\ref{eq:Epsteady}). A second, non-truncated, exact solution to Equation~(\ref{eq:dEpmdt}) is given by \begin{equation} E^\pm(k_z,t) = \frac{a^\pm(t)}{k_z^2} \label{eq:Eunbdec} \end{equation} and \begin{equation} a^\pm(t) = \frac{a_0^\pm (a_0^\pm - a_0^\mp)}{a_0^\pm - a_0^\mp e^{-\pi(a_0^\pm - a_0^\mp)t/v_{\rm A}}} , \label{eq:apm1} \end{equation} where $a_0^+$ and $a_0^-$ are the initial values of $a^+$ and~$a^-$. In this solution, \begin{equation} a^+(t) - a^-(t) = a_0^+ - a_0^-. \label{eq:adiff} \end{equation} If $a^+_0 > a^-_0$, then $E^-$ decays faster than $E^+$, and, after a long time has passed, $E^-$ decays to zero while $a^+$ decays to the value $a^+_0 - a^-_0$. Conversely, if $a^-_0 > a_0^+$, then $E^+$ decays faster than~$E^-$, and the turbulence decays to a state in which $E^+= 0$. In the limit that $a_0^+ \rightarrow a_0^-$, \begin{equation} a^\pm(t) \rightarrow a_0\left(1 + \frac{\pi a_0 t}{v_{\rm A}}\right)^{-1}, \label{eq:abal} \end{equation} where $a_0 = a^+_0 = a^-_0$. Equations~(\ref{eq:Eunbdec}) and (\ref{eq:abal}) are a non-truncated version of the decaying-turbulence solution presented in Section~\ref{sec:Decaying}. Equations~(\ref{eq:Epsteady}) and (\ref{eq:Eunbdec}) can be combined into a more general class of solution, \begin{equation} E^\pm(k_z,t) = a^\pm(t) \left(\frac{1}{k_z^2} + \frac{d^\pm}{k_z}\right), \label{eq:comb1} \end{equation} where $d^+$ and $d^-$ are constants and $a^\pm(t)$ is given by Equation~(\ref{eq:apm1}). Another type of solution combining $k_z^{-1}$ and $k_z^{-2}$ scalings is \begin{eqnarray} E^+(k_z,t) & = & \frac{c_0 e^{-\pi c_2 t/v_{\rm A}}}{k_z}, \\ E^-(k_z,t) & = & \frac{c_1}{k_z} + \frac{c_2 }{k_z^2}, \label{eq:12} \end{eqnarray} where $c_0$, $c_1$, and~$c_2$ are constants. The exact solutions presented in this section illustrate three properties of the nonlinear evolution of the parametric instability at low~$\beta$ when slow waves are strongly damped. First, when $E^\pm \propto k_z^{-1}$, $\partial E^\mp/\partial t$~vanishes. Second, if $E^\pm \propto k_z^{-2}$, then $(\partial/\partial t) \ln E^\mp$ is negative and independent of~$k_z$, and $E^\mp(k_z, t)$ can be written as the product of a function of $k_z$ and a (decreasing) function of time. (More general principles describing the evolution of~$E^\pm$ are summarized in Figure~\ref{fig:slope_evolution} and Equation~(\ref{eq:slope_evol}).) Third, the parametric instability does not necessarily saturate with $E^+ = E^-$. For example, in Equations~(\ref{eq:Eunbdec}) and (\ref{eq:apm1}), when $a_0^+ \neq a_0^-$, the AWs decay to a maximally aligned state reminiscent of the final state of decaying cross-helical incompressible MHD turbulence \citep{dobrowolny80}. \section{Nonlinear Evolution of the Parametric Instability When Most of the AWs Initially Propagate in the Same Direction} \label{sec:SW} This section describes a numerical solution to Equation~(\ref{eq:dEpmdt}) in which, initially, \begin{equation} E^+ \gg E^-. 
\label{eq:EpggEm} \end{equation} As in Section~\ref{sec:exact}, $k_z$ is taken to be positive, and the spectra at negative $k_z$ can be inferred from the fact that $E^\pm(-k_z) = E^\pm(k_z)$. The spectra are advanced forward in time using a second-order Runge-Kutta algorithm on a logarithmic wavenumber grid consisting of 2000 grid points. To prevent the growth of numerical instabilities, a nonlinear diffusion term \begin{equation} D^\pm = \nu E^\mp k_z^2 \frac{\partial^2}{\partial k_z^2}E^\pm \label{eq:defDpm} \end{equation} is added to the right-hand side of Equation~(\ref{eq:dEpmdt}), where $\nu$ is a constant. The value of $\nu$ is chosen as small as possible subject to the constraint that the diffusion term suppress instabilities at the grid scale. To represent the solution in a way that can be readily compared with spacecraft measurements of solar-wind turbulence, the wavenumber spectra are converted into frequency spectra, \begin{equation} e^\pm(f,t) = \frac{2\pi E^\pm(k_z, t)}{U}, \label{eq:defef} \end{equation} where $U$ is the solar-wind velocity, and \begin{equation} f = \frac{k_z U}{2\pi} \label{eq:deff} \end{equation} is the frequency in the spacecraft frame that, according to Taylor's~(\citeyear{taylor38}) hypothesis, corresponds to wavenumber~$k_z$ when the background magnetic field is aligned with the nearly radial solar-wind velocity. The Alfv\'en speed is taken to be the approximate average of the observed values of~$v_{\rm A}$ in three fast-solar-wind streams at $r=0.3 \mbox{ AU}$ (see Table~1 of \cite{marsch82a} and Table 1a of \cite{marsch90}), \begin{equation} v_{\rm A} = 150 \mbox{ km/s}. \label{eq:va} \end{equation} In order to compare directly with Figure 2-2c of \cite{tumarsch95}, the solar-wind velocity is taken to be \begin{equation} U = 733 \mbox{ km/s}. \label{eq:U} \end{equation} The power spectra are initialized to the values \begin{equation} e^+(f, t=0) = \frac{\sigma^+(f/f_0)^{-0.5}}{1 + (f/f_0)^{1.5}} \label{eq:Epinit} \end{equation} and \begin{equation} e^-(f, t=0) = \sigma^-, \label{eq:Eminit} \end{equation} where $\sigma^+$, $\sigma^-$, and~$f_0$ are constants. The values of~$f_0$ and the corresponding wavenumber~$k_{z0}$ are chosen so that \begin{equation} f_0 = \frac{k_{z0} U}{2\pi} = 10^{-2} \mbox{ Hz}, \label{eq:valkz0} \end{equation} consistent with the arguments of \cite{vanballegooijen16} about the dominant frequency of AW launching by the Sun. The minimum and maximum wavenumbers of the numerical domain are chosen so that $k_{z\rm max} = 10^3 k_{z0} = 10^7 k_{z\rm min}$. The motivation for the scaling $e^+(f, t=0) \propto f^{-0.5}$ at small~$f$ is the similar scaling observed by \cite{tumarsch95} in the aforementioned fast-solar-wind stream at $10^{-5} \mbox{ Hz} < f < 10^{-4} \mbox{ Hz}$. The numerical results shown below suggest that the parametric instability has little effect on $e^+$ at these frequencies at $r=0.3 \mbox{ AU}$. The observed $f^{-0.5}$ scaling in this frequency range is thus presumably inherited directly from the spectrum of AWs launched by the Sun. Like the scaling $e^+ \propto f^{-0.5}$, the value of~$\sigma^+$ is chosen to match the observed spectrum of outward-propagating AWs at 0.3~AU at small~$f$. The reason for the $f^{-2}$ scaling in $e^+$ at large~$f$ is that a (parallel) $k_z^{-2}$ spectrum is observed in the solar wind~\citep{horbury08, podesta09c,forman11} and predicted by the theory of critically balanced MHD turbulence \citep[see, e.g.,][]{goldreich95,mallet15}. 
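In schematic form, the time advance just described can be implemented as follows (an illustrative Python sketch, with Equation~(\ref{eq:dEpmdt}) taken in the form implied by Equation~(\ref{eq:feq}); the diffusion coefficient, spectral normalizations, and time step are placeholders rather than the values used to produce the figures):
\begin{verbatim}
# Illustrative integration of Eq. (dEpmdt) plus the diffusion term of
# Eq. (defDpm), using second-order Runge-Kutta on a logarithmic k_z grid.
# Units are schematic.
import numpy as np

vA, U, f0 = 150.0, 733.0, 1.0e-2          # km/s, km/s, Hz
kz0 = 2*np.pi*f0/U                        # Eq. (deff) inverted
kz = np.logspace(np.log10(1e-4*kz0), np.log10(1e3*kz0), 2000)
f = kz*U/(2*np.pi)

sig_p, sig_m = 1.0, 1.0e-12               # sigma^- << sigma^+ (see below)
ep = sig_p*(f/f0)**-0.5/(1 + (f/f0)**1.5) # Eq. (Epinit)
em = sig_m*np.ones_like(f)                # Eq. (Eminit)
Ep, Em = U*ep/(2*np.pi), U*em/(2*np.pi)   # invert Eq. (defef)

nu = 1.0e-6                               # placeholder diffusion coefficient

def rhs(Ep, Em):
    """dE+-/dt = (pi k^2/vA) E+- d/dk (k E-+) + nu E-+ k^2 d^2E+-/dk^2."""
    dEp = (np.pi*kz**2/vA)*Ep*np.gradient(kz*Em, kz) \
        + nu*Em*kz**2*np.gradient(np.gradient(Ep, kz), kz)
    dEm = (np.pi*kz**2/vA)*Em*np.gradient(kz*Ep, kz) \
        + nu*Ep*kz**2*np.gradient(np.gradient(Em, kz), kz)
    return dEp, dEm

def rk2_step(Ep, Em, dt):
    """One midpoint (second-order Runge-Kutta) step."""
    k1p, k1m = rhs(Ep, Em)
    k2p, k2m = rhs(Ep + 0.5*dt*k1p, Em + 0.5*dt*k1m)
    return Ep + dt*k2p, Em + dt*k2m
\end{verbatim}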
The value of $\sigma^-$ is set equal to a minuscule value ($10^{-12} \sigma^+$), so that the only source of dynamically important, inward-propagating AWs is the parametric decay of outward-propagating AWs. Figure~\ref{fig:num_wide} summarizes the results of the calculation. Between $t=0$ and $t= 4 \mbox{ hr}$, $e^+$ changes little while $e^-$ grows rapidly between roughly 2 and 5~mHz, where the growth rate~$\gamma^-$ given in Equation~(\ref{eq:gammalin}) peaks. Between $t=4 \mbox{ hr}$ and $t=8 \mbox{ hr}$, $e^+$ develops a broad $\sim 1/f$ scaling between $f= 3 \times 10^{-4} \mbox{ Hz}$ and $f = 3 \times 10^{-2} \mbox{ Hz}$, which shuts off the growth of $e^-$ at these frequencies. At the same time, $e^-$ acquires an $\sim f^{-2}$ scaling over much of this same frequency range. Between $t= 8 \mbox{ hr}$ and $t=16 \mbox{ hr}$, the low-frequency limit of the $1/f$ range of~$e^+$ decreases to $\sim 10^{-4} \mbox{ Hz}$, and the high-frequency limit of the $1/f$ range of~$e^+$ increases to $\sim 0.1 \mbox{ Hz}$. \begin{figure} \centerline{ \includegraphics[width=6cm]{fig2a.eps} \includegraphics[width=6cm]{fig2b.eps} } \centerline{ \includegraphics[width=6cm]{fig2c.eps} \includegraphics[width=6cm]{fig2d.eps} } \caption{Solid lines show the AW power spectra in a numerical solution of Equation~(\ref{eq:dEpmdt}) with plasma parameters and turbulence parameters chosen to model conditions in the fast solar wind at a heliocentric distance of 0.3~AU. The wavenumber spectra $E^\pm(k_z)$ appearing in Equation~(\ref{eq:dEpmdt}) have been converted, using Equations~(\ref{eq:defef}) and (\ref{eq:deff}), into the frequency spectra~$e^\pm(f)$. The dotted lines in the upper left corner of each panel show the evolutionary tracks of the values of~$e^+$ and $e^-$ at the low-frequency end of the frequency range in which $e^+ \propto f^{-1}$ in the approximate analytic solution to Equation~(\ref{eq:dEpmdt}) presented in Appendix~\ref{ap:approx}. \label{fig:num_wide}} \end{figure} The dotted lines in the upper left corner of each panel in Figure~\ref{fig:num_wide} show the tracks followed by the values of $e^+$ and~$e^-$ at the low-frequency end of the frequency range in which $e^+ \propto f^{-1}$ in the approximate analytic solution to Equation~(\ref{eq:dEpmdt}) that is described in Appendix~\ref{ap:approx}. In this solution, $E^+$ and $E^-$ are expanded in negative powers of~$k_z$ at wavenumbers exceeding a time-dependent break wavenumber~$b(t)$. Below this wavenumber, $E^-=0$ and $E^+ = \eta k_z^p$, where $\eta$ and~$p$ are constants, and $-1 < p < 1$. At $k_z > b$, the dominant term in the expansion of~$E^+$~($E^-$) scales like $k_z^{-1}$~($k_z^{-2}$), and the ratios of~$E^+(b_+)$ to~$\eta b_+^p$ and $E^+(b_+)$ to~$E^-(b_+)$ are fixed functions of~$p$, where $b_+$ is a wavenumber infinitesimally larger than~$b$. For~$p=-0.5$, $E^+(b_+)/\eta b_+^p = 5/3$ and $E^+(b_+)/E^-(b_+) = 10$, in approximate agreement with the numerical results (see also the right panel of Figure~\ref{fig:num_narrow}). \subsection{Heuristic Explanation of the $e^+ \propto f^{-1}$ and $e^- \propto f^{-2}$ Scalings} \label{sec:model} In order to understand the time evolution illustrated in Figure~\ref{fig:num_wide}, it is instructive to first consider the case in which \begin{equation} E^\pm = c^\pm k_z^{\alpha^\pm} \label{eq:alphapm} \end{equation} within some interval $(k_{z1}, k_{z2})$, where $c^\pm$ and $\alpha^\pm$ are constants.
Equation~(\ref{eq:dEpmdt}) implies that, within this interval, \begin{equation} \frac{\partial}{\partial t} \ln E^\pm = \frac{\pi c^\mp}{v_{\rm A}}\left( 1 + \alpha^\mp\right) k_z^{\alpha^\mp + 2}. \label{eq:slope_evol} \end{equation} If $\alpha^\mp > -1$, then $\ln E^\pm$ grows at a rate that increases with~$k_z$, causing $E^\pm$ to increase and ``harden,'' in the sense that the best-fit value of $\alpha^\pm$ within the interval $(k_{z1}, k_{z2})$ increases. If $\alpha^\mp = -1$, then $E^\pm$ does not change. If $-2 < \alpha^\mp < -1$, then $\ln E^\pm$ decreases at a rate that increases with~$k_z$, which causes the best-fit value of~$\alpha^\pm$ within the interval $(k_{z1}, k_{z2})$ to decrease. If $\alpha^\mp = -2$, then $\ln E^\pm$ decreases at the same rate at all~$k_z$, and $\alpha^\pm$ remains unchanged. Finally, if $\alpha^\mp < -2$, then $E^\pm$ decreases at a rate that decreases with~$k_z$, which causes the best-fit value of~$\alpha^\pm$ in the interval $(k_{z1}, k_{z2})$ to increase. These rules are summarized in Figure~\ref{fig:slope_evolution} and apply to $e^\pm \propto f^{\alpha^\pm} $ as well as $E^\pm \propto k_z^{\alpha^\pm}$. \begin{figure} \centerline{ \includegraphics[width=10cm]{fig3.eps} } \caption{In this figure, it is assumed that the frequency spectra are initially power laws of the form $e^\pm \propto f^{\alpha^\pm}$, and that $\alpha^+$ and $\alpha^-$ are both negative. According to Equation~(\ref{eq:slope_evol}), parametric decay alters both the amplitude and slope of $e^\pm$ in the manner shown. For example, if $e^- \propto f^{-1.5}$, then $E^- \propto k_z^{-1.5}$, and Equation~(\ref{eq:slope_evol}) implies that $E^+$ decreases at a rate that increases with~$k_z$. This in turn implies that $e^+$ decreases at a rate that increases with~$f$, so that $e^+$ steepens. \label{fig:slope_evolution}} \end{figure} Returning to Figure~\ref{fig:num_wide}, in the early stages of the numerical calculation, $e^-$ grows most rapidly at those frequencies at which $\gamma^-$ in Equation~(\ref{eq:gammalin}) is largest --- namely, the high-$f$ end of the $f^{-0.5}$ range of $e^+$. By the time $e^-$ reaches a sufficient amplitude that $e^+$ and $e^-$ evolve on the same timescale, $e^-$ has developed a peaked frequency profile extending from some frequency $f = f_{\rm low}$ to some larger frequency $f = f_{\rm high}$, as illustrated in the upper-right panel of Figure~\ref{fig:num_wide}. Near $f_{\rm low}$, $de^-/df > 0$ (i.e., $\alpha^- > 0$), which causes $e^+$ to grow.\footnote{Although the caption of Figure~\ref{fig:slope_evolution} excludes positive~$\alpha^\mp$ to allow use of the words ``flattens'' and ``steepens'' in the figure, Equation~(\ref{eq:slope_evol}) applies for positive~$\alpha^\mp$.} Near $f_{\rm high}$, $\alpha^- < - 1$, which causes $e^+$ to decrease. Thus, $e^+$ steepens across the interval $(f_{\rm low}, f_{\rm high})$ until it attains a $1/f$ scaling, at which point $e^-$ stops growing between $f_{\rm low}$ and $f_{\rm high}$. However, at frequencies just below $f_{\rm low}$, $e^+$ and $e^-$ both continue to grow, causing $f_{\rm low}$ to decrease. At the same time, $e^+$ continues to decrease at larger $f$ where $\alpha^- < -1$. Together, the growth of $e^+$ just below $f_{\rm low}$ and the damping of $e^+$ at larger~$f$ cause the $f^{-1}$ range of $e^+$ to broaden in both directions, i.e., towards both smaller and larger frequencies.
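For reference, evaluating Equation~(\ref{eq:slope_evol}) at the two special exponents makes their roles explicit: \begin{equation} \alpha^\mp = -1 \;\Rightarrow\; \frac{\partial}{\partial t} \ln E^\pm = 0, \qquad \alpha^\mp = -2 \;\Rightarrow\; \frac{\partial}{\partial t} \ln E^\pm = -\frac{\pi c^\mp}{v_{\rm A}}, \end{equation} i.e., a $k_z^{-1}$ spectrum of $E^\mp$ leaves $E^\pm$ unchanged, while a $k_z^{-2}$ spectrum of $E^\mp$ damps $E^\pm$ at a $k_z$-independent rate that preserves the slope of~$E^\pm$.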
The unique scaling of $e^-$ consistent with an $e^+$ spectrum $\propto f^{-1}$ that is a decreasing function of time is $e^- \propto f^{-2}$. Moreover, the scalings $e^+ \sim f^{-1}$ and $e^- \sim f^{-2}$ are, in a sense, stable, as can be inferred from Figure~\ref{fig:slope_evolution}. For example, if $\alpha^-$ increases from $-2$ to a slightly larger value, then $e^+$ decreases at a rate that increases with~$f$, causing $\alpha^+$ to decrease to a value slightly below~$-1$. This causes $e^-$ to decrease at a rate that increases with~$f$, thereby causing $\alpha^-$ to decrease back towards~$-2$. A similar ``spectral restoring force'' arises for any other small perturbation to the values $\alpha^+ = -1$ and $\alpha^- = -2$. It is worth emphasizing, in this context, that the analytic solution presented in Appendix~\ref{ap:approx} is approximate rather than exact. As the spectral break frequency decreases past some fixed frequency $f_{3}$, the values of $e^+$ and $e^-$ at $f_{3}$ suddenly jump, but they do not jump to the precise values needed to extend the $e^+ \sim f^{-1}$ and $e^- \sim f^{-2}$ scalings to smaller~$f$. Instead, the spectra need further ``correcting'' after the break frequency has swept past in order to maintain the scalings $e^+ \sim f^{-1}$ and $e^- \sim f^{-2}$ in an approximate way. Also, the decrease in $e^-$ that occurs after $t=4 \mbox{ hr}$ is a consequence of the sub-dominant $f^{-2}$ component of~$e^+$. This component of~$e^+$ becomes increasingly prominent near the break frequency as time progresses, leading to the pronounced curvature in the plot of $e^+$ near $f=10^{-4} \mbox{ Hz}$ in the right panel of Figure~\ref{fig:num_narrow}. \begin{figure} \centerline{ \includegraphics[trim = 4cm 0cm 0cm 0cm, width=4.5cm]{fig4a.eps} \includegraphics[trim = 4cm 0cm 0cm 0cm, width=4.5cm]{fig4b.eps} \includegraphics[trim = 4cm 0cm 0cm 0cm, width=4.5cm]{fig4c.eps} } \caption{The left panel and middle panel of this figure reproduce the $t=4 \mbox{ hr}$ and $t= 8 \mbox{ hr}$ panels of Figure~\ref{fig:num_wide} but with the axis ranges used in Figure~2-2c of \cite{tumarsch95}. The right panel is from a later time ($t=32 \mbox{ hr}$) in the same numerical solution. \label{fig:num_narrow}} \end{figure} \subsection{Comparison with {\em Helios} Measurements} \label{sec:comp} In the (average) plasma rest frame, the equations of incompressible MHD can be written in the form \begin{equation} \frac{\partial \bm{z}^\pm}{\partial t} + ( \bm{z}^\mp\pm \bm{v}_{\rm A}) \cdot \nabla \bm{z}^\pm = - \nabla \Pi, \label{eq:elsasser} \end{equation} where $\bm{z}^\pm = \delta \bm{v} \mp \delta \bm{B}/\sqrt{4\pi \rho}$ are the Elsasser variables, $\delta \bm{v}$ and $\delta \bm{B}$ are the velocity and magnetic-field fluctuations, $\rho$ is the mass density, $\bm{v}_{\rm A} = \bm{B}_0/\sqrt{4\pi \rho}$ is the Alfv\'en velocity, and $\Pi$ is the total pressure divided by~$\rho$ \citep{elsasser50}. Although the solar wind is compressible, Equation~(\ref{eq:elsasser}) provides a reasonable approximation for the non-compressive, AW-like component of solar-wind turbulence. As Equation~(\ref{eq:elsasser}) shows, the advection velocity of a $\bm{z}^\pm$ fluctuation is $\bm{z}^\mp \pm \bm{v}_{\rm A} $. This implies, as shown by \cite{maron01}, that $\bm{z}^\pm$ fluctuations propagate along magnetic field lines perturbed by $\bm{z}^\mp$. 
As a consequence, in the solar wind, when the rms magnetic-field fluctuation $\delta B_{\rm in}$ associated with inward-propagating AWs~($\bm{z}^-$) is much smaller than the background magnetic field~$B_0$, the outward-propagating AWs~($\bm{z}^+$) propagate to a good approximation along the direction of~$\bm{B}_0$. This is true even if the rms magnetic-field fluctuation~$\delta B_{\rm out}$ associated with $\bm{z}^+$ is comparable to~$B_0$. In the fast solar wind at $r< 0.3 \mbox{ AU}$, the (fractional) cross helicity is high (i.e., $E^+ \gg E^-$), and $\delta B_{\rm in}$ is indeed small compared to~$B_0$ \citep{bavassano00,cranmer05}. Moreover, the background magnetic field at $r=0.3 \mbox{ AU}$ is nearly in the radial direction, because the Parker-spiral magnetic field begins to deviate appreciably from the radial direction only at larger~$r$ in the fast wind \citep{verscharen15}. Hence, in high-cross-helicity fast-wind streams at $r=0.3 \mbox{ AU}$, the function $e^+$ defined by Equations~(\ref{eq:defef}) and (\ref{eq:deff}) corresponds, to a good approximation, to the frequency spectrum of outward-propagating AWs observed by a spacecraft in the solar wind. It is not clear, however, how well $e^-$ corresponds to the observed spectrum of inward-propagating AWs, because the inward-propagating AWs follow field lines perturbed by the outward-propagating AWs, which can be inclined relative to the radial direction by a substantial angle. Figure~\ref{fig:num_narrow} reproduces the $t=4 \mbox{ hr}$ and $t=8 \mbox{ hr}$ panels of Figure~\ref{fig:num_wide}, but with the same axis ranges as those in Figure 2-2c of \cite{tumarsch95} to facilitate comparison. Figure~\ref{fig:num_narrow} also includes a third panel that shows the spectra at $t=32 \mbox{ hr}$. The $e^+$ spectrum in the $t=8 \mbox{ hr}$ panel of Figure~\ref{fig:num_wide} shares a number of properties with the $e^+$ spectrum in Figure 2-2c of \cite{tumarsch95}, in addition to the $f^{-0.5}$ scaling at small~$f$ that was built into the numerical calculation as an initial condition. In particular, $e^+ \gg e^-$ at all frequencies, $e^+ \sim f^{-1}$ at $f\gtrsim 3 \times 10^{-4} \mbox{ Hz}$, and there is a bump in the $e^+$ spectrum at the transition between the $f^{-0.5}$ and $f^{-1}$ scaling ranges of~$e^+$. Although this comparison is suggestive, it is not entirely clear how to map time in the numerical calculation to heliocentric distance in the solar wind, because the plasma parameters in the numerical calculation are independent of position and time, whereas they depend strongly upon heliocentric distance in the solar wind. For example, the turbulence is weaker (in the sense of smaller $\delta v_0/v_{\rm A}$) the closer one gets to the Sun. (See also the discussion following Equation~(\ref{eq:kzLperp}).) Also, the choice of initial conditions in the numerical calculation artificially prolongs the linear stage of evolution, since in the solar wind there are sources of inward-propagating waves other than parametric instability, such as non-WKB reflection \citep{heinemann80,velli93}. Nevertheless, as a baseline for comparison, the travel time of an outward-propagating AW from the photosphere to $0.3 \mbox{ AU}$ in the fast-solar-wind model developed by \cite{chandran09c} is approximately 12~hr.
\section{Discussion of Approximations and Relevance to the Solar Wind} \label{sec:applicability} This section critically assesses the assumptions underlying the results in Sections~\ref{sec:WKE} through~\ref{sec:SW} and the degree to which these assumptions apply to the fast solar wind between $r=10 R_{\odot}$ (the approximate perihelion of the {\em Parker Solar Probe}) and $r=0.3 \mbox{ AU}$. \subsection{The Weak Turbulence Approximation} \label{sec:weak} A central assumption of the analysis is the weak-turbulence criterion in Equation~(\ref{eq:wto}). Since $E^+$ and $E^-$ differ in the solar wind, Equation~(\ref{eq:wto}) is really two conditions, \begin{equation} \omega_{\rm nl}^\pm \ll |k_z| v_{\rm A}, \label{eq:wto1} \end{equation} where $\omega_{\rm nl}^+$ ($\omega_{\rm nl}^-$) is the inverse of the timescale on which nonlinear interactions modify outward-propagating (inward-propagating) AWs. The contribution to $\omega_{\rm nl}^\pm$ from the parametric instability is \begin{equation} \omega_{\rm nl, PI}^\pm \sim \frac{1}{E^\pm} \left|\frac{\partial E^\pm}{\partial t}\right| \sim \frac{k_z^2 E^\mp}{v_{\rm A}} . \label{eq:omegaPI} \end{equation} The contribution to $\omega_{\rm nl}^\pm$ from one other type of nonlinear interaction is estimated in Section~\ref{sec:NL}. The estimate of $\partial E^\pm/\partial t$ in Equation~(\ref{eq:omegaPI}) follows from Equation~(\ref{eq:dEpmdt}) and setting $E^\mp \sim |k_z|^{\alpha^\mp}$ with $\alpha^\mp$ not very close to~$-1$. A rough upper limit on $\omega_{\rm nl, PI}^\pm$ results from replacing $k_z E^\mp$ in Equation~(\ref{eq:omegaPI}) with $(\delta v^\mp)^2$, where $(\delta v^+)^2$ is the mean-square velocity fluctuation associated with outward-propagating AWs, and $(\delta v^-)^2$ is the mean-square velocity fluctuation associated with inward-propagating AWs. This leads to a rough upper limit on $\omega_{\rm nl, PI}^\pm$ because $(\delta v^\pm)^2$ includes contributions from all wavenumbers and is much larger than the value of $k_z E^\pm$ at any given~$k_z$. Equation~(\ref{eq:wto1}), with $\omega_{\rm nl}^\pm \sim \omega_{\rm nl, PI}^\pm$, is thus satisfied provided \begin{equation} (\delta v^\mp)^2\ll v_{\rm A}^2. \label{eq:wto2} \end{equation} \cite{bavassano00} analyzed {\em Helios} measurements of fluctuations in the fast solar wind at $r= 0.4 \mbox{ AU}$ and found that $(\delta v^-)^2 \ll (\delta v^+)^2 \simeq (60 \mbox{ km/s})^2$. As mentioned above, the typical value of $v_{\rm A}$ in the fast solar wind at $r= 0.3 \mbox{ AU}$ is $\sim 150 \mbox{ km/s}$ \citep{marsch82a,marsch90}. Near $r=0.3 \mbox{ AU}$, $B_0 \sim 1/r^2$, $\rho \sim 1/r^2$, and $v_{\rm A} \sim 1/r$, and so the typical value of $v_{\rm A}$ in fast-solar-wind streams at $r=0.4 \mbox{ AU}$ is $\sim 112.5 \mbox{ km/s}$. These measurements indicate that \begin{equation} (\delta v^-)^2 \ll (\delta v^+)^2 \simeq 0.28 v_{\rm A}^2 \label{eq:wto3} \end{equation} in the fast solar wind at $r=0.4 \mbox{ AU}$. Since $\delta v^\pm/v_{\rm A}$ decreases as $r$ decreases below 0.4~AU \citep{cranmer05,chandran09c}, the condition $\omega_{\rm nl, PI}^+ \ll |k_z| v_{\rm A}$ is well satisfied at $r< 0.4 \mbox{ AU}$, and the condition $\omega_{\rm nl, PI}^- \ll |k_z| v_{\rm A}$ is at least marginally satisfied at $r< 0.4 \mbox{ AU}$. It is worth noting that weak turbulence theory fails when applied to resonant interactions between three AWs, because such interactions occur only when one of the AWs has zero frequency, violating the weak-turbulence ordering~\citep{schekochihin12,meyrand15}.
In contrast, the AW/slow-wave interactions in parametric decay do not involve a zero-frequency mode. Weak turbulence theory is thus in principle a better approximation for the nonlinear evolution of the parametric instability than for incompressible MHD turbulence. \subsection{The Low-$\beta$ Assumption} \label{sec:lowbeta} The assumption that $\beta \ll 1$ is not satisfied at $r \gtrsim 0.3 \mbox{ AU}\simeq 65 R_{\odot}$, where $\beta$ is~typically $\sim 1$, but is reasonable at $r \lesssim 20 R_{\odot}$ \citep{chandran11}. It is possible that the $\beta \ll 1$ theory presented here applies at least at a qualitative level provided $\beta$ is simply~$\lesssim 1$, and indeed this possibility motivates the comparison of the present model with {\em Helios} observations. However, further work is needed to investigate how the results of this paper are modified as $\beta$ increases to values $\sim 1$. \subsection{Neglect of Other Types of Nonlinear Interactions} \label{sec:NL} Another approximation in Sections~\ref{sec:WKE} through~\ref{sec:SW} is the neglect of all nonlinear interactions besides parametric decay. One of the neglected interactions is the shearing of inward-propagating AWs by outward-propagating AWs, which makes a contribution to $\omega^-_{\rm nl}$ that depends on the perpendicular length scale of the AWs. At the perpendicular outer scale~$L_\perp$ (the overall correlation length of the AWs measured perpendicular to~$\bm{B}_0$), the contribution to $\omega^-_{\rm nl}$ from shearing is approximately \begin{equation} \omega_{\rm nl, \perp}^- \sim \frac{ \chi \delta v^+}{L_\perp}, \label{eq:omegaperp1} \end{equation} where \begin{equation} \chi = \frac{\delta v^+}{k_z L_\perp v_{\rm A}} \label{eq:defchi} \end{equation} is the critical-balance parameter \citep{goldreich95,ng96,lithwick07}. Equation~(\ref{eq:omegaperp1}) does not apply when $\chi$ is much larger than~1, but direct numerical simulations suggest that $\chi \lesssim 1$ at $r \gtrsim 10 R_{\odot}$ for the bulk of the AW energy (J. Perez, private communication). Thus, at $r \gtrsim 10 R_{\odot}$, \begin{equation} \frac{\omega_{\rm nl, PI}^-}{\omega_{\rm nl, \perp}^-} \simeq (k_z L_\perp)^2. \label{eq:nlcomp} \end{equation} As AWs propagate away from the Sun, they follow magnetic field lines, which leads to the approximate scaling $L_\perp \propto B_0^{-1/2}$. In the WKB limit, the AW frequency in the Sun's frame $k_z (U+v_{\rm A})$ is independent of~$r$. The scaling $k_z \sim 1/(U+ v_{\rm A})$ thus serves as a rough approximation for outward-propagating AWs in the turbulent solar wind. At the coronal base (just above the transition region), where $k_z$ and $L_\perp$ have the values $k_{z\rm b}$ and $L_{\perp \rm b}$, the value of $ k_{z\rm b} L_{\perp \rm b}$ for the energetically dominant AWs launched by the Sun can be estimated (in essence from the critical-balance condition) as $\delta v_{\rm b}^+/v_{\rm Ab}$, where $\delta v_{\rm b}^+$ and $v_{\rm A b}$ are the values of $\delta v^+$ and $v_{\rm A}$ at the coronal base \citep{goldreich95,vanballegooijen16}. Together, these scalings lead to the estimate \begin{equation} k_z L_\perp \simeq \sqrt{\frac{B_{0\rm b}}{B_0}} \left(\frac{ U_{\rm b} + v_{\rm A b}}{U + v_{\rm A}}\right)\frac{\delta v^+_{\rm b}}{v_{\rm A b}}, \label{eq:est1} \end{equation} where $B_{0\rm b}$ and $U_{\rm b}$ are the values of the background magnetic field and solar-wind outflow velocity at the coronal base. 
Between $r=10 R_{\odot}$ and $r= 60 R_{\odot}$, $B_{0\rm b}/B_0 \simeq f_{\rm max} (r/R_{\odot})^2$, where $f_{\rm max}$ is the super-radial expansion factor \citep{kopp76}. In the fast solar wind within this range of radii, $U+v_{\rm A} \simeq 700 - 800 \mbox{ km/s}$, which is comparable to $U_{\rm b} + v_{\rm A b} \simeq v_{\rm Ab} \simeq 10^3 \mbox{ km/s}$. Equation~(\ref{eq:est1}) is thus approximately equivalent to \begin{equation} k_z L_\perp \simeq \sqrt{f_{\rm max}}\, \left(\frac{r}{R_{\odot}}\right) \left(\frac{\delta v_{\rm b}^+}{v_{\rm A b}}\right) \label{eq:est2} \end{equation} for the energetically dominant fluctuations launched by the Sun. If we set $f_{\rm max} = 9$, $\delta v_{\rm b} = 30 \mbox{ km/s}$, and $v_{\rm A b} = 900 \mbox{ km/s}$, then Equation~(\ref{eq:est2}) becomes \begin{equation} k_z L_\perp \sim \frac{r}{10 R_{\odot}} \label{eq:kzLperp} \end{equation} for the energetically dominant AWs launched by the Sun. Equations~(\ref{eq:nlcomp}) and (\ref{eq:kzLperp}) suggest that it is reasonable to neglect the shearing of inward-propagating AWs by outward-propagating AWs at $r \gtrsim 10 R_{\odot}$. On the other hand, at smaller radii, shearing could suppress the growth of inward-propagating AWs that would otherwise result from the parametric instability. Also, the requirement that $k_z L_\perp > 1$ in order for~$\omega_{\rm nl, PI}^-$ to exceed~$\omega_{\rm nl, \perp}^-$ could prevent the $f^{-1}$ range from spreading to frequencies below some minimum ($r$-dependent) value. The other nonlinearities in the weak-turbulence wave kinetic equations that are neglected in this paper include interactions involving fast magnetosonic waves, the turbulent mixing of slow waves by AWs, phase mixing of AWs by slow waves, and the shearing of outward-propagating AWs by inward-propagating AWs \citep{chandran08b}. In-situ measurements indicate that fast waves account for only a small fraction of the energy in compressive fluctuations at 1~AU \citep{yao11,howes12,klein12}. Also, fast waves propagating away from the Sun undergo almost complete reflection before they can escape into the corona \citep{hollweg78}. These findings suggest that nonlinear interactions involving fast waves have little effect upon the conclusions of this paper. The turbulent mixing of slow waves by AWs acts as an additional slow-wave damping mechanism and is thus unlikely to change the conclusions of this paper, which already assume strong slow-wave damping. Phase mixing of AWs by slow waves transports AW energy to larger~$k_\perp$ at a rate that increases with~$|k_z|$~\citep{chandran08b}. Although the fractional density fluctuations between $r = 10 R_{\odot}$ and $r= 0.3 \mbox{ AU}$ are fairly small \citep[see, e.g.,][]{tumarsch95,hollweg10}, phase mixing could affect the parallel AW power spectra, and further work is needed to investigate this possibility. The shearing of outward-propagating AWs by inward-propagating AWs is enhanced by non-WKB reflection, which makes this shearing more coherent in time \citep{velli89}. The resulting nonlinear timescale for outward-propagating AWs is roughly $r/(U+v_{\rm A})$ \citep{chandran09c}, where $U$ is the solar-wind outflow velocity. This timescale is comparable to the AW propagation time from the Sun to heliocentric distance~$r$, and hence to the parametric-decay timescale at the small-$f$ end of the $1/f$ range of~$e^+$. How this shearing modifies~$E^+(k_z)$, however, is not clear. 
For example, shearing by inward-propagating AWs may transport outward-propagating-AW energy to larger~$k_\perp = \sqrt{k_x^2 + k_y^2}$ at a rate that is independent of~$|k_z|$, in which case this shearing would reduce $E^+(k_z)$ by approximately the same factor at all~$k_z$, leaving the functional form of~$E^+(k_z)$ unchanged. \subsection{Neglect of Spatial Inhomogeneity} \label{sec:expansion} In this paper, it is assumed that the background plasma is uniform and stationary. In the solar wind, however, as an AW propagates from the low corona to 0.3~AU, the properties of the ambient plasma seen by the AW change dramatically, with $\beta$ increasing from~$\sim 10^{-2}$ to~$\sim 1$ and $\delta v_{\rm rms}/v_{\rm A}$ increasing from $\sim 0.02$ to~$\sim 0.5$ \citep[][]{bavassano00,cranmer05,chandran11}. Further work is needed to determine how this spatial inhomogeneity affects the nonlinear evolution of the parametric instability. \subsection{Approximate Treatment of Slow-Wave Damping} \label{sec:damping} A key assumption in Sections~\ref{sec:WKE} through~\ref{sec:SW} is that slow waves are strongly damped, and this damping is implemented by neglecting terms in the wave kinetic equations that are proportional to the slow-wave power spectrum~$S^\pm_k$. There are two sources of error in this approach. First, damping could modify the polarization properties of slow waves, thereby altering the wave kinetic equations. Second, even if $S^\pm_k$ is much smaller than the AW power spectrum~$A^\pm_k$, the neglected parametric-decay terms in the wave kinetic equations for AWs that are proportional to~$S^\pm_k$ could still be important, because they contain a factor of $\beta^{-1}$, which is absent in the terms that are retained. This factor arises from the fact that the fractional density fluctuation of a slow wave is $\sim \beta^{-1/2}$ times larger than the fractional magnetic-field fluctuation of an AW with equal energy. These neglected terms act to equalize the 3D AW power spectra $A^+$ and~$A^-$, and hence to equalize $E^+$ and~$E^-$. If these neglected terms were in fact important, they could invalidate the solutions presented in Section~\ref{sec:SW}, in which~$E^+ \gg E^-$. However, in situ observations indicate that $E^+ \gg E^-$ in the fast solar wind at $r= 0.3 \mbox{ AU}$ \citep{marsch90,tumarsch95}, which suggests that the neglect of these terms is reasonable. Further work is needed to investigate these issues more carefully. \section{Conclusion} \label{sec:conclusion} In this paper, weak turbulence theory is used to investigate the nonlinear evolution of the parametric instability in low-$\beta$ plasmas. The analysis starts from the wave kinetic equations describing the interactions between AWs and slow waves in weak compressible MHD turbulence. To account for the strong damping of slow waves in collisionless plasmas, terms containing the slow-wave energy density are dropped. The equations allow for all wave-vector directions, but are integrated over the wave-vector components perpendicular to the background magnetic field~$\bm{B}_0$ ($k_x$ and $k_y$), which leads to equations for the 1D power spectra $E^+$ and~$E^-$ that depend only on the parallel wavenumber~$k_z$ and time. During parametric decay in a low-$\beta$ plasma, an AW decays into a slow wave propagating in the same direction and a counter-propagating AW with a frequency slightly smaller than the frequency of the initial AW. 
The total number of AW quanta is conserved, and the reduction in AW frequencies leads to an inverse cascade of AW quanta towards smaller $\omega$ and~$k_z$. The energy of each AW quantum is $\hbar \omega$, and the decrease in $\omega$ during each decay corresponds to a decrease in the AW energy, which is compensated for by an increase in the slow-wave energy. The subsequent damping and dissipation of slow-wave energy results in plasma heating. The main results of this paper concern the parametric decay of a population of AWs propagating in one direction, say parallel to~$\bm{B}_0$, when the counter-propagating AWs start out with much smaller amplitudes. If the initial frequency spectrum~$e^+$ of the parallel-propagating AWs has a peak frequency~$f_0$ (at which $f e^+$ is maximized) and an ``infrared'' scaling $f^p$ at smaller~$f$ with $-1 < p < 1$, then $e^+$ acquires a $1/f$ scaling throughout a range of frequencies that spreads out in both directions from~$f_0$. At the same time, the anti-parallel-propagating AWs acquire a $1/f^2$ spectrum within this same frequency range. If the plasma parameters and infrared $e^+$ spectrum are chosen to match conditions in the fast solar wind at a heliocentric distance of 0.3~AU, and the AWs are allowed to evolve for a period of time that is roughly two-thirds of the AW travel time from the Sun to 0.3~AU, the resulting form of $e^+$ is similar to the form observed by the {\em Helios} spacecraft in the fast solar wind at 0.3~AU. Because the background plasma parameters are time-independent in the analysis of this paper but time-dependent in the plasma rest frame in the solar wind, it is not clear how to map the time variable in the present analysis to heliocentric distance. Nevertheless, the similarity between the spectra found in this paper and the spectra observed by {\em Helios} suggests that parametric decay plays an important role in shaping the AW spectra observed in the fast solar wind at 0.3~AU, at least for wave periods $\lesssim 1 \mbox{ hr}$. The frequency $f^\ast$ that dominates the AW energy is the frequency at which~$(fe^+ + fe^-)$ is maximized. At the beginning of the numerical calculation presented in Section~\ref{sec:SW}, $f^\ast$ is approximately $f_0 = 0.01 \mbox{ Hz}$. At $t = 8 \mbox{ hr}$ in this numerical calculation, $f^\ast$ is the smallest frequency at which~$e^+ \sim f^{-1}$ and $e^- \sim f^{-2}$, which is~$\sim 3 \times 10^{-4} \mbox{ Hz}$. This decrease in~$f^\ast$ is a consequence of the aforementioned inverse cascade, which transports AW quanta from the initial peak frequency to smaller frequencies. The inverse cascade offers a way to reconcile the observed dominance of AWs at hour-long timescales at 0.3~AU with arguments that the Sun launches most of its AW power at significantly shorter wave periods \citep{cranmer05,vanballegooijen16}. Further work is needed to relax some of the simplifying assumptions in this paper, including the low-$\beta$ approximation, the assumption of spatial homogeneity, the simplistic treatment of slow-wave damping, and the neglect of nonlinear interactions other than parametric decay. Further work is also needed to evaluate the relative contributions of parametric decay and other mechanisms to the generation of $1/f$ spectra in the solar wind.
For example, \cite{matthaeus86} argued that the $f^{-1}$ spectrum seen at $r=1 \mbox{ AU}$ at $3\times 10^{-6} \mbox{ Hz} < f < 8\times 10^{-5} \mbox{ Hz}$ is a consequence of forcing at the solar surface, and \cite{velli89} argued that the shearing of outward-propagating AWs by the inward-propagating AWs produced by non-WKB reflection causes the outward-propagating AWs to acquire an $f^{-1}$ spectrum. NASA's {\em Parker Solar Probe} (PSP) has a planned launch date in the summer of 2018 and will reach heliocentric distances less than~$10 R_{\odot} $. The FIELDS \citep{bale16} and SWEAP \citep{kasper15} instrument suites on PSP will provide the first-ever in-situ measurements of the magnetic-field, electric-field, velocity, and density fluctuations in the solar wind at $r< 0.29 \mbox{ AU}$. Although the issues mentioned in the preceding paragraph are sources of uncertainty, the results of this paper lead to the following predictions that will be tested by PSP. First, the $1/f$ range of~$e^+$ in fast-solar-wind streams at $r < 0.3 \mbox{ AU}$ and $f\gtrsim 3 \times 10^{-4} \mbox{ Hz}$ is produced in situ by parametric decay. As a consequence, the $1/f$ range of~$e^+$ in fast-solar-wind streams will be much narrower at small~$r$ than at $r=0.3 \mbox{ AU}$. As AWs propagate away from the Sun, the frequency range $(f_{\rm min}, f_{\rm max})$ in which $e^+ \sim 1/f$ spreads out in both directions from the near-Sun peak frequency (at which $f e^+$ is maximized). Thus, $f_{\rm min}$ will be larger closer to the Sun, and $f_{\rm max}$ will be~smaller. Finally, during epochs in which the local magnetic field is aligned with the relative velocity between the plasma and the spacecraft (see the discussion in Section~\ref{sec:comp}), $e^-$ will scale like~$1/f^2$ in the frequency interval $(f_{\rm min}, f_{\rm max})$. I thank Phil Isenberg for discussions about the work of \cite{cohen74} and the three anonymous reviewers for helpful comments that led to improvements in the manuscript. This work was supported in part by NASA grants NNX15AI80, NNX16AG81G, and NNX17AI18G, NASA grant NNN06AA01C to the Parker Solar Probe FIELDS Experiment, and NSF grant PHY-1500041.
\section{Introduction}\label{sec:intro} We explore scaling of the standard distributed Tensorflow~\cite{TFPub} with GRPC primitives on up to 512 Intel\textsuperscript{\textregistered}\xspace Xeon Phi\texttrademark\xspace (KNL) nodes of the Cori supercomputer~\cite{Cori} with synchronous stochastic gradient descent (SGD), and identify causes of scaling inefficiency at higher node counts. To our knowledge, this is the first exploration of distributed GRPC Tensorflow's scalability on an HPC supercomputer at such large scale with synchronous SGD. We studied scaling of two convolutional neural networks: ResNet-50~\cite{resnet50}, a state-of-the-art deep network for classification with roughly 25.5 million parameters, and HEP-CNN~\cite{hepGithub}~\cite{cite15PF}, a shallow topology with less than 1 million parameters for common scientific use cases. For ResNet-50, we achieve >80\% scaling efficiency on up to 128 workers, using 32 parameter servers (PS tasks), with a steep decline down to $\sim$23\% for 512 workers using 64 PS tasks. Our analysis of the efficiency drop points to low network bandwidth utilization due to the combined effect of three factors. (a) The heterogeneous distributed parallelization algorithm, which uses PS tasks as centralized servers for gradient averaging, is suboptimal for utilizing interconnect bandwidth. (b) Load imbalance among PS tasks hinders their efficient scaling. (c) The underlying communication primitive, GRPC, is currently inefficient on Cori's high-speed interconnect. The HEP-CNN demands less interconnect bandwidth, and shows >80\% weak scaling efficiency for up to 256 nodes with only 1 PS task. Our findings are applicable to other deep learning networks. Large networks with millions of parameters run into the issues discussed here. Shallower networks like HEP-CNN, with a relatively small number of parameters, can achieve efficient weak scaling even with a single parameter server. \section{Configuration} We used Intel\textsuperscript{\textregistered}\xspace Xeon Phi\texttrademark\xspace ``Knights Landing'' (KNL) processors from the Cori supercomputer (Phase II) with the GPFS file system. We use dummy data and do not perform disk I/O during the runs. The interconnect used here is the Cray ``Aries'' high-speed inter-node network with a Dragonfly topology and 45.0 TB/s global peak bisection bandwidth for 9,688 KNL compute nodes (Phase II). The single-node thread and affinity settings used are: KMP\_BLOCKTIME = 0, KMP\_SETTINGS = 0, KMP\_AFFINITY = ``granularity=fine,noverbose,compact,1,0'' and OMP\_NUM\_THREADS = 66, with Num\_inter\_threads = 66 and Num\_intra\_threads = 3. For the ResNet-50 experiments, we use the standard tf\_cnn\_benchmarking scripts~\cite{TFBM}. We make minimal changes to the script to enable distributed Tensorflow runs on Cori with the Slurm scheduler and in CPU-only mode. These include defining the local\_parameter\_device and device parameters as `cpu' and setting force\_gpu\_compatible and use\_nccl to false. For the HEP-CNN benchmark~\cite{hepGithub}, we update the code to use the tf.train.Supervisor API. The Tensorflow~\cite{TFCode} code was compiled from sources with MKL~\cite{mkl} optimizations on August 29. The Tensorflow version is 1.3 and the compiler used is gcc/6.3.0. \section{Experiments} Our experiments\footnote{\BenchmarkDisclaimer} do not aim to research scaling algorithms, but rather to assess the scaling issues of GRPC distributed TensorFlow in production on Cori at large scale as experienced by its users.
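For reference, the synchronous PS/worker pattern underlying these runs can be sketched with the TensorFlow~1.3 distributed API as follows; the host list, task indices, and the stand-in model are illustrative placeholders rather than the exact benchmark code:
\begin{verbatim}
# Minimal sketch of a synchronous PS/worker setup with GRPC distributed
# TensorFlow 1.3. Hosts, ports and the model are placeholders.
import tensorflow as tf

cluster = tf.train.ClusterSpec({
    "ps":     ["nid00001:2222"],
    "worker": ["nid00010:2222", "nid00011:2222"],
})
job_name, task_index = "worker", 0   # set per task, e.g. from Slurm variables
server = tf.train.Server(cluster, job_name=job_name, task_index=task_index)

if job_name == "ps":
    server.join()                    # PS tasks only host and update variables
else:
    num_workers = cluster.num_tasks("worker")
    # Variables are placed round-robin on PS tasks; ops stay on this worker.
    with tf.device(tf.train.replica_device_setter(
            worker_device="/job:worker/task:%d" % task_index,
            cluster=cluster)):
        x = tf.random_normal([128, 10])       # stand-in for the input batch
        w = tf.get_variable("w", [10, 1])
        loss = tf.reduce_mean(tf.square(tf.matmul(x, w)))
        opt = tf.train.SyncReplicasOptimizer( # synchronous SGD
            tf.train.GradientDescentOptimizer(0.1),
            replicas_to_aggregate=num_workers,
            total_num_replicas=num_workers)
        step = tf.train.get_or_create_global_step()
        train_op = opt.minimize(loss, global_step=step)

    is_chief = (task_index == 0)
    config = tf.ConfigProto(inter_op_parallelism_threads=66,
                            intra_op_parallelism_threads=3)
    with tf.train.MonitoredTrainingSession(
            master=server.target, is_chief=is_chief,
            hooks=[opt.make_session_run_hook(is_chief)],
            config=config) as sess:
        while not sess.should_stop():
            sess.run(train_op)
\end{verbatim}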
We (a) study weak scaling (fixed minibatch size per worker) with the synchronous SGD algorithm; (b) study the distributed parameter update algorithm with PS tasks and workers using GRPC communication primitives; (c) choose two networks with different communication characteristics; (d) choose a relatively large batch-size of 128 images per worker to keep the fraction of communication time low at low worker counts; (e) use dummy data to avoid any potential I/O bottlenecks. We are using Tensorflow compiled with MKL~\cite{mkl}, which is optimized\footnote{\CompilerDisclaimer} for Intel\textsuperscript{\textregistered}\xspace CPUs. The two chosen benchmarks, ResNet-50 and HEP-CNN, perform image classification tasks. \begin{figure*}[!ht] \includegraphics[width=\textwidth]{figures/paper_figure} \caption{\label{fig:exp} Scaling efficiency of GRPC distributed Tensorflow on Cori KNL nodes with 128 batch size per worker. Single KNL performance is chosen as the baseline. All PS tasks and workers execute on different nodes. Both benchmarks use input images of size (224, 224, 3) in NCHW format. (a) Efficiency for ResNet-50 vs. number of workers. (b) Efficiency for ResNet-50 vs. number of PS tasks. (c) Efficiency of HEP-CNN vs. number of workers.} \end{figure*} Figure~\ref{fig:exp}~(a) shows the efficiency of scaling ResNet-50 on up to 512 workers. The number of PS tasks is chosen for the highest per-worker efficiency, shown in Fig.~\ref{fig:exp}~(b). We find that the aggregate performance of all workers increases with an increase in the number of PS tasks, and then flattens or decreases. Figure~\ref{fig:exp}~(a) shows that higher than 80\% per-worker efficiency can be obtained with up to 128 workers supported by 32 PS nodes. However, the best achievable efficiency drops to 56\% for 256 workers and 23\% for 512 workers, and it does not improve with a higher number of PS tasks. A similar trend is observed using the 2012 ILSVRC competition dataset~\cite{ilsvrc2012}. For the HEP-CNN, 1 PS task is capable of supporting up to 256 worker nodes with >80\% efficiency (Fig.~\ref{fig:exp}~(c)). \section{Analysis} The two neural networks show different scaling characteristics. ResNet-50 is a 50-layer deep network and contains millions of parameters, whereas HEP-CNN is a lightweight network with a total of 6 layers and roughly 593K parameters. Our analysis suggests that at large node counts, scaling of the ResNet-50 network becomes bottlenecked by the achievable interconnect bandwidth. This is due to various inefficiencies in the current GRPC-based distributed parameter update algorithm, which result in suboptimal use of the interconnect bandwidth. The parameter update algorithm distributes trainable variables of each layer to centralized servers. Each server is responsible for combining updates of a set of variables from all workers. This scheme has two problems. (1) Due to the synchronous nature of the training process, this can introduce hotspots in the interconnect when all workers send updates to the same server at roughly the same time. The amount of traffic to each server increases linearly with the number of workers. At some point, the interconnect bandwidth of the PS tasks becomes the bottleneck of scaling. (2) There is load imbalance among the PS tasks, as the number of useful PS tasks is limited to the number of disjoint parameter sets in the current benchmarks. One can reduce the interconnect bandwidth bottleneck by allocating more nodes to PS tasks.
In our experiments, we need to dedicate additional nodes to PS tasks, as many as 1/4 of the number of worker nodes, to achieve >80\% per-worker scaling efficiency for ResNet-50 (Fig.~\ref{fig:exp}~(b)). However, this reduces per-node efficiency as shown in Fig.~\ref{fig:exp}~(a). HEP-CNN puts less pressure on the interconnect bandwidth and we do not see significant performance improvement with an increase in PS tasks. Also, load imbalance among PS tasks does not allow efficient weak scaling to continue beyond 128 workers for the ResNet-50 network, even with 32 or more PS tasks. In ResNet-50, 99\% of the $\sim$25.5M parameters are contained in 54 two- or higher-dimensional tensors. Tensors are assigned to PS tasks using a greedy load balancing strategy based on their sizes. Each tensor variable is assigned to a single PS task only. Given this type of allocation strategy, scaling PS tasks beyond 54 results in heavy load imbalance among the PS nodes. Our experiments clearly show (Fig.~\ref{fig:exp}~b) that increasing PS tasks from 32 to 64 results in insignificant performance improvement. The current GRPC protocol limits the maximum amount of bandwidth that can be utilized by each node. Our crude estimates show roughly a 5--6x gap in the communication time for ResNet-50 with 1 PS task and 16 workers compared to the achievable peak. Our conjecture is that improving the communication protocol (such as using MPI) would improve overall scaling. \section{Outlook} Our analysis indicates suboptimal use of high-speed interconnect bandwidth by the current distributed algorithm and implementation of Tensorflow in production at Cori, which uses the GRPC protocol. There are more optimal ways of implementing the all-reduce operation, such as tree reduction or the ring method~\cite{mpichCollective}~\cite{BiaduRing}, which have lower theoretical complexity than a linear dependence on the number of nodes. In future work, we plan to explore those and MPI communication primitives. \section*{Acknowledgment} We thank Jeongnim Kim, Bhavani Subramanian, Mahmoud Abuzaina, Lawrence Meadows, Elmoustapha Ould-ahmed-vall and AG Ramesh for their help in discussions and setup. This work is a part of the Intel/NERSC Big Data Center collaboration. \bibliographystyle{IEEEtran}
\section{Introduction} The Higgs mechanism of Electro-Weak Symmetry Breaking (EWSB) seems to be the one chosen by Nature to assign mass to fermions and weak gauge bosons. In its minimal realisation, through a single Higgs {\it doublet}, it implies the existence of a single Higgs boson, as discovered in 2012 at the Large Hadron Collider (LHC). Indeed, such a minimal Standard Model (SM) is compatible with a myriad of experimental results. However, well-known unanswered questions such as the origin of flavour, with three families of quarks and leptons, as well as Dark Matter (DM), suggest that some extension beyond the SM (BSM) is necessary. Given the existence of three families of quarks and leptons, it is not so far-fetched to imagine that there might also be three families of Higgs doublets, where, as for the fermions, the replication is not prescribed by the SM gauge group. Indeed, it is possible that the three families of quarks and leptons could be described by the same symmetries that describe the three Higgs doublets. In such scenarios, this generation/family symmetry could be spontaneously broken along with the EW symmetry, although some remnant subgroup could survive, thereby stabilising a possible scalar DM candidate. For certain symmetries, it is even possible to find a Vacuum Expectation Value (VEV) alignment that respects the original symmetry of the potential, which will then be responsible for the stabilisation of the DM candidate. In such 3-Higgs-Doublet Models (3HDMs), amongst the various symmetries which can govern them \cite{Ivanov:2011ae}--\cite{Keus:2013hya}, a simple possibility is a single $Z_2$, referred to here as Higgs parity, which can prevent Flavour Changing Neutral Currents (FCNCs) and possible charge-breaking vacua. In the present paper, we shall focus on the phenomenology of one of these 3HDMs, namely, the one in which the third scalar doublet is even and the first and second inert\footnote{A doublet is termed ``inert'', or at times ``dark'' or simply ``scalar'', since it does not develop a VEV, nor does it couple to fermions, so as to distinguish it from one which develops a VEV, i.e., an ``active'' Higgs doublet.} doublets are odd under the $Z_2$ parity. We assume a vacuum alignment in the 3HDM space of $(0,0,v)$ that preserves the $Z_2$ symmetry (i.e., the Higgs parity). Thus we are led to consider a model with two inert doublets plus one Higgs doublet (I(2+1)HDM). This model may be regarded as an extension of the model with one inert doublet plus one Higgs doublet (I(1+1)HDM)\footnote{This model is known in the literature as the Inert Doublet Model (IDM); herein, we refer to it as the I(1+1)HDM, thus clarifying the number of inert and active Higgs doublets.} proposed in 1976 \cite{Deshpande:1977rw} and studied extensively for the last few years (see, e.g., \cite{Ma:2006km}--\cite{LopezHonorez:2006gr}), by the addition of an extra inert scalar doublet. The lightest neutral scalar or pseudoscalar field amongst the two inert doublets, which are odd under the $Z_2$ parity, provides a viable DM candidate which is stabilised by the conserved $Z_2$ symmetry, displaying phenomenological characteristics notably different from the candidate emerging from the I(1+1)HDM case \cite{Grzadkowski:2010au}, both in the CP-Conserving (CPC) and CP-Violating (CPV) cases, as noted in Refs.~\cite{Keus:2014jha}--\cite{Cordero-Cid:2016krd}.
Within this framework, we study some new SM-like Higgs decay channels offered by the extra inert fields, with the intent of isolating those which would enable one to distinguish between the I(2+1)HDM and I(1+1)HDM, assuming CP conservation throughout. The analysis of the CPV I(2+1)HDM is postponed to a future publication. In particular, we shall focus on the loop-induced decay of the next-to-lightest scalar, $H_2 \to H_1 f \bar f$ ($f=u,d,c,s,b,e,\mu,\tau$), mediated by loops involving both dark CP-odd and charged scalars. This decay chain occurs in the I(2+1)HDM but not in the I(1+1)HDM, so it enables the two models to be distinguished. In practice, the loop decay can be observed in the cascade decay of the SM-like Higgs boson into two DM particles and a fermion-antifermion pair, $h\to H_1 H_2\to H_1 H_1 f \bar f$, wherein the $h$ state is produced from gluon-gluon Fusion (ggF) (i.e., $gg\to h$) or Vector Boson Fusion (VBF) (i.e., $ q q^{(')}\to q q^{(')} h$). Notice, however, that this mode competes with the tree-level channel $q\bar q\to H_1H_1Z^*\to H_1H_1f \bar f$, also present in the I(1+1)HDM. The resulting detector signature, $\cancel{E}_{T}\; f \bar f$, with the $ f \bar f$ invariant mass well below the $Z$ mass, would indicate the presence of such a loop decay, induced by a small mass difference between $H_2$ and $H_1$, which would in turn identify a region of I(2+1)HDM parameter space largely inaccessible to the tree-level process. Indeed, we will show that such a distinctive signature can possibly be extracted at the LHC during Run 2 and/or Run 3. In fact, amongst the possible $f\bar f$ cases, a particularly spectacular one would be the one in which an electron-positron pair is produced, eventually yielding an isolated mono-shower signal of QED nature. This is because the dominant component of the loop signal (over the box topologies) is the $H_2\to H_1\gamma^*$ one, where the photon is necessarily off-shell because of spin conservation, yet eventually produces the $e^+e^-$ pair in configurations where the fermions are soft and/or collinear. In assessing the scope of the LHC in accessing this phenomenology, we shall consider all available theoretical \cite{Keus:2014isa,Moretti:2015cwa} and experimental constraints affecting the I(2+1)HDM parameter space, so as to eventually define some benchmark scenarios which can be tested at the CERN machine. The layout of the paper is as follows. In the next section we describe the CPC I(2+1)HDM. In Sect.~\ref{cascades}, we introduce and discuss the aforementioned loop cascade decays. In Sect.~\ref{sec-loop} we perform all necessary calculations, both at tree and loop level, including analytic formulae for the $H_2 \to H_1 f\bar f$ case. In Sect.~\ref{results}, we present our results. We then conclude in Sect.~\ref{summa}. Finally, two appendices collect some key formulae. \section{The CP conserving I(2+1)HDM} \label{3HDM} \subsection{The potential with a $Z_2$ symmetry} It is known \cite{Ivanov:2011ae} that, in a model with several Higgs doublets, the scalar potential which is symmetric under a group $G$ of phase rotations can be written as the sum of $V_0$, the phase-invariant part, and $V_G$, a collection of extra terms ensuring the symmetry under $G$. Here, we study a 3HDM symmetric under a $Z_2$ symmetry with generator \begin{equation} \label{generator} g= \mathrm{diag}\left(-1, -1, +1 \right), \end{equation} where the doublets, $\phi_1,\phi_2$ and $\phi_3$, have odd, odd and even $Z_2$ quantum numbers, respectively.
Note that this $Z_2$ generator forbids Flavour Changing Neutral Currents (FCNCs) and is respected by the vacuum alignment $(0,0,v)$, since the fermions, which couple only to the active scalar doublet $\phi_3$, are assigned an even $Z_2$ charge. The potential symmetric under the $Z_2$ symmetry in (\ref{generator}) can be written as \begin{eqnarray} \label{V-3HDM} V &=& V_0 + V_{Z_2}, \\ V_0 &=& - \mu^2_{1} (\phi_1^\dagger \phi_1) -\mu^2_2 (\phi_2^\dagger \phi_2) - \mu^2_3(\phi_3^\dagger \phi_3) \nonumber\\ &&+ \lambda_{11} (\phi_1^\dagger \phi_1)^2+ \lambda_{22} (\phi_2^\dagger \phi_2)^2 + \lambda_{33} (\phi_3^\dagger \phi_3)^2 \\ && + \lambda_{12} (\phi_1^\dagger \phi_1)(\phi_2^\dagger \phi_2) + \lambda_{23} (\phi_2^\dagger \phi_2)(\phi_3^\dagger \phi_3) + \lambda_{31} (\phi_3^\dagger \phi_3)(\phi_1^\dagger \phi_1) \nonumber\\ && + \lambda'_{12} (\phi_1^\dagger \phi_2)(\phi_2^\dagger \phi_1) + \lambda'_{23} (\phi_2^\dagger \phi_3)(\phi_3^\dagger \phi_2) + \lambda'_{31} (\phi_3^\dagger \phi_1)(\phi_1^\dagger \phi_3), \nonumber\\ V_{Z_2} &=& -\mu^2_{12}(\phi_1^\dagger\phi_2)+ \lambda_{1}(\phi_1^\dagger\phi_2)^2 + \lambda_2(\phi_2^\dagger\phi_3)^2 + \lambda_3(\phi_3^\dagger\phi_1)^2 + {\rm h.c.} \end{eqnarray} This potential has only a $Z_2$ symmetry and no larger accidental symmetry\footnote{Note that adding extra $Z_2$-respecting terms, $(\phi_3^\dagger\phi_1)(\phi_2^\dagger\phi_3)$, $(\phi_1^\dagger\phi_2)(\phi_3^\dagger\phi_3)$, $(\phi_1^\dagger\phi_2)(\phi_1^\dagger\phi_1)$, $(\phi_1^\dagger\phi_2)(\phi_2^\dagger\phi_2)$, does not change the phenomenology of the model. The coefficients of these terms, therefore, have been set to zero for simplicity. }. We shall not consider CP violation in this paper; therefore, we require all parameters of the potential to be real. The full Lagrangian of the model is as follows: \begin{equation} { \cal L}={ \cal L}^{\rm SM}_{ gf } +{ \cal L}_{\rm scalar} + {\cal L}_Y(\psi_f,\phi_{3}) \,, \quad { \cal L}_{\rm scalar}=T-V\, , \label{lagrbas} \end{equation} where ${ \cal L}^{\rm SM}_{gf}$ is the boson-fermion interaction as in the SM, ${ \cal L}_{\rm scalar}$ describes the scalar sector of the model and ${\cal L}_Y(\psi_f,\phi_{3})$ describes the Yukawa interaction, with $\phi_3$ the only active doublet to play the role of the SM-Higgs doublet. The kinetic term in ${ \cal L}_{\rm scalar}$ has the standard form of $ T = \sum_i \left(D_{\mu} \phi_{i}\right)^{\dagger} \left( D^{\mu} \phi_{i} \right)$ with $D^\mu$ being the covariant derivative for an $SU(2)$ doublet. \subsection{Mass eigenstates} \label{section-masses} The minimum of the potential is realised for the following point: \begin{equation} \phi_1= \doublet{$\begin{scriptsize}$ \phi^+_1 $\end{scriptsize}$}{\frac{H^0_1+iA^0_1}{\sqrt{2}}},\quad \phi_2= \doublet{$\begin{scriptsize}$ \phi^+_2 $\end{scriptsize}$}{\frac{H^0_2+iA^0_2}{\sqrt{2}}}, \quad \phi_3= \doublet{$\begin{scriptsize}$ G^+ $\end{scriptsize}$}{\frac{v+h+iG^0}{\sqrt{2}}}, \label{explicit-fields} \end{equation} with $ v^2= \frac{\mu^2_3}{\lambda_{33}} . $ \vspace{5mm} \noindent The mass spectrum of the scalar particles is as follows. \begin{itemize} \item \textbf{The fields from the active doublet}\\ The third doublet, $\phi_3$, plays the role of the SM-Higgs doublet; hence, the fields $G^0,G^\pm$ are the would-be Goldstone bosons and $h$ the SM-like Higgs boson with mass-squared \begin{equation} m^2_{h}= 2\mu_3^2, \end{equation} which has been set to $(125~{\ensuremath\rm \,GeV})^2$ in our numerical analysis.
\item \textbf{The CP-even neutral inert fields}\\ The pair of inert neutral scalar gauge eigenstates, $H^0_{1},H^0_{2}$, are rotated by \begin{equation} R_{\theta_h}= \left( \begin{array}{cc} \cos \theta_h & \sin \theta_h \\ -\sin \theta_h & \cos \theta_h\\ \end{array} \right), \qquad \mbox{with} \quad \tan 2\theta_h = \frac{2\mu^2_{12}}{\mu^2_1 -\Lambda_{\phi_1} - \mu^2_2 + \Lambda_{\phi_2}}, \label{diagH} \end{equation} into the mass eigenstates, $H_1, H_2$, with squared masses \begin{eqnarray} && m^2_{H_1}= (-\mu^2_1 + \Lambda_{\phi_1})\cos^2\theta_h + (- \mu^2_2 + \Lambda_{\phi_2}) \sin^2\theta_h -2\mu^2_{12} \sin\theta_h \cos\theta_h, \nonumber\\ && m^2_{H_2}= (-\mu^2_1 + \Lambda_{\phi_1})\sin^2\theta_h + (- \mu^2_2 + \Lambda_{\phi_2}) \cos^2\theta_h + 2\mu^2_{12} \sin\theta_h \cos\theta_h, \nonumber\\ && \mbox{where} \quad \Lambda_{\phi_1}= \frac{1}{2}(\lambda_{31} + \lambda'_{31} + 2\lambda_3)v^2, \quad \Lambda_{\phi_2}= \frac{1}{2}(\lambda_{23} + \lambda'_{23} +2\lambda_2 )v^2 . \qquad \qquad \end{eqnarray} \item \textbf{The charged inert fields}\\ The pair of inert charged gauge eigenstates, $\phi^\pm_{1}, \phi^\pm_{2}$, are rotated by \begin{equation} R_{\theta_c}= \left( \begin{array}{cc} \cos \theta_c & \sin \theta_c \\ -\sin \theta_c & \cos \theta_c\\ \end{array} \right), \qquad \mbox{with} \quad \tan 2\theta_c = \frac{2\mu^2_{12}}{\mu^2_1 - \Lambda'_{\phi_1} - \mu^2_2 + \Lambda'_{\phi_2}}, \nonumber \end{equation} into the mass eigenstates, $H^\pm_1, H^\pm_2$, with squared masses \begin{eqnarray} && m^2_{H^\pm_1}= (-\mu^2_1 + \Lambda'_{\phi_1})\cos^2\theta_c + (- \mu^2_2 + \Lambda'_{\phi_2}) \sin^2\theta_c -2\mu^2_{12} \sin\theta_c \cos\theta_c, \nonumber\\ && m^2_{H^\pm_2}= (-\mu^2_1 + \Lambda'_{\phi_1})\sin^2\theta_c + (- \mu^2_2 + \Lambda'_{\phi_2}) \cos^2\theta_c + 2\mu^2_{12} \sin\theta_c \cos\theta_c, \nonumber\\ && \mbox{where} \quad \Lambda'_{\phi_1}= \frac{1}{2}(\lambda_{31})v^2 , \quad \Lambda'_{\phi_2}= \frac{1}{2}(\lambda_{23} )v^2. \qquad \qquad \qquad \qquad \qquad \qquad \qquad \quad \end{eqnarray} \item \textbf{The CP-odd neutral inert fields}\\ The pair of inert pseudo-scalar gauge eigenstates, $A^0_{1}, A^0_{2}$, are rotated by \begin{equation} R_{\theta_a}= \left( \begin{array}{cc} \cos \theta_a & \sin \theta_a \\ -\sin \theta_a & \cos \theta_a\\ \end{array} \right), \qquad \mbox{with} \quad \tan 2\theta_a = \frac{2\mu^2_{12}}{\mu^2_1 - \Lambda''_{\phi_1} - \mu^2_2 + \Lambda''_{\phi_2}},\nonumber \end{equation} into the mass eigenstates, $A_1, A_2$, with squared masses \begin{eqnarray} && m^2_{A_1}= (-\mu^2_1 + \Lambda''_{\phi_1})\cos^2\theta_a + (- \mu^2_2 + \Lambda''_{\phi_2}) \sin^2\theta_a -2\mu^2_{12} \sin\theta_a \cos\theta_a, \nonumber\\ && m^2_{A_2}= (-\mu^2_1 + \Lambda''_{\phi_1})\sin^2\theta_a + (- \mu^2_2 + \Lambda''_{\phi_2}) \cos^2\theta_a + 2\mu^2_{12} \sin\theta_a \cos\theta_a, \nonumber\\ && \mbox{where} \quad \Lambda''_{\phi_1}= \frac{1}{2}(\lambda_{31} + \lambda'_{31} - 2\lambda_3)v^2 , \quad \Lambda''_{\phi_2}= \frac{1}{2}(\lambda_{23} + \lambda'_{23} -2\lambda_2 )v^2. \qquad \qquad \end{eqnarray} \end{itemize} \noindent (The model is CP conserving, therefore there is no mixing between CP-even and CP-odd states in the inert sector.) We can separate the inert particles into two families, or generations, with the second generation being heavier than the respective fields from the first generation. 
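As a quick numerical cross-check of the mass formulae above, one can verify that the closed-form masses and mixing angle coincide with a direct diagonalisation of the CP-even inert mass matrix; the minimal Python sketch below is ours and uses arbitrary placeholder inputs (in GeV$^2$), not model benchmarks. The same check applies to the charged and CP-odd sectors upon replacing $\Lambda_{\phi_i}$ with $\Lambda'_{\phi_i}$ and $\Lambda''_{\phi_i}$, respectively.
\begin{verbatim}
# Cross-check of the CP-even inert mass formulae (our sketch).
# All inputs are arbitrary placeholders in GeV^2, not benchmark values.
import numpy as np

mu1sq, mu2sq, mu12sq = 9.0e3, 4.0e3, 1.5e3
Lam1, Lam2 = 1.2e4, 8.0e3            # Lambda_{phi_1}, Lambda_{phi_2}

# Mass-squared matrix in the (H^0_1, H^0_2) gauge basis
M2 = np.array([[-mu1sq + Lam1, -mu12sq],
               [-mu12sq,       -mu2sq + Lam2]])

# Closed-form mixing angle and masses quoted in the text
th = 0.5*np.arctan2(2*mu12sq, mu1sq - Lam1 - mu2sq + Lam2)
c, s = np.cos(th), np.sin(th)
m1sq = (-mu1sq + Lam1)*c**2 + (-mu2sq + Lam2)*s**2 - 2*mu12sq*s*c
m2sq = (-mu1sq + Lam1)*s**2 + (-mu2sq + Lam2)*c**2 + 2*mu12sq*s*c

# They must coincide with a direct eigendecomposition
assert np.allclose(np.sort([m1sq, m2sq]), np.linalg.eigvalsh(M2))
print("m_H1 = %.1f GeV, m_H2 = %.1f GeV"
      % (np.sqrt(min(m1sq, m2sq)), np.sqrt(max(m1sq, m2sq))))
\end{verbatim}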
We will refer to the set of $(H_1,A_1,$ $H^\pm_1)$ as the fields from the first generation and to $(H_2,A_2,H^\pm_2)$ as the fields from the second generation. Each of the four neutral particles could, in principle, be the DM candidate, provided it is lighter than the other neutral states. In what follows, without loss of generality, we assume the CP-even\footnote{Other neutral scalars could also play the role of DM candidate, e.g., $A_1$ would be the lightest particle after transformation $\lambda_{2,3} \to - \lambda_{2,3}$. We could also choose $H_2$ to be the lightest particle with $\mu_{12}^2 \to -\mu_{12}^2$, or $A_2$ if both $\lambda_{2,3} \to - \lambda_{2,3}$ and $\mu_{12}^2 \to -\mu_{12}^2$. Hence, the results of our analysis are also applicable to all neutral scalars following suitable sign changes.} neutral particle $H_1$ from the first generation to be lighter than all other inert particles, that is: \begin{equation} m_{H_1} < m_{H_2}, m_{A_{1,2}},m_{H^\pm_{1,2}}. \end{equation} In the remainder of the paper, the notations $H_1$ and DM particle will be used interchangeably, as will their properties, e.g., $m_{H_1}$ and $m_{\rm DM}$. \subsection{Simplified couplings in the I(2+1)HDM}\label{simplified} The large number of free parameters in the I(2+1)HDM makes it impractical to analyse the model in full generality. We therefore focus on a simplified case where the parameters related to the first inert doublet are $n$ times those related to the second doublet \cite{Keus:2014jha}: \begin{equation} \label{lambda-assumption} \mu^2_1 = n \mu^2_2, \quad \lambda_3 = n \lambda_2, \quad \lambda_{31} = n \lambda_{23}, \quad \lambda_{31}' = n \lambda_{23}', \end{equation} resulting in \begin{equation} \Lambda_{\phi_1} = n \Lambda_{\phi_2}, \quad \Lambda'_{\phi_1} = n\Lambda'_{\phi_2}, \quad \Lambda''_{\phi_1} = n \Lambda''_{\phi_2}, \end{equation} without introducing any new symmetry to the potential. The motivation for this simplified scenario is that in the $n=0$ limit the model reduces to the well-known I(1+1)HDM. We assume no specific relation among the other parameters of the potential. It is important to note that the remaining quartic parameters, $(\lambda_{1,11,22,12}, \lambda'_{12})$, do not influence the DM phenomenology discussed here; their values have thus been fixed in agreement with the constraints discussed in Sect. \ref{constraints} and compliant with the results on unitarity obtained in \cite{Moretti:2015cwa}. With this simplification, it is possible to obtain analytical formulae for the parameters of the potential in terms of chosen physical parameters. In this study, we choose the set $(m_{H_1}, m_{H_2}, g_{H_1 H_1 h}, \theta_a, \theta_c, n)$ as the input parameters, where $g_{H_1 H_1 h}$ is the Higgs-DM coupling. The relevant parameters of the model are then given by: \begin{eqnarray} && \mu_2^2 = \Lambda_{\phi_2} - \frac{m_{H_1}^2+m_{H_2}^2}{1+n}, \\ && \mu_{12}^2 = \frac{1}{2} \sqrt{(m_{H_1}^2-m_{H_2}^2)^2 - (-1+n)^2 (\Lambda_{\phi_2} - \mu_2^2)^2 }, \\ && \lambda_2 = \frac{1}{2v^2} (\Lambda_{\phi_2} - \Lambda''_{\phi_2} ),\\ && \lambda_{23} = \frac{2}{v^2} \Lambda'_{\phi_2}, \\ && \lambda'_{23} = \frac{1}{v^2} (\Lambda_{\phi_2} + \Lambda''_{\phi_2} - 2 \Lambda'_{\phi_2} ), \\ && \Lambda_{\phi_2} = \frac{v^2 g_{H_1 H_1 h}}{4(\sin^2 \theta_h + n \cos^2 \theta_h)},\\ && \Lambda'_{\phi_2} = \frac{2 \mu_{12}^2}{(1-n) \tan 2 \theta_c}, \\ && \Lambda''_{\phi_2} = \frac{2 \mu_{12}^2}{(1-n) \tan 2 \theta_a }.
\end{eqnarray} The mixing angle in the CP-even sector, $\theta_h$, is given by the masses of $H_1$ and $H_2$ and the dark hierarchy parameter $n$: \begin{equation} \tan^2 \theta_h = \frac{m_{H_1}^2 - n m_{H_2}^2}{n m_{H_1}^2 - m_{H_2}^2}. \end{equation} Notice that we recover the $n=1$ limit of dark democracy discussed in \cite{Keus:2014jha, Keus:2015xya,Cordero-Cid:2016krd}, with $\theta_h = \pi/4$. For $\tan^2 \theta_h$ to be well defined, the following two relations need to be satisfied: $m_{H_1}^2 < n m_{H_2}^2$ and $m_{H_1}^2 < \frac{1}{n} m_{H_2}^2$. Without loss of generality, we can limit ourselves to $n<1$, which corresponds to $\tan2\theta_h>0$ for $\theta_h < \pi/4$. Reaching other values of $n$ is a matter of reparametrisation of the potential. \subsection{Theoretical and experimental constraints}\label{constraints} As discussed in \cite{Keus:2014jha, Keus:2015xya,Cordero-Cid:2016krd}, the I(2+1)HDM is subject to various theoretical and experimental constraints. In \cite{Keus:2014jha}, we have studied in detail the theoretical constraints, namely the positivity of the squared masses, the boundedness of the potential and the positive-definiteness of the Hessian. Our parameter choice is also compliant with the EW Precision Test (EWPT) bounds \cite{Keus:2014jha, Keus:2015xya}. These limits have been taken into account in the present paper. The second set of experimental constraints comes from the relic abundance of DM as well as dedicated direct and indirect searches for DM particles. The Planck experiment provides a measurement of the DM relic density \cite{Ade:2015xua}: \begin{equation} \Omega_{\rm DM}h^2= 0.1197 \pm 0.0022. \label{omegaplanck} \end{equation} In this work, we do not focus on the details of DM annihilation (for detailed discussions see Refs. \cite{Keus:2014jha, Keus:2015xya,Cordero-Cid:2016krd}). However, we require that the DM candidate of the I(2+1)HDM is in agreement with the upper limit from Planck (\ref{omegaplanck}) for all considered points. If relation (\ref{omegaplanck}) is exactly satisfied, then $H_1$ provides 100\% of the DM in the Universe. This is the case in benchmark scenario A50, discussed in later sections \cite{Keus:2014jha, Keus:2015xya}. We also consider cases where $H_1$ gives a subdominant contribution and the missing relic density is to be provided by an extension of the model. This usually happens when the mass splittings between $H_1$ and the other inert particles are small, i.e., in the forthcoming benchmarks I5 and I10. In these two cases, the co-annihilation channels $H_1 A_i \to Z^{(*)} \to f\bar f$ are efficient and reduce the DM relic density to values below the Planck value, even for very small values of the Higgs-DM coupling. Benchmark scenario A50 (for $53 \textrm{ GeV } \lesssim m_{H_1} \lesssim 73 \textrm{ GeV}$) is in agreement with the most recent direct \cite{Aprile:2017iyp} and indirect \cite{Ahnen:2016qkx} detection limits. However, for completeness, we show a larger mass region ($40 \textrm{ GeV } \lesssim m_{H_1} \lesssim 90 \textrm{ GeV}$) in our cross section plots, and highlight the surviving regions. For benchmarks I5 and I10, which -- as mentioned -- correspond to a relic density below the Planck value, detection limits should be rescaled, leading to the (relic density dependent) limit of: \begin{equation} \sigma(m_{H_1}) < \sigma^{\rm LUX}(m_{H_1}) \frac{\Omega^{\rm Planck}}{\Omega_{H_1}}. \end{equation} We ensure this limit is satisfied for all studied points.
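Before moving on, we note that the inversion of Sect.~\ref{simplified} is straightforward to script. The following minimal Python sketch (ours; all numerical inputs are illustrative placeholders rather than the benchmark values used below) reconstructs the potential parameters from the input set $(m_{H_1}, m_{H_2}, g_{H_1 H_1 h}, \theta_a, \theta_c, n)$ and closes the loop by recomputing the CP-even masses, assuming $v\simeq 246$ GeV.
\begin{verbatim}
# Reconstruction of the potential parameters from the physical inputs
# of Sect. "Simplified couplings" (our sketch; placeholder inputs).
import numpy as np

v = 246.0                                # EW vev [GeV]
mH1, mH2 = 55.0, 62.0                    # dark scalar masses [GeV]
g_h11 = 0.01                             # Higgs-DM coupling g_{H1 H1 h}
th_a, th_c, n = 0.40, 0.35, 0.9          # dark mixing angles, hierarchy

# tan^2(theta_h) = (mH1^2 - n mH2^2) / (n mH1^2 - mH2^2)
tan2 = (mH1**2 - n*mH2**2) / (n*mH1**2 - mH2**2)
assert tan2 > 0                          # needs mH1^2 < n mH2^2 for n < 1
th_h = np.arctan(np.sqrt(tan2))

Lam2   = v**2*g_h11 / (4*(np.sin(th_h)**2 + n*np.cos(th_h)**2))
mu2sq  = Lam2 - (mH1**2 + mH2**2)/(1 + n)
mu12sq = 0.5*np.sqrt((mH1**2 - mH2**2)**2 - (1 - n)**2*(Lam2 - mu2sq)**2)
Lam2p  = 2*mu12sq/((1 - n)*np.tan(2*th_c))     # Lambda'_{phi_2}
Lam2pp = 2*mu12sq/((1 - n)*np.tan(2*th_a))     # Lambda''_{phi_2}
lam2   = (Lam2 - Lam2pp)/(2*v**2)
lam23  = 2*Lam2p/v**2
lam23p = (Lam2 + Lam2pp - 2*Lam2p)/v**2

# Closure test: the implied CP-even mass matrix must return the inputs
S = Lam2 - mu2sq
M2 = np.array([[n*S, -mu12sq], [-mu12sq, S]])
assert np.allclose(np.linalg.eigvalsh(M2), [mH1**2, mH2**2])
\end{verbatim}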
The detailed analysis of astrophysical signals in benchmarks I5 and I10 is beyond the scope of this paper. However, for all masses in these benchmarks, the relic density lies within 10\% -- 90\% of the observed value. The missing relic density can easily be made up by the late decays of an additional particle. A natural candidate for such a completion of the model would be a heavy right-handed neutrino, in the same vein as the scotogenic model \cite{Deshpande:1977rw}, which would decay into DM after the thermal freeze-out of the DM and bring the under-abundant DM relic back into the observed range. Finally, we take into account collider data from LEP and the LHC (including the Higgs total decay width \cite{Khachatryan:2016ctc}, Higgs invisible decays \cite{Khachatryan:2016vau}, direct searches for additional scalars and the Branching Ratio (BR) for $h\,\rightarrow\,\gamma\,\gamma$ \cite{Khachatryan:2016vau}), as discussed in \cite{Keus:2014jha,Keus:2015xya,Cordero-Cid:2016krd}. In all cases, the inert masses are large enough that the on-shell decays $Z \to H_{1,2} A_{1,2}$ and $W^\pm\to H^\pm_{1,2} H_{1,2}/A_{1,2}$ are kinematically forbidden, so the decay widths of the weak gauge bosons are unaffected. If the Higgs-DM coupling is small enough, i.e., $g_{h H_1 H_1} \lesssim 0.02$, then both the Higgs invisible decay BR and the Higgs total decay width are in agreement with the measured values. For benchmark scenario A50, exclusions obtained from applying the LHC constraints are similar to those from dedicated DM experiments, excluding $m_{H_1} \lesssim 53 \textrm{ GeV}$ for a large Higgs-DM coupling. Benchmarks I5 and I10 are in agreement with these constraints for all studied masses. Charged scalars are in all cases significantly heavier and shorter-lived than the neutral particles, therefore bounds from long-lived charged particle searches do not apply here. In all benchmarks, in particular I5 and I10, where all mass splittings are of the order of a few GeV, all heavier inert particles decay inside the detector. \section{Inert cascade decays} \label{cascades} In the model studied here, there is one absolutely stable particle, $H_1$, as its decays into SM particles are forbidden by the conservation of the $Z_2$ symmetry. By construction, all other inert particles, which are also odd under the $Z_2$ symmetry, are heavier than $H_1$ and hence unstable. The decays of these heavier inert particles may provide striking experimental signals for the I(2+1)HDM. Access to the inert sector can be obtained through the SM-like Higgs particle, $h$, and/or the massive gauge bosons, $Z$ and $W^\pm$, with the heavy inert particle subsequently decaying into $H_1$ and on- or off-shell $W^\pm/Z/\gamma$ states. In fact, in this model, $h$ can decay into various pairs of inert particles, leading to different signatures. We will consider here $h\to H_2 H_1$ decays. In such a case, as intimated, we will consider Higgs production at the LHC through ggF and VBF. The interesting production and decay patterns may occur both at tree- and loop-level. In the former case, the colliding protons produce an off-shell gauge boson $Z^*$, which in turn yields an $H_1 A_i$ pair ($i=1,2$), followed by the decay of $A_i$ into $H_1 Z^{(*)}\to H_1 f \bar f$. In the latter case, one produces a $h$ state decaying into $H_1 H_2\to H_1H_1f \bar f$, via the loop decay $H_2 \to H_1 f \bar f$.
In both cases, one ends up with a $\cancel{E}_{T} f \bar f$ signature (possibly accompanied by a resolved forward and/or backward jet in the case of VBF and an unresolved one in ggF), i.e., a di-lepton/di-jet pair, which would generally be captured by the detectors, alongside missing transverse energy, $\cancel{E}_{T}$, induced by the DM pair. Here, $f=u,d,c,s,b,e,\mu,\tau$. For the cases in which the mass difference $m_{A_i}-m_{H_1}$ or $m_{H_2}-m_{H_1}$ is small enough (i.e., $\approx 2m_e$), only the electron-positron signature would emerge, thus leading to the discussed Electro-Magnetic (EM) shower. It is important here to notice that the loop decay chain initiated by $h\to H_1 H_2$ is specific to the I(2+1)HDM case, while the one induced by $A_1\to H_1 Z^{(*)}$ may also pertain to the I(1+1)HDM case. (In fact, neither $H_2$ nor $A_2$ exists in the I(1+1)HDM, unlike $A_1$.) Moreover, when the decays are non-resonant, there is no way of separating the two $A_i$ ($i=1,2$) patterns. In contrast, the extraction and observation of the decay $h\to H_1 H_2$ (followed by the loop decay $H_2 \to H_1 f \bar f$) would represent clear evidence of the I(2+1)HDM. In the upcoming subsections, we will discuss the aforementioned tree- and loop-level decay modes of inert states into the DM candidate in all generality, then we will dwell on the features of the $\cancel{E}_{T} f \bar f$ signature. \subsection{Tree-level decays of heavy inert states} CP-odd and charged scalars can decay at tree-level into a lighter inert particle in association with a real (virtual) gauge boson $W^{\pm(*)}$ or $Z^{(*)}$. Assuming the mass ordering $m_{H_{1,2}} < m_{A_{1,2}} < m_{H^\pm_{1,2}}$, the following tree-level decays appear (only diagrams with $H_1$ in the final state are shown in Fig. \ref{tree-decays}, diagrams (A) and (B)): \begin{equation} A_i \to Z^{(*)} H_j, \quad H^\pm_i \to W^{\pm(*)} H_j, \quad H^\pm_i \to W^{\pm(*)} A_j, \qquad (i,j=1,2). \end{equation} The fermionic decays (splittings) of real (virtual) massive gauge bosons result in $f \bar f$ pairs for $Z^{(*)}$ and $f \bar f'$ pairs for $W^{\pm(*)}$. The above processes are governed by the gauge couplings and therefore lead to small decay widths of the heavy inert particles, of order $10^{-2}-10^{-4}$ GeV. These widths grow, however, as the mass splitting between $H_1$ and the other particles becomes large. Note that, even if all mass splittings are relatively small (of the order of 1 GeV), all heavier states still decay inside the detector. The heavy CP-even scalar, $H_2$, cannot couple to $H_1$ through $Z^{(*)}$, since CP symmetry is conserved in our model. It can decay into the $H_1$ particle plus a Higgs boson (diagram (C) in Fig. \ref{tree-decays}), which will then decay via the established SM patterns. Depending on the mass splitting between $H_1$ and $H_2$, the Higgs particle can be highly off-shell (recall that its SM-like nature requires its width to be around 4 MeV), thus leading to a relatively small decay width of $H_2$ and a relatively long lifetime. However, in all studied points, this width is not smaller than $10^{-11}$ GeV, ensuring the decay of $H_2$ inside the detector\footnote{Notice that the last diagram in the discussed figure is the one enabling the $h\to H_1 H_2$ decay that we discussed previously.}. Therefore, $H_1$ is the only truly invisible dark particle in the benchmark scenarios we consider in the I(2+1)HDM.
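To put the above widths into perspective, note that a width $\Gamma$ corresponds to a decay length $c\tau = \hbar c/\Gamma$; a short numerical check (ours) of the statements about decays occurring inside the detector:
\begin{verbatim}
# Decay length c*tau = hbar*c / Gamma for the widths quoted above.
hbar_c = 1.973e-16                      # GeV * m

for gamma in (1e-2, 1e-4, 1e-11):       # widths in GeV
    print("Gamma = %g GeV -> c*tau = %.2e m" % (gamma, hbar_c/gamma))
# Even the smallest width, 1e-11 GeV, gives c*tau ~ 2e-5 m (tens of
# microns), i.e. the particle decays well within the detector volume.
\end{verbatim}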
\begin{minipage}{\linewidth} \vspace*{0.15truecm} \begin{figure}[H] \hspace{1.25cm} \begin{tikzpicture}[thick,scale=1.0] \draw (1.5,0) -- node[black,above,xshift=-0.1cm,yshift=0.0cm] {$ $} (1.5,0.03); \draw[dashed] (0,0) -- node[black,above,xshift=-0.5cm,yshift=0cm] {$A_{1,2}$} (1.5,0); \draw[decorate,decoration={snake,amplitude=3pt,segment length=10pt}] (1.5,0) -- node[black,above,xshift=0cm,yshift=0cm] {$Z^{(*)}$} (3,1.0); \draw[dashed](1.5,0) -- node[black,above,yshift=-0.65cm,xshift=-0.2cm] {$H_1$} (3,-0.75); \node at (1.5,-1.5) {(A)}; \end{tikzpicture} \hspace{0.75cm} \begin{tikzpicture}[thick,scale=1.0] \draw (1.5,0) -- node[black,above,xshift=-0.1cm,yshift=0.0cm] {$ $} (1.5,0.03); \draw[dashed] (0,0) -- node[black,above,xshift=-0.5cm,yshift=0cm] {$H^\pm_{1,2}$} (1.5,0); \draw[decorate,decoration={snake,amplitude=3pt,segment length=10pt}] (1.5,0) -- node[black,above,xshift=-0.2cm,yshift=0cm] {$W^{\pm(*)}$} (3,1.0); \draw[dashed](1.5,0) -- node[black,above,yshift=-0.65cm,xshift=-0.2cm] {$H_1$} (3,-0.75); \node at (1.5,-1.5) {(B)}; \end{tikzpicture} \hspace{0.75cm} \begin{tikzpicture}[thick,scale=1.0] \draw (1.5,0) -- node[black,above,xshift=-0.1cm,yshift=0.0cm] {$ $} (1.5,0.03); \draw[dashed] (0,0) -- node[black,above,xshift=-0.5cm,yshift=0cm] {$H_{2}$} (1.5,0); \draw[dashed] (1.5,0) -- node[black,above,xshift=0cm,yshift=0cm] {$h^{(*)}$} (3,1.0); \draw[dashed](1.5,0) -- node[black,above,yshift=-0.65cm,xshift=-0.2cm] {$H_1$} (3,-0.75); \node at (1.5,-1.5) {(C)}; \end{tikzpicture} \vspace{0.25cm} \caption{Tree-level decays of heavy inert states into $H_1$ and on-shell or off-shell $Z$, $W^\pm$ and $h$ bosons.} \label{tree-decays} \end{figure} \end{minipage}\\[2mm] \subsection{Loop-level decays of heavy inert states} Apart from the above tree-level decays, there is also the possibility of a loop-mediated decay of a heavy neutral inert particle, denoted in Fig. \ref{radiative} as $H_2$, into the lightest inert state, $H_1$, and a virtual photon, which then splits into a light $f\bar f$ pair\footnote{Details of the calculation of the complete $H_2\to H_1 f\bar f$ decay, including all topologies, will be presented in Sect. \ref{sec-loop}.}. \begin{minipage}{1.0\linewidth} \centering \begin{figure}[H] \centering \begin{tikzpicture}[thick,scale=1.0] \draw[dashed] (0,0) -- node[black,above,xshift=-1.2cm,yshift=0.0cm] {$H_2$} (2,0); \draw[dashed] (2,0) -- node[black,above,xshift=0.8cm,yshift=0.0cm] {$H_1$} (4,0); \draw[photon] (2,0) -- node[black,above,yshift=-0.8cm,xshift=-0.3cm] {$\gamma^*$} (3,-1.5); \draw[particle](3,-1.5) -- node[black,above,xshift=0.8cm,yshift=0.0cm] {$f$} (4,-1); \draw[antiparticle](3,-1.5) -- node[black,above,xshift=0.8cm,yshift=-0.6cm] {$\bar f$} (4,-2); \draw[xshift=-0cm] (2,0) node[circle,fill,inner sep=4pt](A){} -- (2,0); \end{tikzpicture} \caption{Radiative decay of the heavy neutral particle $H_2 \to H_1 \gamma^* \to H_1 f \bar f$.} \label{radiative} \end{figure} \end{minipage}\\[2mm] The corresponding loops go through triangle and bubble diagrams with $H^\pm_i$ and $W^\pm$ entering, see Figs. \ref{triangle-decays}--\ref{bubble-decays}. Note that there are also box diagrams which contribute to the process $H_2 \to H_1 f \bar f$, presented in Fig. \ref{box}. Here, the $f \bar f$ pair is produced through the SM gauge-fermion tree-level vertices, without producing an intermediate off-shell photon. These topologies also receive contributions from the inert states, both the charged and the neutral (pseudo)scalar ones.
However, due to the mass suppression, the contribution from the box diagrams is small, of order 10\%, and it leaves the results practically unaffected. To optimise the numerical scans, we therefore do not include these box diagrams, and we may refer to this one-loop process as a radiative decay. Before moving on to study the latter, we would like to stress at this point that one could attempt to construct diagrams analogous to those in Figs. \ref{triangle-decays}--\ref{bubble-decays} with $H_2$ replaced by $A_1$ or $A_2$, leading to $A_i \to H_1 \gamma^*$, $i=1,2$. Notice, however, that this decay would constitute a CPV process, while the model we analyse here is explicitly CPC. Indeed, further notice that spin conservation requires that only the scalar polarisation of the virtual photon contributes to the $H_2\to H_1\gamma^*$ transition. As a check of the correctness of the calculations, we have explicitly verified this to be the case: the contributions that would violate this property cancel among the diagrams, as discussed in Sect. \ref{sec-loop}. Also note that the process $A_i\to H_1 Z^*$ does exist at tree-level in both the I(2+1)HDM (for $i=1,2$) and the I(1+1)HDM (for $i=1$) and contributes to the $\cancel{E}_{T} f \bar f$ signature, as discussed previously. However, in the interesting regions of the parameter space, where the invariant mass of the $f \bar f$ pair is small, i.e., $\ll m_Z$, this process is sub-dominant. In short, the only (effective) loop-level decay to consider is \begin{equation} H_2 \to H_1 \gamma^* \end{equation} and this does not exist in the I(1+1)HDM, as CP conservation forbids the only analogous radiative decay in its inert sector (i.e., $A_1\to H_1\gamma^*$). Therefore, as intimated, this signature can be used to distinguish between the I(1+1)HDM and models with extended inert sectors, such as the I(2+1)HDM.
\begin{minipage}{\linewidth} \begin{figure}[H] \hspace{1.5cm} \begin{tikzpicture}[thick,scale=1.0] \draw (1.5,0) -- node[black,above,xshift=-0.1cm,yshift=0.0cm] {$ $} (1.5,0.03); \draw[dashed] (0,0) -- node[black,above,xshift=-0.5cm,yshift=0cm] {$H_{2}$} (1.5,0); \draw[dashed] (1.5,0) -- node[black,above,xshift=0cm,yshift=0cm] {$H^+_{1,2}$} (3,1.0); \draw[dashed] (4.5,-0.75) -- node[black,above,yshift=-0.1cm,xshift=0cm] {$H_1$} (3,-0.75); \draw[decorate,decoration={snake,amplitude=3pt,segment length=10pt}](1.5,0) -- node[black,above,yshift=-0.65cm,xshift=-0.2cm] {$W^+$} (3,-0.75); \draw[dashed](3,1) -- node[black,above,yshift=-0.4cm,xshift=-0.4cm] {$H^+_{1,2}$} (3,-0.75); \draw[decorate,decoration={snake,amplitude=3pt,segment length=10pt}](3,1) -- node[black,above,yshift=0.1cm,xshift=0cm] {$\gamma^*$} (4.5,1); \node at (1.5,-1.5) {(A)}; \end{tikzpicture} \hspace{1.5cm} \begin{tikzpicture}[thick,scale=1.0] \draw (1.5,0) -- node[black,above,xshift=-0.1cm,yshift=0.0cm] {$ $} (1.5,0.03); \draw[dashed] (0,0) -- node[black,above,xshift=-0.5cm,yshift=0cm] {$H_{2}$} (1.5,0); \draw[decorate,decoration={snake,amplitude=3pt,segment length=10pt}] (1.5,0) -- node[black,above,xshift=0cm,yshift=0cm] {$W^+$} (3,1.0); \draw[dashed] (4.5,-0.75) -- node[black,above,yshift=-0.1cm,xshift=0cm] {$H_1$} (3,-0.75); \draw[dashed](1.5,0) -- node[black,above,yshift=-0.65cm,xshift=-0.2cm] {$H^+_{1,2}$} (3,-0.75); \draw[decorate,decoration={snake,amplitude=3pt,segment length=10pt}](3,1) -- node[black,above,yshift=-0.4cm,xshift=-0.4cm] {$W^+$} (3,-0.75); \draw[decorate,decoration={snake,amplitude=3pt,segment length=10pt}](3,1) -- node[black,above,yshift=0.1cm,xshift=0cm] {$\gamma^*$} (4.5,1); \node at (1.5,-1.5) {(B)}; \end{tikzpicture} \vspace{0.5cm} \caption{Triangle diagrams contributing to the $H_2 \to H_1 \gamma^*$ decay, where the lightest inert is absolutely stable and hence invisible, while $\gamma^*$ is a virtual photon that couples to fermion-antifermion pairs. 
Analogous diagrams cannot be constructed if the initial particle is $A_{1}$ or $A_2$.} \label{triangle-decays} \end{figure} \end{minipage} \begin{minipage}{\linewidth} \begin{figure}[H] \begin{tikzpicture}[thick,scale=1.0] \draw[dashed] (0,0) -- node[black,above,sloped,yshift=-0.1cm,xshift=-0.4cm] {$H_{2}$} (1,0); \draw[dashed] (3,0) -- node[black,above,yshift=-0.3cm,xshift=0.5cm] {$H_1$} (4.2,-1.1); \draw[decorate,decoration={snake,amplitude=3pt,segment length=10pt}] (3,0) -- node[black,above,yshift=-0.4cm,xshift=0.5cm] {$\gamma^*$} (4.2,1.1); \draw[dashed] (1,0) node[black,above,sloped,yshift=0.95cm,xshift=1.05cm] {$H^+_{1,2}$} arc (180:0:1cm) ; \draw[decorate,decoration={snake,amplitude=3pt,segment length=10pt}] (1,0) node[black,above,sloped,yshift=-0.95cm,xshift=1.05cm] {$W^+$} arc (-180:0:1cm) ; \node at (1.5,-2.3) {(A)}; \end{tikzpicture} \hspace{2mm} \begin{tikzpicture}[thick,scale=1.0] \draw[dashed] (0,0) -- node[black,above,xshift=-0.3cm,yshift=-0.1cm] {$H_{2}$} (1.5,0); \draw[dashed] (1.5,0) -- node[black,above,yshift=0.4cm,xshift=0.2cm] {$H_1$} (2.7,1.4); \draw[decorate,decoration={snake,amplitude=3pt,segment length=10pt}] (2.8,-1.2) -- node[black,above,yshift=-0.3cm,xshift=0.9cm] {$\gamma^*$} (4.3,-1.7); \draw[dashed] (1.5,0) node[black,above,sloped,yshift=0.2cm,xshift=1.3cm] {$H^+_{1,2}$} arc (140:-40:0.9cm) ; \draw[dashed] (1.5,0) node[black,above,sloped,yshift=-1.4cm,xshift=0.55cm] {$H^+_{1,2}$} arc (-220:-40:0.9cm) ; \node at (1.5,-2.3) {(B)}; \end{tikzpicture} \begin{tikzpicture}[thick,scale=1.0] \draw[dashed] (0,0) -- node[black,above,xshift=-0.3cm,yshift=-0.1cm] {$H_{2}$} (1.5,0); \draw[decorate,decoration={snake,amplitude=3pt,segment length=10pt}] (1.5,0) -- node[black,above,yshift=0.4cm,xshift=0.2cm] {$\gamma^*$} (2.7,1.4); \draw[dashed] (2.8,-1.2) -- node[black,above,yshift=-0.3cm,xshift=0.9cm] {$H_1$} (4.3,-1.7); \draw[dashed] (1.5,0) node[black,above,sloped,yshift=0.2cm,xshift=1.3cm] {$H^+_{1,2}$} arc (140:-40:0.9cm) ; \draw[decorate,decoration={snake,amplitude=3pt,segment length=10pt}] (1.5,0) node[black,above,sloped,yshift=-1.4cm,xshift=0.55cm] {$W^+$} arc (-220:-40:0.9cm) ; \node at (1.5,-2.3) {(C)}; \end{tikzpicture} \vspace{0.5cm} \caption{Bubble diagrams contributing to the $H_2 \to H_1 \gamma^*$ decay, where the lightest inert particle is absolutely stable and hence invisible, while $\gamma^*$ is a virtual photon that couples to fermion-antifermion pairs. 
Analogous diagrams cannot be constructed if the initial particle is $A_{1}$ or $A_2$.} \label{bubble-decays} \end{figure} \end{minipage} \begin{minipage}{\linewidth} \begin{figure}[H] \hspace{1.5cm} \begin{tikzpicture}[thick,scale=1.0] \draw (1.5,0) -- node[black,above,xshift=-0.1cm,yshift=0.0cm] {$ $} (1.5,0.03); \draw[dashed] (0,0) -- node[black,above,xshift=-0.5cm,yshift=0cm] {$H_{2}$} (1.5,0); \draw[decorate,decoration={snake,amplitude=3pt,segment length=10pt}] (1.5,0) -- node[black,above,xshift=0cm,yshift=0cm] {$Z$} (3,1.0); \draw[dashed] (4.5,-0.75) -- node[black,above,yshift=-0.6cm,xshift=0cm] {$H_1$} (3,-0.75); \draw[dashed](1.5,0) -- node[black,above,yshift=-0.65cm,xshift=-0.2cm] {$A_{1,2}$} (3,-0.75); \draw[particle](3,1) -- node[black,above,yshift=0.1cm,xshift=0cm] {$f$} (4.5,1); \draw[antiparticle](3,0.25) -- node[black,above,yshift=-0.5cm,xshift=0cm] {$f$} (4.5,0.25); \draw[particle](3,0.25) -- node[black,above,yshift=-0.2cm,xshift=0.3cm] {$f$} (3,1); \draw[decorate,decoration={snake,amplitude=3pt,segment length=10pt}] (3,-0.75) -- node[black,above,yshift=-0.3cm,xshift=0.3cm] {$Z$} (3,0.25); \node at (1.5,-1.5) {(A)}; \end{tikzpicture} \hspace{1.5cm} \begin{tikzpicture}[thick,scale=1.0] \draw (1.5,0) -- node[black,above,xshift=-0.1cm,yshift=0.0cm] {$ $} (1.5,0.03); \draw[dashed] (0,0) -- node[black,above,xshift=-0.5cm,yshift=0cm] {$H_{2}$} (1.5,0); \draw[decorate,decoration={snake,amplitude=3pt,segment length=10pt}] (1.5,0) -- node[black,above,xshift=0cm,yshift=0cm] {$W^+$} (3,1.0); \draw[dashed] (4.5,-0.75) -- node[black,above,yshift=-0.6cm,xshift=0cm] {$H_1$} (3,-0.75); \draw[dashed](1.5,0) -- node[black,above,yshift=-0.65cm,xshift=-0.2cm] {$H^+_{1,2}$} (3,-0.75); \draw[particle](3,1) -- node[black,above,yshift=0.1cm,xshift=0cm] {$f$} (4.5,1); \draw[antiparticle](3,0.25) -- node[black,above,yshift=-0.5cm,xshift=0cm] {$f$} (4.5,0.25); \draw[particle](3,0.25) -- node[black,above,yshift=-0.3cm,xshift=0.3cm] {$f'$} (3,1); \draw[decorate,decoration={snake,amplitude=3pt,segment length=10pt}] (3,-0.75) -- node[black,above,yshift=-0.4cm,xshift=0.4cm] {$W^\pm$} (3,0.25); \node at (1.5,-1.5) {(B)}; \end{tikzpicture}\\[2mm] \hspace{1.5cm} \begin{tikzpicture}[thick,scale=1.0] \draw (1.5,0) -- node[black,above,xshift=-0.1cm,yshift=0.0cm] {$ $} (1.5,0.03); \draw[dashed] (0,0) -- node[black,above,xshift=-0.5cm,yshift=0cm] {$H_{2}$} (1.5,0); \draw[decorate,decoration={snake,amplitude=3pt,segment length=10pt}] (1.5,0) -- node[black,above,xshift=0cm,yshift=0cm] {$Z$} (2.5,1.0); \draw[dashed](1.5,0) -- node[black,above,yshift=-0.65cm,xshift=-0.2cm] {$A_{1,2}$} (2.5,-1); \draw[decorate,decoration={snake,amplitude=3pt,segment length=10pt}] (2.5,-1) -- node[black,above,yshift=-0.3cm,xshift=0.3cm] {$Z$} (3.5,0.); \draw[particle](2.5,1) -- node[black,above,yshift=-0.6cm,xshift=0cm] {$f$} (3.5,0); \draw[dashed] (4.5,-1) -- node[black,above,yshift=-0.6cm,xshift=0cm] {$H_1$} (2.5,-1); \draw[particle](3.5,0) -- node[black,above,yshift=0.0cm,xshift=0cm] {$f$} (4.5,1); \draw[particle](4.5,0) -- node[black,above,yshift=-0.8cm,xshift=0.7cm] {$f$} (2.5,1); \node at (1.5,-1.5) {(C)}; \end{tikzpicture} \hspace{1.5cm} \begin{tikzpicture}[thick,scale=1.0] \draw (1.5,0) -- node[black,above,xshift=-0.1cm,yshift=0.0cm] {$ $} (1.5,0.03); \draw[dashed] (0,0) -- node[black,above,xshift=-0.5cm,yshift=0cm] {$H_{2}$} (1.5,0); \draw[decorate,decoration={snake,amplitude=3pt,segment length=10pt}] (1.5,0) -- node[black,above,xshift=0cm,yshift=0cm] {$W^+$} (2.5,1.0); \draw[dashed](1.5,0) -- 
node[black,above,yshift=-0.65cm,xshift=-0.2cm] {$H^+_{1,2}$} (2.5,-1); \draw[decorate,decoration={snake,amplitude=3pt,segment length=10pt}] (2.5,-1) -- node[black,above,yshift=-0.3cm,xshift=0.5cm] {$W^+$} (3.5,0.); \draw[particle](2.5,1) -- node[black,above,yshift=-0.6cm,xshift=0cm] {$f'$} (3.5,0); \draw[dashed] (4.5,-1) -- node[black,above,yshift=-0.6cm,xshift=0cm] {$H_1$} (2.5,-1); \draw[particle](3.5,0) -- node[black,above,yshift=0.0cm,xshift=0cm] {$f$} (4.5,1); \draw[particle](4.5,0) -- node[black,above,yshift=-0.8cm,xshift=0.7cm] {$f$} (2.5,1); \node at (1.5,-1.5) {(D)}; \end{tikzpicture} \caption{Box diagrams contributing to $H_2 \to H_1 f \bar f$. } \label{box} \end{figure} \end{minipage} \subsection{The $\cancel{E}_{T}$ $f \bar f$ signature at the LHC} In this subsection, we focus on the possible sources of the aforementioned specific signature that can arise in the I(2+1)HDM, namely, missing transverse energy and a fermion-antifermion pair, $\cancel{E}_{T} f \bar f$. This final state can be produced both at tree-level and through one-loop decays, as previously explained. We dwell further on this here. The first mechanism is related to decays of the SM-like Higgs particle, which is produced, e.g., through ggF. The $hgg$ effective vertex is identical to that in the SM, as the gauge and fermionic sectors in the I(2+1)HDM are not modified with respect to the SM. The Higgs particle can then decay into a pair of neutral or charged inert particles, denoted in Fig. \ref{Higgsprod} by $S_{i,j}$. Depending on the masses of $S_{i,j}$, these particles can further decay, providing various final states. \begin{minipage}{1.0\linewidth} \centering \begin{figure}[H] \centering \begin{tikzpicture}[thick,scale=1.0] \draw[gluon] (0,0) -- node[black,above,xshift=-0.6cm,yshift=0.4cm] {$g$} (1,-1); \draw[gluon] (0,-2) -- node[black,above,yshift=-1.0cm,xshift=-0.6cm] {$g$} (1,-1); \draw[dashed] (1,-1) -- node[black,above,xshift=0.0cm,yshift=0.0cm] {$h$} (2.5,-1); \draw[dashed] (2.5,-1) -- node[black,above,xshift=0.6cm,yshift=0.4cm] {$S_i$} (3.5,0); \draw[dashed] (2.5,-1) -- node[black,above,yshift=-1cm,xshift=0.4cm] {$S_j$} (3.5,-2); \draw[xshift=-0cm] (1,-1) node[circle,fill,inner sep=4pt](A){} -- (1,-1); \end{tikzpicture} \caption{The ggF-induced production of the SM-like Higgs particle at the LHC with its decay into inert particles, denoted as $S_i$ and $S_j$.} \label{Higgsprod} \end{figure} \end{minipage}\\[2mm] In the CPC I(2+1)HDM, a process contributing to the $\cancel{E}_{T} f \bar f$ signature (and one of our signals) is \begin{equation} gg \to h \to H_1 H_2 \to H_1 H_1 \gamma^* \to H_1 H_1 f \bar f, \label{first} \end{equation} where the off-shell $\gamma^*$ splits into $f \bar f$ and the $H_1$ states escape detection\footnote{A detailed analysis of the tree-level SM background to this process, $gg \to h\to W^+ W^-\to l^+ \nu_l\, l^- \bar\nu_l$, is postponed to a future publication.}. Notice that there is also a tree-level $h$ decay into a pair of oppositely charged scalars with the same signature ($\cancel{E}_{T}\; f \bar f$), albeit not an identical final state (the two would nonetheless remain indistinguishable), following the pattern: \begin{equation} gg \to h \to H^{+}_i H^{-}_i \to H_1 H_1 W^{+(*)} W^{-(*)}\to H_1 H_1 l^+ \nu_l\, l^- \bar\nu_l \quad (i=1,2), \label{second} \end{equation} where the neutrinos escape detection as (additional) $\cancel{E}_{T}$. The process in (\ref{first}) is loop mediated and depends on $g_{H_1 H_2 h}$, a coupling which also affects the DM relic density.
Therefore, if this coupling is small, the whole process is suppressed. We shall hence maximise this coupling, while maintaining consistency with DM constraints. We also assume a mass spectrum in which the charged scalars entering the loops are not too heavy, since large masses would also suppress the loop. In fact, we shall see that there can be parameter configurations for which $m_{H_1}+m_{H_2}<m_h$, so that SM-like Higgs production and (loop) decay is resonant, thereby benefiting from an enhancement of ${\cal O}(1/\alpha_{\rm EM})$. The process in (\ref{second}) is a tree-level one, therefore potentially competitive. However, for the parameter space of interest, maximising the yield of the loop process, this mode becomes negligible, for two reasons: on the one hand, the charged Higgs masses are generally heavy, so that no resonant $h$ can be involved, while, on the other hand, the $g_{H^+_i H^-_ih}$ coupling is generally small. In principle, there is another tree-level signal inducing the $\cancel{E}_{T}\; f \bar f$ final state in our scenario, \begin{equation} \label{nstr} q\bar q \to Z^*\to H_1H_1 Z\to H_1H_1 f \bar f, \end{equation} see diagrams (A) and (B) in Fig. \ref{diag:nstr}, induced by quark-antiquark annihilation and proceeding via an $s$-channel off-shell (primary) $Z^*$, wherein the on-shell (secondary) $Z$ eventually decays into an $f \bar f$ pair. However, this is of no concern here. The reason is twofold. On the one hand, as explained, the region of parameter space over which process (\ref{first}) is interesting for LHC phenomenology is the one where the $g_{H_1 H_2 h}$ strength is maximal and $h$ is possibly resonant: this is when the DM relic density sees a large contribution from $H_1H_2$ co-annihilation processes\footnote{This is further enhanced when $m_{H_1}\approx m_{H_2}$, which is in fact one of the conditions that we will use in the forthcoming analysis to enhance process (\ref{first}) (which is I(2+1)HDM specific) against the one (also existing in the I(1+1)HDM) that we will be discussing next.}, which in turn means that large $g_{H_1 H_1 h}$ (possibly in the presence of a resonant $h$) and $g_{H_1H_1 ZZ}$ couplings are forbidden by such data, so that process (\ref{nstr}) becomes uninteresting at the LHC. On the other hand, in our construct, process (\ref{nstr}) is nothing more than a subleading contribution to the invisible Higgs signature of the SM-like Higgs boson (dominated by ggF and VBF topologies, extensively studied already in Ref.~\cite{Keus:2014isa}); it is rather featureless, in fact, as it does not capture any of the heavy scalar states of the model, unlike reaction (\ref{first}), which is sensitive to all of them, so that one could study the kinematic distributions of the final state attempting to extract their masses by isolating the corresponding thresholds entering the loops\footnote{In this sense, process (\ref{nstr}) would be a background to (\ref{first}), which can be easily removed through a mass veto: $m_{f \bar f}\ne m_Z$.}. For these reasons, we will not discuss these two topologies any further.
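As an aside, the mass veto just mentioned is trivial to implement at the analysis level; a schematic Python sketch follows (ours; the 10 GeV window is an assumption for illustration, not a tuned cut):
\begin{verbatim}
# Schematic Z-mass veto on the f-fbar pair (window width is our choice).
import math

M_Z = 91.1876                            # GeV

def m_inv(p1, p2):
    """Invariant mass of two massless four-vectors (E, px, py, pz)."""
    E, px, py, pz = (a + b for a, b in zip(p1, p2))
    return math.sqrt(max(E*E - px*px - py*py - pz*pz, 0.0))

def passes_veto(lep1, lep2, window=10.0):
    # Keep events with m_ff well below m_Z, where the loop signal lives
    return m_inv(lep1, lep2) < M_Z - window
\end{verbatim}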
\begin{minipage}{0.9\linewidth} \centering \begin{minipage}{0.9\linewidth} \centering \begin{figure}[H] \begin{tikzpicture}[thick,scale=1.0] \hspace{-1cm} \draw[particle] (0,0) -- node[black,above,xshift=-0.6cm,yshift=0.4cm] {$q_i$} (1,-0.75); \draw[antiparticle] (0,-1.5) -- node[black,above,yshift=-1.0cm,xshift=-0.6cm] {$\bar{q}_i$} (1,-0.75); \draw[photon] (1,-0.75) -- node[black,above,xshift=0.0cm,yshift=0.0cm] {$Z^*$} (2,-0.75); \draw[dashed] (2,-0.75) -- node[black,above,yshift=0.0cm,xshift=-0.0cm] {$h$} (3,-0.75); \draw[dashed] (3,-0.75) -- node[black,above,yshift=0.1cm,xshift=0.8cm] {$H_1$} (4,-0); \draw[dashed] (3,-0.75) -- node[black,above,yshift=-0.4cm,xshift=0.8cm] {$H_1$} (4,-1.5); \draw[photon] (2,-0.75) -- node[black,above,xshift=1.1cm,yshift=-1.5cm] {$Z$} (4,-2.5); \node at (2,-4) {(A)}; \hspace{1cm} \draw[particle] (5,0) -- node[black,above,xshift=-0.6cm,yshift=0.4cm] {$q$} (6,-0.75); \draw[antiparticle] (5,-1.5) -- node[black,above,yshift=-1.0cm,xshift=-0.6cm] {$\bar{q}$} (6,-0.75); \draw[photon] (6,-0.75) -- node[black,above,xshift=0.0cm,yshift=0.0cm] {$Z^*$} (7,-0.75); \draw[dashed] (7,-0.75) -- node[black,above,yshift=0.1cm,xshift=1.2cm] {$H_1$} (8.5,-0); \draw[dashed] (7,-0.75) -- node[black,above,yshift=-0.4cm,xshift=1.1cm] {$H_1$} (8.5,-1.5); \draw[photon] (7,-0.75) -- node[black,above,xshift=1.15cm,yshift=-1.5cm] {$Z$} (8.5,-2.5); \node at (6,-4) {(B)}; \hspace{1cm} \draw[particle] (10,0) -- node[black,above,xshift=-0.6cm,yshift=0.4cm] {$q$} (11,-0.75); \draw[antiparticle] (10,-1.5) -- node[black,above,yshift=-1.0cm,xshift=-0.6cm] {$\bar{q}$} (11,-0.75); \draw[photon] (11,-0.75) -- node[black,above,xshift=0.0cm,yshift=0.0cm] {$Z^*$} (12,-0.75); \draw[dashed] (12,-0.75) -- node[black,above,yshift=0.1cm,xshift=0.8cm] {$H_1$} (13,-0); \draw[dashed] (12,-0.75) -- node[black,above,yshift=-0.3cm,xshift=-0.3cm] {$A_{1,2}$} (12,-1.75); \draw[dashed] (12,-1.75) -- node[black,above,yshift=0.1cm,xshift=0.8cm] {$H_1$} (13,-1); \draw[photon] (12,-1.75) -- node[black,above,xshift=0.6cm,yshift=-0.9cm] {$Z^{(*)}$} (13,-2.5); \node at (11,-4) {(C)}; \end{tikzpicture} \caption{Diagrams leading to the $\cancel{E}_{T} f \bar f$ final state via the $H_1H_1 Z^{(*)}$ intermediate stage.} \label{diag:nstr} \vspace*{0.75cm} \end{figure} \end{minipage} \end{minipage} Another way of obtaining exactly the $H_1 H_1 f \bar f$ final state is shown in graph (C) of Fig. \ref{diag:nstr}, again produced through $s$-channel quark-antiquark annihilation into a virtual neutral massive gauge boson, i.e., \begin{equation} \label{tree} q\bar q\to Z^* \to H_1 A_i \to H_1 H_1 Z^{(*)} \to H_1 H_1 f \bar f\qquad(i=1,2), \end{equation} wherein the DM candidate is produced in association with a pseudoscalar state and the $Z$ may be off-shell. This mode is indeed competitive with the one in (\ref{first}) over the region of I(2+1)HDM parameter space of interest, so we will dwell on it extensively in the numerical analysis in the remainder of the paper. Further, unlike graphs (A) and (B), diagram (C) in Fig. \ref{diag:nstr} involves the heavy pseudoscalars and may therefore also be isolated in the aforementioned kinematic analysis. Finally, we conclude this subsection by listing, in Fig.
\ref{diag:VBF} (prior to the $H_2\to H_1 f \bar f$ decay), the topologies entering VBF production contributing to the $\cancel{E}_{T}\; f \bar f$ final state (our second signal) via \begin{equation} \label{VBF} q_i q_j \to q_k q_l\, H_1 H_2 \to q_k q_l\, H_1 H_1 \gamma^* \to q_k q_l\, H_1 H_1 f \bar f, \end{equation} where $q_{i,j,k,l}$ represents a(n) (anti)quark of any possible flavour (except the top quark). Here, two aspects are worth noticing. Firstly, there is the additional presence of two forward/backward jets, which may or may not be tagged (we will treat them inclusively). Secondly, unlike the ggF case, not all diagrams proceed via $h\to H_1H_2$ induced topologies (graph (A)): graphs (B) and (C) are also possible. Clearly, the first diagram dominates when $h$ can resonate, while the last two become competitive otherwise. We shall see in a later section how ggF and VBF compete over the I(2+1)HDM parameter space of interest in being the carrier of its hallmark signature $\cancel{E}_{T} \; f \bar f$. \begin{minipage}{0.9\linewidth} \centering \begin{minipage}{0.9\linewidth} \centering \begin{figure}[H] \begin{tikzpicture}[thick,scale=1.0] \draw[particle] (0,0) -- node[black,above,xshift=-0.6cm,yshift=0.0cm] {$q_i$} (1,0); \draw[photon] (1,0) -- node[black,above,xshift=-0.8cm,yshift=-0.3cm] {$Z,W^+$} (1,-1.5); \draw[photon] (1,-1.5) -- node[black,above,xshift=-0.8cm,yshift=-0.2cm] {$Z,W^+$} (1,-3); \draw[particle] (0,-3) -- node[black,above,xshift=-0.6cm,yshift=-0.6cm] {$q_j$} (1,-3); \draw[dashed] (1,-1.5) -- node[black,above,yshift=0.1cm,xshift=-0.0cm] {$h$} (2.5,-1.5); \draw[dashed] (2.5,-1.5) -- node[black,above,yshift=0.3cm,xshift=0.6cm] {$H_1$} (3.5,-0.75); \draw[dashed] (2.5,-1.5) -- node[black,above,yshift=-1.1cm,xshift=0.6cm] {$H_2$} (3.5,-2.25); \draw[particle] (1,0) -- node[black,above,xshift=1.2cm,yshift=0.0cm] {$q_k$} (3.5,0); \draw[particle] (1,-3) -- node[black,above,xshift=1.2cm,yshift=-0.6cm] {$q_l$} (3.5,-3); \node at (2,-4) {(A)}; \hspace{1cm} \draw[particle] (5,0) -- node[black,above,xshift=-0.6cm,yshift=0.0cm] {$q_i$} (6,0); \draw[photon] (6,0) -- node[black,above,xshift=-0.8cm,yshift=-0.3cm] {$Z,W^+$} (6,-1.5); \draw[photon] (6,-1.5) -- node[black,above,xshift=-0.8cm,yshift=-0.2cm] {$Z,W^+$} (6,-3); \draw[particle] (5,-3) -- node[black,above,xshift=-0.6cm,yshift=-0.6cm] {$q_j$} (6,-3); \draw[particle] (6,0) -- node[black,above,xshift=0.8cm,yshift=0.0cm] {$q_k$} (7.5,0); \draw[particle] (6,-3) -- node[black,above,xshift=0.8cm,yshift=-0.6cm] {$q_l$} (7.5,-3); \draw[dashed] (6,-1.5) -- node[black,above,yshift=0.3cm,xshift=0.6cm] {$H_1$} (7.5,-0.75); \draw[dashed] (6,-1.5) -- node[black,above,yshift=-1.1cm,xshift=0.6cm] {$H_2$} (7.5,-2.25); \node at (6,-4) {(B)}; \hspace{1cm} \draw[particle] (8.5,0) -- node[black,above,xshift=-0.6cm,yshift=0.0cm] {$q_i$} (9.5,0); \draw[photon] (9.5,0) -- node[black,above,xshift=-0.9cm,yshift=-0.3cm] {$Z (W^+)$} (9.5,-1); \draw[dashed] (9.5,-1) -- node[black,above,xshift=-0.9cm,yshift=-0.3cm] {$A_1 (H^+_1)$} (9.5,-2); \draw[photon] (9.5,-2) -- node[black,above,xshift=-0.9cm,yshift=-0.2cm] {$Z (W^+)$} (9.5,-3); \draw[particle] (8.5,-3) -- node[black,above,xshift=-0.6cm,yshift=-0.6cm] {$q_j$} (9.5,-3); \draw[particle] (9.5,0) -- node[black,above,xshift=0.8cm,yshift=0.0cm] {$q_k$} (11,0); \draw[particle] (9.5,-3) -- node[black,above,xshift=0.8cm,yshift=-0.6cm] {$q_l$} (11,-3); \draw[dashed] (9.5,-1) -- node[black,above,yshift=0.1cm,xshift=0.6cm] {$H_1$} (11,-1); \draw[dashed] (9.5,-2) -- node[black,above,yshift=-0.8cm,xshift=0.6cm] {$H_2$}
(11,-2); \node at (10,-4) {(C)}; \end{tikzpicture} \caption{Diagrams leading to the $\cancel{E}_{T}\, f \bar f$ final state via VBF topologies.} \label{diag:VBF} \end{figure} \end{minipage} \end{minipage} \section{Calculation \label{sec-loop}} In this section, we discuss the details of our calculation. In fact, the case of the channel in (\ref{tree}) is easily dealt with, as this is a tree-level process, which we computed numerically using CalcHEP \cite{Belyaev:2012qa}. The bulk of our effort was concentrated upon the loop processes (\ref{first}) and (\ref{VBF}), which we have tackled in factorised form, i.e., by breaking up the two channels into $pp\to H_1 H_2 X$ production followed by the $H_2$ $\to$ $H_1 f \bar f$ decay. Here, the ggF and VBF topologies entering at production level are well known in the literature, so we do not discuss them (again, we computed these numerically by exploiting CalcHEP). We therefore address in some detail only the case of the loop decay. This is expressed through a tensor structure appropriate to the I(2+1)HDM particle spectrum and illustrated for the case $f=e$, so that we can safely take $m_e=0$\footnote{The case $f=u,d,c,s,\mu,\tau$ with $m_f\ne0$ is a straightforward extension of it.}. In general, there are two types of one-loop diagrams that contribute to the process $$H_2(p_3) \to H_1(p_2) \gamma^* (p_3-p_2) \to H_1(p_2) e^- (k_1) e^+ (k_2),$$ namely, those embedding the one-loop effective vertex $H_2 H_1 \gamma^*$, given by the diagrams in Figs. \ref{triangle-decays}--\ref{bubble-decays}, plus the box diagrams shown in Fig. \ref{box}. Here, the labels $p_i$ and $k_j$ identify the external scalar and fermion momenta, respectively. In the following, we use the unitary gauge. The calculation below is done for the pair of CP-even dark particles $H_2$ and $H_1$, however, all results hold for CP-odd neutral dark particles as well, i.e., $A_2$ and $A_1$, following simple replacements of masses, $m_{H_i} \to m_{A_i}$, and relevant vertex coefficients, $g_{H_i X Y} \to g_{A_i X Y}$. The general expression for the amplitude of the loop calculation is: \begin{equation} \mathcal{M}=ie\bar{v}(k_1)\gamma^\nu u(k_2)\frac{ig_{\mu\nu}}{(p_3-p_2)^2}[A(p_3+p_2)^\mu +B(p_3-p_2)^\mu], \end{equation} where \begin{equation} i[A(p_3+p_2)^\mu +B(p_3-p_2)^\mu] \end{equation} is the general structure of the vertex $H_1 H_2 \gamma^*$ obtained in the calculation at one-loop level. However, when we consider the term $(p_3-p_2)^\mu$ and contract it with the fermion current via $g_{\mu\nu}$, momentum conservation gives: \begin{eqnarray} \slashed{p}_3-\slashed{p}_2=\slashed{k}_1+\slashed{k}_2. \end{eqnarray} Then the Dirac equation in the limit of $m_e=0$ gives us: \begin{eqnarray} \bar{v}(k_1)(\slashed{p}_3-\slashed{p}_2)u(k_2)=\bar{v}(k_1)(\slashed{k}_1+\slashed{k}_2)u(k_2)=0. \end{eqnarray} Under these circumstances, we can take \[(p_3-p_2)_\mu =0, \] which is the same as if the $\gamma$ were on-shell in the process $H_2\to H_1\gamma$, albeit with $(p_3-p_2)^2$ non-zero: \begin{equation} (p_3-p_2)^2=(k_1+k_2)^2=2k_1\cdot k_2. \end{equation} Therefore, the general structure of the amplitude is: \begin{eqnarray} \mathcal{M}=ie\bar{v}(k_1)\gamma^\nu u(k_2)\frac{ig_{\mu\nu}}{(p_3-p_2)^2}[A(p_3+p_2)^\mu ], \label{Amp1} \end{eqnarray} where $A(p_3+p_2)^\mu$ is related to the contribution of each diagram in Figs. \ref{triangle-decays}, \ref{bubble-decays} and \ref{box}: \begin{eqnarray} A(p_3+p_2)^\mu =M_{\mu,T}= \sum_i M_\mu^{(i)}, \label{Amp2} \end{eqnarray} where $i$ runs across all diagrams.
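The vanishing of the $(p_3-p_2)^\mu$ piece can also be verified numerically: upon spin summation, $\sum_{\rm spins}|\bar v(k_1)(\slashed k_1+\slashed k_2)u(k_2)|^2 = \mathrm{Tr}[\slashed k_1 \slashed Q \slashed k_2 \slashed Q]$ with $Q=k_1+k_2$, which must vanish for light-like $k_1,k_2$. A short sketch of this check (ours, using the Dirac representation):
\begin{verbatim}
# Check Tr[ k1-slash Q-slash k2-slash Q-slash ] = 0 for Q = k1 + k2
# and light-like k1, k2, i.e. the (p3-p2)^mu piece drops out for m_e = 0.
import numpy as np

I2 = np.eye(2)
sig = [np.array([[0, 1], [1, 0]], complex),
       np.array([[0, -1j], [1j, 0]], complex),
       np.array([[1, 0], [0, -1]], complex)]
Z2 = np.zeros((2, 2))
gam = [np.block([[I2, Z2], [Z2, -I2]])]                 # gamma^0
gam += [np.block([[Z2, s], [-s, Z2]]) for s in sig]     # gamma^1,2,3
eta = np.diag([1.0, -1.0, -1.0, -1.0])

def slash(p):                       # gamma^mu p_mu, p with upper indices
    return sum(eta[m, m]*p[m]*gam[m] for m in range(4))

rng = np.random.default_rng(1)
for _ in range(5):
    n1, n2 = rng.normal(size=3), rng.normal(size=3)
    k1 = np.array([np.linalg.norm(n1), *n1])            # E = |p|: light-like
    k2 = np.array([np.linalg.norm(n2), *n2])
    Q = slash(k1) + slash(k2)
    assert abs(np.trace(slash(k1) @ Q @ slash(k2) @ Q)) < 1e-9
print("trace vanishes for light-like momenta, as expected")
\end{verbatim}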
\subsection{Individual contributions to $H_2 \to H_1 f \bar f$} There are seven of these: five for the triangle and bubble diagrams of Figs. \ref{triangle-decays}--\ref{bubble-decays}, plus two cumulative ones for the box diagrams shown in Fig. \ref{box}\footnote{Ultraviolet renormalisation is implicitly performed for the former.}. \begin{itemize} \item The first contribution, ${M}_{\mu}^{(1)}$, comes from a diagram with two charged scalars $H_i^\pm$ ($i=1,2$) and one $W^\pm$ in the loop, given by diagram (A) in Fig. \ref{triangle-decays}: \begin{equation} {M}_{\mu}^{(1)} ( m_{H_{i}^{\pm}}, m_{W},m_{12}^2,m_{H_i}) = \frac{g^2 e}{4} A_{i}^{\pm}m^{(1)}_\mu(m_{H^\pm_i},m_W,m_{12}^2,m_{H_i}), \end{equation} where \begin{equation} m^{(1)}_\mu=\frac{1}{16\pi^2} \int \frac{d^{n}k}{(2 \pi)^{n}} \frac{ (k+2p_{3})_{\alpha} (2k+p_{3}+p_{2})_{\mu} (k+2p_{2})_{\beta} [ g^{\alpha \beta} - \frac{ k^{\alpha} k^{\beta}}{m_{W}^{2}} ] }{ [(k+p_{3})^{2} - m_{H_{i}^{\pm}}^{2} ] [(k+p_{2})^{2} - m_{H_{i}^{\pm}}^{2} ] [k^{2} - m_{W}^{2} ] } \nonumber \\ \label{d1} \end{equation} and $m_{H_{i}^{\pm}}$ ($i=1,2$) are the masses of the charged scalars, $m_{H_1}$ is the mass of the DM candidate and $m_{H_2}$ is the mass of the next-to-lightest inert particle $H_2$. The $A_{i}^{\pm}$ are coefficients related to the vertex structure of the loop diagram, the details of which are presented in Sect. \ref{section-Ai}. We define $m_{12}^2=(p_3-p_2)^2=(k_1+k_2)^2=2k_1\cdot k_2$, considering the limit $m_e=0$. Using this tensorial structure we calculate the other diagrams. \end{itemize} \begin{itemize} \item The tensorial amplitude for the diagram with two $W^\pm$ and one charged scalar $H_i^\pm$ in the loop, given by diagram (B) in Fig. \ref{triangle-decays}, is: \end{itemize} \begin{eqnarray} {M}_{\mu}^{(2)} ( m_{H_{i}^{\pm}}, m_{W},m_{12}^2,m_{H_i} )= - \frac{ g^{2} e}{4 } A_{i}^{\pm} {m}_{\mu}^{(2)} ( m_{H_{i}^{\pm}}, m_{W},m_{12}^2,m_{H_i} ) \end{eqnarray} with \begin{eqnarray} &&{m}_{\mu}^{(2)} ( m_{H_{i}^{\pm}}, m_{W},m_{12}^2,m_{H_i} )= \frac{1}{16 \pi^2}\int \frac{d^{n}k}{(2 \pi)^{n}} \frac{ (k+p_{3})_{\alpha} (k+p_{2})_{\beta} [ g^{ \beta \nu} - \frac{ (k -p_{2})^{\beta} (k - p_{2})^{\nu}}{m_{W}^{2}} ] }{ [k^{2} - m_{H_{i}^{\pm}}^{2} ] [(k-p_{2})^{2} - m_{W}^{2} ] [(k- p_{3})^{2} - m_{W}^{2} ] } \nonumber \\ &&\times\lbrace (k- 2p_{2} + p_{3})_{\rho} g_{\mu \nu} - (2k - p_{3} - p_{2})_{\mu} g_{\nu \rho} + (k - 2p_{3} + p_{2} )_{\nu} g_{\mu \rho} \rbrace \lbrace g^{\rho \alpha} - \frac{ (k-p_{3})^{\rho} (k - p_{3})^{\alpha}}{m_{W}^{2}} \rbrace \nonumber \\ \end{eqnarray} \begin{itemize} \item For the diagram with one $H_i^\pm$ and one $W^\pm$ particle in the loop, which is (A) in Fig. \ref{bubble-decays}, the tensorial amplitude is \begin{eqnarray} {M}_{\mu}^{(3)} ( m_{H_{i}^{\pm}}, m_{W},m_{12}^2,m_{H_i} )= \frac{ g^{2} e}{4 } A_{i}^{\pm} {m}_{\mu}^{(3)} ( m_{H_{i}^{\pm}}, m_{W},m_{12}^2,m_{H_i} ) \end{eqnarray} where \begin{eqnarray} {m}_{\mu}^{(3)} ( m_{H_{i}^{\pm}}, m_{W} ,m_{12}^2,m_{H_i}) &=& \frac{1}{16 \pi^2} \int \frac{d^{n}k}{(2 \pi)^{n}} \frac{ (k-p_{3} )_{\alpha} [ g^{\alpha \beta} - \frac{ (k+ p_{3})^{\alpha} (k + p_{3})^{\beta}}{m_{W}^{2}} ] g_{\beta \mu}}{ [(k+p_{3})^{2} - m_{W}^{2} ] [(k)^{2} - m_{H_{i}^{\pm}}^{2} ] } \nonumber \\ \end{eqnarray} \item For the diagram with two scalars in the loop, i.e., (B) in Fig.
\ref{bubble-decays}, the tensorial amplitude is: \begin{eqnarray} {M}_{\mu}^{(4)} ( m_{H_{i}^{\pm}}, m_{W},m_{12}^2,m_{H_i}) &=& \frac{g^{2} e}{64 \pi^2} A_{i}^{\pm} \int \frac{d^{n}k}{(2 \pi)^{n}} \frac{ (2k+p_3-p_{2})_{\mu} }{ [(k+p_3-p_{2})^{2} - m_{H^\pm_i}^{2} ] [k^{2} - m_{H_{i}^{\pm}}^{2} ] } . \nonumber \\ \label{d5} \end{eqnarray} However, this contribution vanishes: upon shifting the loop momentum, the integrand is an odd function of $k$. \item For diagram (C) in Fig. \ref{bubble-decays}, with one $H_i^\pm$ and one $W^\pm$ in the loop, the tensorial amplitude is given by: \begin{eqnarray} {M}_{\mu}^{(5)} ( m_{H_{i}^{\pm}}, m_{W},m_{12}^2,m_{H_i} )= \frac{ g^{2} e}{4 } A_{i}^{\pm} {m}_{\mu}^{(5)} ( m_{H_{i}^{\pm}}, m_{W},m_{12}^2,m_{H_i} ) \end{eqnarray} with \begin{eqnarray} {m}_{\mu}^{(5)} ( m_{H_{i}^{\pm}}, m_{W},m_{12}^2,m_{H_i}) &=& \frac{1}{16 \pi^2} \int \frac{d^{n}k}{(2 \pi)^{n}} \frac{ g_{\mu \alpha} [ g^{\alpha \beta} - \frac{ (k+ p_{2})^{\alpha} (k + p_{2})^{\beta}}{m_{W}^{2}} ] (k-p_{2})_{\beta} }{ [(k+p_{2})^{2} - m_{W}^{2} ] [(k)^{2} - m_{H_{i}^{\pm}}^{2} ] } . \nonumber \\ \label{d4} \end{eqnarray} \item For the box diagrams with $W^\pm$ in the loop (graphs (B) and (D) in Fig. \ref{box}) we obtain: \begin{eqnarray} M^{W-{\rm box}} (H_i^\pm)&=& \frac{g^4A^\pm_i}{32} \int \frac{d^{n}k}{(2 \pi)^{n}} \bar{v}(k_1)\gamma^\mu (1- \gamma_5) (\slashed{k}+\slashed{k}_1)\gamma_\alpha (1-\gamma_5) u(k_2) \frac{P^{\alpha \beta} Q_{\mu \rho} }{D_4}\times \nonumber \\ &&(k+p_3+p_2)_\beta (k+2p_3)^\rho, \label{mbox} \end{eqnarray} where \begin{eqnarray} P^{\alpha \beta}&=& [ g^{\alpha \beta} - \frac{ (k+ k_{1}+k_2)^{\alpha} (k + k_{1}+k_2)^{\beta}}{m_{W}^{2}} ], \nonumber \\ Q_{\mu \rho}&= &[ g_{\mu \rho} - \frac{ (k)_{\mu} (k )_{\rho}}{m_{W}^{2}} ], \nonumber \\ D_4 &=& (k+k_1)^2 [(k+k_1+k_2)^2-m_W^2][(k+p_3)^2-m_{H^\pm_i}^2][k^2-m_W^2]. \end{eqnarray} The structure of $M^{Z-{\rm box}}$, i.e., for diagrams with a $Z$ instead of a $W^\pm$ (graphs (A) and (C) in Fig. \ref{box}), is similar, with the replacements $(1-\gamma_5) \to (C_V-C_A\gamma_5)$ and $m_W \to m_Z$. When one considers the crossed box diagrams, the ultraviolet divergences cancel. In practice, when performing the loop calculation, one can see that the contribution of the boxes is unimportant due to the mass suppression, amounting to the aforementioned $\sim$10\% of the overall result. Hence, as intimated, we shall neglect them from now on. \end{itemize} \subsection{Role of the $A_i^{\pm}$s} \label{section-Ai} The coefficients $A_i^{\pm}$, related to the vertex structure of the loop diagrams, are characteristic features of the model: they are sensitive to the CP properties of the decaying particles and they encode the information necessary to track the cancellation of the ultraviolet divergences. For the three neutral scalars we define: \begin{eqnarray} A_{H^+_1,H_2}^{+}&=& \cos(\theta_c-\theta_h) \sin(\theta_c-\theta_h) , \\ A_{H^+_1,A_1}^{+}&=& \cos(\theta_a-\theta_c)\cos(\theta_c- \theta_h),\\ A_{H^+_1,A_2}^{+}&=& \sin(\theta_c-\theta_a) \cos(\theta_c- \theta_h),\\ A_{H^+_2,A_1}^{+}&=& \sin(\theta_a- \theta_c) \sin(\theta_c- \theta_h),\\ A_{H^+_2,A_2}^{+}&=& \cos(\theta_a-\theta_c)\sin(\theta_c - \theta_h), \end{eqnarray} where $\theta_{h,a,c}$ are the inert mixing angles defined in Sect. \ref{section-masses}. We use the shorthand $A^\pm_{i}$ for $A_{H^\pm_i,S}^{\pm}$, where $S$ can be any of the neutral scalars $H_2, A_1, A_2$.
The following relations hold: \begin{eqnarray} A_{1}^{-}&=& A_{1}^{+*}=A_{1}^{+}, \label{A-m}\\ A_{2}^{+}&=& -A_{1}^{+}, \label{canceldiv}\\ A_{2}^{-}&=& -A_{1}^{+*}=-A_{1}^{-}. \end{eqnarray} Despite not being exploited phenomenologically in the remainder of the paper, for completeness, we also describe here the case $A_{1,2}\to H_1 \gamma^* \to H_1 e^+e^-$. In the CP conserving I(2+1)HDM, one can distinguish the CP-even and the CP-odd inert scalars in the diagrams of Figs. \ref{triangle-decays} and \ref{bubble-decays}. When considering the amplitude of any diagram plus its crossed companion, one obtains the following results: \begin{eqnarray} A^{\pm}_i &=&A^{\pm}_i (\text{crossed}) \,\,\,\,\,\, \text{for a CP-even inert scalar,}\\ A^{\pm}_i &=&-A^{\pm}_i (\text{crossed}) \,\,\,\,\,\, \text{for a CP-odd inert scalar,} \end{eqnarray} and as a consequence \begin{eqnarray} M_\mu^{i} + \text{crossed } &= & 2 M_\mu^{i} \,\,\,\,\,\,\text{for a CP-even inert scalar,} \label{Mcrossed}\\ M_\mu^{i} + \text{crossed }&=& 0 \,\,\,\,\,\, \text{for a CP-odd inert scalar,} \end{eqnarray} which is consistent with the observation we made before: CP conservation requires the amplitude for $A_{1,2} \to H_1\gamma^* \to H_1 e^+e^-$ to vanish. However, through the box diagrams associated with Fig. \ref{box}, $A_{1,2} \to H_1 e^+e^-$ decays are possible, but their contributions are small. In fact, these decays could also be mediated at one-loop level by an on- or off-shell $Z$ boson; however, the tree-level mode $A_{1,2} \to H_1 Z^* \to H_1 e^+e^-$ (already discussed) is much larger, which is why we concerned ourselves with the latter and not the former. Finally, one can see from (\ref{canceldiv}) that $A_1^\pm =-A_2^\pm$, which is crucial for the cancellation of the ultraviolet divergences. In fact, the total contribution of the one-loop calculation is, taking into account (\ref{Mcrossed}) and (\ref{A-m}): \begin{eqnarray} M_{\mu ,T} ( m_{H_{i}^{\pm}}, m_{W},m_{12}^2,m_{H_i}) = e g^{2} \sum_{i=1}^2 \sum_{k=1}^4 (A_{i}^{+}+A_{i}^{-}) m_{\mu}^{(k)} ( m_{H_{i}^{\pm}}, m_{W},m_{12}^2,m_{H_i}). \end{eqnarray} Now, taking into account (\ref{canceldiv}), we have \begin{eqnarray} M_{\mu ,T} ( m_{H_{i}^{\pm}}, m_{W},m_{12}^2,m_{H_i}) = e g^{2} A_1^{\pm} \sum_{k=1}^4 \delta m_{\mu}^{(k)} ( m_{H_{1}^{\pm}}, m_{H_{2}^{\pm}}) \label{mmu1} \end{eqnarray} with \begin{eqnarray} \delta m_{\mu}^{(k)} ( m_{H_{1}^{\pm}}, m_{H_{2}^{\pm}})= \bigg( m_{\mu}^{(k)} ( m_{H_{1}^{\pm}}, m_{W},m_{12}^2,m_{H_i}) - m_{\mu}^{(k)} ( m_{H_{2}^{\pm}}, m_{W},m_{12}^2,m_{H_i}) \bigg). \end{eqnarray} One can then see that the ultraviolet divergences cancel exactly. \subsection{Partial decay width of $H_2 \to H_1 f \bar f$} When evaluating the tensorial integrals of (\ref{d1})--(\ref{d4}), these expressions are reduced to Passarino-Veltman scalar functions: \begin{eqnarray} \delta m_{\mu}^{(k)} ( m_{H_{1}^{\pm}}, m_{H_{2}^{\pm}})= F_{\rm PV}( m_{H_i^{\pm}},m_W,m_{12}^2, m_{H_1},m_{H_j}) (p_3 + p_2)_\mu, \end{eqnarray} where $F_{\rm PV}( m_{H_i^{\pm}},m_W,m_{12}^2, m_{H_1},m_{H_j})$ is given in Appendix A. Then, comparing (\ref{Amp1}), (\ref{Amp2}) and (\ref{mmu1}), we obtain the factor $A$: \begin{eqnarray} A= e g^{2}A^+_1 F_{\rm PV}( m_{H_i^{\pm}},m_W,m_{12}^2, m_{H_1},m_{H_j}). \label{factor-A} \end{eqnarray} One can see that $A$ is a function of the same variables as $F_{\rm PV}$, multiplied by the coupling factor $e g^2 A^+_1$.
Besides, following the notation of \cite{Agashe:2014kda} for three-body decays, in addition to the variable $m_{12}^2$ defined previously, we also introduce $m_{i3}^2=(k_i+p_2)^2=2k_i\cdot p_2+m_{H_1}^2$ ($i=1, 2$). Taking this into account, one can obtain the squared amplitude (\ref{Amp1}) of the loop process (upon the usual final state spin summation): \begin{eqnarray} |\mathcal{M}|^2 &= &\frac{8 |A|^2}{m_{12}^4} \bigg((m_{H_2}^2- m_{23}^2 ) (m_{23}^2 - m_{H_1}^2) -m_{12}^2 m_{23}^2\bigg). \label{Ampsq1} \end{eqnarray} Moreover, it is convenient to define \begin{eqnarray} \lambda(m_{H_2}, m_{H_1},m_{23}^2)= \bigg((m_{H_2}^2- m_{23}^2 ) (m_{23}^2 - m_{H_1}^2) -m_{12}^2 m_{23}^2\bigg). \end{eqnarray} Then (dropping henceforth the arguments of $F_{\rm PV}$) one has \begin{eqnarray} |\mathcal{M}|^2 &= &8 (e^2 g^2 A_1^+)^2\frac{|F_{\rm PV}|^2}{m_{12}^4} \lambda(m_{H_2}, m_{H_1},m_{23}^2). \label{Ampsq} \end{eqnarray} In agreement with Ref. \cite{Agashe:2014kda}, the partial decay width of $H_2 \to H_1 e^- e^+$ is: \begin{eqnarray} \Gamma= \frac{1}{256 \pi^3 m_{H_2}^3} \int_{0}^{(m_{H_2}-m_{H_1})^2}dm_{12}^2 \Bigg( \int_{(m_{23}^2)_{min}}^{(m_{23}^2)_{max}} d m_{23}^2 |\mathcal{M}|^2 \Bigg). \label{wh2} \end{eqnarray} From (\ref{mmu1}) and (\ref{Ampsq}), one can observe that the one-loop function $F_{\rm PV}$ depends only on the integration variable $m_{12}^2$, so that we can first integrate over the variable $m_{23}^2$: \begin{eqnarray} \Gamma=\frac{1}{16 \pi^3 m_{H_2}^3} \bigg( e^2 g^{2} (A_1^+)\bigg)^2 \int_{0}^{(m_{H_2}-m_{H_1})^2} d m_{12}^2 \bigg(\frac{| F_{\rm PV}|^2}{m_{12}^4}\bigg) I_2, \end{eqnarray} where the integral over $m_{23}^2$ can be performed analytically: \begin{eqnarray} I_2(m_{H_2},m_{H_1},m_{12}^2) &=& \int_{(m_{23}^2)_{min}}^{(m_{23}^2)_{max}} d m_{23}^2 \lambda(m_{H_2}, m_{H_1},m_{23}^2)= \delta m^6,\\ \delta m^6 &=& \frac{1}{6}\bigg((m_{12}-m_{H_1}-m_{H_2}) (m_{12}+m_{H_1}-m_{H_2}) \nonumber \\ &\times&(m_{12}-m_{H_1}+m_{H_2}) (m_{12}+m_{H_1}+m_{H_2}) \bigg)^{3/2}, \end{eqnarray} with $m_{12}=\sqrt{m_{12}^2}$. With this result we can perform the numerical calculation using the LoopTools library \cite{LT}. \subsection{Effective Lagrangian} As suggested some years ago \cite{Perez:1995dc,DiazCruz:2001tn}, one can perform a general study of the discussed radiative process in a model independent way using the effective Lagrangian technique, which can parameterise the virtual effects of the new physics of a given model. This approach is mandatory in our case, as we will be implementing the effective $H_2H_1e^+e^-$ vertex in CalcHEP, which is otherwise unable to perform the calculation efficiently if using the exact formulae from the previous subsection. The effective Lagrangian for the I(2+1)HDM will be an extension of the SM one \cite{Buchmuller:1985jz}, following a similar parameterisation to the one used for the case of the 2HDM, when rare decays of neutral CP-odd \cite{Perez:1995dc} and charged Higgs \cite{DiazCruz:2001tn} bosons were implemented in this way. Following these studies, we use $SU(2) \times U(1)$ gauge invariant operators of higher dimension similar to those given in \cite{DiazCruz:2001tn}. Adopting this approach, we can define operators that satisfy all symmetries imposed in our model, in particular the discrete symmetry $Z_2$. Then the corresponding effective Lagrangian for our model is: \begin{equation} L_{\rm eff}= L_{\rm I(2+1)HDM} + \sum_{n\geq 6} \bigg[ \sum \frac{c_n^i}{\Lambda^{n-4}} (O^i_n + {\rm h.
c.}) \bigg], \end{equation} where $L_{\rm I(2+1)HDM}$ is the I(2+1)HDM Lagrangian, $\Lambda$ is the scale of new physics, the $O^i_n$s are the higher dimensional operators and the unknown $c_n^i$ parameters are their dimensionless Wilson coefficients, whose order of magnitude can be estimated since gauge invariance makes it possible to identify the order of perturbation theory at which each operator can be generated in the fundamental theory \cite{Arzt:1994gp}. This fact allows us to introduce a hierarchy among operators, e.g., when the operators are generated at one-loop level, they must be suppressed by the loop factor $(4 \pi)^{-2}$. Using this method, we can study the generic structure of any process. With the knowledge that the box diagrams and the tree-level diagrams with the off-shell $Z$ are sub-dominant, we consider the effective coefficient of the vertex $H_2 H_1 e^+ e^-$. In practice, we can implement such a vertex in the effective Lagrangian as follows: \begin{eqnarray} L_{\rm eff} &=& L_{\rm I(2+1)HDM} +\sum_i \frac{c_i}{\Lambda^2} \bigg( \it{i} ( \phi_i^\dagger D_\mu \phi_i )\bar{e}_R \gamma^\mu e_R+ \it{i} ( \phi_i^\dagger D_\mu \phi_i ) \bar{L} \gamma^\mu L \nonumber \\ &+& \it{i} (\phi_i^\dagger D_\mu \tau^a \phi_i ) \bar{L} \gamma^\mu \tau^a L+ \phi_i^\dagger \phi_i \bar{L} \phi_3 e_R \bigg)+ {\rm h.c.} +... \label{Lef-q} \end{eqnarray} where $\Lambda \geq v$ and $c_i $ can be estimated given the order of perturbation theory \cite{Buchmuller:1985jz}. In our model, for the full process $H_2 \to H_1 e^+ e^-$, we must consider the following for the coefficient $c_i$: (i) as the process is generated at one-loop level, it must be suppressed by the loop factor $(4 \pi)^{-2}$; (ii) the order in perturbation theory is proportional to $e^2 g^2$ (see (\ref{Ampsq})). A good approximation is, therefore, $c_1 \propto e^2 g^2/ (4 \pi)^2 $. The first and second operators then induce the structure of the loop calculation stemming from the diagrams of Figs. \ref{triangle-decays}--\ref{bubble-decays} while the following operators relate to the structure of the diagrams given in Fig. \ref{box}. Given the effective Lagrangian, we can derive the effective vertex $H_2 H_1 e^+ e^-$ as: \begin{eqnarray} L_{(H_2 H_1 e^+ e^-)}&=& i \frac{c_1 v^2 \sin \theta_h \cos\theta_h}{ \Lambda^2} (H_1 \partial_\mu H_2- H_2 \partial_\mu H_1) \bar{e} \gamma^\mu e \nonumber \\ &=& i K (H_1 \partial_\mu H_2- H_2 \partial_\mu H_1) \bar{e} \gamma^\mu e. \end{eqnarray} In this framework, the Wilson coefficient $c_1$ contains information on the parameters of the Higgs potential of the model, in particular on the mixing angle of the charged sector, which is consistent with the amplitude of the loop calculation (see (\ref{canceldiv}), (\ref{mmu1})). The Wilson coefficient at this order does not depend on the variables $m_{12}^2$ and $m_{23}^2$ of (\ref{wh2}); $c_1$ thus behaves as a constant with respect to these integration variables. In particular, $c_1= e^2 g^2/ (4 \pi)^2 f (m_{H_i^\pm}, \theta_c) (v/\Lambda)^2 $, where $f (m_{H_i^\pm}, \theta_c) $ is a function of the charged Higgs masses and their mixing angle \cite{DiazCruz:2001tn, Crivellin:2016ihg}, and the scale of new physics $\Lambda$ could in general be of order 1 TeV or the energy necessary at the LHC experiments to detect the DM candidate.
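As a rough numerical orientation, the loop and scale suppressions entering $c_1$ can be made explicit. The sketch below uses illustrative electroweak inputs and sets the unknown function $f(m_{H_i^\pm}, \theta_c)$ to unity, so it is an order-of-magnitude estimate only:
\begin{verbatim}
# Order-of-magnitude estimate of c_1 ~ e^2 g^2/(4 pi)^2 * (v/Lambda)^2,
# with f(m_{H_i^pm}, theta_c) set to 1 (illustrative assumption).
import math

e, g = 0.3028, 0.6517      # illustrative electroweak couplings
v, Lam = 246.0, 1000.0     # Higgs vev and an assumed new-physics scale (GeV)

c1 = (e**2 * g**2) / (4.0 * math.pi)**2 * (v / Lam)**2
print(f"c_1 ~ {c1:.1e}")   # ~ 1.5e-5: a strongly suppressed effective coupling
\end{verbatim}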
Now, we can define the effective coefficient $K$ as \begin{eqnarray} K = e^2 g^2/ (4 \pi)^2 (v/\Lambda)^2 f (m_{H_i^\pm}, \theta_c) \sin \theta_h \cos\theta_h, \end{eqnarray} which we have implemented in CalcHEP as an effective vertex $H_2 H_1 e^+ e^-$ in the following way: \begin{eqnarray} g_{H_1 H_2 e^+ e^- }= i K (p_1+p_2)_\mu \gamma^\mu. \label{effec-ver} \end{eqnarray} In order to relate the $K$-factor to all the numerical results of the previous sections, and taking into account the discussion of the Wilson coefficients, we calculate the amplitude of the process $H_2 \to H_1 e^- e^+ $ using the effective vertex in (\ref{effec-ver}), which is given by \begin{eqnarray} M=i K \bar{v}(k_1) \gamma^\mu (p_3+p_2)_{\mu} u(k_2) \end{eqnarray} so that the amplitude squared is \begin{eqnarray} |M|^2= 8 |K|^2 \lambda(m_{H_2}, m_{H_1},m_{23}^2). \end{eqnarray} Thus, the partial decay rate of the $H_2\to H_1 e^- e^+$ channel, in terms of the $K$-factor, is \begin{eqnarray} \Gamma&=& \frac{1}{256 \pi^3 m_{H_2}^3} \int_{0}^{(m_{H_2}-m_{H_1})^2}dm_{12}^2 \Bigg( \int_{(m_{23}^2)_{min}}^{(m_{23}^2)_{max}} d m_{23}^2 |M|^2 \Bigg) \\ &=&\frac{1}{16 \pi^3 m_{H_2}^3} |K|^2 \int_{0}^{(m_{H_2}-m_{H_1})^2} d m_{12}^2 I_2 = \frac{1}{16 \pi^3 m_{H_2}^3} |K|^2 I_3, \end{eqnarray} where $I_3$ is given by \begin{eqnarray} I_3 & =& \int_{0}^{(m_{H_2}-m_{H_1})^2} d m_{12}^2 I_2. \end{eqnarray} The $K$-factor is therefore given by \begin{eqnarray} K^2= \frac{16 \pi^3 m_{H_2}^3 \Gamma(H_2 \to H_1 e^+ e^-)}{I_3} \label{K-factor} \end{eqnarray} where the width $\Gamma(H_2 \to H_1 e^+ e^-)$ is calculated using LoopTools. Thus, the $K$-factor is related directly to the loop calculation through (\ref{K-factor}). Using this method, we are able to use CalcHEP, since we no longer need to perform any integration externally to the generator itself, as required by the fully fledged computation performed in the previous subsection, thereby by-passing the fact that CalcHEP is actually a tree-level generator. In order to complete the study of the decay process $H_2 \to H_1 e^+ e ^-$, it is necessary to compare the $e^+e^-$ mode with the other possible final states, $H_2 \to H_1 f \bar f$ where $f=u,d,c,s,b,\mu,\tau$. Given the effective Lagrangian in (\ref{Lef-q}), one can obtain the contribution of the fermions via the following operators: \begin{eqnarray} L_{\rm eff} &=& L_{\rm I(2+1)HDM} +\sum_i \frac{c_i}{\Lambda^2} \bigg( \it{i} ( \phi_i^\dagger D_\mu \phi_i )\bar{q}_R \gamma^\mu q_R+ \it{i} ( \phi_i^\dagger D_\mu \phi_i ) \bar{Q_L} \gamma^\mu Q_L \nonumber \\ &+& \it{i} (\phi_i^\dagger D_\mu \tau^a \phi_i ) \bar{Q_L} \gamma^\mu \tau^a Q_L+ \phi_i^\dagger \phi_i \bar{Q_L} \phi_3 b_R + \phi_i^\dagger \phi_i \bar{Q_L} \tilde{\phi}_3 t_R\bigg)+ {\rm h.c.} +... \end{eqnarray} in agreement with the general structure of the loop calculation; the general expression is \begin{eqnarray} \mathcal{M}=ie\bar{v}(k_1)\bigg(A( \slashed{p_3}+\slashed{p_2}) + (B+C( \slashed{p_3}+\slashed{p_2})) P_L+(D+E( \slashed{p_3}+\slashed{p_2})) P_R\bigg) u(k_2), \end{eqnarray} where $A, B, C, D$ and $E$ are form factors associated with the loop structure. This structure helps us to calculate all the aforementioned form factors, taking into account all the contributions of the boxes (factors $B, C, D$ and $E$) and triangles (factor $A$ given in (\ref{factor-A})) in the loops. One can then calculate each form factor separately as they are all individually convergent.
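For readers wishing to reproduce the numerics behind (\ref{wh2}) and (\ref{K-factor}), a minimal sketch follows. All inputs are illustrative assumptions: in the actual calculation $F_{\rm PV}$ is assembled from LoopTools scalar functions and vanishes as $m_{12}^2 \to 0$, whereas the constant stand-in used here requires a small infrared cut (of order $4m_e^2$) at the photon pole:
\begin{verbatim}
# Sketch: numerical width of Eq. (wh2) after the analytic m23^2 integration,
# and the K-factor of Eq. (K-factor). All inputs are illustrative; F_PV is a
# constant stand-in for the LoopTools-built form factor.
import math
from scipy.integrate import quad

e, g, A1p = 0.3028, 0.6517, 0.25        # illustrative couplings and A_1^+

def I2(m12sq, mH2, mH1):
    # (1/6)[(m12-mH1-mH2)(m12+mH1-mH2)(m12-mH1+mH2)(m12+mH1+mH2)]^(3/2)
    m12 = math.sqrt(m12sq)
    prod = ((m12 - mH1 - mH2) * (m12 + mH1 - mH2)
            * (m12 - mH1 + mH2) * (m12 + mH1 + mH2))
    return prod**1.5 / 6.0

def loop_width(mH2, mH1, F_PV=1.0e-2, ir_cut=1.05e-6):
    # ir_cut ~ 4 m_e^2 (GeV^2) tames the photon pole of the constant stand-in.
    pref = (e**2 * g**2 * A1p)**2 / (16.0 * math.pi**3 * mH2**3)
    val, _ = quad(lambda s: abs(F_PV)**2 / s**2 * I2(s, mH2, mH1),
                  ir_cut, (mH2 - mH1)**2)
    return pref * val

def K_factor(mH2, mH1, gamma):
    I3, _ = quad(lambda s: I2(s, mH2, mH1), 0.0, (mH2 - mH1)**2)
    return math.sqrt(16.0 * math.pi**3 * mH2**3 * gamma / I3)

mH2, mH1 = 59.0, 54.0                   # benchmark-like masses in GeV
G = loop_width(mH2, mH1)
print(f"Gamma ~ {G:.3e} GeV,  K ~ {K_factor(mH2, mH1, G):.3e}")
\end{verbatim}
In the production setup the width is instead computed with LoopTools, and only the resulting $K$ is passed to CalcHEP via (\ref{effec-ver}).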
Finally, notice that in the channel $H_2 \to H_1 b \bar{b}$ the mass of the top quark appears in the boxes: while this makes the calculation more cumbersome, the mass effects do not contribute significantly to the total rate. Besides, in the approximation $m_e=0$, the factors $B, C, D$ are zero and the factor $E$ is small. In appendix \ref{appendix-B} we show the complete expressions of the factors associated with the box diagrams of Fig. \ref{box}. \section{Results} \label{results} The benchmark scenarios that we study here do not necessarily correspond to regions of the parameter space where our DM candidate accounts for all the observed relic density in agreement with Planck data. In fact, the aim of these benchmark scenarios is to show in which regions of the parameter space the model has a discovery potential at the LHC. Following the discussion in Sect. \ref{simplified}, we define three base benchmark scenarios, A50, I5 and I10, in the low DM mass region ($m_{H_1} \leq 90$ GeV) as shown in Tab. \ref{BPs}. The main distinguishing parameter here is the mass splitting between $H_1$ and the other CP-even scalar, $H_2$. Benchmark A50 ($m_{H_2}-m_{H_1}=50$ GeV) is taken from the analysis done in \cite{Keus:2014jha}. Relatively large mass splittings between $H_1$ and the other neutral scalars lead to standard DM annihilation in the Universe, providing us with a DM candidate which is in agreement with DM searches for a large part of the parameter space. However, we expect the tree-level decays to dominate over the loop signal through the $H_1 A_1 Z$ vertex. Benchmarks I5 ($m_{H_2}-m_{H_1}=5$ GeV) and I10 ($m_{H_2}-m_{H_1}=10$ GeV) have an intermediate mass splitting between $H_1$ and $H_2$ of the order of a few GeV. As mentioned in section 2.4, this influences the thermal history of DM, due to the appearance of coannihilation channels. For the I benchmarks, we expect the tree-level decays to be reduced, since there is a small mass gap between $H_1$ and $A_{1,2}$. Therefore, the intermediate gauge boson is produced off-shell. Further decreasing the $H_1$-$H_2$ and $H_1$-$A_{1,2}$ mass splittings\footnote{One needs to take extra care with very small mass splittings, as they might lead to a large particle lifetime which will cause the particle to decay outside the detector.} strengthens the desired loop signal, with a further reduction of all tree-level decays. Note, however, that as the mass splitting increases, the loop process acquires more phase space and starts seeing the $Z^* \to ll$ contribution, so the partial width grows as a result. In all cases, the differences between $m_{H_1}$ and the masses of both charged scalars are relatively large. This leads to important consequences for the thermal history of DM particles: charged scalars are short-lived and they will not take part in the freeze-out process of $H_1$. However, this mass difference is not big enough to suppress the studied loop processes. Increasing this mass difference would lead to a smaller cross-section and, therefore, worse detection prospects. We would also like to stress that all the chosen mass splittings are in agreement with EWPT constraints, which disfavour a significant discrepancy between the masses of charged and neutral particles. On the other hand, a significant reduction of this mass splitting would increase the coannihilation effect in the Universe, hence leading to a heavily reduced relic density and thus disfavouring the 3HDM as a model for Dark Matter. \begin{table}[h!]
\begin{tabular}{|c||c|c|c|c|c|} \hline Benchmark & $m_{H_2} - m_{H_1}$ & $m_{A_1} - m_{H_1}$ & $m_{A_2} - m_{H_1}$ & $m_{H^\pm_1} - m_{H_1}$ & $m_{H^\pm_2} - m_{H_1}$ \\ \hline A50 & 50 & 75 & 125 & 75 & 125 \\ \hline I5 & 5 & 10 & 15 & 90 & 95 \\ \hline I10 & 10 & 20 &30 & 90 & 100 \\ \hline \end{tabular} \caption{Definition of benchmark scenarios with the mass splittings shown in GeV.} \label{BPs} \end{table} Figs. \ref{A50-plot}--\ref{I10-plot} show the anatomy of the given scenarios, which include not only the cross sections for leptonic ($\cancel{E}_{T} l^+ l^-$) and hadronic ($\cancel{E}_{T} q \bar q$) final states, but also the relevant couplings in each case with the same colour coding. The Higgs-DM coupling is also shown for reference. For each benchmark scenario, we calculate the cross section for three processes, namely, the ggF process (\ref{first}), the tree-level process (\ref{tree}) and the VBF process (\ref{VBF}), and present the dominant couplings entering in each case. \clearpage \begin{figure}[h!] \begin{center} \includegraphics[scale=0.37]{A50lep.pdf}\; \includegraphics[scale=0.37]{A50jet.pdf}\\[3mm] \includegraphics[scale=0.8]{CouplingsA50-normal.pdf} \caption{The anatomy of scenario A50. The plots on the top show the cross sections of the tree-level, ggF and VBF processes with leptonic (left) and hadronic (right) final states. The red regions are ruled out by LHC ($m_{DM} < 53$ GeV) and by direct detection ($m_{DM} > 73$ GeV). At the bottom we show the dominant couplings in each process with the same colour coding; the Higgs-DM coupling is shown for reference. Note that $g_{hH_1H_2}$ appears with the $K$-factor in the cross section calculations.} \label{A50-plot} \end{center} \end{figure} Let us first focus on scenario A50 presented in Fig. \ref{A50-plot}, which has two special features. First, the mass splittings between $H_1$ and the other inert particles are relatively large, as are the main couplings (in particular $g_{ZH_1 A_1}$), which leads to large tree-level $Z$-mediated cross sections (the blue curve). Second, the Higgs-DM coupling, $g_{h H_1H_1}$, is chosen such that the relic density is in exact agreement with Planck measurements. To fulfil that, around the Higgs resonance the coupling needs to be very small, of the order of $10^{-4}$ \cite{Keus:2014jha}. As the $g_{h H_1 H_2}$ coupling is closely related to $g_{h H_1H_1}$, we observe a sudden dip in the orange curve ($g_{h H_1 H_2}$), which then leads to a reduced cross section for the ggF processes, driven by that particular coupling. We also observe that the cross sections for the VBF processes, which depend mainly on large mass splittings and relatively constant gauge couplings, are, as expected, roughly constant for this benchmark. \clearpage \begin{figure}[h!] \begin{center} \includegraphics[scale=0.6]{I5lep.pdf} \includegraphics[scale=0.6]{I5jet.pdf}\\[3mm] \includegraphics[scale=0.8]{CouplingsI5-log.pdf} \caption{The anatomy of scenario I5. The plots on the top show the cross sections of the tree-level, ggF and VBF processes with leptonic (left) and hadronic (right) final states. At the bottom we show the dominant couplings in each process on a log scale with the same colour coding; the Higgs-DM coupling is shown for reference. Note that $g_{hH_1H_2}$ appears with the $K$-factor in the cross section calculations.} \label{I5-plot} \end{center} \end{figure} Scenario I5, shown in Fig. \ref{I5-plot}, differs from scenario A50 above.
Here, the mass splittings are much smaller, and the Higgs-DM coupling is set to a constant value for all masses, as seen in Fig. \ref{I5-plot}. This makes the phase space structure more visible. For $m_{H_1} < m_h/2$ all cross sections are roughly constant, with the ggF processes enhanced through the resonant Higgs production. However, after crossing the Higgs resonance region, with no increase of the Higgs-DM coupling to compensate for that, we observe a rapid decrease of the value of the cross section. For larger masses the cross sections are too small to be observed at the current LHC luminosity. \clearpage \begin{figure}[h!] \centering \includegraphics[scale=0.6]{I10lep.pdf} \includegraphics[scale=0.6]{I10jet.pdf}\\[3mm] \includegraphics[scale=1]{CouplingsI10-log.pdf} \caption{The anatomy of scenario I10. The plots on the top show the cross sections of the tree-level, ggF and VBF processes with leptonic (left) and hadronic (right) final states. At the bottom we show the dominant couplings in each process on a log scale with the same colour coding; the Higgs-DM coupling is shown for reference. Note that $g_{hH_1H_2}$ appears with the $K$-factor in the cross section calculations.} \label{I10-plot} \end{figure} Very similar behaviour is present for scenario I10 depicted in Fig. \ref{I10-plot}, where, similarly to scenario I5, the Higgs-DM coupling is set to a constant value for all masses. Again we observe the almost constant cross sections, which are rapidly reduced after we cross the Higgs threshold. \clearpage \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline Decay channels & BR$(H_2\to H_1 X)$ & tree-level & ggF & VBF \\ \hline $H_2\to b\overline{b}H_1$ & 1.88e-01 & 2.49e-03 & 1.18e-07 & 2.05e-06 \\ \hline $H_2\to s\overline{s}H_1$ & 2.00e-01 & 1.97e-03 & 1.26e-07 & 2.19e-06 \\ \hline $H_2\to c\overline{c}H_1$ & 2.00e-01 & 3.94e-03 & 1.26e-07 & 2.19e-06 \\ \hline $H_2\to d\overline{d}H_1$ & 2.00e-01 & 3.54e-03 & 1.26e-07 & 2.19e-06 \\ \hline $H_2\to u\overline{u}H_1$ & 2.00e-01 & 1.97e-03 & 1.26e-07 & 2.19e-06 \\ \hline \hline $H_2\to\tau^+\tau^-H_1$ & 6.56e-02 & 8.09e-04 & 4.13e-08 & 7.15e-07 \\ \hline $H_2\to\mu^+\mu^-H_1$ & 6.69e-02 & 8.22e-04 & 4.21e-08 & 7.29e-07 \\ \hline $H_2\to e^+e^-H_1$ & 6.69e-02 & 1.34e-03 & 4.21e-08 & 7.29e-07 \\ \hline \end{tabular} \caption{BR and cross sections (in pb units) for different processes for $m_{\rm DM}=54$ GeV in scenario A50.} \label{A50-table} \end{center} \end{table} \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline Decay channels & BR$(H_2\to H_1 X)$ & tree-level & ggF & VBF \\ \hline $H_2\to s\overline{s}H_1$ & 2.22e-01 & 5.71e-03 & 9.70e-04 & 7.93e-06 \\ \hline $H_2\to c\overline{c}H_1$ & 1.63e-01 & 1.52e-03 & 7.12e-05 & 5.82e-06 \\ \hline $H_2\to d\overline{d}H_1$ & 2.28e-01 & 3.74e-03 & 9.96e-05 & 8.14e-06 \\ \hline $H_2\to u\overline{u}H_1$ & 2.28e-01 & 4.80e-03 & 9.96e-05 & 8.14e-06 \\ \hline \hline $H_2\to\tau^+\tau^-H_1$ & 7.55e-03 & 1.13e-03 & 3.30e-06 & 2.70e-07 \\ \hline $H_2\to\mu^+\mu^-H_1$ & 7.54e-02 & 7.47e-04 & 3.30e-05 & 2.69e-06 \\ \hline $H_2\to e^+e^-H_1$ & 7.59e-02 & 1.73e-03 & 3.32e-05 & 2.71e-06 \\ \hline \end{tabular} \caption{BR and cross sections (in pb units) for different processes for $m_{\rm DM}=54$ GeV in scenario I5.} \label{I5-table} \end{center} \end{table} \begin{table} \begin{center} \begin{tabular}{|c|c|c|c|c|} \hline Decay channels & BR$(H_2\to H_1 X)$ & tree-level & ggF & VBF \\ \hline $H_2\to b\overline{b}H_1$ & 2.69e-02 & 3.67e-03 & 5.33e-05 & 3.64e-06 \\
\hline $H_2\to s\overline{s}H_1$ & 2.02e-01 & 2.27e-02 & 4.00e-04 & 2.74e-05 \\ \hline $H_2\to c\overline{c}H_1$ & 1.87e-01 & 2.46e-03 & 3.70e-04 & 2.53e-05 \\ \hline $H_2\to d\overline{d}H_1$ & 2.03e-01 & 3.14e-03 & 4.02e-04 & 2.75e-05 \\ \hline $H_2\to u\overline{u}H_1$ & 2.03e-01 & 1.37e-02 & 4.02e-04 & 2.75e-05 \\ \hline \hline $H_2\to\tau^+\tau^-H_1$ & 4.21e-02 & 1.65e-03 & 8.34e-05 & 5.70e-06 \\ \hline $H_2\to\mu^+\mu^-H_1$ & 6.76e-02 & 1.29e-03 & 1.34e-04 & 9.16e-06 \\ \hline $H_2\to e^+e^-H_1$ & 6.77e-02 & 3.70e-03 & 1.34e-04 & 9.17e-06 \\ \hline \end{tabular} \caption{BR and cross sections (in pb units) for different processes for $m_{\rm DM}=54$ GeV in scenario I10.} \label{I10-table} \end{center} \end{table} \begin{table} \begin{center} \begin{tabular}{|c|c|} \hline scenario & cross section (pb) \\ \hline \hline A50 & 6.77e-09 \\ \hline I5 & 7.91e-08 \\ \hline I10 & 4.19e-08\\ \hline \end{tabular} \caption{Cross section of the background process, $h$ decay into two charged scalars, for $m_{\rm DM}=54$ GeV.} \label{background} \end{center} \end{table} In Tabs. \ref{A50-table}--\ref{I10-table}, we show the BR of $H_2 \to H_1 f \bar f$ for any $f \bar f$ pair whose production is allowed by the $m_{H_2}-m_{H_1}$ mass splitting, for an exemplary value of $m_{\rm DM}=54$ GeV. For each decay channel, we also show the cross section (in pb) for all three processes discussed: the tree-level background as well as the ggF and VBF cross sections for Higgs $h$ production times the respective branching ratio for $h\to H_1 H_2\to H_1 H_1 f \bar f$. The cross section for $h$ production and decay into two charged scalars for $m_{\rm DM}=54$ GeV is very small, as shown in Tab. \ref{background}. \clearpage \section{Conclusion and outlook} \label{summa} In this paper, we have assessed the sensitivity of the LHC to Higgs signals in the $\cancel{E}_{T}\; f \bar f$ channel, $f=u,d,c,s,b,e,\mu,\tau$, with invariant mass of the $ f \bar f$ pair much smaller than the $Z$ mass. This signature would in fact point towards an underlying 3HDM structure of the Higgs sector, with one active and two inert doublets (so that the scenario can evocatively be nicknamed the I(2+1)HDM), induced by the decay $H_2\to H_1 f \bar f$, where $H_1$ represents the lightest CP-even neutral Higgs state from the inert sector (thereby being a DM candidate) and $H_2$ the next-to-lightest one. The decay proceeds via loop diagrams induced by the propagation of both SM weak gauge bosons ($W^\pm$ and $Z$) and inert Higgs states ($H^\pm_{1,2}$ and $A_{1,2}$) in two-, three- and four-point topologies, wherein the leading contribution comes from the intermediate decay step $H_2\to H_1\gamma^*$, involving a very low-mass, scalarly polarised virtual photon, eventually splitting into a collimated $f \bar f$ pair, which would be a distinctive signature of this Higgs construct. In fact, the corresponding 2HDM version, with one inert doublet only, i.e., the I(1+1)HDM, contains only one CP-even and only one CP-odd neutral Higgs state, so that no such decay is possible owing to CP conservation. This signature would emerge from SM-like Higgs boson production, most copiously via ggF and VBF, followed by a primary $h\to H_2H_1$ decay, so that the complete particle final state is $H_1 H_1f \bar f$, wherein the two DM candidates would produce missing transverse energy, accompanied by some hadronic activity in the forward and backward directions, originating from initial-state gluon radiation or (anti)quark remnant jets, respectively, for ggF and VBF.
In fact, amongst the possible fermionic flavours $f$, the cleanest signature is afforded by the leptonic ones ($f=l$), in view of the overwhelming QCD background. While the muon and tauon cases are the cleanest, the latter being larger than the former (assuming only leptonic decays of the $\tau$'s), the electron case is potentially the one giving rise to the most spectacular signal which, owing to parton distribution imbalances boosting the $h$ state, would appear at detector level as a single EM shower with substantial $\cancel{E}_{T}$ surrounding it. However, there is a substantial tree-level contribution, due to $q\bar q\to Z^{*}H_1H_1$ topologies (a first one involving single $h$-strahlung followed by $h\to H_1H_1$ splitting, a second one via a $Z^*Z^*H_1H_1$ vertex and a third one through $A_{1,2}H_1$ production followed by $A_{1,2}\to H_1 Z^*$ decay), which is potentially much larger than the aforementioned loop diagrams, thereby acting as an intrinsic background. In fact, even though the $Z^*$ ought to be significantly off-shell in its transition to $f \bar f$ pairs to mimic the $\gamma^*\to f \bar f$ splitting, this can happen with substantial rates, because of the rather large value of the total $Z$ decay width. It is therefore clear that the $H_2\to H_1 f \bar f$ signal can only be established in the presence of a rather small mass gap between $H_2$ and $H_1$. To this effect, we have then defined a few benchmarks on the I(2+1)HDM parameter space where the mass difference $m_{H_2}-m_{H_1}$ is taken to be increasingly small, varying from 50 to 10 to 5 GeV. Correspondingly, we have seen the relevance of the loop processes growing with respect to the tree-level one, with ggF dominating VBF, to the point that the former become comparable to the latter for cross sections and BRs directly testable at Run 2 and/or Run 3 of the LHC. This is particularly true over the DM mass region observable at the CERN machine, i.e., for small values of the DM candidate mass, typically less than $m_h/2$. In this case, the cumulative signal can be almost within an order of magnitude or so of such an intrinsic background. Furthermore, other (irreducible) background processes can be present. The first one is the tree-level $h$ decay into two charged scalars with the same signature ($\cancel{E}_{T}\; f \bar f$), albeit containing two (invisible) additional neutrinos, which has a very small cross section, as shown in Tab. \ref{background} for each of our benchmarks for the usual illustrative value of $m_{\rm DM}=54$ GeV. A second one is due to $gg\to h\to VV$ (via resonant $h$ production) and $q\bar q\to VV$ (gauge boson pair production), where $VV=W^+W^-$ or $ZZ$. These two subprocesses have inclusively very large cross sections, of ${\cal O}(10~{\rm pb})$ (prior to $V$ decays), compared to our signals, and a significant amount of (differential) kinematical selection ought to be employed to reduce this noise, which is beyond the scope of this paper. However, a few handles can clearly be exploited. For the case $V=Z$, a veto $m_{ll}\neq m_Z$ can always be adopted. For the case $V=W^\pm$, a requirement of the kind $m_{ll}\ll m_W$, combined with the request of identical lepton flavours, can be used. We have obtained these results in the presence of up-to-date theoretical and experimental constraints, including amongst the latter those from colliders, DM searches and cosmological relic density.
Therefore, we believe that the advocated discovery channel might serve as a smoking-gun (collider) signature of the I(2+1)HDM, which may enable one to distinguish it from the I(1+1)HDM case in a few years to come. In fact, once this signal is established and some knowledge of the $H_2$ and $H_1$ masses gained, the latter can be used to extract additional manifestations of the prevalent $H_2\to H_1\gamma^*$ decay, by considering the selection of additional splittings $\gamma^*\to f\bar f$, where $f$ can be identified with $q=u,d,c,s,b$, depending upon the relative value of $m_{H_2}-m_{H_1}$ and $2m_f$. Finally, in reaching these conclusions, we emphasise that we have performed a complete one-loop calculation of the $H_2\to H_1 f\bar f$ decay process, including all topologies entering at the same perturbative order, i.e., not only those proceeding via $H_2\to H_1\gamma^*\to H_1 f\bar f$, which was never attempted before, and we have collected the relevant formulae in this paper for future use. In conclusion, the 3HDM with two inert doublets provides a well-motivated dark matter model with distinctive LHC signatures in certain regions of parameter space arising from novel Higgs decays, the most spectacular being the $e^+e^-+ \cancel{E}_{T}$ mono-shower. \section*{Acknowledgements} SFK and SM acknowledge support from the STFC Consolidated grant ST/L000296/1 and the European Union Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreements InvisiblesPlus RISE No. 690575 and Elusives ITN No. 674896. SM is financed in part through the NExT Institute. SM and VK acknowledge the H2020-MSCA-RISE-2014 grant no. 645722 (NonMinimalHiggs). VK's research is partially supported by the Academy of Finland project 274503. DS is supported in part by the National Science Center, Poland, through the HARMONIA project under contract UMO-2015/18/M/ST2/00518. JH-S, DR and AC are supported by CONACYT (M\'exico), VIEP-BUAP and PRODEP-SEP (M\'exico) under the grant: ``Red Tem\'atica: F\'{\i}sica del Higgs y del Sabor".
\section{Introduction} \label{sec:Introduction} Production and exchange of information in complex systems with interacting components can be quantified via entropy measures \cite{cafaro2016thermodynamic,parrondo2015thermodynamics,kawai2007dissipation,horowitz2014thermodynamics,still2012thermodynamics,ortega2013thermodynamics,san2005information}. Discriminating between empirical data and models in terms of information content is interesting from several viewpoints. Consider an experiment whose outcomes obey the probability distribution $P$, whereas the distribution $Q$ is a model for the same experiment. Quantifying the error made by wrongly assuming the model in place of the correct information content is relevant to a broad class of phenomena \cite{chen2021wiener,vedral2002role}. Such information-theoretical concepts also bring together the thermodynamic implications intrinsically related to the dynamical evolution of the system under investigation. Hence, the dynamics of the information transferred along subsequent transformative stages of a complex system can be described in terms of the divergence of the probability distributions $P$ at time $t$ and $P'$ at a subsequent time $t'$.\\ Entropy concepts and information-theoretical tools are applied in fields as diverse as climate, turbulence, neurology, biology and economics \cite{kleeman2002measuring,granero2018kullback,backus2014sources,tozzi2021information} and are increasingly adopted in unsupervised learning of unlabelled data, where similarity/dissimilarity measures are concerned with dynamic rather than static features of data clustering \cite{ullmann2021validation,meilua2007comparing,liao2005clustering}. The \textit{cluster entropy} $\mathcal{S_{C}}(P_j)$ is defined as a Shannon entropy measure with $P_j$ the power-law probability distribution of the clusters formed in long-range correlated data sets \cite{carbone2004analysis,carbone2007scaling}. $\mathcal{S_{C}}(P_j)$ has proven able to quantify the heterogeneity and dynamics of long-range correlated stochastic processes in a broad range of applications \cite{carbone2013information,ponta2021information}. By extending the definition to continuous random variables, the \textit{differential cluster entropy} $\mathcal{S_{C}}(P)$ added clues to the meaning of the different contributions of the terms entering the cluster entropy relationship. \par In this work, the \textit{relative cluster entropy} or \textit{cluster divergence} $\mathcal{D_{C}}(P \| Q) $ of the cluster probability distribution $P$ with respect to a model distribution $Q$ is proposed. To illustrate how the \textit{relative cluster entropy} operates, synthetic and real-world data featuring power-law distribution behavior are considered. First, the approach is implemented on fractional Brownian motions ({\em fBms}) with given correlation exponent (Hurst exponent $H$). A systematic dependence of $\mathcal{D_{C}}(P \| Q) $ on the Hurst exponent is found. The \textit{minimum relative cluster entropy} principle is then implemented as a selection criterion to extract the optimal correlation exponent of the sequence. Furthermore, as a real-world case, we study the divergence $\mathcal{D_{C}}(P \| Q) $ of financial price series. The probability distribution $P$ is obtained by ranking the clusters generated in each market price series and compared to the distribution $Q$ drawn from synthetic data adopted as a model.
Finally, the \textit{minimum relative entropy} principle yields the best estimate of the correlation exponents of the financial series and quantifies the deviation of the price series from the model. \par The manuscript is organized as follows. In Section \ref{sec:Relative} the main computational steps of the \textit{relative cluster entropy} method are described for discrete variables. The approach is illustrated for synthetic data (fractional Brownian motions) and real-world data (market price series). In Section \ref{sec:Discussion} the \textit{relative cluster entropy} is extended and its expression for continuous variables is derived; conclusions and suggestions for further development are drawn. \begin{figure*}[htb!] \includegraphics[scale=0.3]{Figure1a.png} \includegraphics[scale=0.3]{Figure1b.png} \includegraphics[scale=0.3]{Figure1c.png}\\ \includegraphics[scale=0.3]{Figure1d.png} \includegraphics[scale=0.3]{Figure1e.png} \includegraphics[scale=0.3]{Figure1f.png}\\ \includegraphics[scale=0.3]{Figure1g.png} \includegraphics[scale=0.3]{Figure1h.png} \includegraphics[scale=0.3]{Figure1i.png} \caption{Plot of $\mathcal{D}_{j}$, defined by Eq.~(\ref{Kullbackdtau}), as a function of the cluster duration $\tau_j$ for \textit{fBm} pairs with Hurst exponents $H_{1}$ and $H_{2}$. The cluster frequency $P(\tau_j,n)$ is obtained by counting the occurrences of clusters with duration $\tau_j$ in fractional Brownian motions with Hurst exponent $H_{1}$. A simple Brownian motion, i.e. a $fBm$ with $H_{2} =0.50$, is taken to obtain the cluster partition and the model probability $Q(\tau_j ,n)$. In the above figures, $H_{1}$ varies from $0.20$ (top-left) to $0.80$ (bottom-right). The length of the series is equal to $N=500000$ for all the graphs. Different curves in each graph refer to different $n$ values ($n=50, 100, 1000, 2000$) as indicated by the arrow. At large values of the parameter $n$, the curves tend to the asymptotic value $\mathcal{D}_{j} =0 $, expected at large $\tau_j$, whereas they exhibit a diverging behavior at small values of $\tau_j$. At small values of the parameter $n$ the opposite behaviour is observed: the curves tend to the theoretical value expected at small values of $\tau_j$, whereas they diverge at large $\tau_j$. The properties of the cluster probability divergence $\mathcal{D}_{j}$ are discussed in Section \ref{sec:Discussion} on the basis of the analytical expression of the divergence obtained for continuous variables $\tau$ (Eq.~(\ref{Kullbackc0})). } \label{fig:KulbackFBM} \end{figure*} \par \section{Methods and Results} \label{sec:Relative} In this section, the \textit{relative cluster entropy} approach is described. The interest is in developing a divergence measure able to evaluate the situation where a model probability distribution $Q$ is defined in parallel to the true probability distribution function $P$ of the cluster partition. The method is applied to stochastic processes obeying power-law distributions, like fractional Brownian motion and market prices. Before illustrating how the proposed \textit{cluster divergence} works, a few definitions are recalled. \begin{figure*}[htb!]
\includegraphics[scale=0.3]{Figure3INDUFBM05.png} \includegraphics[scale=0.3]{Figure3INDUFBM06.png} \includegraphics[scale=0.3]{Figure3INDUFBM07.png}\\ \includegraphics[scale=0.3]{Figure3SPXFBM05.png} \includegraphics[scale=0.3]{Figure3SPXFBM06.png} \includegraphics[scale=0.3]{Figure3SPXFBM07.png}\\ \includegraphics[scale=0.3]{Figure3CCMPFBM05.png} \includegraphics[scale=0.3]{Figure3CCMPFBM06.png} \includegraphics[scale=0.3]{Figure3CCMPFBM07.png}\\ \caption{ Plot of the quantity $\mathcal{D}_{j}$, defined by Eq.~(\ref{Kullbackdtau}), vs. cluster duration $\tau_j$. The cluster frequency $P(\tau_j, n)$ has been estimated on prices of the DJIA, S\&P500, NASDAQ indexes. The model probability $Q(\tau_j, n)$ has been estimated by considering the clusters generated by a $fBm$ with Hurst exponent $H_2$ ranging from $0.50$ to $0.70$ and length $N=492023$ equal to the sampled index series. Different curves in each graph refer to different values of the parameter $n$ ($n=50, 100, 1000$). } \label{fig:KulbackFIN} \end{figure*} Consider the time series $\{x_t \}$ of length $N$ and moving average $\{\widetilde{x}_{t,n}\}$ of length $N-n$ with $n$ the moving average window. For each $n$, the function $\{\widetilde{x}_{t,n}\}$ generates a partition $\{\cal{C}\}$ of non-overlapping clusters between consecutive intersections of $\{x_t \}$ and $\{\widetilde{x}_{t,n}\}$. Each cluster $j$ has duration $\tau_j\equiv \|t_{j}-t_{j-1}\|$ \noindent where the instants $t_{j-1}$ and $t_j$ refer to subsequent intersection pairs. The empirical distribution of the frequencies of the cluster duration $P(\tau_j,n)$ can be obtained by ranking the number of clusters ${\mathcal N}(\tau_1,n),{\mathcal N}(\tau_2,n), ..., {\mathcal N}(\tau_j,n)$ according to their duration $\tau_1, \tau_2,..., \tau_j$ for each $n$ and is defined as: \begin{equation} P(\tau_j,n)=\frac{{\mathcal N}(\tau_j,n)}{{\mathcal N_C}(n)} \end{equation} where ${\mathcal N_C}(n)=\sum_{j=1}^{k(n)} {\mathcal N}(\tau_j,n)$ is the number of clusters generated by the partition for each $n$, with $k=\sum_{n=1}^{N}{\mathcal N_C}(n)$ the total number of clusters for all possible values of $n$. \par The cluster entropy $\mathcal{S_{C}}[P] $ is obtained by introducing the cluster frequency $P(\tau_j,n)$ in the Shannon expression: \begin{equation} \mathcal{S_{C}}[P] = - \sum_{j, n} P(\tau_j, n)\log P(\tau_j,n) \hspace{5pt}, \label{Shannon} \end{equation} with the normalization condition: \begin{equation} \sum_{n=1}^N \sum_{j=1}^{{\mathcal N_C}(n)} P(\tau_j,n)= 1 \hspace{5pt}. \end{equation} \par The cluster entropy approach has been applied and provided interesting clues regarding the intrinsic heterogeneity and dynamics of real-world data sequences \cite{carbone2013information,ponta2021information}. \par Here, the \textit{relative cluster entropy} $\mathcal{D_{C}}(P \| Q) $ is proposed to quantify the erroneous information yielded when a model probability distribution $Q$ is assumed in place of the true probability distribution $P$. A measure of distinguishability between two probability distributions $P$ and $Q\,$ is the \textit{Kullback-Leibler divergence}, defined for discrete variables as $ \mathcal{D_{KL}}(P\| Q)=\sum_{j} P_{j} \log \left({P_{j}}/{Q_{j}}\right) $, with the conditions $\mathrm{supp} (P) \subseteq \mathrm{supp}(Q)$ and $ \mathcal{D_{KL}}(P\| Q) \geq 0 $, with $ \mathcal{D_{KL}}(P\| Q) = 0 $ for $P=Q\,$. The \textit{minimum relative entropy} principle is then adopted as an optimization criterion for model selection and statistical inference.
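A minimal computational sketch may help fix ideas. The implementation below (Python) adopts a trailing moving average and a simple sign-change rule for the intersections; these are our own illustrative choices, and other alignment conventions are possible:
\begin{verbatim}
# Sketch: cluster partition of a series by its moving average and the
# resulting cluster-duration frequencies P(tau_j, n) of Eq. (1).
import numpy as np
from collections import Counter

def cluster_durations(x, n):
    """Durations tau_j between consecutive intersections of the series x
    and its moving average of window n (trailing average assumed)."""
    ma = np.convolve(x, np.ones(n) / n, mode="valid")   # length N - n + 1
    xs = x[n - 1:]                                      # align the two series
    sign = np.sign(xs - ma)
    crossings = np.where(np.diff(sign) != 0)[0] + 1
    return np.diff(crossings)

def cluster_frequencies(x, n):
    counts = Counter(cluster_durations(x, n))
    total = sum(counts.values())
    return {tau: c / total for tau, c in sorted(counts.items())}

# Example: cluster entropy contribution of a random walk, window n = 50.
rng = np.random.default_rng(0)
x = np.cumsum(rng.standard_normal(100_000))
P = cluster_frequencies(x, 50)
S = -sum(p * np.log(p) for p in P.values())   # Shannon entropy at this n
print(f"distinct durations: {len(P)},  S_C contribution: {S:.3f}")
\end{verbatim}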
The \textit{relative cluster entropy} is defined in terms of the discrete cluster durations $\tau_j$ as follows: \begin{equation} \mathcal{D}_{j,n}[P || Q ] = P(\tau_j, n)\log \frac{P(\tau_j, n)}{Q(\tau_j, n)} \hspace{5pt}, \label{Kullbackdtau} \end{equation} where the index $j$ refers to the set of clusters with duration $\tau_j$ occurring in the partition obtained for a given $n$ and the frequencies $P(\tau_j,n)$ and $Q(\tau_j,n)$ satisfy the condition $\mathrm{supp} (P) \subseteq \mathrm{supp}(Q)$. By using Eq.~(\ref{Kullbackdtau}), the \textit{relative cluster entropy} is written as: \begin{align} \mathcal{D_{C}}[P|| Q] & = \sum_{n,j} \mathcal{D}_{j,n}[P(\tau_j, n)|| Q(\tau_j, n) ] = \nonumber \\ & = \sum_{n,j} P(\tau_j, n)\log \frac{P(\tau_j, n)}{Q(\tau_j, n)} \hspace{5pt}, \label{Kullbackdtaun} \end{align} where the index $n$ runs over the allowed set of time window values, $n \in(1,N)$. \begin{figure*}[] \includegraphics[scale=0.31]{Figure4b.png} \includegraphics[scale=0.31]{Figure4c.png} \includegraphics[scale=0.31]{Figure4a.png} \caption{Plot of the quantity $\mathcal{D_{C}}[P|| Q]$, defined by Eq.~(\ref{Kullbackdtaun}), vs. cluster duration $\tau_j$. The curves are obtained by summing the quantities $\mathcal{D}_{j}[P|| Q]$, such as those shown in Fig.~\ref{fig:KulbackFIN}, over the parameter $n$ for the prices of DJIA, S\&P500, NASDAQ. Each curve in the figures corresponds to the cluster divergence with the probability $P(\tau_j, n)$ referred to the market price series $p_t$ and the model probability $Q(\tau_j, n)$ referred to \textit{fBms} with Hurst exponent $H_2$ ranging from $0.50$ to $0.70$ with step $0.1$ as indicated by the arrow. } \label{fig:Kulback0507} \end{figure*} \begin{figure*}[] \includegraphics[scale=0.4]{Figure5Color3MercatiNewStyle.png} \caption{Plot of the quantity $\mathcal{D_{V}}$ defined by Eq.~(\ref{Kullbackdmin}) for the relative cluster entropy curves plotted in Fig.~\ref{fig:Kulback0507} vs. the Hurst exponent $H_2$ of the model distribution $Q(\tau_j,n)$. The quantity $\mathcal{D_{V}}$ is estimated by means of the variance of $\mathcal{D}_{C}$ with respect to $0$ (the null hypothesis for $P=Q$) for the market time series in Fig.~\ref{fig:Kulback0507} over the cluster lifetime interval $1<\tau_j<20$. Each point is evaluated by using the definition given in Eq.~(\ref{Kullbackdmin}) for each market and for each $fBm$ with assigned Hurst exponent $H_2$. The Hurst exponent $H_2$ of the model distribution $Q(\tau_j,n)$ ranges between $0.50$ and $0.70$ with step $0.01$. The minimum divergence is obtained for $H_1=0.55$ (DJIA), $H_1=0.57$ (S\&P500) and $H_1=0.63$ (NASDAQ).} \label{fig:varianza} \end{figure*} \par To illustrate how the proposed approach operates, pairs of artificially generated fractional Brownian motions (\textit{fBms}) are analysed in terms of the \textit{relative cluster entropy} defined by Eqs.~(\ref{Kullbackdtau}-\ref{Kullbackdtaun}). \textit{fBms} $x_t^{H} $ with ${t \geqslant 0}$ are power-law correlated stochastic processes, defined by a centered Gaussian process with stationary increments and covariance given by $ \langle x_{s}^{H}x_{t}^{H}\rangle=\frac{1}{2}\left(t^{2 H}+s^{2 H}-|t-s|^{2 H}\right)$ with $H \in(0,1)$ the Hurst exponent. Power-law behaviour of the correlation function implies very slow memory decay and non-Markovianity. Synthetic \textit{fBm} sequences have been generated with assigned Hurst exponent $H$ and length $N$ by using the code available at \url{https://project.inria.fr/fraclab/}.
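A self-contained alternative to FracLab, used here only for illustration, generates fractional Gaussian noise by Cholesky factorisation of its covariance and evaluates Eq.~(\ref{Kullbackdtau}) term by term, reusing the function \texttt{cluster\_frequencies} from the previous sketch. Cholesky synthesis is exact but $O(N^3)$, so the series length below is far shorter than the $N=500000$ of the production runs:
\begin{verbatim}
# Sketch: relative cluster entropy terms D_j between an fBm with exponent H1
# and a Brownian model (H2 = 0.5), cf. Eq. (4). Illustrative lengths only.
import numpy as np

def fbm(N, H, rng):
    """fBm via Cholesky factorisation of the fGn covariance (exact, O(N^3))."""
    k = np.arange(N)
    lag = np.abs(k[:, None] - k[None, :])
    cov = 0.5 * ((lag + 1.0)**(2*H) - 2.0*lag**(2*H) + np.abs(lag - 1.0)**(2*H))
    L = np.linalg.cholesky(cov + 1e-12 * np.eye(N))
    return np.cumsum(L @ rng.standard_normal(N))

rng = np.random.default_rng(1)
n = 50                                     # moving-average window
P = cluster_frequencies(fbm(1000, 0.7, rng), n)   # empirical (H1 = 0.7)
Q = cluster_frequencies(fbm(1000, 0.5, rng), n)   # model     (H2 = 0.5)
for tau in sorted(set(P) & set(Q))[:10]:   # supp(P) within supp(Q) only
    D_j = P[tau] * np.log(P[tau] / Q[tau])
    print(f"tau = {tau:3d}   D_j = {D_j:+.4f}")
\end{verbatim}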
The cluster frequencies $P(\tau_j, n)$ and $Q(\tau_j, n)$ have been estimated by counting the number of clusters with duration $\tau_j$ and window $n$ for each $fBm$. \par Fig.~\ref{fig:KulbackFBM} shows the plots of $\mathcal{D}_{j}$, defined by Eq.~(\ref{Kullbackdtau}), estimated for cluster frequencies $P$ obtained from $fBms$ with $H_{1}$ varying from $0.20$ (top-left) to $0.80 $ (bottom-right). The model $Q$ has been estimated on clusters obtained from uncorrelated Brownian paths, i.e. $fBms$ with $H_{2} =0.50$. The values of the Hurst exponents correspond respectively to power-law cluster correlation exponents $\alpha_1=2-H_1$ ranging from $1.80$ to $1.20$, whereas $\alpha_2=2-H_2$ is kept constant and equal to $1.50$. In real experiments, small deviations from the model distributions should be reasonably expected. Fig.~\ref{fig:KulbackFBM} shows the curves obtained for fractional Brownian motions with $H_{1}=0.45 $, $H_{1} =0.50$ and $H_{1}=0.55 $ with respect to the simple Brownian path, i.e. with respect to a $fBm$ with $H_{2} =0.50$ taken as the model in this example. Thus, \textit{fBm} pairs with close values of $H_{1}$ and $H_{2}$ correspond to more realistic experimental conditions. In other words, the situation where Eqs.~(\ref{Kullbackdtau}-\ref{Kullbackdtaun}) operate on data sequences with correlation exponents close to each other is expected to occur more frequently in the cases of practical interest. \par The quantity $\mathcal{D}_{j}$ shows characteristic deviations with respect to the null hypothesis (fully random processes with $H_2=0.5$). In particular, at small values of the cluster duration $\tau_j$, the quantity $\mathcal{D}_{j}$ takes positive and negative values respectively for \textit{fBms} with $0.5<H_1<1$ and $0<H_1<0.5$. As the cluster duration $\tau_j$ increases, $\mathcal{D}_{j}$ tends to the horizontal axis, implying that the divergence between the distributions becomes negligible for very large clusters. \par To further illustrate how the proposed method operates with real-world data, price series $\{p_t\} $ of Dow Jones Industrial Average (DJIA), Standard and Poor 500 (S\&P500), National Association of Securities Dealers Automated Quotations Composite (NASDAQ), are considered. The data include tick-by-tick prices from January to December 2018. Details (Ticker; Extended name; Country; Currency; Members; Length) as provided by Bloomberg can be found at \url{www.bloomberg.com/professional}. Raw data prices $\{p_t\} $ have different lengths ($N_{DIJA} = 5749145$, $N_{S\&P500} = 6142443$, $N_{NASDAQ} = 6982017$), thus they are sampled to yield equally spaced data sequences with equal length $N$, so that the cluster entropy analysis is performed over comparable data sets. \par The cluster frequency $P(\tau_j,n)$ is estimated by counting the clusters generated in the market price series. $Q(\tau_j,n)$ is estimated by counting the clusters generated in artificially generated stochastic processes assumed as a model of the price series. In the analysis, we consider the divergence between each price series, with unknown correlation exponent, and artificially generated samples of fractional Brownian motions \textit{fBms} with assigned Hurst exponent $H_2$. Results of the analysis are plotted in Fig.~\ref{fig:KulbackFIN}, showing the relative cluster entropy for the three markets.
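The equal-length resampling step mentioned above can be sketched as follows (uniform index selection is an assumption on our part; any equal-spacing scheme preserving the price path would do):
\begin{verbatim}
# Sketch: reduce raw price series of different lengths to equally spaced
# sequences of common length N before the cluster analysis.
import numpy as np

def resample(p, N):
    idx = np.linspace(0, len(p) - 1, N).astype(int)   # equally spaced indices
    return np.asarray(p)[idx]

# e.g. N = 492023, as used for the sampled index series of Fig. 2
\end{verbatim}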
The curves obtained for different values of the parameter $n$, shown in Fig.~\ref{fig:KulbackFIN}, have been summed over $n$ within the same interval of cluster duration $\tau_j$. Fig.~\ref{fig:Kulback0507} shows the relative cluster entropy $\mathcal{D_{C}}[P|| Q]$ for the data shown in Fig.~\ref{fig:KulbackFIN}. The \textit{minimum relative entropy} principle is implemented non-parametrically on the values plotted in Fig.~\ref{fig:Kulback0507} by using the relationship: \begin{widetext} \begin{equation} \mathcal{D_{V}} \equiv \frac{1}{k-1} \sum_{j=1}^k \left[\mathcal{D}_{C}[P|| Q]- \mathcal{D}_{C}[P = Q]\right]^2 \equiv \frac{1}{k-1} \sum_{j=1}^k \left[\mathcal{D}_{C}[P|| Q]\right]^2 \label{Kullbackdmin} \end{equation} \end{widetext} $\mathcal{D_{V}}$ quantifies the variance of the quantity $\mathcal{D}_{C}[P|| Q ] $, defined by Eq.~(\ref{Kullbackdtaun}), around the value $\mathcal{D}_{C}[P|| Q] =0 $, the null hypothesis expected for $P=Q$. The right side of Eq.~(\ref{Kullbackdmin}) corresponds to the mean square value of the area of the region between the curves $\mathcal{D}_{C}[P||Q]$ and the horizontal axis. $\mathcal{D_{V}}$ is a measure of the deviation of the probability distribution of the experimental outcomes with respect to the expected probability distribution function taken as a model. \begin{figure*}[htb!] \includegraphics[scale=0.3]{Figure_K_teorico_alphaM15_4.png} \includegraphics[scale=0.3]{Figure_K_teorico_alpha15_5.png} \includegraphics[scale=0.3]{Figure_K_teorico_alpham159.png} \caption{Plot of the quantity $\mathcal{D_{C}}[P|| Q]$, defined by Eq.~(\ref{Kullbackc0}), vs. the cluster duration $\tau$. Blue curves (left panel) correspond to a power law probability distribution $P(\tau)$ with ${\alpha_1}$ ranging between $1.55 \div 1.80$. The model probability distribution $Q(\tau)$ is a power law with correlation exponent ${\alpha_2} =1.50 $, the same for all the curves plotted here. Red curves (right panel) correspond to a power law probability distribution with ${\alpha_1}$ ranging between $1.20 \div 1.45$. The black line (middle panel) corresponds to the null hypothesis $\mathcal{D_{C}}[P|| P] =0$ obtained with ${\alpha_1}=1.50$ and ${\alpha_2} =1.50$. } \label{fig:Kulbackth} \end{figure*} The minimization criterion provided by Eq.~(\ref{Kullbackdmin}) has been applied to the data shown in Fig.~\ref{fig:Kulback0507} to yield the best estimate of the correlation degree of the market prices. The value of the Hurst exponent for the series of the prices ${p_t}$ has been deduced from the value of $H_2$ for which $\mathcal{D_{V}}$ attains its minimum (ideally $\mathcal{D_{V}}=0$, implying $H_1=H_2$). By using this rule, $H_1=H_2=0.55$, $H_1=H_2=0.57$, and $H_1=H_2=0.63$ are found respectively for the prices of DJIA, S\&P500 and NASDAQ. The results of the minimization are shown in Fig.~\ref{fig:varianza} for the markets whose relative cluster entropy is shown in Fig.~\ref{fig:Kulback0507}. \par \section{Discussion and Conclusion} \label{sec:Discussion} In this Section, the \textit{relative cluster entropy} will be extended to continuous random variables. For $N_{\mathcal{C}} (n) \rightarrow \infty$, the characteristic size of the generated clusters $\cal{C}$ behaves as a continuous random variable $\tau \in [1, \infty]$ with probability distribution function $P(\tau)$ varying as a power law \cite{carbone2004analysis,carbone2007scaling}. By taking the limits $P({\tau_j}) \rightarrow P(\tau) d \tau$ and $Q ({\tau_j}) \rightarrow Q(\tau) d \tau$, Eq.
(\ref{Kullbackdtaun}) can be written for continuous random variables in the form of an integral: \begin{equation} \mathcal{D_{C}}[P(\tau)|| Q(\tau)]= \int P(\tau)\log \frac{ P\left({\tau}\right)}{Q\left({\tau}\right)} d \tau \hspace{5pt}, \label{Kullbackctau} \end{equation} with $\tau \in [1, \infty]$. We are interested in the situations where the probability distributions are power-law functions, i.e. for $P(\tau)$ and $Q(\tau)$ respectively in the form: \begin{equation} P(\tau)=(\alpha_1-1) \tau^{-\alpha_1} \hspace{20pt} Q(\tau)= (\alpha_2 -1)\tau^{-\alpha_2} \hspace{5pt}, \label{PQ} \end{equation} where ${\alpha_1}$ and ${\alpha_2}$ are the correlation exponents, $ \alpha_1-1$ and $ \alpha_2 -1$ are the normalization constants for $\tau \in [1, \infty]$. By using Eqs.~(\ref{PQ}), Eq.~(\ref{Kullbackctau}) reads: \begin{equation} \mathcal{D_{C}}[P(\tau)|| Q(\tau)] = \int (\alpha_1-1) \tau^{-\alpha_1} \log \frac{ (\alpha_1-1) \tau^{-\alpha_1}}{(\alpha_2 -1)\tau^{-\alpha_2} } d \tau \hspace{5pt}, \end{equation} which after integration becomes: \begin{widetext} \begin{equation} \mathcal{D_{C}}[P(\tau)|| Q(\tau)] = \tau^{1-\alpha_1} \left \{\log \left(\frac {\alpha_1-1}{\alpha_2-1}\right) + \left [\log {\tau^{(\alpha_2 -\alpha_1)}} +\frac{(\alpha_1 -\alpha_2) }{1-\alpha_1}\right] \right\}+ \hspace{2pt} C \hspace{5pt}, \label{Kullbackc0} \end{equation} \end{widetext} where the integration constant $C$ is obtained by imposing the condition $\mathcal{D_{C}}[P|| P]=0$, which yields $C=0$. By estimating Eq.~(\ref{Kullbackc0}) over the interval $[1, \infty]$, the definite integral yields: \begin{equation} \mathcal{D_{C}}[P|| Q] = \log \left(\frac {\alpha_1-1}{\alpha_2-1}\right) - \left(\frac{ \alpha_1 -\alpha_2 }{\alpha_1-1}\right) \hspace{5pt}, \label{Kullbackc00} \end{equation} which, for $\alpha_1 = \alpha_2\,$, i.e. for the distribution $P$ coincident with the model distribution $Q$, gives $\mathcal{D_{C}}[P||Q] = 0$. \par $\mathcal{D_{C}}[P|| Q]$ quantifies the divergence between $P(\tau)$ and $Q(\tau)$, respectively true and model distribution, as a function of the cluster lifetime $\tau$ in terms of the different correlation exponents $\alpha_1$ and $\alpha_2$. In Fig.~\ref{fig:Kulbackth}, Eq.~(\ref{Kullbackc0}) is plotted as a function of $\tau$ for different values of the exponents $\alpha_1$ and $\alpha_2$. At small values of the cluster duration ($\tau \rightarrow 1$), $\mathcal{D_{C}}[P|| Q]$ is strongly dependent on the difference between the power-law exponent $\alpha_1$ and the exponent $\alpha_2$ of the model distribution. Conversely, as the cluster duration increases ($\tau \gg 1$), $\mathcal{D_{C}}[P|| Q]$ tends to become negligible. The decay can be understood by considering that, as $\tau$ increases, the clusters become disordered as a consequence of the spread of the distribution and the onset of finite-size effects, so that the correlation vanishes and the process becomes almost fully uncorrelated. The behaviour of the cluster distribution divergence obtained by using continuous variables is consistent with the empirical tests performed on discrete data sets. In particular, the behaviour shown by fractional Brownian motions with different correlation exponents discussed in Section II is reproduced by the curves shown in Fig.~\ref{fig:Kulbackth}, confirming that the approach is sound. \par The \textit{relative cluster entropy} can therefore be exploited to estimate the deviation of the power-law exponents of the true and model probability distributions.
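The closed-form expressions above are elementary to evaluate; the following sketch generates curves of the type shown in Fig.~\ref{fig:Kulbackth} (the $\tau$ grid and exponent values are illustrative):
\begin{verbatim}
# Sketch: closed-form relative cluster entropy for power-law distributions,
# Eq. (Kullbackc0) with C = 0, and the definite value of Eq. (Kullbackc00).
import numpy as np

def D_c(tau, a1, a2):
    return tau**(1.0 - a1) * (np.log((a1 - 1.0) / (a2 - 1.0))
                              + (a2 - a1) * np.log(tau)
                              + (a1 - a2) / (1.0 - a1))

def D_total(a1, a2):
    return np.log((a1 - 1.0) / (a2 - 1.0)) - (a1 - a2) / (a1 - 1.0)

tau = np.linspace(1.0, 50.0, 5)
for a1 in (1.2, 1.5, 1.8):           # alpha_2 = 1.5 is the model exponent
    print(f"alpha1 = {a1}:", np.round(D_c(tau, a1, 1.5), 4),
          " total:", round(D_total(a1, 1.5), 4))
\end{verbatim}
Note that $D_c(1,\alpha_1,\alpha_2)$ coincides with Eq.~(\ref{Kullbackc00}) and that $D_c \to 0$ as $\tau \to \infty$, as described above.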
\par Long-range correlated processes obeying power-law distributions occur frequently in complex system data related to several natural and man-made phenomena. Due to their ubiquity, the extent of long-range correlation and the scaling exponents are relevant to many disciplines, though their estimation meets several difficulties and requires carefully implemented computational procedures \cite{clauset2009power}. A random variable $x$ obeys a power law if it is drawn from a probability distribution $p(x) \propto x^{-\alpha}$ with the parameter $\alpha>1$ the correlation exponent (scaling exponent). Empirical real-world data rarely follow a power law over the whole range of $x$. Due to normalization requirements and finite-size effects, ideal power-law behaviour usually holds at values greater than some minimum $x_{\min}$ up to a maximum $x_{\max}$. An exponential cut-off, $x^{-\alpha} \mathrm{e}^{-\lambda x}$, is often artificially introduced to account for the deviation from ideal power-law behaviour. The proposed \textit{relative cluster entropy} approach yields the optimal value of the correlation exponent $\alpha$ without relying on the estimate of the slope in a log-log plot and thus is robust against computational biases. \par The non-parametric minimization of the relative entropy has some advantages compared to parametric approaches, whose implementation requires normality of the random variables and knowledge of the first two moments of the distribution for the calculation of the Lagrange multipliers. \bibliographystyle{unsrt}
\section{Introduction} \label{sec:Introduction} Although power-law distributions have been analyzed in depth in the physical sciences, little has been said about their relevance to Artificial Intelligence (AI). We introduce the zeta distribution as an analytic device in algorithmic information theory and propose using it to approximate the distribution of programs. We have been inspired by the empirical evidence in complex systems, especially biology and genetics, which shows an abundance of power-law distributions in nature. It is quite possible that the famous universal distribution in AI theory is closely related to power-law distributions in complex systems. The transfer learning problem also merits our attention, as a general model of it has not been presented in the machine learning literature. We develop a basic formalization of the problem using stochastic processes and introduce temporal bounds for learning a training sequence of induction problems, and transfer learning. The entropy rate of a stochastic process emerges as a critical quantity in these bounds. We show how to apply the bounds by analyzing the entropy rates of simple training sequence models that generate programs. Two models are close to what critics of AI have imagined and easily result in unsolvable problems, while two models inspired by evolution suggest that there may be stochastic processes in nature on which AGI algorithms may be quite effective. \section{Approximating the Distribution of Programs} Solomonoff's universal distribution depends on the probability distribution of programs. A natural model is to consider programs, the bits of which are generated by a fair coin. Solomonoff defined the probability of a program $\pi \in \{0,1\}^+$ as: \begin{equation} \label{eq:prog-dist} P(\pi) = 2^{-|\pi|} \end{equation} where $|\pi|$ is the program length in bits. The total probability of all programs thus defined unfortunately diverges if all bit-strings $\pi \in \{0,1\}^*$ are considered valid programs. For constructing probability distributions, a convergent sum is required. The extended Kraft inequality shows that the total probability is less than $1$ for an infinite prefix-free set of programs \cite{Cover1991}. Let $M$ be a reference machine which runs programs with a prefix-free encoding like LISP. The algorithmic probability that a bit-string $x \in \{0,1\}^*$ is generated by a random program of $M$ is: \begin{equation} \label{eq:alp} P_M(x) = \sum_{M(\pi) = x*} P(\pi) \end{equation} which conforms to Kolmogorov's axioms \cite{levin-thesis}. $P_M$ is also called the universal prior for it may be used as the prior in Bayesian inference, as any data can be encoded as a bit-string. \subsection{Zeta Distribution of Programs} We propose the zeta distribution for approximating the distribution of programs of $M$. The distribution of \prettyref{eq:prog-dist} is already an approximation, even after normalization, since it contains many programs that are semantically incorrect, and those that do not generate any strings. A realistic program distribution requires us to specify a detailed probability model of programs, which is not covered by the general model; however, the general model, though approximate, still gives excellent bounds on the limits of Solomonoff's universal induction method. Therefore, other general approximations may also be considered. Additionally, the zeta function is universal, which encourages us to relate algorithmic information theory to the zeta distribution \cite{Voronin75}.
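The role of the prefix-free requirement can be verified mechanically: for a prefix-free set of codes the Kraft sum $\sum_\pi 2^{-|\pi|}$ stays at or below $1$, whereas summing over all bit-strings adds $1$ for every code length. A toy check (the program sets below are illustrative assumptions):
\begin{verbatim}
# Kraft-inequality check: sum of 2^(-|pi|) over a prefix-free set of
# "programs" stays <= 1, while summing over all bit-strings diverges.
def kraft_sum(codes):
    return sum(2.0 ** (-len(c)) for c in codes)

prefix_free = ["0", "10", "110", "1110", "1111"]        # toy prefix-free set
print("prefix-free :", kraft_sum(prefix_free))          # exactly 1.0 here

all_strings = [format(i, "b").zfill(L)                  # all strings, len <= 4
               for L in range(1, 5) for i in range(2 ** L)]
print("all strings :", kraft_sum(all_strings))          # = 4.0, grows with L
\end{verbatim}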
Let us consider a program bit-string $\pi = b_1b_2b_3\dots b_k$. Let $\phi: \{0,1\}^+ \rightarrow \Z$ define the arithmetization of programs represented as bit-strings, where the first bit is the most significant bit.
\begin{equation} \label{eq:arithmetization} \phi(\pi) = \sum_{i=1}^{|\pi|} b_i \cdot 2^{|\pi|-i} \end{equation}
Thus arithmetized, we now show a simple but interesting inequality about the distribution of programs:
\begin{align} P(\pi) &= 2^{-\lceil \log_2(\phi(\pi)+1) \rceil} \\ (2a)^{-1} &\leq 2^{-\lceil \log_2 a\rceil} \leq a^{-1}, \text{for } a\geq 4\\ \label{eq:sandwich} (2(\phi(\pi)+1))^{-1} &\leq P(\pi) \leq (\phi(\pi)+1)^{-1}, \text{for } \phi(\pi)\geq 3 \end{align}
which shows an approximation that is closer than a factor of $2$. Program codes $\phi(\pi) < 3$ are discarded. Zipf's law $f_n \propto n^{-1}$ manifests itself as the Zipf distribution of ranked discrete objects $\{ o_1,o_2,\dots,o_n \}$ in order of increasing rank $i$
\begin{equation} \label{eq:zipf} P( Z_s^{(n)} = o_i) \triangleq \frac{1}{i^s Z} \end{equation}
where $Z_s^{(n)}$ is a random variable, $Z$ is the normalization constant and $s \geq 1$ (we use the notation $Z_s^{(n)}$ simply to avoid confusion with exponentiation; $Z_s$ is a standard notation for the zeta random variable). The zeta distribution is the countably infinite version of the Zipf distribution, with parameter $s>1$
\begin{equation} \label{eq:zeta} P( Z_s = k) = \frac{1}{k^s\,\zeta(s)} \end{equation}
where $Z_s$ is a random variable with co-domain $\Z^+$ and the zeta function is defined as
\begin{equation} \label{eq:2} \zeta(s) = \sum_{n=1}^{\infty}\frac{1}{n^s} \quad. \end{equation}
Note that the zeta distribution is a discrete variant of the Pareto distribution. It is rather involved to work with a prefix-free set; therefore, we suggest an alternative device to approximate $P(\pi)$.
\begin{theorem} \label{thm:zipf-approx} A program distribution may be approximated by the Zipf distribution with $s=1$, or by the zeta distribution with a real $s$ close to $1$ from above. \end{theorem}
\begin{proof} (a) The zeta distribution is undefined for $s=1$. However, if we use the Zipf distribution instead, and model programs up to a fixed program-length, we can approximate the program distribution from above using $(\phi(\pi)+1)^{-1}$ and from below using $(2\phi(\pi)+2)^{-1}$, due to the sandwich property \prettyref{eq:sandwich}. (b) We can approximate the program distribution from below using $(2\phi(\pi)+2)^{-1}$. Since
\begin{equation*} \forall \epsilon >0, \ (2\phi(\pi)+2)^{-(1+\epsilon)} \leq (2\phi(\pi)+2)^{-1} < P(\pi) , \end{equation*}
we can also approximate it with the zeta distribution \prettyref{eq:zeta} for $s$ close to $1$. \end{proof}
In either case, the need for a prefix-free set of programs is obviated. We now investigate whether these simplified approximations are usable.
\begin{theorem} \label{thm:zipf-conv} The program distribution $P(\pi)$ asymptotically obeys a power law with exponent $-1$ as program size grows. \end{theorem}
\begin{proof} The probability of an arithmetized program $\pi$ is sandwiched between $(\phi(\pi)+1)^{-1}$ and $(2\phi(\pi)+2)^{-1}$; therefore, as $|\pi|$ grows, Zipf's law grows closer to $P(\pi)$.
\begin{equation} \label{eq:zipf-conv1} \lim_{|\pi| \to \infty} (\phi(\pi)+1)^{-1} - (2\phi(\pi)+2)^{-1} = 0 \end{equation}
\begin{equation} \label{eq:zipf-conv2} \lim_{|\pi| \to \infty} 2^{-|\pi|} - (2\phi(\pi)+2)^{-1} = \lim_{|\pi| \to \infty} (\phi(\pi)+1)^{-1} - 2^{-|\pi|} = 0 \end{equation}
\end{proof}
Combining \prettyref{thm:zipf-approx} and \prettyref{thm:zipf-conv}, we propose using a zeta distribution with a parameter close to $1$. The lower and upper bounds differ only by a factor of $2$; therefore, the error in the approximation of the program distribution is at most $1$ bit (this property will be analyzed in detail in an extended version of the present paper). Substituting into \prettyref{eq:alp}, we propose the approximation
\begin{definition} \label{def:alp-zeta} \begin{equation} \label{eq:alp-zeta} P_M(x) \approxeq \sum_{M(\pi) = x*} \frac{1}{(\phi(\pi)+1)^{1+\epsilon}\,\zeta(1+\epsilon)} \end{equation} \end{definition}
where $\zeta(1+\epsilon) \geq 2$ ($\zeta(1.7) \approxeq 2$). \prettyref{def:alp-zeta} may be useful for machine learning theorists wherever they must represent a priori program probabilities, as it allows them to employ number theory. See the Elias gamma code \cite{elias} for an alternative integer code.

\section{Training Sequence as a Stochastic Process}
Although Solomonoff has theoretically described how the transfer learning problem might be solved in \cite{solomonoff-incremental}, a detailed theoretical model of transfer learning for the universal induction setting is missing in the literature. Here, we attempt to fill this gap. In his treatment of incremental learning, Solomonoff approached the transfer learning problem by describing an update problem, which improves the guiding conditional probability distribution (GCPD) of the system, as an inductive inference problem of the type that the system usually solves. Solomonoff's modular approach started with a number of problem solving methods and invented new such methods as the system progressed. The initial methods, however, are not fully specified, and we leave their specification as an open problem in this paper. Instead, we attempt to describe the space of training sequences using the zeta distribution, showing an interesting similarity to our world, wherein most problems in a sequence may be solved, but occasionally a problem is not solvable at all. For instance, a mathematician may solve most problems, but stall indefinitely at a conjecture that requires the invention of a new, non-trivial axiom. In usual Solomonoff induction (with no transfer learning component), a computable stochastic source $\mu$ is assumed. The stochastic source may generate sequences, sets, functions, or other structures that we please, the general law of which may be induced via Solomonoff's method. We extend Solomonoff's induction model to a training sequence of induction problems by considering a stochastic process $\M$ of $n$ random variables.
\begin{equation} \label{eq:training-sequence} \M = \{ \mu_1, \mu_2, \mu_3, \dots, \mu_n \} \end{equation}
The transfer learning problem thus consists of solving, in sequence, the $n$ induction problems generated by the stochastic process $\M$. It does not matter which type of induction problems these are, as long as they are generated via $\M$.
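Before analyzing such sequences, a toy numerical sketch (our illustration) shows how a short training sequence of arithmetized program codes can be drawn from the zeta distribution of \prettyref{thm:zipf-approx}:
\begin{verbatim}
import numpy as np

# Toy sketch: draw arithmetized program codes phi(pi) from the zeta
# distribution with s close to 1 from above (Theorem 1), and recover
# program bit-strings (inverse of the arithmetization, up to leading
# zeros). This mimics generating a short training sequence.
rng = np.random.default_rng(0)
s = 1.2                        # s = 1 + epsilon
codes = rng.zipf(s, size=10)   # P(k) = 1 / (k^s zeta(s))
programs = [format(int(k), "b") for k in codes]
print(list(zip(codes.tolist(), programs)))
\end{verbatim}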
\subsection{Entropy Rate of a Training Sequence}
A critical measurement of a stochastic process is its entropy rate, which is defined for $\M$ as
\begin{equation} \label{eq:entropy-rate} H(\M) = \lim_{n \to \infty} \frac{H(\mu_1,\mu_2,\mu_3, \dots, \mu_n)}{n} \end{equation}
and the conditional entropy rate,
\begin{equation} \label{eq:cond-entropy-rate} H'(\M) = \lim_{n \to \infty} H( \mu_n | \mu_1,\mu_2,\mu_3, \dots, \mu_{n-1}) \end{equation}
which gives the per-problem entropy given past observations. Observe that there is a well-known relation between average Kolmogorov complexity and the entropy of an i.i.d. stochastic process (Equation 5 in \cite{universality-zipf}):
\begin{equation} \label{eq:kolmogorov-shannon} \lim_{n \to \infty} \frac{K_M(X_1,X_2,X_3, \dots, X_n)}{n} = H(X) + O(1) \end{equation}
where $X$ is a stochastic process and $X_i$ its random variables. Due to lack of space, we assume without proof that the relation extends to conditional entropy.
\subsection{Training Time}
Let $\pi^*_i$ be the minimal program for exactly simulating $\mu_i$ on $M$. The most general expression for $\pi^*_i$ is the following:
\begin{equation} \label{eq:minimal} \pi^*_i = \argmin_{\pi_j}(\{ |\pi_j| \ | \ \forall x,y \in \{0,1\}^*: M(\pi_j,x,y)=P(\mu_i =x | y) \}) \end{equation}
where the pdf of the stochastic source $\mu_i$ is simulated by a program $\pi_j$. The conditional parameter $y$ is optional. Let us note the following identity
\begin{equation} \label{eq:sim-length} K_M(\mu_i) = |\pi^*_i| \end{equation}
since the arguments $x,y$ are extraneous input to the pdf specified by $\pi^*_i$. Let $t(\mu_i)$ denote the time taken to solve $\mu_i$, and $t(\pi)$ denote the time taken by program $\pi$ on $M$. Assume that $t(\mu_i) < \infty$. We know that the running time of extended Levin search is bias-optimal \cite{solomonoff-incremental}, and
\begin{equation} \label{eq:cjs} \frac{t(\pi^*_i)} { P(\pi^*_i)} \leq t(\mu_i) \leq \frac{2 t(\pi^*_i)} { P(\pi^*_i)} \end{equation}
for a computable stochastic source $\mu_i$ ($K_M(\mu_i)<\infty$). The lower bound in \prettyref{eq:cjs} has been named the conceptual jump size by Solomonoff, because it refers to the solution of individual induction problems within a training sequence, quantifying how much conceptual innovation is required for a new problem \cite{solomonoff-incremental}. We cannot exactly predict $t(\mu_i)$ due to the incomputability of algorithmic probability. Extended Levin search will keep running indefinitely; it is up to the user to stop execution, which is usually bounded only by the amount of computational resources available to the user. We should also mention that Levin himself does not think that any realistic problems can be solved by Levin search or created on a computer \cite{levin-forbidden}. In the present paper, we run counter to Levin's position by arguing that Levin search can work in an evolutionary setting, assuming an $O(1)$ oracle for the transfer learning problem.
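The factor $t(\pi^*_i)/P(\pi^*_i)$ can be made concrete with a minimal Levin-search-style schedule (our toy sketch, with a hypothetical \texttt{run} oracle; this is not the extended Levin search of \cite{solomonoff-incremental}): in phase $T = 1, 2, 4, \dots$, each candidate program receives a time budget proportional to its prior, so a solver $\pi^*$ is found after a total time on the order of $t(\pi^*)/P(\pi^*)$.
\begin{verbatim}
# Minimal Levin-search-style schedule (toy sketch). "run" is a
# hypothetical oracle: run(code, budget) -> True iff program `code`
# solves the problem within `budget` time steps.
def levin_search(programs, run, max_phase=2 ** 30):
    phase = 1
    while phase <= max_phase:
        for code, prior in programs:       # prior ~ P(pi)
            if run(code, phase * prior):   # budget T * P(pi)
                return code, phase
        phase *= 2
    return None, phase

# Toy instance: only program 3 solves the problem, needing 50 steps;
# its prior is 2^-4, so it is found in phase 1024, consistent with
# t/P = 800 <= 1024 <= 2t/P = 1600.
toy = [(i, 2.0 ** -(i + 1)) for i in range(8)]
print(levin_search(toy, lambda code, budget: code == 3 and budget >= 50))
\end{verbatim}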
We substitute the relation between $K_M(x)$ and $P_M(x)$ in the upper bound for $t(\mu_i)$,
\begin{align} \label{eq:time} K_M(\mu_i) &= -\log_2{P(\pi^*_i)} \end{align}
obtaining the following fact due to \prettyref{eq:sim-length} and \prettyref{eq:time}:
\begin{lemma} \label{lem:time2} $ t(\mu_i) \leq 2t(\pi^*_i) 2^{K_M(\mu_i)}$ \end{lemma}
The inequality translates to the time for the training sequence $\M$ as
\begin{theorem} \label{thm:process-time} \begin{equation} \label{eq:5} t(\M) \leq \sum_{i=1}^n t(\pi_i^*) 2^{K_M(\mu_i)+1} \end{equation} \end{theorem}
which follows by summing \prettyref{lem:time2} over the training sequence. The conditional entropy rate is useful when the stochastic process has inter-dependence. Let us define the conditional Kolmogorov complexity for the training sequence $\M$,
\begin{equation} \label{eq:conditional-entropy} K'(\M_{<k}) \triangleq K( \mu_k | \mu_1,\mu_2,\mu_3, \dots, \mu_{k-1}) \end{equation}
where $\M_{<k} \triangleq \{ \mu_i | i < k \} $. We define likewise for the stochastic process probabilities,
\begin{equation} \label{eq:flow} P'(\M_{<k}) \triangleq P( \mu_k | \mu_1,\mu_2,\mu_3, \dots, \mu_{k-1}) \end{equation}
$K'(\M_{<k})$ captures the new algorithmic information content of the $k^{th}$ variable of the stochastic process given the entire history. As $n$ grows, the transfer learning oracle has to add, on average, $H'(\M)$ bits of information to its memory in the stochastic process $\M$, as the Kolmogorov-Shannon entropy relation \prettyref{eq:kolmogorov-shannon} holds in the limit for conditional entropy as well. Since the upper temporal bound grows exponentially, \prettyref{eq:conditional-entropy} only relates loosely to the solution time $t(\mu_i)$ of a particular problem. We instead define the conditional expected training time upper bound with respect to $\M$:
\begin{equation} \label{eq:condexpectedtime} \E'[t(\M_{<k})] \triangleq \E_{\M}[t(\mu_k) | \mu_1, \dots, \mu_{k-1}] \leq \sum_{\mu_k \in \{0,1\}^*}2t(\pi_k^*) 2^{K'(\M_{<k})} P'(\M_{<k}) \end{equation}
\subsection{Random Typing Model}
Let us start by considering the well-known model of random typing. If each $\mu_i$ is regarded as a random $m$-bit program out of $2^m$ such programs, the programs are independent, and the entropy rate is exactly $m$ bits (under the usual i.i.d. assumptions, e.g., we are using fair coin tosses, and we construct programs over a binary alphabet). Assume $2^m \gg n$. In the random typing model, all $\mu_i$ are algorithmically independent; therefore, there is no saving that can be achieved by transfer learning. The time it takes for any problem is therefore:
\begin{align} \label{eq:3} t(\mu_i) &\leq t(\pi_i^*) 2^{m+1} \end{align}
for any of the $2^m$ programs. Since $m$ can be arbitrarily large, this model is compatible with Levin's conjecture that AI is impossible. Note that this simplistic model is reminiscent of the various no-free-lunch theorems that were heralded as mathematical proof that general-purpose machine learning was impossible. However, this scenario is highly unrealistic. It is extremely difficult to find problems that are completely independent, as this would require us to use true random number generators to generate every problem. In other words, we are only showing this ``model'' to demonstrate how far removed from reality no-free-lunch theorems are. In a physical world, this model would correspond to the claim that quantum randomness saturates every observation we may make.
However, we already know this claim to be false, since our observations do not consist of noise. On the contrary, there is a lot of dependable regularity in the environment we inhabit, which is sometimes termed ``common sense'' in the AI literature.
\subsection{Power-law in Nature}
A more realistic model, however, uses the zeta distribution for programs instead of the uniform distribution. We propose this indeed to be the case, since the zeta distribution is empirically observed in a multitude of domains, and there is good theoretical justification for the abundance of power laws in nature. \prettyref{thm:zipf-conv} gives some weak and indirect justification as to why we might observe fractions of the zeta distribution of programs in a computable universe. However, there are more direct and appealing reasons why we should expect to see the zeta distribution in highly evolved complex systems. First, it is a direct consequence of the power-law ansatz and scale-invariance \cite{universality-zipf}, or of preferential attachment in evolutionary systems \cite{yule}. Second, it follows from an application of the maximum entropy principle where the mean of the logarithms of observations is fixed \cite{visser-zipf}. Third, biologists have observed the zeta distribution directly in genetic evolution, thus strengthening the case that our $\pi^*_i$'s are likely to conform to zeta distributions. For instance, gene family sizes versus their frequencies follow a power-law distribution \cite{huynen-freqdist}, and the gene expression in various species follows Zipf's law \cite{furusawa-zipf}. Universal regularities in evolution have been observed, for instance in the power-law relation between the number of gene families and gene family size, in the number of genes in a category versus the number of genes in the genome, and in the power-law-like distribution of network node degree \cite{koonin-laws-evolution}. Therefore, we are not merely following a highly theoretical heuristic argument; there exist multiple theoretical and empirical justifications for expecting to observe the zeta distribution of programs in nature. The material evolution of the environment in a habitat is not altogether different from biological evolution. Except in the case of rare natural catastrophes, the material environment changes only gradually, in accord with the dynamic flow of natural law (surprise is small), and depends mostly on the actions of organisms in a complex habitat, which may be considered to be programs from an information-theoretic point of view. In that sense, the entire ecology of the habitat in question may be considered to be an evolutionary system, with program frequencies similar to the case of genes in a single organism. In the following, we introduce novel models of training sequences inspired by these empirical justifications.
\subsection{Identical Zeta Random Variables}
Let $\M$ be generated i.i.d. from the zeta distribution according to \prettyref{thm:zipf-conv}. Then,
\begin{equation} \label{eq:zeta-iid} H'(\M) = H(\mu_1) = H(Z_s) \end{equation}
indicating that the constant entropy rate depends only on the entropy of the zeta distribution. We now analyze the running time. Let $t_{max}=\max\ \{ t(\mu_i) \}$.
\begin{equation} \label{eq:zeta-iid-time} \E'[t(\M_{<k})] \leq {2 t_{max} \over \zeta(s)} \sum_{k=1}^\infty 2^{\lceil \log_2 k \rceil} k^{-s} \leq {4 t_{max} \over \zeta(s)} \sum_{k=1}^\infty {k \over k^{s}} \end{equation}
For the first 1 trillion programs, $t_{max}\sum_{k=1}^{10^{12}} 4k / (k^{1.001}\zeta(1.001)) \approxeq 3.89 \times 10^{9}\, t_{max}$ for $s=1.001$, which is a feasible factor for a realistic program search limit. Note that AI theorists interpret i.i.d. assumptions as the main reason why no-free-lunch theorems are unrealistic \cite{lattimore2013}. Our i.i.d. zeta process here may be interpreted as an elaboration of that particular objection to no-free-lunch theorems. Therefore, we follow the heuristic argument that the right description of the environment which we observe must be something other than the random typing model, since agents succeed in transfer learning. The constant zeta process leans towards feasibility, but it does not yet model transfer learning in complex environments.
\subsection{Zipf Distribution of Sub-programs}
Based upon the observations of genetic evolution above and the fact that the whole ecology is an evolutionary system, we may consider a process of programs that has the following property. Each $\pi^*_i$ that corresponds to $\mu_i$ is constructed from a number of concatenated sub-programs. The joint distribution of sub-programs is $Z_s^{(n)}$. This is a model of gene frequencies observed in chromosomes, where each chromosome corresponds to a program, and each gene corresponds to a sub-program. Such a distribution would more closely model a realistic distribution of programs by constraining possible programs, as in the real world the process that generates programs is not ergodic. The total entropy of the process therefore depends on the sub-programs, which may be assumed to be random, and on the program coding. Let each sub-program be a $k$-bit random program, for the sake of simplicity. The sub-programs that correspond to instructions are specified in a database of $2^k$ entries ($k \cdot 2^k$ bits). Unlike in the random typing model, however, instructions are not equiprobable. Let each program have $m$ instructions drawn from the set of $2^k$ instructions:
\begin{equation} A = \{ a_1, a_2, a_3, \dots, a_{2^k}\} . \label{eq:instr-set} \end{equation}
Then, we can model each optimal program $\pi^*_i$ as
\begin{equation} \label{eq:7} \pi^*_i = \pi^*_{i,1} \pi^*_{i,2} \pi^*_{i,3}\dots \pi^*_{i,m} \end{equation}
which makes up a matrix of instructions $P^* = [\pi^*_{i,j}]$, where $\pi^*_{i,j}$ is drawn from the set $A$ of instructions. The total entropy is due to the database of sub-programs and to the entropy of the global distribution of sub-programs $Z_s^{(2^k)}$, which determines the entropy of $P^*$. The total entropy is then, approximately,
\begin{equation} \label{eq:total-entropy} H(\mu_1,\mu_2,\dots,\mu_n) \approx \log_2k + k \cdot 2^k + \log_2n + \log_2m+ H(Z_s^{(2^k)}) \end{equation}
where we show the significant terms in the parameters $k,n,m$.
\begin{lemma} For the Zipf distribution of sub-programs,
\begin{equation} \label{eq:zipf-ent-rate} H'(\M) \approx \lim_{n \to \infty} \frac{1}{n} \Big( k \cdot 2^k + \frac{s}{H_{2^k,s}}\sum_{l=1}^{2^k}\frac{\ln(l)}{l^s} + \ln(H_{2^k,s}) + \log_2k + \log_2n + \log_2m \Big) \end{equation}
due to \prettyref{eq:total-entropy}. \end{lemma}
which is to say that the entropy rate, and thus the running time, critically depends on the choice of $k$ and $n$.
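The Zipf entropy term appearing in the lemma is easy to evaluate numerically; the following sketch (our illustration) computes $H(Z_s^{(N)})$ in nats for an instruction set of size $N=2^k$:
\begin{verbatim}
from math import log

# Entropy (in nats) of the finite Zipf distribution Z_s^(N):
#   H = ln H_{N,s} + (s / H_{N,s}) * sum_{l=1}^{N} ln(l) / l^s,
# with H_{N,s} the generalized harmonic number -- the term that
# enters the entropy rate of the sub-program model above.
def zipf_entropy_nats(N, s):
    H = sum(l ** -s for l in range(1, N + 1))  # H_{N,s}
    return log(H) + (s / H) * sum(log(l) / l ** s
                                  for l in range(1, N + 1))

for k in (4, 8, 12):
    print(k, zipf_entropy_nats(2 ** k, 1.1))
\end{verbatim}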
\subsection{An Evolutionary Zeta Process}
Another process of programs may be determined by mimicking evolution, by considering random mutations of programs in a training sequence. Let us set
\begin{align} \label{eq:9} \pi^*_1 &= \wedge \\ \label{eq:mutation} \pi^*_i & = \begin{cases} M(Z_s,\pi^*_{i-1}), &\text{if } Z_s \text{ is a valid transformation} \\ \pi^*_{i-1}, & \text{otherwise} \end{cases} \end{align}
which applies a random transformation sampled from $Z_s$, in sequence, to an initially null program. Such mutations are unlikely to be too complex. The resulting process has a small conditional entropy rate, which is wholly dependent on $Z_s$.
\begin{equation} \label{eq:10} \lim_{n \to \infty} H'(\M) = H(Z_s) = \log(\zeta(s)) - \frac{s\zeta'(s)}{\zeta(s)} \end{equation}
\begin{lemma} \begin{align} \label{eq:h-zeta-vals} H(Z_{1.1}) & = 13.8 & H(Z_{1.05}) & = 24.5 \\ \label{eq:h-zeta-vals2} H(Z_{1.01}) & = 106.1 & H(Z_{1.001}) & = 1008.4 \end{align} \end{lemma}
The lemma suggests that if an evolutionary process evolves slowly enough, then an AI can easily learn everything there is to learn about it, provided that the time complexity of the random variables is not too large. We can also employ $Z_s^{(k)}$ instead of $Z_s$ in \prettyref{eq:mutation}. For a universal induction approximation, $Z_{1.001}$ may be difficult to handle; however, for efficient model-based learning algorithms such as gradient descent methods, digesting new information on the order of a thousand bits is not a big challenge, given sufficiently many samples for a problem $\mu_i$ in the sequence.

\section{Concluding Remarks}
We have shown novel relations between Zipf's law and the program distribution by means of the arithmetization of programs. We have shown that the zeta distribution may be used for approximating program distributions. We have proposed using the conditional entropy rate as an informative quantity for transfer learning. We have extended Solomonoff's induction model to a training sequence of problems as a stochastic process, and have proposed that the entropy rate of such a stochastic process is informative. We have defined conditional Kolmogorov complexity and probability for the sequence, and have used these quantities to define a conditional expected upper bound on training time, assuming an $O(1)$ transfer learning oracle. We introduced sequence models to show that there is a wide range of possible stochastic processes that may be used to argue for the possibility of general purpose AI. The random typing model is a sensible elaboration of no-free-lunch-theorem-style arguments, and demonstrates how artificial and unlikely they are: everything is interconnected in nature, and pure randomness is very hard to come by. We therefore rule it out as a plausible model of transfer learning. We have shown several empirical justifications for using a power-law model of natural processes. The independent zeta process tends towards feasibility, but does not explain transfer learning. The models that were inspired by natural evolution allow general purpose learning to be feasible. In particular, the model of common sub-programs, which is inspired by empirical evidence in genetics, supports a view of the evolution of natural processes that allows incremental learning to be effective. The evolutionary zeta process applies random mutations, which can be slow enough for a machine learning algorithm to digest all the new information. A more detailed analysis of the transfer learning problem will be presented in an extended journal paper.
Open problems include analyzing the complexity of the optimal update algorithm, analyzing the time complexity of the evolutionary processes, and accounting for the time complexity of individual programs.
\section*{Acknowledgements}
The paper was substantially improved owing to the extensive and helpful comments of the anonymous AGI 2014 and AGI 2018 reviewers.
\bibliographystyle{splncs03}
\section*{Overview} CVC4 is an efficient open-source automatic theorem prover for SMT problems. It can be used to prove the validity (or, dually, the satisfiability) of first-order formulas in a large number of built-in logical theories and combinations thereof. CVC4 is intended to be an open and extensible SMT engine, and it can be used as a stand-alone tool or as a library, with essentially no limit on its use for research or commercial purposes (see the section on its license below for more information). \section*{New Features \slash\hskip .25em Improvements} The CVC4 configuration entered in the SMT Competition 2018 is an improved and extended version of the version that entered SMT-COMP 2017. Most notably, it features the following extensions. \vspace{0.5ex} \paragraph*{Floating-Point Solver} CVC4 now features a floating-point solver and thus, for the first time, enters all FP logics of all tracks of the competition. Its FP engine uses SymFPU~\cite{symfpu-github} to translate floating-point operations into bit-vector operations, which are then handed to CVC4's lazy bit-vector engine~\cite{HBJ+14}. \vspace{0.5ex} \paragraph*{Eager Bit-Blasting Solver} Last year, we used CryptoMiniSat~\cite{DBLP:conf/sat/SoosNC09,cms-github} version~4 as the back-end SAT solver for CVC4's eager bit-blasting engine. This year, for the first time, CaDiCaL~\cite{Biere-SAT-Competition-2017-solvers,cadical-github} (commit id b44ce4f) serves as our SAT back-end for eager bit-blasting. \vspace{0.5ex} \paragraph*{Heuristic Approaches for Non-Linear Arithmetic} CVC4 uses techniques for handling non-linear real and integer arithmetic inspired by recent work by Cimatti et al.~\cite{DBLP:conf/tacas/CimattiGIRS17}. If a QF\_NIA problem cannot easily be solved with that approach, it resorts to turning the input into a bit-vector problem. This year, it uses CaDiCaL as the underlying SAT solver for this approach. \vspace{0.5ex} \paragraph*{Quantifier Instantiation} For unsatisfiable problems with quantifiers, CVC4 primarily uses conflict-based quantifier instantiation~\cite{RTd14} and E-matching. CVC4 additionally implements finite model-finding techniques~\cite{RT+13-CADE} for satisfiable problems with quantifiers. \vspace{0.5ex} \paragraph*{Quantified Bit-Vectors} In~\cite{bv-cav18,DBLP:journals/corr/abs-1804-05025}, we present a novel approach for solving quantified bit-vectors based on computing symbolic inverses of bit-vector operators. This approach is now the default for quantified bit-vectors in CVC4. \vspace{0.5ex} \paragraph*{Strings} This year, CVC4 is entering the non-competitive experimental division QF\_SLIA (strings). In this division, CVC4 uses the procedure described in~\cite{DBLP:conf/cav/LiangRTBD14} combined with a finite model-finding approach, which searches for strings of bounded length. For handling extended string functions such as contains, substring, and replace, CVC4 uses context-dependent simplification techniques as described in~\cite{DBLP:conf/cav/ReynoldsWBBLT17}. \section*{Configurations} This year's version of CVC4 is entering all divisions in the main, application, and unsat core tracks of SMT-COMP 2018. It further enters the non-competitive experimental division QF\_SLIA (strings). All configurations are compiled with the optional dependencies ABC~\cite{abc-website}, CLN~\cite{cln-website}, glpk-cut-log~\cite{glpk-cut-github} (a fork of GLPK~\cite{glpk-website}), CaDiCaL, and CryptoMiniSat version~5.
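As a hypothetical illustration (our sketch, not part of the CVC4 distribution) of the kind of QF\_SLIA input handled by the string procedures above, the following Python snippet feeds a tiny problem to a \texttt{cvc4} binary assumed to be on the \texttt{PATH}:
\begin{verbatim}
import subprocess, tempfile, os

# Tiny QF_SLIA problem using standard SMT-LIB string operators.
problem = """
(set-logic QF_SLIA)
(declare-const x String)
(assert (str.contains x "ab"))
(assert (= (str.len x) 3))
(check-sat)
"""

with tempfile.NamedTemporaryFile("w", suffix=".smt2", delete=False) as f:
    f.write(problem)
    path = f.name
# --strings-exp enables extended string functions such as str.contains.
out = subprocess.run(["cvc4", "--lang", "smt2", "--strings-exp", path],
                     capture_output=True, text=True)
print(out.stdout)  # expected output: sat
os.remove(path)
\end{verbatim}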
The commit used for all configurations is tagged with \texttt{smtcomp2018}~\cite{smtcomp2018-tag}. For each track, we use a binary that was compiled with different options and the corresponding run script uses different parameters depending on the logic used in the input. For certain logics, we try different options sequentially. For details about the parameters used for each logic, please refer to the run scripts. \vspace{0.5ex} \paragraph*{Main track (CVC4-main)} For the main track, we configured CVC4 for optimized reading from non-interactive inputs and without proof support. In contrast to last year's version, we do not use a portfolio configuration for QF\_BV, since the eager bit-blasting engine with CaDiCaL as a back end in a sequential configuration is more efficient. The run script is available at~\cite{main-runscript}. \vspace{0.5ex} \paragraph*{Application track (CVC4-application)} For the application track, we configured CVC4 for optimized reading from interactive inputs and without proof support. The run script is available at~\cite{application-runscript}. \vspace{0.5ex} \paragraph*{Unsat core track (CVC4-uc)} For the unsat core track, we configured CVC4 for optimized reading from non-interactive inputs and with proof support (required for unsat core support). The run script is available at~\cite{uc-runscript}. \vspace{0.5ex} \paragraph*{Experimental (CVC4-experimental-idl-2)} Additionally, an experimental configuration, which features a specialized IDL solver, enters the QF\_IDL division of the main track. It implements a shortest-paths algorithm as an incremental version of the Floyd-Warshall algorithm that can update weights as new edges are added. \section*{Copyright} CVC4 is copyright 2009--2018 by its authors and contributors and their institutional affiliations. For a full list of authors, refer to the AUTHORS file distributed with the source code~\cite{cvc4-github}. \section*{License} The source code of CVC4 is open and available to students, researchers, software companies, and everyone else to study, to modify, and to redistribute original or modified versions; distribution is under the terms of the modified BSD license. Please note that CVC4 can be configured (however, by default it is not) to link against some GPLed libraries, and therefore the use of these builds may be restricted in non-GPL-compatible projects. For more information about CVC4's license refer to the actual license text as distributed with its source code~\cite{cvc4-github}. \newpage
\section{\label{sec:intro}Introduction}
The three-spin-state ($\sigma=0$, $\pm 1$) Blume-Emery-Griffiths (BEG) model \cite{BEG} was introduced with the aim of qualitatively describing superfluidity in $^{3}$He--$^{4}$He mixtures and phase separation. It is composed of three terms: the spin exchange interaction, responsible for stabilizing a magnetic order; the local crystal field, favouring the non-active spin state $\sigma=0$; and the non-local bi-quadratic interaction term, favouring active spin states $\sigma=\pm 1$ on neighboring sites. The competition between these three mechanisms is responsible for giving rise to a complex phase diagram. For instance, one expects as an outcome a phase diagram with second- and first-order phase transition lines and multicritical points. This has motivated investigations of this model using several methods, such as mean field theory \cite{PhysRevLett.67.1027}, effective field theory \cite{Tucker1989}, the cluster variation method \cite{Grigelionis1989, PhysRevB.47.2643}, Monte Carlo simulations \cite{wang1,wang2,Kasono92}, the Bethe lattice \cite{Akheyan_1996,CHAKRABORTY1986122,Osorio_1989} and the renormalization group with hierarchical lattices \cite{SNOWMAN20093007}. This interest in the BEG model raises the question of what the effects of disorder on it might be. It is known that the presence of disorder might lead to changes in the order of the phase transition boundary lines and, consequently, affect multicritical phase diagrams \cite{PhysRevLett.35.1399, PhysRevLett.62.2507}. In the case of the BEG model, disorder can be introduced in three ways: by choosing the exchange interaction, crystal field or bi-quadratic exchange strengths as random variables, or even a combination of the previously mentioned possibilities. Each situation can describe different problems. For instance, the case of a random bi-quadratic exchange can be used in neural networks, where this term in the BEG model becomes a learning rule \cite{PhysRevE.68.062901, 2005EPJB...47..281B}. On the other hand, a random crystal field can be applied to the modeling of $^{3}$He-$^{4}$He mixtures in a porous medium such as aerogel \cite{PhysRevLett.74.426,PhysRevLett.71.2268}. The three possibilities of disorder in the BEG model and their combinations have also been treated with several techniques, such as mean field, renormalization group, Bethe lattice, transfer matrix, cluster variation, and effective field theory (see, for instance, Refs. \cite{PhysRevLett.62.2507, arenzon, ALBAYRAK2015107, Albayrak2015, DONG200790, Kple2021, 2020IJTP...59.3915K, KARIMOU20172371, Dong2009, Dong2007, Dong2006, PhysRevB.60.1033, Buzano1994, PhysRevLett.69.221}). In the case of the random crystal field studied in Ref. \cite{PhysRevB.60.1033}, results coming from the mean field approximation (i.e., with infinite dimension or coordination number) and from the real space renormalization group can be compared. This last technique is quite suitable for describing low-dimensionality scenarios. The most important difference between the two techniques is the suppression of the first-order phase transition lines, or their replacement by continuous ones, in the renormalization group approach. The random crystal field BEG model with an anti-ferromagnetic (AF) bi-quadratic coupling constant on the Bethe lattice was investigated in Ref. \cite{Kple2021}. Our purpose with this work is to study the random crystal field BEG (RCBEG) model on the ensemble of Poissonian random graphs.
The random graph offers the average connectivity $c$ as a continuous, controllable parameter, allowing us to investigate the RCBEG model in different regimes, i.e., from large connectivity, close to the fully connected limit corresponding to the mean field approximation, down to the small-connectivity situation. One can expect important changes as compared with the mean field results, involving the replacement and/or disappearance of multicritical points in the phase diagrams as $c$ decreases. Indeed, this kind of change for small $c$ has been confirmed in the Blume-Capel model \cite{Blume,CAPEL1966966} with an added disorder given by a random field. In that model, it was found that variations in $c$ produced drastic changes in the multicritical phase diagrams as compared with the fully connected case \cite{Kaufman1990}. Indeed, some multicritical points disappear when $c$ decreases \cite{rubemBC}. To sum over the realizations of the random graph, we use the replica symmetry theory of order parameter functions \cite{monasson,Monasson_1998,Wemmenhove_2003}. As frustration is absent in this model, we anticipate that the replica symmetry solution is exact for this purpose. The same equations for this problem can be derived by the cavity method \cite{cavitymp} after taking the ensemble average \cite{lupo}. We approach the problem considering that the lattice of spins is a random graph, where the connectivity is finite and the degree of a site is given by a Poisson distribution. Thus, we offer an alternative route to approach this problem. Also, we study simultaneously the presence of random crystal field disorder and the disorder of the lattice. As has been shown, the connectivity has a crucial role in the phase diagram topology \cite{rubemBC,RubemWalter,PhysRevE.103.022133}, allowing the nature of transitions and critical points to change through a fine-tuning of the control parameter.
The paper is organized as follows. In Sec. \ref{sec:method} we describe our model and derive the fundamental equations using replica symmetry theory for finite connectivity systems. In Sec. \ref{sec:results} we explain the method to numerically calculate the distribution of fields, some examples of order parameters are shown and the behavior of the system is described by drawing phase diagrams for the thermodynamic phases. The conclusions can be found in Sec. \ref{sec:ccl}.
\section{\label{sec:method} Model and Replica Procedure}
The model's Hamiltonian is
\begin{align} H(\boldsymbol{\sigma})=-\frac{J}{c}\sum_{i< j}c_{ij} \sigma_{i}\sigma_{j} - \frac{K}{c}\sum_{i< j}c_{ij} \sigma_{i}^{2}\sigma_{j}^{2}+\sum_{i}\Delta_{i}\sigma_{i}^{2}\,, \label{hamiltonian1} \end{align}
where $\boldsymbol{\sigma}\equiv\{\sigma_i\}\,,i=1\dots N $ denotes the state of the system, and $c_{ij}$ are independent, identically distributed random variables (i.i.d.r.v.) chosen from the distribution
\begin{align} p\left(c_{ij}\right)=\frac{c}{N}\delta(c_{ij}-1) + \left(1-\frac{c}{N}\right)\delta(c_{ij})\,, \label{cdistr} \end{align}
indicating whether the pair of spins $i$ and $j$ is connected ($c_{ij}=1$) or not ($c_{ij}=0$), with the constant $c$ representing the mean connectivity. The local, random crystal fields $\Delta_i$ are i.i.d.r.v. chosen from the distribution
\begin{align} p\left(\Delta_{i}\right)=p\delta(\Delta_{i}-\Delta) +\left(1-p\right)\delta(\Delta_{i})\,. \label{Kdistr} \end{align}
The constant $K$ controls the strength of the bi-quadratic couplings.
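A minimal numerical sketch (our illustration; variable names follow the text) of one disorder realization of the model defined in Eqs. (\ref{hamiltonian1})--(\ref{Kdistr}) reads:
\begin{verbatim}
import numpy as np

# One disorder realization: Poissonian random graph c_ij and
# diluted crystal fields Delta_i, as in the distributions above.
rng = np.random.default_rng(0)
N, c, p, Delta = 1000, 4.0, 0.85, 1.0

# c_ij = 1 with probability c/N; keep the upper triangle (i < j)
# and symmetrize to obtain an undirected graph.
upper = np.triu(rng.random((N, N)) < c / N, k=1)
cij = upper | upper.T
print("mean degree:", cij.sum(axis=1).mean())  # fluctuates around c

# Delta_i = Delta with probability p, 0 otherwise.
fields = np.where(rng.random(N) < p, Delta, 0.0)
print("fraction with Delta_i = Delta:", (fields != 0).mean())
\end{verbatim}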
Using the replica method, we can write the disorder averaged free energy as
\begin{align} f(\beta)=-\lim_{N\rightarrow\infty}\frac{1}{\beta N}\lim_{n\rightarrow 0} \frac{1}{n}\log\langle {Z^n}\rangle_{\mathbf{c},\boldsymbol{\Delta}}\,, \label{free} \end{align}
where
\begin{equation} Z^n=\sum_{\boldsymbol{\sigma}_{1}\dots\boldsymbol{\sigma}_{n}} \mathrm{e}^{-\beta\sum_{\alpha} H(\boldsymbol{\sigma}_\alpha)}\, \label{part} \end{equation}
is the replicated partition function, $\boldsymbol{\sigma}_\alpha\,,\alpha=1\dots n$, denotes the state of replica $\alpha$, and $\langle\cdot\rangle_{\mathbf{c},\boldsymbol{\Delta}}$, with $\mathbf{c}\equiv\{c_{ij}\}$ and $\boldsymbol{\Delta}\equiv\{\Delta_{i}\}$, denotes the disorder average. In the limit $c/N\rightarrow 0$, the average over $c_{ij}$ gives
\begin{align} \langle Z^{n} \rangle = \sum_{\boldsymbol{\sigma}_{1}\dots\boldsymbol{\sigma}_{n}} \langle\mathrm{e}^{-\beta\sum_{\alpha,i}\Delta_{i} \sigma_{i\alpha}^{2}}\rangle_{\boldsymbol{\Delta}}\exp\Big[\frac{c}{2N}\sum_{i\neq j}\Big(\mathrm{e}^{\frac{\beta J}{c}\sum_{\alpha} \sigma_{i\alpha}\sigma_{j\alpha}+\frac{\beta K}{c}\sum_{\alpha} \sigma_{i\alpha}^{2}\sigma_{j\alpha}^{2}}-1\Big) \Big]\,. \label{part1} \end{align}
To transform the model into a single-spin problem, order functions
\begin{equation} P(\boldsymbol{\sigma})=\frac{1}{N}\sum_{i} \delta_{\boldsymbol{\sigma}\boldsymbol{\sigma}_{i}}\,, \end{equation}
which represent the probability that a replicated spin variable $\boldsymbol{\sigma}_{i}$ assumes the replica state $\boldsymbol{\sigma}$, and their conjugate order functions $\hat{P}(\boldsymbol{\sigma})$, are introduced. The partition function can be rewritten as (see the appendix)
\begin{align} \langle Z^{n} \rangle=&\int\prod_{\boldsymbol{\sigma}} d\hat{P}(\boldsymbol{\sigma})d P(\boldsymbol{\sigma})\exp N\Big\{\sum_{\boldsymbol{\sigma}}\hat{P}(\boldsymbol{\sigma}) P(\boldsymbol{\sigma})+\frac{c}{2}\sum_{\boldsymbol{\sigma}\boldsymbol{\sigma}'} P(\boldsymbol{\sigma})P(\boldsymbol{\sigma}')\nonumber\\ & \times\Big(\mathrm{e}^{\frac{\beta J}{c}\sum_{\alpha} \sigma_{\alpha}\sigma_{\alpha}^{\prime}+\frac{\beta K}{c}\sum_{\alpha} \sigma_{\alpha}^{2}\sigma_{\alpha}^{\prime 2}}-1\Big)+ \log\sum_{\boldsymbol{\sigma}}\langle\mathrm{e}^{-\hat{P}(\boldsymbol{\sigma}) - \beta\Delta\sum_{\alpha}\sigma_{\alpha}^{2}}\rangle_{\Delta}\Big\}\,. \label{RSsp} \end{align}
In the thermodynamic limit the integral can be evaluated through the saddle-point method.
We eliminate the $\hat{P}(\boldsymbol{\sigma})$'s through the saddle-point equations and rewrite the free-energy as
\begin{align} \label{freenophat} f(\beta)&=-\lim_{n\rightarrow 0} \frac{1}{\beta n}\mathrm{Extr}\Big\{-\frac{c}{2}\sum_{\boldsymbol{\sigma} \boldsymbol{\sigma}^{\prime}} P(\boldsymbol{\sigma})P(\boldsymbol{\sigma}^{\prime}) \Big(\mathrm{e}^{\frac{\beta J}{c}\sum_{\alpha} \sigma_{\alpha}\sigma_{\alpha}^{\prime} + \frac{\beta K}{c}\sum_{\alpha} \sigma_{\alpha}^{2}\sigma_{\alpha}^{\prime 2}}-1\Big)\\ & + \ln\Big\langle\sum_{\boldsymbol{\sigma}} \exp{\Big[c\sum_{\boldsymbol{\sigma}^{\prime}}P(\boldsymbol{\sigma}^{\prime}) \Big(\mathrm{e}^{\frac{\beta J}{c}\sum_{\alpha} \sigma_{\alpha}\sigma_{\alpha}^{\prime}+\frac{K \beta}{c}\sum_{\alpha} \sigma_{\alpha}^{2}\sigma_{\alpha}^{\prime 2}}-1\Big) - \beta\Delta\sum_{\alpha}\sigma_{\alpha}^{2}\Big] \Big\rangle_{\Delta}}\Big\}\,, \nonumber \end{align}
where $\mathrm{Extr}$ amounts to taking the extremum of the expression between braces with respect to $P(\boldsymbol{\sigma})$, which gives the remaining saddle-point equations
\begin{equation} P(\boldsymbol{\sigma})=\frac{1}{\mathcal{N}}\Big\langle \exp{\Big[c\sum_{\boldsymbol{\sigma}^{\prime}} P(\boldsymbol{\sigma}^{\prime})\Big(\mathrm{e}^{\frac{\beta J}{c} \sum_{\alpha}\sigma_{\alpha}\sigma_{\alpha}^{\prime} + \frac{K\beta}{c}\sum_{\alpha}\sigma_{\alpha}^{2}\sigma_{\alpha}^{\prime 2}} - 1\Big) - \beta\Delta\sum_{\alpha}\sigma_{\alpha}^{2}\Big]\Big\rangle_{\Delta}}\,, \label{RS1} \end{equation}
where $\mathcal{N}$ is a normalization factor. We search for solutions of Eq. (\ref{RS1}) satisfying the RS Ansatz, in which the order function is invariant under replica index permutations and is written in the form
\begin{equation} P(\boldsymbol{\sigma}) =\int\mathcal{D}W(x{,}y) \frac{\mathrm{e}^{\beta x\sum_{\alpha}\sigma_{\alpha} + \beta y\sum_{\alpha}\sigma^{2}_{\alpha}}}{\Big(\sum_{\sigma}\mathrm{e}^{\beta x\sigma + \beta y\sigma^{2}}\Big)^{n}}\,, \label{RS} \end{equation}
where $\mathcal{D}W(x{,}y)\equiv dxdyW(x{,}y)$. Expanding the exponential of Eq. (\ref{RS1}) and introducing Eq. (\ref{RS}), we obtain a self-consistent equation for the distribution of local fields (details in the Appendix)
\begin{align} \label{fieldist} W(x{,}y) = \sum_{k=0}^{\infty}\frac{c^{k}\mathrm{e}^{-c}}{k!}\Big\langle\int \prod_{l=1}^{k}\mathcal{D}W(x_l{,}y_l) \delta\Big[x-\frac{1}{\beta}\sum_{l}\phi(x_l{,}y_l)\Big]\delta\Big[y+ \Delta - \frac{1}{\beta}\sum_{l}\psi(x_l{,}y_l)\Big]\Big\rangle_{\Delta}\,, \end{align}
where
\begin{equation} \phi(x{,}y)=\frac{1}{2}\ln\frac{\chi_{+1}(x{,}y)}{\chi_{-1}(x{,}y)}\,, \end{equation}
\begin{equation} \psi(x{,}y)=\frac{1}{2}\ln\frac{\chi_{+1}(x{,}y)\chi_{-1}(x{,}y)} {\chi_0^2(x{,}y)}\,, \end{equation}
and
\begin{equation} \chi_{\sigma}(x{,}y)=\sum_{\tau}\mathrm{e}^{\beta x\tau + \frac{\beta}{c}J \sigma\tau + \beta y\tau^{2} + \frac{\beta}{c}K \sigma^{2}\tau^{2}}\,. \end{equation}
The relevant observables are the average magnetization
\begin{equation} m=\sum_{\boldsymbol{\sigma}}\sigma_{\alpha}P(\boldsymbol{\sigma}) = \int\mathcal{D}W(x,y)\frac{2\sinh(\beta x)}{\mathrm{e}^{-\beta y} + 2\cosh(\beta x)}\, \label{magnet} \end{equation}
and the occupation number
\begin{equation} Q=\sum_{\boldsymbol{\sigma}}\sigma_{\alpha}^{2}P(\boldsymbol{\sigma}) = \int\mathcal{D}W(x,y)\frac{2\cosh(\beta x)}{\mathrm{e}^{-\beta y} + 2\cosh(\beta x)}\,. \label{occ} \end{equation}
To determine the RS free-energy we insert the Ansatz (\ref{RS}) in Eq.
(\ref{freenophat}) and take the limit $n\rightarrow 0$, which results in
\begin{align} \nonumber f(\beta)=\frac{c}{2\beta} &\int \mathcal{D}W(x{,}y) \mathcal{D}W(x'{,}y') \frac{\sum_{\sigma\sigma'} \mathrm{e}^{\beta x\sigma + \beta y\sigma^{2}+\beta x^{\prime}\sigma^{\prime} + \beta y^{\prime}\sigma^{\prime 2}+\frac{\beta}{c}J \sigma\sigma'+\frac{\beta}{c}K \sigma^2\sigma'^2}}{\chi_{0}(x{,}y)\chi_{0}(x'{,}y')} \nonumber\\ & -\frac{1}{\beta}\sum_{k=0}^{\infty}P_k\int \prod_{l=1}^{k}\mathcal{D}W(x_l{,}y_l) \Big\langle\ln\Big(\sum_{\sigma}\mathrm{e}^{-\beta\Delta\sigma^{2}} \prod_{l}\frac{\chi_{\sigma}(x_l{,}y_l)}{\chi_0(x_l{,}y_l)}\Big) \Big\rangle_{\Delta}\,, \label{free1} \end{align}
where $P_k=c^k\mathrm{e}^{-c}/k!$ is a Poissonian weight.
\section{Results\label{sec:results}}
\begin{figure} \centering \includegraphics[width=8cm,clip]{mQf_T_8_1.eps} \caption{ Magnetization $m$, occupation number $Q$ and free-energy as functions of $T$ for $p=1$, $c=8$, $K=2$ and $\Delta = 1.55$. Solid black lines on $m$ and $Q$ represent the stable order parameter values. Dashed black (dashed red) line represents the metastable FM (PM) solution. Dashed black line on $f$ is the metastable FM free-energy raw data. Solid black line is a polynomial fit of the FM data. Solid red line is the PM free-energy data. The arrow signals the crossing of the FM and PM free-energies. \label{fig:mQf_T_8_.05}} \end{figure}
\begin{figure} \centering \includegraphics[width=8cm,clip]{mQf_D_8_1.eps} \caption{ Magnetization $m$, occupation number $Q$ and free-energy as functions of $\Delta$ for $p=0.85$, $c=8$, $K=2$ and $T = 0.05$. Solid black lines on $m$ and $Q$ represent the stable order parameter values. Dashed black (dashed red) line represents the metastable FM (PM) solution. Dashed black (red) line on $f$ is the FM (PM) free-energy raw data. Solid black (red) line is a polynomial fit of the FM (PM) data. The arrow signals the crossing of the FM and PM free-energies.} \label{fig:mQf_D_8_.05} \end{figure}
\begin{figure} \centering \includegraphics[width=8cm,clip]{mQ_2_.85.eps} \caption{Magnetization $m$ and occupation number $Q$ as functions of $T$ for $p=0.85$, $c=4$, $K=2$ and $\Delta = 2.07$.} \label{fig:mQ_2_.85} \end{figure}
According to Eqs. (\ref{magnet}) -- (\ref{free1}), the relevant order parameters are obtained through the calculation of the local field distribution, given by the self-consistent equation (\ref{fieldist}). This is done numerically, via a population dynamics algorithm \cite{cavitymp}, as follows: (i) a population of $\mathcal{N}$ two-component fields $(x{,}y)$ is created; (ii) an integer $k$ is randomly drawn from a Poisson distribution of mean $c$, and $k$ fields are randomly chosen from the population; (iii) with the chosen fields, the two summations appearing in the delta functions of Eq. (\ref{fieldist}) are evaluated; and (iv) the results are assigned to the components of a further randomly chosen field $(x^{*}{,}y^{*})$. The algorithm is repeated until convergence to a stable population distribution $W(x,y)$; a minimal sketch of this procedure is given below. Throughout this work we used populations of $\mathcal{N}=100{,}000$ fields and a convergence time that amounts to 5,000,000 iterations. Still, each point is averaged over 20 runs. As shown in Eq. (\ref{free1}), the first free-energy term contains a double integral and the second term contains a $k$-fold integral over the local field distribution.
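The promised sketch of the population dynamics (our illustration, with a toy population size and convergence time) follows; the functions $\phi$ and $\psi$ are those defined below Eq. (\ref{fieldist}):
\begin{verbatim}
import numpy as np

# Population dynamics for the self-consistent distribution W(x, y).
rng = np.random.default_rng(0)
Npop, c, J, K, p, Delta, beta = 10_000, 4.0, 1.0, 2.0, 0.85, 1.0, 2.0
tau = np.array([-1.0, 0.0, 1.0])                  # spin states

def chi(sigma, x, y):
    # chi_sigma(x, y) as defined in the text
    return np.exp(beta * x * tau + beta * J * sigma * tau / c
                  + beta * y * tau**2
                  + beta * K * sigma**2 * tau**2 / c).sum()

def phi(x, y):
    return 0.5 * np.log(chi(1, x, y) / chi(-1, x, y))

def psi(x, y):
    return 0.5 * np.log(chi(1, x, y) * chi(-1, x, y) / chi(0, x, y) ** 2)

pop = np.zeros((Npop, 2))                         # fields (x, y)
pop[:, 0] = 0.5          # small ferromagnetic bias to select FM solutions
for _ in range(200_000):                          # toy convergence time
    k = rng.poisson(c)                            # step (ii)
    idx = rng.integers(Npop, size=k)
    D = Delta if rng.random() < p else 0.0        # crystal field draw
    x_new = sum(phi(*pop[i]) for i in idx) / beta # step (iii)
    y_new = -D + sum(psi(*pop[i]) for i in idx) / beta
    pop[rng.integers(Npop)] = (x_new, y_new)      # step (iv)

# Order parameters m and Q as population averages.
x, y = pop[:, 0], pop[:, 1]
den = np.exp(-beta * y) + 2 * np.cosh(beta * x)
print("m =", np.mean(2 * np.sinh(beta * x) / den),
      "Q =", np.mean(2 * np.cosh(beta * x) / den))
\end{verbatim}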
To evaluate these free-energy terms, we follow a Monte Carlo algorithm: a large number (1,000,000) of pairs and $k$-sets of local fields are randomly chosen and their contributions are summed. This results in a noisy curve, in contrast to the $m$ and $Q$ evaluations, which contain a simple integral. To overcome the noise, the $f$ curves are fitted by a polynomial. As an example of the outcome, order parameter and free-energy curves are shown in Fig. \ref{fig:mQf_T_8_.05}, as functions of $T$, for $c=8$, $K=2$, $p=1$ and $\Delta=1.55$. Here and in the sequel the energy scale is fixed by assuming the bi-linear coupling constant $J=1$. For $\Delta=1.55$, PM and FM phases coexist from $T=0$ up to a continuous FM -- PM transition at $T\approx 0.693$. To overcome the noisy free-energy and find the discontinuous transition locus we resort to a polynomial fit, which indicates the crossing of the free-energy curves at $T\approx 0.266$. PM is stable in the $0\leq T\lesssim 0.266$ and $0.693\lesssim T$ intervals. FM is stable in the $0.266\lesssim T\lesssim 0.693$ interval. This characterizes a re-entrant behavior. The discontinuous transition at $T\approx 0.266$ appears as a dashed red line on Fig. \ref{fig:TD_2}a and the continuous transition at $T\approx 0.693$ appears as a solid red line in the same figure. Order parameter and free-energy curves as functions of the crystal field $\Delta$, for $c=8$, $K=2$, $p=0.85$ and $T=0.05$, are shown in Fig. \ref{fig:mQf_D_8_.05}. The curves show a high $m$, high $Q$ FM$_{1}$ phase at small $\Delta$, a low $m$, low $Q$ FM$_{2}$ at large $\Delta$ and a co-existence region between them. As mentioned above, we resort to a polynomial fit to find a crossing of the free-energy curves at $\Delta\approx 1.75$. This reveals a discontinuous transition between the two ferromagnetic phases, represented by the dashed red line on Fig. \ref{fig:TD_2}b. The reason for the existence of two FM phases will be discussed below. The order parameters $m$ and $Q$ as functions of the temperature for $c=4$, $p=0.85$, $K=2$ and $\Delta=2.07$ are shown in Fig. \ref{fig:mQ_2_.85}. This figure shows, as the temperature increases, a FM$_2$ phase, then a re-entrant PM phase, a FM$_2$ phase and a PM phase at high $T$. Giving a complete overview of a model with so many parameters while keeping a reasonable number of figures is a difficult task; the zero-temperature $K$ versus $\Delta$ phase diagram may guide us. This diagram is shown in Fig. \ref{fig:KD} for the representative case $c=4$ and $p=0.85$, revealing a discontinuous FM$_1$ - FM$_2$ transition and a continuous FM$_2$ - PM transition. The two ferromagnetic phases are present, at low temperature, whenever $p<1$, i.e., in the presence of disorder. This disorder acts by turning off the crystal field $\Delta$ in a fraction $1-p$ of the sites, thereby favouring the active states on these sites. The higher magnetization FM$_1$ is found at low $\Delta$ values, while the lower magnetization FM$_2$ and the PM are found for higher $\Delta$'s. Since the bi-quadratic coupling constant $K$ favours the active states, higher magnetization phases are found as $K$ increases. It is unnecessary to add further zero-temperature diagrams, but it is worth mentioning that, as the connectivity $c$ increases, or $p$ decreases, FM$_2$ becomes stable at large $\Delta$ and there is no longer a PM phase at $T=0$. To describe the finite temperature behavior, $T$ versus $\Delta$ phase diagrams for $K=2$ and $K=5$ are presented in Figs. \ref{fig:TD_2} and \ref{fig:TD_5}, respectively.
For each $K$ value, results for representative disorder parameters $p=1$, $p=0.85$ and $p=0.5$, as well as connectivity values $c=4$ and $c=8$, are shown. Results for $p=0.5$ with $c=25$ and $c=100$ were also included, allowing a better comprehension of the convergence to the mean field approach, which is expected for large $c$ (see Ref. \cite{PhysRevB.60.1033}). Smaller $c$ values, with $0<c<1$, are below the percolation limit $c=1$, thus preventing the appearance of ordered phases. In this case, the solutions would be $m=0$, $Q>0$. The most interesting feature is the appearance of two paramagnetic phases, PM$_1$ and PM$_2$ (to be defined below), depending on the parameters $T$ and $\Delta$. The ordered case, $p=1$, is shown in Figs. \ref{fig:TD_2}(a), for $K=2$, and \ref{fig:TD_5}(a), for $K=5$. If $K=2$, there is a FM phase at low $T$, low $\Delta$ and a single PM phase elsewhere, with a continuous transition at high temperature, a re-entrant discontinuous transition at high $\Delta$ and a tricritical point (TCP) between them. TCPs, critical points (CPs) and critical end points (CEPs) are indicated as circles, squares and triangles in the figures. The re-entrant behavior is illustrated in Fig. \ref{fig:mQf_T_8_.05}, described above. If $K=5$, in addition to the FM phase there are two paramagnetic phases, PM$_1$ and PM$_2$. The co-existence of PM$_1$ and PM$_2$ is typical of models with a non-magnetic state $\sigma=0$, in which a sufficiently large crystal field suppresses the active $\sigma=\pm 1$ states. The high $Q$ and low $Q$ PM phases are named PM$_1$ and PM$_2$, respectively. The transition from FM to PM$_1$ is continuous, while the transition from PM$_2$ to FM and to PM$_1$ is discontinuous and re-entrant, with a CEP where the two lines meet. The PM$_1$ - PM$_2$ discontinuous transition ends at a CP. The $p=1$ diagrams are similar to those obtained with the Bethe lattice approach reported in \cite{Akheyan_1996}, although the re-entrant behavior in the discontinuous transition is more pronounced in the present paper. The re-entrant behavior in the ordered system with $K=2$ was also reported in \cite{PhysRevB.60.1033}. As a further remark, our results are qualitatively equivalent for both $c=4$ and $c=8$, although lowering $c$ appears to favour ordered phases. Disorder, even in a moderate amount, i.e. $p=0.85$, unfolds the ferromagnetic phase into two, namely FM$_1$ and FM$_2$. The first one is reminiscent of the ordered system's FM phase. The second one, located at low $T$ and large $\Delta$, arises as a consequence of the disorder that turns off the crystal field in a fraction $1-p$ of the sites, favouring the active states on these sites, as stated above. Connectivity effects become relevant. Figures \ref{fig:TD_2}(b) and \ref{fig:TD_5}(b) show that, for $c=8$, FM$_2$ extends unbounded in $\Delta$, in contrast to $c=4$, where a zero-temperature PM phase appears. We argue that a moderate level of disorder is not a sufficient condition to stabilize a FM$_2$ phase at large $\Delta$. Instead, it must be associated with a large cooperative FM neighborhood. This condition is found for $c=8$, but not for $c=4$. The random network with $c=8$ and a moderate amount of disorder behaves similarly to a fully connected one, whose mean-field results are reported in \cite{PhysRevB.60.1033}. In both models there is a part of the FM$_1$ - PM$_2$ transition that is discontinuous. In our case, despite the finite connectivity, the random graph architecture still preserves a high-dimensional nature.
Conversely, renormalization group results for two-dimensional systems, also reported in \cite{PhysRevB.60.1033}, show that this transition is entirely continuous. To end this part, additional qualitative differences between $K=2$ and $K=5$ for $p=0.85$ should be reported. For $K=2$, $c=4$, there is a discontinuous FM$_1$ - FM$_2$ transition that ends in a CP, shown in the inset of Fig. \ref{fig:TD_2}(b). Thus, the transition between the two FM phases and the PM is always continuous and re-entrant, as illustrated in Fig. \ref{fig:mQ_2_.85}. Conversely, for $K=5$ and $c=4$ there is a CEP and a TCP in the FM - PM transition, as Fig. \ref{fig:TD_5}(b) shows. This figure also shows, detailed in the inset, for $c=8$, a discontinuous PM$_1$ - PM$_2$ transition ending in a CP.
\begin{figure} \centering \includegraphics[width=8cm,clip]{KD_4_.85.eps} \caption{$K$ versus $\Delta$ phase diagram for $T=0$, $c=4$ and $p=0.85$. Solid (dashed) lines correspond to continuous (discontinuous) transitions.} \label{fig:KD} \end{figure}
The scenario for a larger disorder, e.g. $p=0.5$, is shown in Figs. \ref{fig:TD_2}(c) and \ref{fig:TD_5}(c), corresponding to $K=2$ and $K=5$, respectively. There is little to remark in these figures beyond the $\Delta$-dependent continuous FM - PM transition. The expectation for lower $p$ values is that the critical temperature approaches a constant $T\sim 1$ for all $\Delta$. This behavior is significantly distinct from the mean-field description for high disorder \cite{PhysRevB.60.1033}. To investigate the behavior of the highly disordered random network as $c$ increases, the phase diagrams for $c=25$ and $c=100$, with $K=2$ and $K=5$, were drawn for $p=0.5$. The results are shown in Figs. \ref{fig:TD_2}(d), for $K=2$, and \ref{fig:TD_5}(d), for $K=5$. The results show that the convergence to the fully connected scenario is faster for $K=5$. For $c=25$ the FM phase unfolds into FM$_1$ and FM$_2$, with a discontinuous transition between them ending in a CP. The fully connected scenario is observed for $c=100$, with a CEP, a TCP and a discontinuous FM$_1$ - PM transition between them.
\begin{figure} \centering \includegraphics[width=6.5cm,clip]{TD_2_1.eps}\hspace{0.5cm} \includegraphics[width=6.5cm,clip]{TD_2_.85.eps} \vspace{0.5cm} \includegraphics[width=6.5cm,clip]{TD_2_.5.eps}\hspace{0.5cm} \includegraphics[width=6.5cm,clip]{TD_100_2_.5.eps} \caption{Thermodynamic phase diagrams $T$ versus $\Delta$ for $K=2$; (a) $p=1$, (b) $p=0.85$ and (c) $p=0.5$. The inset in (b) shows in detail the vicinity of the critical point. The connectivity values are $c=4$ (black) and $c=8$ (red). (d) Thermodynamic phase diagrams for $p=0.5$, $K=2$, $c=25$ (black), $c=100$ (red). Solid (dashed) lines correspond to continuous (discontinuous) transitions. Circles, squares and triangles represent tri-critical points, critical points and critical end points, respectively. } \label{fig:TD_2} \end{figure}
\begin{figure} \centering \includegraphics[width=6.5cm,clip]{TD_5_1.eps}\hspace{0.5cm} \includegraphics[width=6.5cm,clip]{TD_5_.85.eps} \vspace{0.5cm} \includegraphics[width=6.5cm,clip]{TD_5_.5.eps}\hspace{0.5cm} \includegraphics[width=6.5cm,clip]{TD_100_.5.eps} \caption{(a) Thermodynamic phase diagrams $T$ versus $\Delta$ for $p=1$, $K=5$, $c=4$ (black), $c=8$ (red). (b) The same, but for $p=0.85$; the inset shows in detail the vicinity of the critical points. (c) The same, but for $p=0.5$. (d) Thermodynamic phase diagrams for $p=0.5$, $K=5$, $c=25$ (black), $c=100$ (red).
Solid (dashed) lines correspond to continuous (discontinuous) transitions. Circles, squares and triangles represent tri-critical points, critical points and critical end points, respectively. } \label{fig:TD_5} \end{figure}
\section{Conclusions\label{sec:ccl}}
The BEG model with a random crystal field was revisited on a random graph topology, employing a finite connectivity technique. The disorder was introduced in the crystal field, as in \cite{PhysRevB.60.1033}, and through the random graph architecture. We argue that, instead of the crystal field, disorder could be introduced in the bi-quadratic coupling constant, and it would play a similar role. Our model for disorder `turns off' the crystal field in a fraction of sites, allowing this fraction to assume active states $\sigma=\pm 1$ without energetic penalty, even for large crystal field values. Models with an inactive state $\sigma=0$, like the ordered BEG model, unfold the PM phase into a high temperature PM$_1$ and a low temperature PM$_2$. The main role that the disorder plays is to unfold the FM phase into a high magnetization FM$_1$ and a low magnetization FM$_2$. The latter survives at high crystal field values because the crystal field is `turned off' in a finite fraction of sites. We fixed $K=2$ and $K=5$. An anti-ferromagnetic coupling constant $K<0$, as reported in \cite{Kple2021} for the Bethe lattice with fixed coordination number, allows for a richer thermodynamic scenario, with the appearance of a quadrupolar staggered phase. To do the same in a random network architecture would require the introduction of a sub-network or a random network of clusters, and this remains within our scope for future work. To end this work, we summarize the most relevant results. i) We found that the moderate disorder regime, e.g. $p=0.85$, is the most sensitive to changes in the average connectivity, because the stabilization of FM$_2$ relies on the cooperative effect of a large neighborhood. Otherwise, for small $c$, a PM phase sets in at low $T$ and large $\Delta$. This is the regime where the finite connectivity network becomes most distinct from the fully connected one. ii) For a large disorder, like $p=0.5$, the FM$_1$ - FM$_2$ discontinuous transition and the associated CP disappear at low $c$ values, like $c=4$ and $c=8$, only appearing for $c$ as large as $c=25$. iii) A phase diagram similar to that of the fully connected mean field description only appears for $c=100$ and $K=5$, but not for $c=100$ and $K=2$. This suggests, in general terms, that some of the features observed in mean field phase diagrams are artifacts that do not exist in most real, finite connectivity physical systems.
\section*{Acknowledgements}
The authors thank Dr. Nilton Branco for fruitful discussions and for carefully reading the manuscript. This work was supported, in part, by CNPq (Conselho Nacional de Desenvolvimento Cient\'{\i}fico e Tecnol\'ogico, Brazil).
\section*{Appendix: self consistent equation for the field distribution}
The site spin variables appearing in the inner exponential of the replicated partition function, Eq. (\ref{part1}), are removed using the identity
\begin{align} 1=\prod_{\alpha=1}^{n}\sum_{\sigma_{\alpha}}\delta_{\sigma_{\alpha}\sigma_{\alpha i}}=\sum_{\boldsymbol{\sigma}}\delta_{\boldsymbol{\sigma}\boldsymbol{\sigma}_{i}}\,, \end{align}
where $\boldsymbol{\sigma}=\{\sigma_{1}\dots\sigma_{n}\}$ is a vector of replicated spin variables and $\boldsymbol{\sigma}_{i}$ is the replicated spin variable associated to spin $i$.
Introducing the order functions $P(\boldsymbol{\sigma})$ through the identity \begin{align} 1 = \int\prod_{\boldsymbol{\sigma}}dP(\boldsymbol{\sigma}) \delta\Big[P(\boldsymbol{\sigma})-\frac{1}{N} \sum_{i}\delta_{\boldsymbol{\sigma}\boldsymbol{\sigma}_{i}}\Big]\,, \end{align} Eq. (\ref{part1}) becomes \begin{align} \langle Z^{n} \rangle=\sum_{\boldsymbol{\sigma}_{1}\dots\boldsymbol{\sigma}_{n}} \int\prod_{\boldsymbol{\sigma}}dP(\boldsymbol{\sigma}) d\hat{P}(\boldsymbol{\sigma}) & \exp\Big\{\sum_{\boldsymbol{\sigma}}\hat{P}(\boldsymbol{\sigma}) P(\boldsymbol{\sigma})\\ +\frac{cN}{2} \sum_{\boldsymbol{\sigma}\boldsymbol{\sigma}'} P(\boldsymbol{\sigma}) & P(\boldsymbol{\sigma}')\Big(\mathrm{e}^{\frac{\beta J}{c} \sum_{\alpha} \sigma_{\alpha}\sigma_{\alpha}^{\prime}+\frac{\beta K}{c}\sum_{\alpha} \sigma_{\alpha}^{2}\sigma_{\alpha}^{\prime 2}}-1\Big) \nonumber \\ & - \frac{1}{N}\sum_{\boldsymbol{\sigma}}\hat{P}(\boldsymbol{\sigma}) \sum_{i}\delta_{\boldsymbol{\sigma}\boldsymbol{\sigma}_{i}} \Big\}\langle\mathrm{e}^{-\beta\sum_{\alpha i}\Delta_{i}\sigma_{i\alpha}^{2}}\rangle_{\boldsymbol{\Delta}} \,. \nonumber \end{align} Summing over the spin variables $\boldsymbol{\sigma}_{i}$ and changing variables $\hat{P}(\boldsymbol{\sigma})\rightarrow N\hat{P}(\boldsymbol{\sigma})$, Eq. (\ref{RSsp}) is obtained. Expanding the exponential in Eq. (\ref{RS1}) and inserting the RS Ansatz, we obtain \begin{align} P(\boldsymbol{\sigma})=&\sum_{k=0}^{\infty} P_k \Big\langle\mathrm{e}^{-\beta\Delta\sum_{\alpha}\sigma_{\alpha}^{2}} \Big\rangle_{\Delta}\int\prod_{l=1}^{k}\frac{\mathcal{D}W(x_l{,}y_l)} {\Big(\sum_\sigma\mathrm{e}^{\beta x_l\sigma + \beta y_l\sigma^{2}}\Big)^n} \exp\sum_{\alpha=1}^n\ln\chi_{\sigma_{\alpha}}(x_l{,}y_l)\,. \end{align} Now we pull the $\sigma_{\alpha}$ variables out of the logarithm using the identity $\sum_{\sigma}\delta_{\sigma\sigma_{\alpha}}=1$, \begin{align} \sum_{\alpha=1}^{n}\ln\chi_{\sigma_{\alpha}}(x_{l},y_{l}) = \sum_{\alpha=1}^{n}\sum_{\sigma}\delta_{\sigma\sigma_{\alpha}} \ln\chi_{\sigma}(x_{l},y_{l})\,. \end{align} The Kronecker delta representation for the spin states $\sigma\in\{-1,0,1\}$ is given by \begin{align} \delta_{\sigma\sigma_{\alpha}} = 1 - \sigma^{2} - \sigma_{\alpha}^{2} + \frac{1}{2}\sigma\sigma_{\alpha} + \frac{3}{2}\sigma^{2}\sigma_{\alpha}^{2}\,. \end{align} Summing over $\sigma$, we get, after some algebra, \begin{align} \nonumber P(\boldsymbol{\sigma})=\sum_{k=0}^{\infty} P_k \Big\langle\int\prod_{l=1}^{k} & \frac{\mathcal{D}W(x_{l}{,} y_{l})}{\Big(\sum_{\sigma}\mathrm{e}^{\beta x_{l}\sigma + \beta y_{l}\sigma^{2}}\Big)^{n}} \exp\Big\{\Big(\sum_{\alpha}\sigma_{\alpha}\Big)\sum_{l=1}^{k}\phi(x_{l},y_{l}) \\ & + \Big(\sum_{\alpha}\sigma^{2}_{\alpha}\Big)\sum_{l=1}^{k}\psi(x_{l},y_{l}) -\Big(\sum_{\alpha}\sigma^{2}_{\alpha}\Big)\beta\Delta\Big\} \Big\rangle_{\Delta} \,. \end{align} Substituting the RS Ansatz in the LHS and taking the limit $n\rightarrow 0$, we obtain \begin{align} \int \mathcal{D}& W(x{,}y)\mathrm{e}^{\beta x\sum_{\alpha}\sigma_{\alpha} + \beta y\sum_{\alpha}\sigma^{2}_{\alpha}} = \int dx dy \Big\{\sum_{k=0}^{\infty} P_k \Big\langle\int\prod_{l=1}^{k} \mathcal{D}W(x_{l},y_{l})\\ \nonumber &\times\delta\Big[x -\beta^{-1}\sum_{l}\phi(x_l{,}y_l)\Big]\delta\Big[y+ \Delta-\beta^{-1}\sum_l\psi(x_l{,}y_l)\Big]\Big\rangle_{\Delta}\Big\} \mathrm{e}^{\beta x\sum_{\alpha}\sigma_{\alpha} + \beta y\sum_{\alpha}\sigma^{2}_{\alpha}}\,. \end{align} Comparing both sides of this equation, we obtain Eq. (\ref{fieldist}).
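As a quick consistency check of the Kronecker delta representation used above: for $\sigma=\sigma_{\alpha}=\pm1$ it gives $1-1-1+\frac{1}{2}+\frac{3}{2}=1$; for $\sigma=-\sigma_{\alpha}=\pm1$ it gives $1-1-1-\frac{1}{2}+\frac{3}{2}=0$; for $\sigma=\pm1$ and $\sigma_{\alpha}=0$ (or vice versa) it gives $1-1-0+0+0=0$; and for $\sigma=\sigma_{\alpha}=0$ it gives $1-0-0+0+0=1$, as required.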
\section{Introduction} \label{sec:intro} \ProblemName{Max-Cut}\xspace is a fundamental problem in multiple domains, from constraint satisfaction (CSP) and linear equations to clustering. It was studied extensively in many computational models and for various types of inputs, and many (near) tight bounds were obtained, oftentimes leading the way to even more general problems. For instance, in the offline setting, it admits a polynomial-time $0.878$-approximation for general graphs~\cite{DBLP:journals/jacm/GoemansW95}, and this approximation factor is tight under the Unique Game Conjecture~\cite{DBLP:journals/siamcomp/KhotKMO07}. In contrast, if the input is a dense (unweighted) graph, or a metric space (viewed as a weighted graph), then a PTAS exists~\cite{DBLP:journals/rsa/VegaK00,VK01}. In the graph-streaming setting, $(1 + \epsilon)$-approximation can be obtained using $\tilde{O}(n)$ space~\cite{DBLP:conf/icalp/AhnG09}, and this space bound is tight~\cite{DBLP:conf/stoc/KapralovK19}. However, the streaming complexity of \ProblemName{Max-Cut}\xspace is only partially resolved in the geometric setting, i.e., for Euclidean points. The known algorithm achieves $(1 + \epsilon)$-approximation but uses space $\exp(d)$~\cite{DBLP:conf/stoc/FrahlingS05}, which is prohibitive when the dimension is high, and it remained open whether this approximation ratio can be achieved also in the high-dimension regime, namely, using space $\poly(d)$. In fact, in this regime, even the trivial $2$-approximation, which corresponds to partitioning the points randomly, is not immediate to implement in streaming. We answer this question by providing the first streaming algorithms that achieve $(1+\epsilon)$-approximation for \ProblemName{Max-Cut}\xspace in high dimension. We consider the setting of \emph{dynamic geometric streams}, introduced by Indyk~\cite{Indyk04}, where the input is a dataset $X \subseteq [\Delta]^d$ that is presented as a stream of point insertions and deletions. The goal of the algorithm is to approximate (multiplicatively) the so-called \ProblemName{Max-Cut}\xspace value, defined as \[ \ProblemName{Max-Cut}\xspace(X) := \max_{\emptyset\neq S\subset X} \sum_{x \in S, y \in X \setminus S}\|x -y\|_2 \] (see \Cref{sec:prelim} for general metric spaces). We say that $\eta\ge 0$ achieves an \emph{$\alpha$-approximation}, for $\alpha \geq 1$, if $\ProblemName{Max-Cut}\xspace(X)/\alpha \leq \eta \leq \ProblemName{Max-Cut}\xspace(X)$.\footnote{We will actually aim at $\eta \in (1\pm\epsilon) \cdot \ProblemName{Max-Cut}\xspace$, for $0<\epsilon<1/2$, which can be scaled to achieve $(1+O(\epsilon))$-approximation. } We assume throughout that $X$ contains distinct points (and is not a multiset), hence $n:=|X|\leq \Delta^d$. In the \emph{high-dimension regime}, algorithms can use at most $\poly(d \log \Delta)$ bits of space, which is polynomial in the number of bits required to represent a point in $[\Delta]^d$. In the \emph{low-dimension regime}, algorithms may have larger space complexity, e.g., exponential in $d$. A central challenge in the area of geometric streaming algorithms is to achieve good accuracy (approximation) in the high-dimension regime.
Indeed, algorithms for many basic problems (like diameter, minimum spanning tree, facility location, and \ProblemName{Max-Cut}\xspace) achieve good approximation, say for concreteness $O(1)$ or even $1+\epsilon$, using space that is exponential in $d$ \cite{AHV04, DBLP:journals/algorithmica/Zarrabi-Zadeh11, DBLP:conf/stoc/FrahlingS05, DBLP:journals/ijcga/FrahlingIS08,DBLP:conf/esa/LammersenS08, DBLP:conf/soda/CzumajLMS13, DBLP:conf/icalp/CzumajJKV22}. In contrast, algorithms that use space polynomial in $d$ are fewer, and they typically achieve a far worse approximation ratio \cite{Indyk04,DBLP:conf/stoc/ChenJLW22,CJKVW22,WY22}; obtaining $O(1)$-approximation remains open. In particular, Indyk~\cite{Indyk04} tackled the high-dimension regime using a technique of randomized tree embedding, which is rather general and economical in space, but unfortunately distorts distances by a factor of $O(d\log\Delta)$ that goes directly into the approximation ratio. Attempts to improve the approximation ratio have had only limited success so far; for example, the algorithms of~\cite{AS15} work only in insertion-only streams, and the algorithms of~\cite{DBLP:conf/stoc/ChenJLW22,CJKVW22} fall short of the desired $O(1)$-approximation (in one pass). \subsection{Our Results} We tackle this challenge via two well-known but fundamentally different approaches -- dimension reduction and data reduction -- and we obtain streaming algorithms that $(1+\epsilon)$-approximate \ProblemName{Max-Cut}\xspace using $\poly(d\log\Delta)$ space, thereby closing the gap in the high-dimension regime (for \ProblemName{Max-Cut}\xspace). \paragraph{Dimension Reduction} We present a dimension reduction result for \ProblemName{Max-Cut}\xspace in \Cref{thm:maxcut_jl_intro}. Our dimension reduction is via a standard sub-Gaussian type of Johnson-Lindenstrauss (JL) transform \cite{JL84}, but improves significantly over the target-dimension bound $d'=\Theta(\epsilon^{-2}\log n)$, which follows immediately from the overly strong guarantee that \emph{all pairwise distances} are preserved. We show that the \ProblemName{Max-Cut}\xspace value is preserved even using a target dimension $d'=\poly(\epsilon^{-1})$ that is completely independent of $n$. Informally, this means that the optimal \ProblemName{Max-Cut}\xspace objective is preserved ``much before'' the worst pairwise distance. Results of a similar flavor, beating a naive application of the JL bound (for specific optimization objectives), were previously obtained for clustering~\cite{DBLP:conf/nips/BoutsidisZD10, DBLP:conf/stoc/CohenEMMP15, DBLP:conf/stoc/BecchettiBC0S19, MMR19, DBLP:conf/cccg/KerberR15, DBLP:conf/nips/IzzoSZ21, CW22}, and for facility location and minimum spanning tree~\cite{DBLP:conf/icml/NarayananSIZ21}. \begin{theorem}[Informal; see \Cref{thm:maxcut_jl}] \label{thm:maxcut_jl_intro} Let $X \subset \mathbb R^d$ be a finite set, $0 < \epsilon < 1$ and $\pi : \mathbb R^d \to \mathbb R^{d'}$ be a suitable Johnson-Lindenstrauss transform with target dimension $d' = \tilde{O}( \varepsilon^{-2} )$. Then with probability at least $2/3$, \[ \ProblemName{Max-Cut}\xspace(\pi(X)) \in (1 \pm \epsilon)\cdot \ProblemName{Max-Cut}\xspace(X). \] \end{theorem} Since the Johnson-Lindenstrauss transform is data-oblivious and the target dimension $d'$ in \Cref{thm:maxcut_jl_intro} is independent of $n$, one can easily apply to the image set $\pi(X)$ any known streaming algorithm for $\ProblemName{Max-Cut}\xspace$ in low dimension.
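To illustrate how such a transform is used in streaming, here is a minimal sketch (in Python; the constant in the target dimension is an illustrative assumption, and \texttt{low\_dim\_algo} is a hypothetical stand-in for any low-dimensional streaming \ProblemName{Max-Cut}\xspace algorithm):
\begin{verbatim}
import numpy as np

def make_jl_transform(d, eps, rng):
    # A sub-Gaussian JL transform: random Gaussian projection onto
    # d' = O(eps^-2 * log(1/eps)) dimensions (constant factor illustrative).
    d_prime = max(1, int(np.ceil(eps ** -2 * np.log(1.0 / eps))))
    G = rng.normal(size=(d_prime, d)) / np.sqrt(d_prime)
    return lambda x: G @ x  # data-oblivious: G is independent of the stream

rng = np.random.default_rng(0)
pi = make_jl_transform(d=1000, eps=0.25, rng=rng)
# for (op, x) in stream:               # op is 'insert' or 'delete'
#     low_dim_algo.update(op, pi(x))   # run any low-dimensional algorithm
\end{verbatim}
The matrix $G$ is chosen before the stream starts and is the only state the projection needs, so each update is processed independently of all others.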
By plugging in the $(1+\epsilon)$-approximation algorithm of~\cite{DBLP:conf/stoc/FrahlingS05}, we obtain a $(1 + \epsilon)$-approximation using space $\exp(\poly(\epsilon^{-1}))\poly(d \log\Delta)$, stated in \Cref{cor:cut_oracle_intro}. In fact, the algorithm of~\cite{DBLP:conf/stoc/FrahlingS05} reports not only an approximation to the \ProblemName{Max-Cut}\xspace value, but also an implicit representation of an approximately optimal cut, in the form of an oracle $f:\mathbb{R}^d \to \{0, 1\}$ that reports for every data point $x\in X$ its side in the cut. (Intuitively, one can think of $f:X \to \{0, 1\}$, but formally $f$ does not encode $X$ and its domain is all of $\mathbb{R}^d$.) We therefore obtain also this stronger guarantee, as stated next. \begin{corollary}[See \Cref{cor:cut_oracle}] \label{cor:cut_oracle_intro} There is a randomized streaming algorithm that, given $0 < \epsilon < 1/2$, integers $\Delta, d \geq 1$, and an input dataset $X \subseteq [\Delta]^d $ presented as a dynamic stream, uses space $\exp(\poly(\epsilon^{-1}))\poly(d \log \Delta)$ and reports (an encoding of) a mapping $f : \mathbb{R}^d \to \{0, 1\}$, such that with probability at least $2/3$, the cut $(X\cap f^{-1}(0), X\cap f^{-1}(1))$ has value that $(1 + \epsilon)$-approximates $\ProblemName{Max-Cut}\xspace(X)$. \end{corollary} Despite the generality of the dimension-reduction approach and its power to obtain $\poly(d\log\Delta)$-space algorithms, combining it with an $\exp(d)$-space algorithm inevitably leads to exponential dependence on $1/\epsilon$, as indeed happens in \Cref{cor:cut_oracle_intro}. We therefore consider next a different approach, and indeed obtain the desired polynomial dependence on $1/\epsilon$. \paragraph{Data Reduction via Importance Sampling} We present an algorithm that is based on the data-reduction approach, namely, it uses the dataset $X$ to construct a small instance $X'$ that has a similar \ProblemName{Max-Cut}\xspace value, then solves \ProblemName{Max-Cut}\xspace on it optimally and reports this value $\ProblemName{Max-Cut}\xspace(X')$. Following a common paradigm, $X'$ is actually a re-weighted subset of $X$, picked by non-uniform sampling from $X$, known as importance sampling. \begin{theorem}[Streaming $\ProblemName{Max-Cut}\xspace$ in $\ell_p$ Norm] \label{thm:streaming_intro} There is a randomized streaming algorithm that, given $0 < \epsilon < 1/2$, $p \geq 1$, integers $\Delta, d \geq 1$, and an input dataset $X \subseteq [\Delta]^d $ presented as a dynamic stream, uses space $\poly(\epsilon^{-1} d \log \Delta)$ and reports an estimate $\eta>0$ that, with probability at least ${2}/{3}$, is a $(1 + \epsilon)$-approximation to $\ProblemName{Max-Cut}\xspace(X)$ in $\ell_p$. \end{theorem} This data-reduction approach was previously used for several clustering problems. For $k$-median and related problems, such an instance $X'$ is often called a coreset, and there are many constructions, see e.g.~\cite{DBLP:conf/stoc/Har-PeledM04,DBLP:conf/stoc/FrahlingS05,DBLP:conf/stoc/FeldmanL11,BFLSY17,DBLP:journals/corr/abs-1802-00459,DBLP:journals/siamcomp/FeldmanSS20}. Earlier work~\cite{Schulman00} studied a problem that is closely related to \ProblemName{Max-Cut}\xspace and even used importance sampling, but it stops short of guaranteeing that $X$ and $X'$ have a similar \ProblemName{Max-Cut}\xspace value (see \Cref{sec:related} for a more detailed discussion).
Recently, importance sampling was used to design streaming algorithms for facility location in high dimension~\cite{CJKVW22}, although their sample $X'$ is not an instance of facility location. Overall, we cannot directly use this prior work, and we have to devise our own sampling distribution, prove that it preserves the \ProblemName{Max-Cut}\xspace value, and design a streaming algorithm that samples from this distribution. The approximation $1 + \epsilon$ in \Cref{thm:streaming_intro} is essentially the best one can hope for using small space, because finding the \ProblemName{Max-Cut}\xspace value exactly, even in one dimension, requires $\Omega(\Delta)$ space, as shown in \Cref{claim:lb_exact}. The advantage of \Cref{thm:streaming_intro} over \Cref{cor:cut_oracle_intro} is that its algorithm works for all $\ell_p$ norms ($p \geq 1$) and not only $\ell_2$; however, it no longer produces an implicit representation of a solution. We leave it as an open question to design streaming algorithms that find an implicit representation using space $\poly(\epsilon^{-1} d \log \Delta)$. \subsection{Technical Overview} \paragraph{Dimension Reduction} As mentioned, we make use of the standard Johnson-Lindenstrauss (JL) transform, and focus on analyzing the target dimension for which the \ProblemName{Max-Cut}\xspace value is preserved within factor $1 \pm \epsilon$. We actually prove a slightly stronger bound (in \Cref{lem:avgdist_jl}), that for every subset $S \subseteq X$, the cut value of $S$ is preserved with an additive error $\epsilon \cdot \sum_{x, y \in X} \|x - y\|_2$, which is at most $O(\epsilon) \cdot \ProblemName{Max-Cut}\xspace(X)$ by a standard bound. Conceptually, this guarantee is similar to that of dimension reduction for $k$-median clustering~\cite{MMR19}, where it was shown that target dimension $\poly(\epsilon^{-1}\log k)$ suffices to preserve the objective value for all $k$-partitions of the dataset. We cannot rely on this result in a black-box manner, because technically our objective is different and involves no center points; however, our analysis uses a crucial technical lemma from~\cite{MMR19}, which roughly says that if a random mapping (e.g., the JL Transform) preserves pairwise distances in some set $P$ in an average sense (i.e., for most pairs), then $P$ must contain a large (random) subset $P' \subseteq P$ in which the pairs that violate the distance guarantee are ``sparse'', meaning that for every point $x \in P'$, the number of points $y \in P'$ for which the pair $(x, y)$ is violated must be small. This argument turns an average-case distortion guarantee, which is what the JL Transform provides at a small target dimension, into a more structured ``everywhere-sparse'' one. This structural guarantee turns out to be very useful for our analysis. Roughly speaking, the main concern is that the distances to a few distant points are distorted, and these distances contribute a lot to the \ProblemName{Max-Cut}\xspace value. However, our analysis shows this cannot happen when the violations are ``everywhere-sparse''. Finally, we remark that our analysis only leads to a universal additive error $\epsilon \cdot \sum_{x, y \in X} \|x - y\|_2$, which is enough for \ProblemName{Max-Cut}\xspace, but it does not guarantee $(1\pm\epsilon)$-multiplicative error simultaneously for all the cuts, because some cuts must be relatively small; for example, a cut that separates a single pair of nearby points from the rest has value close to $0$ and is overwhelmed by the additive error. It is an interesting open question to explore the target dimension for which the JL Transform preserves all the cuts.
\paragraph{Data Reduction} In order to estimate \ProblemName{Max-Cut}\xspace using importance sampling, we must first identify a sampling distribution for which $\ProblemName{Max-Cut}\xspace(X')$ indeed approximates $\ProblemName{Max-Cut}\xspace(X)$, and then we have to design a streaming algorithm that samples from this distribution. \paragraph{Data Reduction: Sampling Probability} One indication that \ProblemName{Max-Cut}\xspace admits data reduction by sampling comes from the setting of dense unweighted graphs, where it is known that \ProblemName{Max-Cut}\xspace can be $(1+\epsilon)$-approximated using a \emph{uniform sample} of $O(\epsilon^{-4})$ vertices, namely, by taking the \ProblemName{Max-Cut}\xspace value in the induced subgraph and scaling appropriately~\cite{DBLP:journals/jcss/AlonVKK03,DBLP:journals/jacm/RudelsonV07} (improving over \cite{GGR96}). However, sampling points uniformly clearly cannot work in the metric case -- if a point set $X$ has many co-located points and a few distant points that are also far from each other, then uniform sampling from $X$ is unlikely to pick the distant points, which have a crucial contribution to the \ProblemName{Max-Cut}\xspace value. It is therefore natural to employ importance sampling, i.e., sample each point with probability proportional to its contribution. The contribution of a point $x$ to the \ProblemName{Max-Cut}\xspace value is difficult to gauge, but we can use instead a simple proxy -- its total distance to all other points $q(x) := \sum_{y \in X} \dist(x, y)$, which is just its contribution to (twice) the total edge weight $\sum_{x,y\in X} \dist(x, y)$. This works well because such an estimate will (hopefully) have additive error $\epsilon \sum_{x,y\in X} \dist(x, y) = \Theta(\epsilon) \cdot\ProblemName{Max-Cut}\xspace(X)$. This analysis is straightforward for any fixed cut in $X$, say a maximum one; however, proving that the sampling preserves the \ProblemName{Max-Cut}\xspace value is much more challenging, as one has to argue about all possible cuts. We show in \Cref{thm:sampling} that $O(\epsilon^{-4})$ independent samples generated with probability proportional to $q(x)$ preserve the \ProblemName{Max-Cut}\xspace value. This holds even if the sampling probabilities are dampened by a factor $\lambda \geq 1$, at the cost of increasing the number of samples by a $\poly(\lambda)$ factor. To be more precise, it suffices to sample from any probability distribution $\{p_x:\ x\in X\}$, where $p_x \ge \frac{1}{\lambda} \frac{q(x)}{\sum_{y\in X} q(y)}$ for all $x\in X$. A small technicality is that we require the sampling procedure to report a random sample $x^*$ together with its corresponding $p_{x^*}$, in order to re-weight the sample $x^*$ by a factor of $1/p_{x^*}$, but often this extra information can be obtained using the same methods. This sampling distribution, i.e., probabilities proportional to $\{q(x):\ x\in X\}$, can be traced to two prior works. Schulman~\cite{Schulman00} used essentially the same probabilities, but his analysis works only for one fixed cut, and extends to a few cuts by a union bound.\footnote{We gloss over slight technical differences, e.g., he deals with squared Euclidean distances, and his sampling and re-weighting processes are slightly different. } The exact same probabilities were also used by~\cite{VK01} as weights to convert a metric instance to a dense unweighted graph, and thereby obtain a PTAS (without sampling or any data reduction).
In fact, our proof combines several observations from~\cite{VK01} with the uniform sampling of size $O(\epsilon^{-4})$ in unweighted graphs~\cite{DBLP:journals/jcss/AlonVKK03,DBLP:journals/jacm/RudelsonV07}. In a nutshell, we relate sampling from $X$ proportionally to $\{q(x):\ x\in X\}$ with sampling uniformly from a certain dense unweighted graph, whose cut values correspond to those in $X$, and we thereby derive a bound on the \ProblemName{Max-Cut}\xspace value of the sample $X'$. \paragraph{Data Reduction: Streaming Implementation} Designing a procedure to sample proportionally to $\{q(x):\ x\in X\}$, when $X$ is presented as a stream, is much more challenging and is our main technical contribution (\Cref{lem:importance_sampling_algorithm}). The main difficulty is that standard tools for sampling from a stream, such as $\ell_p$-samplers \cite{MW10,JST11,JW21}, are based on the frequency vector and oblivious to the geometric structure of $X$. Indeed, the literature lacks geometric samplers, which can be very useful when designing geometric streaming algorithms. Our goal of sampling proportionally to $q(x)$, which is the total distance to all other points in $X$, seems like a fundamental geometric primitive, and therefore our sampler (\Cref{lem:importance_sampling_algorithm}) is a significant addition to the geometric-streaming toolbox, and may be of independent interest. \paragraph{High-Level Picture} At a high level, our sampling procedure is based on a randomly-shifted quadtree $T$ that is defined on the entire input domain $[\Delta]^d$ and captures its geometry. This randomly-shifted quadtree $T$ is data-oblivious, and thus can be picked (constructed implicitly) even before the stream starts (as an initialization step), using space $O(\poly(d \log \Delta))$. This technique was introduced by Indyk~\cite{Indyk04}, who noted that the quadtree essentially defines a tree embedding with an expected distortion of distances $O(d\log\Delta)$. Unfortunately, the distortion is fundamental to this approach, and directly affects the accuracy (namely, goes into the approximation ratio) of streaming algorithms that use this technique~\cite{Indyk04,DBLP:conf/stoc/ChenJLW22}.\footnote{A randomly-shifted quadtree was used also in Arora's approximation algorithm for TSP~\cite{Arora98}, but as the basis for dynamic programming rather than as a tree embedding, and similarly in streaming applications of this technique~\cite{DBLP:conf/icalp/CzumajJKV22}. } While our approach still suffers from this distortion, we can avoid its effect on the accuracy, as it affects only the importance-sampling distribution, namely, the dampening factor $\lambda$ grows by a factor of $O(d\log\Delta)$, which can be compensated for by drawing more samples. This increases the space complexity moderately, which we can afford, and overall leads to $(1 + \epsilon)$-approximation using space $\poly(\epsilon^{-1} d \log \Delta)$. \paragraph{Comparison to Other Sampling Approaches} A different importance-sampling method was recently designed in~\cite{CJKVW22} for facility location in high dimension. Their sampling procedure relies on a geometric hashing (space partitioning), and completely avoids a quadtree.\footnote{A key parameter in that hashing, called consistency, turns out to be $\poly(d)$, and unfortunately affects also the final approximation ratio for facility location, and not only the importance-sampling parameter $\lambda$ (and thus the space).
} Our sampling technique may be more easily applicable to other problems, as it uses the more standard and general tool of a randomly-shifted quadtree. Another importance-sampling method was recently designed in~\cite{MahabadiRWZ20} in the context of matrix streams. Geometrically, their importance sampling can be viewed as picking a point (a row of the matrix) proportionally to its length, but after the points are projected, at query time, onto a subspace. This sampling procedure is very effective for linear-algebraic problems, like column-subset selection and subspace approximation, which can be viewed also as geometric problems. Previously, quadtrees were employed in geometric sampling results, but mostly as a fixed quadtree rather than a randomly-shifted one, and in order to perform \emph{uniform sampling} over its non-empty squares at each level, as opposed to our importance sampling. For instance, in~\cite{DBLP:conf/stoc/FrahlingS05,BFLSY17,DBLP:journals/corr/abs-1802-00459,DBLP:conf/icalp/CzumajJKV22}, a heavy-hitters approach was used to discover ``heavy'' squares, and this boils down to uniform sampling. And in~\cite{DBLP:journals/ijcga/FrahlingIS08}, uniform sampling was augmented to report also all the points near the sampled points, and this was used to estimate the number of connected components in a metric threshold graph. \paragraph{Sampling on Quadtree: Bypassing the Distortion} Finally, we discuss how to perform the importance sampling, based on a randomly-shifted quadtree $T$, but without the $O(d\log\Delta)$ distortion going into the approximation ratio. As explained above, our importance sampling (\Cref{thm:sampling}) requires that every point $x\in X$ is sampled with some probability $p_x \geq \frac{1}{\lambda} \frac{q(x)}{Q}$ for $Q := \sum_{y\in X} q(y)$, and the dampening factor $\lambda$ can be up to $O(\poly(d \log \Delta))$. We will exploit the fact that there is no upper bound on the probability $p_x$. As usual, a randomly-shifted quadtree $T$ is obtained by recursively partitioning the grid $[2\Delta]^d$ into $2^d$ equal-size sub-grids, which we call squares, and building a tree whose nodes are all these squares, where every edge between a square and its parent has weight equal to that square's Euclidean diameter. Furthermore, this entire partitioning is shifted by a uniformly random $v_{\textrm{shift}}\in [\Delta]^d$. (See \Cref{sec:proof_importance_sampling} for details.) The key is that every grid point $x\in [\Delta]^d$ is represented by a tree leaf, and thus $T$ defines distances on $[\Delta]^d$, also called a tree embedding. Define $q_T$ and $Q_T$ analogously to $q$ and $Q$, but using the \emph{tree distance}, that is, $q_T(x)$ is the total tree distance to all other points, and $Q_T := \sum_{y\in X} q_T(y)$. The tree embedding guarantees, for all $x\in[\Delta]^d$, that $q_T(x)$ always overestimates $q(x)$ (with probability $1$), and that $\E_T[q_T(x)] \leq O(d \log \Delta) \cdot q(x)$. It thus suffices to have one event, $Q_T \leq O(d \log \Delta) \cdot Q$, which happens with constant probability by Markov's inequality, and then we would have $\frac{q_T(x)}{Q_T} \geq \frac{1}{O(d \log \Delta)} \cdot \frac{q(x)}{Q}$ simultaneously for all $x\in X$. As mentioned earlier, the algorithm picks (constructs implicitly) $T$ even before the stream starts, and the analysis assumes that this good event happens. In fact, the rest of the sampling procedure works correctly for arbitrary $T$, i.e., regardless of that event.
We stress, however, that our final algorithm actually needs to generate multiple samples that are picked independently from the same distribution, and this is easily achieved by parallel executions that share their random tree $T$ but use independent coins in all later steps. We can thus view picking $T$ as a preprocessing step that is run only once, even for independent samples. \paragraph{Two-level Sampling Using the Tree Structure} The remaining challenge is to sample (in a streaming fashion) with probability proportional to $q_T$ for a fixed quadtree $T$. To this end, we make heavy use of the structure of the quadtree. In particular, the quadtree $T$ has $O(d \log \Delta)$ levels, and every data point $x\in X$ forms a distinct leaf in $T$. Clearly, each internal node in $T$ represents a subset of $X$ (all its descendants in $T$, that is, the points of $X$ inside its square). In each level $i$ (the root node has level $1$), we identify a level-$i$ \emph{heavy node} $h_i$ that contains the maximum number of points from $X$ (breaking ties arbitrarily). We further identify a \emph{critical level} $k$, such that $h_1, \ldots, h_k$ (viewed as subsets of $X$) all contain more than $0.5 |X|$ points, but $h_{k+1}$ does not. This clearly ensures that $h_1, \ldots, h_k$ forms a path. Let $X(h_i) \subseteq X$ denote the subset of $X$ represented by node $h_i$. The tree structure and the path property of $(h_1, \ldots, h_k)$ guarantee that, for each $i < k$, points $x\in X(h_{i}) \setminus X(h_{i + 1})$ have roughly the same $q_T(x)$ values. For the boundary case $i = k$, a similar claim holds for $x \in X(h_{k})$. Hence, a natural algorithm is to employ two-level sampling: first draw a level $i^* \leq k$, then draw a uniform sample from $X(h_{i^*}) \setminus X(h_{i^* + 1})$. The distribution from which $i^*$ is drawn also requires careful design, but we omit this detail from the current overview. Instead, we focus on how to draw a uniform sample from $X(h_i) \setminus X(h_{i + 1})$. Unfortunately, sampling from $X(h_i) \setminus X(h_{i+1})$ still requires $\Omega(\Delta)$ space in the streaming model, even for $d = 1$. (We prove this via a reduction from INDEX in \Cref{claim:lb_sampling_light}.) In fact, it is even hard to determine whether $X(h_i) \setminus X(h_{i + 1}) = \emptyset$; the difficulty is that $h_i$ is not known in advance; otherwise, the task would be easy. We therefore relax the two-level sampling, and replace sampling from $X(h_i) \setminus X(h_{i + 1})$ with sampling from its superset $X \setminus X(h_{i + 1})$. This certainly biases the sampling probability a bit; in fact, some probabilities might change by an unbounded factor (\Cref{rem:ChangeOrderOfSum}). Nevertheless, we prove that the increase in the dampening parameter $\lambda$ is bounded by an $O(\log\Delta)$ factor (\Cref{lem:prob}), which we can still afford. This part crucially uses the tree structure and the path property of $(h_1, \ldots, h_k)$. \paragraph{Sampling from Light Parts} The final remaining step is to sample from $X \setminus X(h_{i})$ for a given $i\leq k$ (\Cref{lemma:streaming_light}). We can assume here that $X(h_i)$ contains more than $0.5 |X|$ points, and thus $X \setminus X(h_i)$ is indeed the ``light'' part, containing few (and possibly no) points. To let the light points ``stand out'' in sampling, we hash the \emph{nodes} (instead of the points) randomly into two buckets. Since $|X(h_i)| > 0.5 |X|$, the heavy node $h_i$ always lies in the bucket that contains more points, and we therefore sample only from the light bucket.
(To implement this in streaming, we actually generate two samples, one from each bucket, and in parallel estimate the two buckets' sizes to know which of the two samples to use.) Typically, the light bucket contains at least half of the points from $X \setminus X(h_i)$, which is enough. Overall, this yields a sampling procedure that uses space $\poly(\epsilon^{-1}d \log \Delta)$. \subsection{Related Work} \label{sec:related} Geometric streaming in the low-dimension regime was studied much more extensively than the high-dimensional case that we investigate here. In~\cite{DBLP:conf/stoc/FrahlingS05}, apart from the $O(\epsilon^{-d}\poly\log \Delta)$-space $(1 + \epsilon)$-approximation for \ProblemName{Max-Cut}\xspace that we already mentioned, similar results were obtained also for $k$-median, maximum spanning tree, maximum matching and similar maximization problems. For minimum spanning tree, a $(1 + \epsilon)$-approximation $O((\epsilon^{-1}\log\Delta)^d)$-space algorithm was devised in \cite{DBLP:journals/ijcga/FrahlingIS08}, alongside several useful techniques for geometric sampling in low dimension. In~\cite{DBLP:conf/focs/AndoniBIW09}, an $O(\epsilon^{-1})$-approximation $\tilde{O}(\Delta^{\epsilon})$-space streaming algorithm was obtained for computing earth-mover distance in dimension $d = 2$. For facility location in dimension $d=2$, a $(1 + \epsilon)$-approximation $\poly(\epsilon^{-1}\log \Delta)$-space algorithm was designed in~\cite{DBLP:conf/soda/CzumajLMS13}. Recently, Steiner forest (a generalization of Steiner tree, which asks to find a minimum-weight graph that connects $k$ groups of points) was studied in~\cite{CJKVW22} for dimension $d=2$, and they obtain $O(1)$-approximation using space $\poly(k \log \Delta)$. Our sampling distribution may seem reminiscent of an importance sampling procedure devised by Schulman~\cite{Schulman00}; however, his results and techniques are not useful in our context. First, the problem formulation differs, as it approximates the complement objective, the sum of distances inside each cluster (which is only a stronger guarantee than our \ProblemName{Max-Cut}\xspace objective), and the objective function sums squared Euclidean (rather than Euclidean) distances. Second, his algorithm is for the offline setting and is not directly applicable in streaming. Third, his analysis does not provide a guarantee on $\ProblemName{Max-Cut}\xspace(X')$, but rather only on a certain subset of the cuts of $X'$, and his approximation guarantee includes a non-standard twist. \section{Preliminaries} \label{sec:prelim} Consider a metric space $(V, \dist)$. The Euclidean case is $V = \mathbb{R}^d$ and $\dist = \ell_2$, and the $\ell_p$ case is $V = \mathbb{R}^d$ and $\dist = \ell_p$. For $X \subseteq V$, the cut function $\cut_X : 2^X \to \mathbb{R}$ is defined as $\cut_X(S) := \sum_{x\in S, y \in X \setminus S} \dist(x, y)$. The \ProblemName{Max-Cut}\xspace value of a dataset $X \subseteq V$ is defined as \[ \ProblemName{Max-Cut}\xspace(X) := \max_{S \subseteq X}{\cut_X(S)}. \] We shall use the following standard tools as building blocks in our algorithms. Recall that a turnstile (or dynamic) stream can contain insertions and deletions of items from some domain $[N]$, and it naturally defines a frequency vector $x\in\mathbb{R}^N$, where every possible item has a coordinate that counts its net frequency, i.e., insertions minus deletions. In general, this model allows frequencies to be negative.
However, in our setting, where $X\subset [\Delta]^d$ is presented as a dynamic geometric stream, the frequency vector has $\Delta^d$ coordinates, all in the range $\{0,1\}$, and is just the incidence vector of $X$. \begin{lemma}[$\ell_0$-Norm Estimator~\cite{DBLP:conf/pods/KaneNW10}] \label{lem:l0estimator} There exists a streaming algorithm that, given $0 < \epsilon, \delta< 1$, integers $N, M \geq 1$, and a frequency vector $x \in [-M, M]^N$ presented as a turnstile stream, where we denote its support by $X := \{ i \in [N] :\ x_i \neq 0 \}$, uses space $\poly(\epsilon^{-1}\log(\delta^{-1} MN))$ to return $r^* \geq 0$, such that $\Pr[ r^* \in (1 \pm \epsilon) |X| ] \geq 1-\delta$. \end{lemma} \begin{lemma}[$\ell_0$-Sampler~\cite{JST11}] \label{lem:l0sampler} There exists a streaming algorithm that, given $0 < \delta< 1$, integers $N, M \geq 1$, and a frequency vector $x \in [-M, M]^N$ presented as a turnstile stream, where we assume that its support $X := \{ i \in [N] : x_i \neq 0 \}$ is non-empty, uses space $\poly\log(\delta^{-1} MN)$ to return a sample $i^* \in X \cup \{\perp\}$, such that with probability at least $ 1 - \delta$, \[ \forall i \in X, \qquad \Pr[i^* = i] = \frac{1}{|X|}. \] \end{lemma} \section{Dimension Reduction for \ProblemName{Max-Cut}\xspace} \label{sec:jl} Our dimension reduction for \ProblemName{Max-Cut}\xspace, stated in \Cref{thm:maxcut_jl}, is based on a recent work~\cite{MMR19} that gave, for clustering problems, a dimension reduction using a Johnson-Lindenstrauss (JL) transform whose target dimension is independent of $n$. Similarly to~\cite{MMR19}, we need the following properties (\Cref{def:jl}) of a version of the JL Transform~\cite{JL84}, which may be realized by a ``sub-Gaussian'' type of random projection. \begin{definition}[JL Transform~\cite{JL84,MMR19}] \label{def:jl} For all integers $d, d' \geq 1$, there exists a randomized mapping $\pi : \mathbb{R}^d \to \mathbb{R}^{d'}$ such that for all $x \neq y \in \mathbb{R}^d$, \begin{align*} \Pr_\pi & \Big[\dist(\pi(x),\pi(y)) \not\in (1 \pm \epsilon) \cdot \dist(x, y) \Big] \le e^{-C d' \varepsilon^2} \\ \E_\pi & \Big[\max\Big(\frac{\dist(\pi(x), \pi(y))}{\dist(x, y)} - (1 + \varepsilon), 0\Big) \Big] \le e^{-C d' \varepsilon^2}, \end{align*} for some universal constant $C > 0$. \end{definition} \begin{theorem} \label{thm:maxcut_jl} Let $X \subset \mathbb R^d$ be a finite set, $0 < \epsilon, \delta < 1$ and $\pi : \mathbb R^d \to \mathbb R^{d'}$ be a JL Transform (as in \Cref{def:jl}) with target dimension $d' = O\left( \varepsilon^{-2} {\log\left(\frac{1}{\delta \varepsilon}\right)} \right) $. Then with probability $1 - \delta$, \begin{equation} \label{eqn:maxcut_presesrved} \ProblemName{Max-Cut}\xspace(\pi(X)) \in (1 \pm \epsilon)\cdot \ProblemName{Max-Cut}\xspace(X). \end{equation} \end{theorem} \begin{definition} A (directed) graph $H = (V,E)$ is called $\theta$-\emph{sparse} if $|E| \le \theta |V|^2$, and called \emph{everywhere $\theta$-sparse} if $\deg(u) \le \theta|V|$ for every $u \in V$, where $\deg(u)$ is the out-degree of $u$. Empty graphs are (everywhere) $\theta$-sparse for every $\theta \geq 0$. \end{definition} \begin{definition} For two metric spaces $(X, \dist_X)$ and $(Y, \dist_Y)$ and a mapping $\varphi : X \to Y$, the \emph{expansion graph} of $\varphi$ is the graph $H(X, E)$ where \[ E := \big\{ (x, y) : x, y \in X, \ \dist_Y(\varphi(x), \varphi(y)) > (1 + \varepsilon) \dist_X(x, y) \big\}.
\] Similarly, the \emph{distortion graph} of $\varphi$ is the graph $H'(X, E')$ where \[ E' := \big\{ (x, y) : x, y \in X, \ |\dist_Y(\varphi(x), \varphi(y)) - \dist_X(x, y)| > \varepsilon \dist_X(x, y) \big\}. \] \end{definition} \begin{lemma} \label{lem:sparsity_imply_approx} Suppose $(X, \dist_X)$, $(Y, \dist_Y)$ are metric spaces and $\pi : X \to Y$ is a mapping. If the distortion graph $H(X, E)$ of $\pi$ is everywhere $\theta$-sparse ($\theta \leq \frac{1}{4}$), then \begin{equation} \label{eqn:sparsity_approx} \frac{1}{1 + \varepsilon + 8 \theta} \sum_{u, v \in X} \dist_X(u, v) \leq \sum_{u,v \in X} \dist_Y(\pi(u), \pi(v)) \leq (1 + \varepsilon + 8 \theta) \sum_{u, v \in X} \dist_X(u, v). \end{equation} \end{lemma} \begin{proof} It suffices to show the following claim, which only uses the expansion property of $H$ and asserts the right half of \eqref{eqn:sparsity_approx}. Indeed, applying this claim to both $\pi$ and $\pi^{-1}$ (whose expansion graph is defined on $\pi(X)$) implies \Cref{lem:sparsity_imply_approx}, which finishes the proof. (Here we treat different images of $\pi$ in the same coordinates as different points; thus $\pi^{-1}$ is well-defined and its expansion graph is clearly everywhere $\theta$-sparse.) \begin{claim*} If the \emph{expansion} graph $H(X, E)$ of $\varphi : X \to Y$ is everywhere $\theta$-sparse, then the right half of \eqref{eqn:sparsity_approx} holds, i.e., \[ \sum_{u,v \in X} \dist_Y(\varphi(u), \varphi(v)) \leq (1 + \varepsilon + 8 \theta) \sum_{u, v \in X} \dist_X(u, v). \] \end{claim*} We focus on proving this claim. Let $\Gamma_x$ denote the neighborhood of $x$ in $H$. Fix any $u,v \in X$ and any $x \in I_{u,v} := X \backslash (\Gamma_u \cup \Gamma_v)$. By the everywhere-sparsity assumption, we know that $|I_{u,v}| \ge (1 - 2\theta)|X|$. Hence, \begin{align*} \dist_Y(\varphi(u), \varphi(v)) \le &\dist_Y(\varphi(u), \varphi(x)) + \dist_Y(\varphi(x), \varphi(v))\\ \le &(1 + \varepsilon) (\dist_X(u,x) + \dist_X(x,v))\\ \le &(1 + \varepsilon) (\dist_X(u,v) + 2\dist_X(x,v)). \end{align*} By the symmetry between $u,v$, also \[ \dist_Y(\varphi(u), \varphi(v)) \le (1 + \varepsilon) (\dist_X(u,v) + 2\dist_X(x,u)). \] Taking the average, \[ \dist_Y(\varphi(u), \varphi(v)) \le (1 + \varepsilon) (\dist_X(u,v) + \dist_X(x,u) + \dist_X(x,v)). \] Now, averaging over all $x \in I_{u,v}$, \[ \dist_Y(\varphi(u), \varphi(v)) \le (1 + \varepsilon) \dist_X(u,v) + \frac{1 + \varepsilon}{(1 - 2\theta)|X|} \sum_{x \in I_{u,v}} (\dist_X(x,u) + \dist_X(x,v)). \] Therefore, \begin{align*} & \sum_{u,v \in X} \dist_Y(\varphi(u), \varphi(v)) \\ = & \sum_{(u,v) \notin E} \dist_Y(\varphi(u), \varphi(v)) + \sum_{(u,v) \in E} \dist_Y(\varphi(u), \varphi(v))\\ \le & (1 + \varepsilon) \sum_{u,v \in X} \dist_X(u,v) + \frac{1 + \varepsilon}{(1 - 2\theta)|X|} \sum_{(u,v) \in E} \sum_{x \in I_{u,v}} (\dist_X(x,u) + \dist_X(x,v))\\ \le & (1 + \varepsilon) \sum_{u,v \in X} \dist_X(u,v) + \frac{1 + \varepsilon}{(1 - 2\theta)|X|} \sum_{(u,v) \in E} \left(\sum_{x \in X \backslash \Gamma_u} \dist_X(x,u) + \sum_{x \in X \backslash \Gamma_v} \dist_X(x,v) \right)\\ = & (1 + \varepsilon) \sum_{u,v \in X} \dist_X(u,v) + \frac{1 + \varepsilon}{(1 - 2\theta)} \sum_{u \in X} \frac{2\deg(u)}{|X|}\sum_{x \in X \backslash \Gamma_u} \dist_X(x,u)\\ \le & \left(1 + \varepsilon + \frac{2\theta(1 + \varepsilon)}{(1 - 2\theta)}\right) \sum_{u,v \in X} \dist_X(u, v)\\ \le & (1 + \varepsilon + 8 \theta) \sum_{u,v \in X} \dist_X(u, v).
\qedhere \end{align*} \end{proof} \begin{lemma}[{\cite[Theorem 3.2]{MMR19}}] \label{lem:sparse_subgraph} Let $X$ be a finite set and $H = (V \subseteq X,E)$ be a random graph, where $V$ is a random subset of $X$ and $E$ is a random set of edges between vertices in $V$ (and they may be dependent). Let $\theta \in \left(0,\frac 1 2\right)$, and assume that for some $\delta \leq \frac{\theta^8}{600}$, $\Pr[(x,y) \in E] \le \delta$ for every $x,y \in X$ (if $x \notin V$ or $y \notin V$ then $(x,y) \notin E$). Then, there exists a random subset $V' \subseteq V$ (defined on the same probability space as $H$), such that \begin{itemize} \item $H[V']$ is everywhere $\theta$-sparse, \item $\Pr[u \in V \setminus V'] \le \theta$ for all $u \in X$. \end{itemize} \end{lemma} \begin{lemma} \label{lem:avgdist_jl} Let $X \subset \mathbb R^d$ be a finite set and $\pi : \mathbb R^d \to \mathbb R^{d'}$ be a JL Transform where $d' = O\left( \varepsilon^{-2} {\log\left(\frac{1}{\delta \varepsilon}\right)} \right) $ as in \Cref{def:jl}. Then with probability $1 - \delta$, \begin{equation} \label{avgdist_preserved} \max_{S \subseteq X}\left|\sum_{x,y \in S} \dist(\pi(x),\pi(y)) - \sum_{x,y \in S} \dist(x, y)\right| \le \varepsilon \sum_{x,y \in X} \dist(x, y). \end{equation} \end{lemma} \begin{proof} Let $V$ be a maximizer of the left-hand side of \eqref{avgdist_preserved}; then it suffices to show \eqref{avgdist_preserved} for $S = V$, i.e., \[ \left|\sum_{x,y \in V} \dist(\pi(x),\pi(y)) - \sum_{x,y \in V} \dist(x, y)\right| \le \varepsilon \sum_{x,y \in X} \dist(x, y). \] In fact, we do not use the optimality of $V$, and our argument holds for an arbitrary $V$ that may depend on the randomness $\pi$. Let $\theta := \frac{1}{600}e^{-C d' \varepsilon^2 / 7}= O \left(\varepsilon \delta \right)$ as in \Cref{def:jl}. Apply \Cref{lem:sparse_subgraph} with the set $X$ and the distortion graph of $\pi$ restricted to $V$ to get the random subset $V'$. Applying \Cref{lem:sparsity_imply_approx} with $\pi$ on $V'$, we have: \[ \frac{1}{1 + \varepsilon + 8 \theta} \sum_{u, v \in V'} \dist(u, v) \le \sum_{u,v \in V'} \dist(\pi(u), \pi(v)) \le (1 + \varepsilon + 8 \theta) \sum_{u, v \in V'} \dist(u, v). \] Thus, \begin{align} &\Big| \sum_{u,v \in V} \dist(\pi(u), \pi(v)) - \sum_{u,v \in V} \dist(u, v)\Big| \\ \le\, & (\varepsilon + 8\theta) \sum_{u,v \in V'} \dist(u, v) + 2\ \Big|\sum_{u \in V \backslash V', v \in V} \dist(\pi(u), \pi(v)) - \dist(u,v)\Big| \nonumber \\ \le\, & O(\varepsilon) \sum_{u,v \in V'} \dist(u, v) + 2\sum_{u \in V \backslash V'} \sum_{v \in V} \dist(\pi(u), \pi(v)) + \dist(u,v).\label{avgdist_jl_formula1} \end{align} To bound the last term $T = \sum_{u \in V \backslash V'} \sum_{v \in V} \dist(\pi(u), \pi(v)) + \dist(u,v) $, we first bound its expectation \begin{align} \E \left[T \right] \le\,& \E \left[\sum_{u \in V \backslash V'} \sum_{v \in V} \max\left(\dist(\pi(u), \pi(v)) - (1 + \varepsilon)\dist(u,v), 0\right)\right] + (2 + \varepsilon) \E \left[\sum_{u \in V \backslash V'} \sum_{v \in V} \dist(u,v)\right]\\ \le\,& \sum_{u \in X} \sum_{v \in X} \E \left[\max\left(\dist(\pi(u), \pi(v)) - (1 + \varepsilon)\dist(u,v), 0\right)\right] + (2 + \varepsilon) \sum_{u \in X} \Pr[u \in V \backslash V'] \sum_{v \in X} \dist(u,v)\\ \le\,& 4\theta \sum_{u \in X} \sum_{v \in X} \dist(u,v). \end{align} By Markov's inequality, with probability $1 - \delta$, $T$ is at most a $\frac{4\theta}{\delta} = O(\varepsilon)$ fraction of $\sum_{u \in X} \sum_{v \in X} \dist(u,v)$.
We conclude the proof by combining this with (\ref{avgdist_jl_formula1}) (with a proper rescaling of $\varepsilon$). \end{proof} \begin{proof}[Proof of \Cref{thm:maxcut_jl}] By \Cref{lem:avgdist_jl}, we know that \begin{equation} \label{eqn:allcuts_preserved} \forall S \subseteq X,\qquad \left|\sum_{x,y \in S} \dist(\pi(x),\pi(y)) - \sum_{x,y \in S} \dist(x,y)\right| \le \varepsilon \sum_{x,y \in X} \dist(x, y). \end{equation} Define $\vol(S) := \sum_{x, y \in S}\dist(x, y)$; then $2\cut_X(S) = \vol(X) - \vol(S) - \vol(X \setminus S)$. Hence, \eqref{eqn:allcuts_preserved} may be written as \[ \forall S \subseteq X, \qquad |\vol(S) - \vol(\pi(S))| \leq \epsilon \cdot \vol(X). \] By \eqref{eqn:allcuts_preserved}, we have for every $S \subseteq X$, \begin{align*} & \ 2| \cut_X(S) - \cut_{\pi(X)}(\pi(S))| \\ = & \ |\vol(X) - \vol(S) - \vol(X \setminus S) - ( \vol(\pi(X)) - \vol(\pi(S)) - \vol(\pi(X \setminus S)))| \\ \leq & \ |\vol(X) - \vol(\pi(X))| + |\vol(S) - \vol(\pi(S))| + |\vol(X \setminus S) - \vol(\pi(X \setminus S))| \\ \leq & \ 3\epsilon \vol(X). \end{align*} Hence, $\pi$ preserves the cut value for every subset, with additive error $O(\epsilon) \cdot \vol(X) \leq O(\epsilon) \cdot \OPT$, where the last inequality follows from the standard fact that $\OPT = \Omega(\vol(X))$. This finishes the proof of \Cref{thm:maxcut_jl}. \end{proof} \Cref{thm:maxcut_jl} implies the corollary below about a streaming algorithm that reports an encoding of a near-optimal cut (and not just its value). The most natural way to report a cut of $X$ is to somehow represent a $2$-partition of $X$, but this is not possible because such a representation contains $X$ itself, which requires $\Omega(n)$ bits to store. Instead, we let the algorithm report a function $f : \mathbb{R}^d \to \{0, 1\}$ (using some encoding), and then $f$ implicitly defines the cut $(X\cap f^{-1}(0) , X\cap f^{-1}(1))$. In other words, the algorithm essentially reports an ``oracle'' that does not know $X$, but can determine, for each input point $x\in X$, its side in the cut. This formulation was suggested by~\cite{DBLP:conf/stoc/FrahlingS05}, and in fact we rely on their solution and combine it with our dimension reduction. \begin{corollary}[Cut Oracle] \label{cor:cut_oracle} There is a randomized streaming algorithm that, given $0 < \epsilon < 1/2$, integers $\Delta, d \geq 1$, and an input dataset $X \subseteq [\Delta]^d $ presented as a dynamic stream, uses space $\exp(\poly(\epsilon^{-1}))\poly(d \log \Delta)$ and reports (an encoding of) a mapping $f : \mathbb{R}^d \to \{0, 1\}$, such that with constant probability (at least $2 / 3$), $\cut_X(X\cap f^{-1}(0)) \geq (1 - \epsilon) \cdot \ProblemName{Max-Cut}\xspace(X)$. \end{corollary} \begin{proof} As noted in~\cite{DBLP:conf/stoc/FrahlingS05}, there exists an algorithm $\mathcal{A}$ that finds an $f$ with the same guarantee and failure probability, except that the space usage is $\epsilon^{-O(d)} \cdot \poly(\log \Delta)$. Hence, we can use this $\mathcal{A}$ as a black box together with \Cref{thm:maxcut_jl} to conclude the corollary. Specifically, let $\pi : \mathbb{R}^d \to \mathbb{R}^{d'}$ with $d' = O(\epsilon^{-2}\log(\epsilon^{-1}))$ be a mapping that satisfies \Cref{thm:maxcut_jl}. Then, for every update of point $x \in [\Delta]^d$ in the stream, we map it to $\pi(x)$ and feed it to $\mathcal{A}$. When the stream terminates, we use $\mathcal{A}$ to compute an $f' : \mathbb{R}^{d'} \to \{0, 1\}$.
Then, the final $f : \mathbb{R}^d \to \{0, 1\}$ is defined as $f(x) := f'(\pi(x))$. This finishes the proof. \end{proof} \section{Approximating \ProblemName{Max-Cut}\xspace by Importance Sampling} \label{sec:sampling} In this section, we consider a general metric space $(V, \dist)$ (which includes Euclidean spaces by setting $V = \mathbb{R}^d$ and $\dist = \ell_2$), and show that a small importance sample $S$ from a dataset $X \subseteq V$ may be used to estimate $\ProblemName{Max-Cut}\xspace(X)$ by simply computing $\ProblemName{Max-Cut}\xspace(S)$. \paragraph{Point-weighted Set~\cite{VK01}} Since we apply importance sampling (as opposed to using a uniform probability for every point), the sampled points need to be re-weighted so that the estimation is unbiased. Hence, we consider the notion of \emph{point-weighted} sets. This notion was first considered in~\cite{VK01} to reduce the metric \ProblemName{Max-Cut}\xspace to \ProblemName{Max-Cut}\xspace in (dense) weighted graphs. Specifically, a point-weighted set $S \subseteq V$ is a subset of $V$ that is associated with a point-weight function $w_S : S \to \mathbb{R}_+$. For a point-weighted set $S$, the distance $\dist_S(x, y)$ between $x, y \in S$ is also re-weighted such that $\dist_S(x, y) := \frac{\dist(x, y)}{w_S(x) w_S(y)}$. Under this weighting, when an edge $\{x, y\}$ appears in a cut, its contribution is still accounted for as $w_S(x) \cdot w_S(y) \cdot \dist_S(x, y) = \dist(x, y)$. We prove (in \Cref{thm:sampling}) that for every dataset $X \subseteq V$, if one constructs a point-weighted subset $S \subseteq X$ by drawing i.i.d. samples from a distribution on $X$, in which each $x \in X$ is sampled proportionally to $q(x) = \sum_{y \in X}{\dist(x, y)}$, the sum of distances to the other points in $X$, up to an error factor of $\lambda$, then $\ProblemName{Max-Cut}\xspace(S)$ is a $(1 + \epsilon)$-approximation to $\ProblemName{Max-Cut}\xspace(X)$ with high probability. \begin{theorem}\label{thm:sampling} Given $\varepsilon,\delta > 0,\lambda \ge 1$, metric space $(V, \dist)$ and dataset $X \subseteq V$, let $\mathcal{D}$ be a distribution $(p_x : x \in X)$ on $X$ such that $\forall x \in X, p_x \geq \frac{1}{\lambda} \cdot \frac{q(x)}{Q}$, where $q(x) = \sum_{y \in X} \dist(x, y)$ and $Q = \sum_{x \in X}q(x)$. Let $S$ be a point-weighted set that is obtained by an i.i.d. sample of $m \geq 2$ points from $\mathcal{D}$, weighted by $w_S(x) := \hat{p}_x$ such that $p_x \leq \hat{p}_x \leq (1 + \epsilon) \cdot p_x$. If $m \geq O(\epsilon^{-4}\lambda^{8})$, then with probability at least $0.9$, the value $\frac{\ProblemName{Max-Cut}\xspace(S)}{m^2} $ is a $(1 + \epsilon)$-approximation to $\ProblemName{Max-Cut}\xspace(X)$. \end{theorem} The $O(\epsilon^{-4})$ dependence on $\epsilon$ of \Cref{thm:sampling} matches a similar $O(\epsilon^{-4})$ sampling complexity bound for unweighted graphs in~\cite{DBLP:journals/jcss/AlonVKK03,DBLP:journals/jacm/RudelsonV07}\footnote{A slightly weaker bound of $O(\epsilon^{-4}\poly\log(\epsilon^{-1}))$ was obtained in~\cite{DBLP:journals/jcss/AlonVKK03}, and \cite{DBLP:journals/jacm/RudelsonV07} gave an improved technical lemma which can be directly plugged into~\cite{DBLP:journals/jcss/AlonVKK03} to obtain the $O(\epsilon^{-4})$ bound.}. To the best of our knowledge, this $O(\epsilon^{-4})$ is the state-of-the-art even for the case of unweighted graphs.
Although our proof of \Cref{thm:sampling} is obtained mostly by using the bound in~\cite{DBLP:journals/jcss/AlonVKK03,DBLP:journals/jacm/RudelsonV07} as a black box, the generalization to metric spaces, as well as the allowance of the factor $\lambda$, which is the error in the sampling probabilities, is new. \subsection{Proof of \Cref{thm:sampling}} As mentioned, the plan is to apply the sampling bound proved in~\cite{DBLP:journals/jcss/AlonVKK03,DBLP:journals/jacm/RudelsonV07}, which we restate in \Cref{lemma:eps4}. In fact, the original statement in~\cite{DBLP:journals/jcss/AlonVKK03,DBLP:journals/jacm/RudelsonV07} was only made for unweighted graphs, i.e., edge weights in $\{0, 1\}$. However, we observe that only the fact that the edge weights lie in $[0, 1]$ was used in their proof. Hence, in our statement of \Cref{lemma:eps4} we make this stronger claim of $[0, 1]$ edge weights. Here, for a graph $G(V, E)$ with weight function $\mathrm{len}_G : E \to \mathbb{R}$ ($\mathrm{len}_G(\cdot) = 1$ for unweighted graphs), we define the cut function for $S \subseteq X \subseteq V$ (as well as \ProblemName{Max-Cut}\xspace) similarly, as $\cut_X(S) = \sum_{x \in S, y \in X \setminus S : \{x, y\} \in E} \mathrm{len}_G(x, y)$. \begin{lemma}[\cite{DBLP:journals/jcss/AlonVKK03,DBLP:journals/jacm/RudelsonV07}] \label{lemma:eps4} Consider a weighted graph $G(V, E)$ with weights in $[0, 1]$. Let $D \subseteq V$ be a uniform and independent sample from $V$ (possibly with repetitions) of $O(\epsilon^{-4})$ points. Then with probability at least $0.9$, \[ \left|\frac{1}{|D|^2}\ProblemName{Max-Cut}\xspace(D) - \frac{1}{|V|^2}\ProblemName{Max-Cut}\xspace(V)\right| \leq \epsilon. \] \end{lemma} Hence, our plan is to define an auxiliary graph $G'$ whose edge weights are in $[0, 1]$, such that our importance sampling may be interpreted as a uniform sampling from vertices in $G'$. Eventually, our sampling bound will follow from \Cref{lemma:eps4}. \paragraph{Defining the Auxiliary Graph} Since we focus on approximate solutions, we can assume that the $p_x$'s ($x \in X$) have finite precision. Then, let $N$ be a sufficiently large number such that for all $x \in X$, $Np_x$ is an integer. We define an auxiliary graph $G'(X', E' := X' \times X')$, where $X'$ is formed by copying each point $x \in X$ exactly $N p_x$ times, and the edge weights are $\mathrm{len}_{G'} (x, y) := \frac{1}{4\lambda^2 Q} \cdot \frac{\dist(x, y)}{\hat{p}_x \hat{p}_y}$. Clearly, if we let $x^*$ be a uniform sample from $X'$, then for every $x \in X$, $\Pr[x^* = x] = p_x$. Hence, this uniform sample $x^*$ is distributed identically to an importance sample from the distribution $\mathcal{D}$ on $X$. Furthermore, for $x, y \in S$, it holds that \[ \dist_S(x, y) = \frac{\dist(x, y)}{\hat{p}_x \hat{p}_y} = 4\lambda^2 Q \cdot \mathrm{len}_{G'}(x, y). \] Hence, we conclude the following fact. \begin{fact} \label{fact:coupling} Let $S' \subseteq X'$ be $m$ uniform samples from $X'$. Then, the values $4\lambda^2 Q \cdot \ProblemName{Max-Cut}\xspace(S')$ and $\ProblemName{Max-Cut}\xspace(S)$ are identically distributed. \end{fact} Therefore, it suffices to show that the $S'$ from \Cref{fact:coupling} satisfies that $4\lambda^2 Q \cdot \frac{\ProblemName{Max-Cut}\xspace(S')}{|S'|^2}$ is a $(1 + \epsilon)$-approximation to $\ProblemName{Max-Cut}\xspace(X)$, with constant probability.
Our plan is to apply \Cref{lemma:eps4}, but we first need to show, in \Cref{lemma:edge_weight_1}, that the edge weights of $G'$ are in $[0, 1]$, and in \Cref{lemma:relate_value} that $\ProblemName{Max-Cut}\xspace(X')$ is a $(1 + \epsilon)$-approximation to $\ProblemName{Max-Cut}\xspace(X)$ up to a scaling. \begin{lemma} \label{lemma:edge_weight_1} For all $x, y \in X'$, $\mathrm{len}_{G'}(x, y) \leq 1$. \end{lemma} \begin{proof} We need the following fact from~\cite{VK01} (which was proved in~\cite[Lemma 7]{VK01}). \begin{lemma}[\cite{VK01}] \label{lemma:dist_qx_qy} For all $x, y \in X$, it holds that \[ \frac{\dist(x, y)}{q(x)\, q(y)} \leq \frac{4}{Q}. \] \end{lemma} Applying \Cref{lemma:dist_qx_qy}, together with $p_x \geq \frac{q(x)}{\lambda Q}$ and $p_y \geq \frac{q(y)}{\lambda Q}$, \[ \mathrm{len}_{G'}(x, y) = \frac{1}{4\lambda^2 Q} \cdot \frac{\dist(x, y)}{\hat{p}_x \hat{p}_y} \leq \frac{1}{4\lambda^2 Q} \cdot \frac{ \dist(x, y) }{p_x p_y} \leq \frac{\lambda^2 Q^2}{4\lambda^2 Q} \cdot \frac{\dist(x, y)}{q(x)\, q(y)} \leq 1. \] \end{proof} \begin{lemma} \label{lemma:relate_value} $\frac{4\lambda^2 Q}{N^2}\ProblemName{Max-Cut}\xspace(X') \in (1 \pm \epsilon) \cdot \ProblemName{Max-Cut}\xspace(X)$. \end{lemma} \begin{proof} Let $\widetilde{X}$ be a point-weighted set formed by re-weighting the points in $X$ with $w_{\widetilde{X}}(x) = N p_x$. The following lemma from~\cite{VK01} shows that $\ProblemName{Max-Cut}\xspace(X) = \ProblemName{Max-Cut}\xspace(\widetilde{X})$. \begin{lemma}[{\cite[Lemma 5 and Lemma 6]{VK01}}] Let $(U, \dist_U)$ be a metric space, and $W \subseteq U$ be a dataset. Suppose for every $x \in W$, $\mu_x > 0$ is an integer weight. Then the point-weighted set $W'$ obtained from re-weighting each point $x \in W$ by $\mu_x$, satisfies that \[ \ProblemName{Max-Cut}\xspace(W') = \ProblemName{Max-Cut}\xspace(W). \] \end{lemma} Now, we observe that $\widetilde{X}$ can be naturally interpreted as a weighted complete graph $\widetilde{G}$, where we copy each $x \in X$ exactly $w_{\widetilde{X}}(x)$ times to form the vertex set, and the edge length is defined as $\dist_{\widetilde{X}}(x, y)$. Notice that the vertex set of $\widetilde{G}$ is exactly $X'$, and that the edge length \[ \dist_{\widetilde{X}}(x, y) = \frac{\dist(x, y)}{N^2 p_x p_y} \in (1\pm \epsilon) \cdot \frac{4\lambda^2 Q}{N^2} \cdot \mathrm{len}_{G'}(x, y). \] Therefore, we conclude that $\ProblemName{Max-Cut}\xspace(X) = \ProblemName{Max-Cut}\xspace(\widetilde{{X}}) \in (1 \pm \epsilon) \cdot \frac{4\lambda^2 Q}{N^2} \cdot \ProblemName{Max-Cut}\xspace(X')$. This finishes the proof. \end{proof} Now, we are ready to apply \Cref{lemma:eps4}. Let $S' \subseteq X'$ with $|S'| = O(\epsilon^{-4})$ be the resulting sample from applying \Cref{lemma:eps4} with $G = G'$ (recalling that the promise of $[0, 1]$ edge weights is proved in \Cref{lemma:edge_weight_1}). Then \[ \frac{1}{|S'|^2} \ProblemName{Max-Cut}\xspace(S') \in \frac{\ProblemName{Max-Cut}\xspace(X')}{|X'|^2} \pm \epsilon. \] Applying \Cref{lemma:relate_value}, and observing that $|X'| = N$, the above is equivalent to \begin{align*} \frac{4\lambda^2 Q}{|S'|^2} \ProblemName{Max-Cut}\xspace(S') &\in (1 \pm \epsilon) \cdot \ProblemName{Max-Cut}\xspace(X) \pm \epsilon \cdot 4\lambda^2 Q \\ &\in (1 \pm O(\lambda^2 \epsilon)) \cdot \ProblemName{Max-Cut}\xspace(X) \end{align*} where the last inequality follows from $\ProblemName{Max-Cut}\xspace(X) \geq \Omega(Q)$. We finish the proof by rescaling $\epsilon$.
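To illustrate \Cref{thm:sampling} outside the streaming setting, here is a minimal offline sketch (in Python); it assumes $\lambda = 1$ and exact weights $\hat{p}_x = p_x$, uses brute force in place of an exact \ProblemName{Max-Cut}\xspace solver, and the synthetic instance is arbitrary:
\begin{verbatim}
import numpy as np

def estimate_max_cut(X, m, rng):
    # Importance-sampling estimator: draw m i.i.d. points with probability
    # p_x = q(x)/Q, re-weight distances by 1/(p_x * p_y), and return
    # MaxCut(S)/m^2, computed by brute force over all cuts of the sample.
    dist = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    q = dist.sum(axis=1)                  # q(x) = sum_y dist(x, y)
    p = q / q.sum()
    idx = rng.choice(len(X), size=m, p=p)
    ds = dist[np.ix_(idx, idx)] / np.outer(p[idx], p[idx])
    best = 0.0
    for mask in range(1, 2 ** (m - 1)):   # enumerate cuts up to symmetry
        side = np.array([(mask >> i) & 1 for i in range(m)], dtype=bool)
        best = max(best, ds[side][:, ~side].sum())
    return best / m ** 2

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 5))             # small synthetic instance
print(estimate_max_cut(X, m=12, rng=rng))
\end{verbatim}
On small instances, the estimate typically concentrates around the true \ProblemName{Max-Cut}\xspace value as $m$ grows; the streaming algorithm of the next section replaces the explicit computation of $q(x)$, which requires storing $X$, with sampling over a randomly-shifted quadtree.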
\section{Streaming Implementations} \label{sec:streaming} \begin{theorem}[Streaming Euclidean $\ProblemName{Max-Cut}\xspace$] \label{thm:streaming} There is a randomized streaming algorithm that, given $0 < \epsilon < 1/2$, $p \geq 1$, integers $\Delta, d \geq 1$, and an input dataset $X \subseteq [\Delta]^d $ presented as a dynamic stream, uses space $\poly(\epsilon^{-1} d \log \Delta)$ and reports an estimate $\eta>0$ that with probability at least ${2}/{3}$ is a $(1 + \epsilon)$-approximation to $\ProblemName{Max-Cut}\xspace(X)$ in $\ell_p$. \end{theorem} Our algorithm employs importance sampling as formulated in \Cref{thm:sampling}, and thus needs (enough) samples from a distribution $\mathcal{D}$ that corresponds to the input $X \subseteq [\Delta]^d $ with a small parameter $\lambda>0$. Given these samples, the algorithm can estimate $\ProblemName{Max-Cut}\xspace(X)$ by a brute-force search on the (point-weighted) samples, which can be done using small space. Note that \Cref{thm:sampling} works for a general metric space, hence it also applies to the $\ell_p$ case as we require. We thus focus henceforth on performing importance sampling from a dataset $X$ that is presented as a dynamic stream, as formalized next in \Cref{lem:importance_sampling_algorithm}. \begin{lemma}[Importance-Sampling Algorithm] \label{lem:importance_sampling_algorithm} There is a randomized streaming algorithm $\mathcal A$ that, given $0 < \epsilon < 1/2$, $p \geq 1$, integers $\Delta, d \geq 1$, and an input dataset $X \subseteq [\Delta]^d$ presented as a dynamic stream, uses space $\poly(\epsilon^{-1} d \log \Delta)$ and reports $z^* \in X\cup\{\perp\}$ together with $p^*\in[0,1]$. The algorithm has a random initialization with success probability at least $0.99$,\footnote{It is convenient to separate the random coins of the algorithm into two groups, even though they can all be tossed before the stream starts. We refer to the coin tosses of the first group as an initialization step, and condition on their ``success'' when analyzing the second group of coins. The algorithm cannot tell whether its initialization was successful, and thus this event appears only in the analysis (in \Cref{lem:tree_embeddings_prob}). } and conditioned on a successful initialization, its random output satisfies: (1) with probability at least $1 - 1 / \poly(\Delta^d)$, \[ \forall x \in X, \qquad \Pr[ z^* = x ] \ge \frac{1}{\lambda} \frac{q(x)}{Q}, \] for $q(x) := \sum_{y \in X} \dist(x,y)$, $Q := \sum_{x \in X} q(x)$, $\dist = \ell_p$, and $\lambda := \poly(d \log \Delta)$; and (2) whenever $z^*\neq \perp$, \[ z^*=x\in X \quad\Longrightarrow\quad p^* \in (1 \pm \epsilon) \cdot \Pr[z^* = x] . \] \end{lemma} The somewhat intricate statement of \Cref{lem:importance_sampling_algorithm} is very useful to generate many samples with a large success probability. The obvious approach to generate $t$ samples is to run $t$ executions of this algorithm (all in parallel on the same stream) using independent coins, but then the success probability is only $0.99^t$. Consider instead running $t$ parallel executions, using the \emph{same initialization coins} but otherwise independent coins, which requires total space $t\cdot \poly(\epsilon^{-1} d \log \Delta)$. Then with probability at least $0.99$ the initialization succeeds, in which case the $t$ executions produce $t$ independent samples, each of the form $(z^*,p^*)$ and satisfying the two guarantees in the lemma.
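The following toy Python simulation illustrates this coin-sharing argument; it abstracts each initialization as a single biased coin and ignores all other algorithmic detail, so it only demonstrates why sharing the initialization coins keeps the joint success probability near $0.99$ while independent initializations degrade it to $0.99^t$.
\begin{verbatim}
import random

t, trials = 50, 100000
indep_ok = shared_ok = 0
for _ in range(trials):
    # t executions with independent initialization coins: all must succeed.
    indep_ok += all(random.random() < 0.99 for _ in range(t))
    # t executions sharing a single initialization coin.
    shared_ok += random.random() < 0.99
print("independent inits:", indep_ok / trials)  # about 0.99**50 ~ 0.61
print("shared init:      ", shared_ok / trials)  # about 0.99
\end{verbatim}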
\subsection{The Importance-Sampling Algorithm (Proof of \Cref{lem:importance_sampling_algorithm})} \label{sec:proof_importance_sampling} Our plan is to implement the importance sampling on a tree metric generated by a randomized embedding of the input dataset. The notion of randomized tree embedding was first proposed in~\cite{DBLP:conf/focs/Bartal96} for arbitrary metric spaces, and the specific embedding that we employ was given by~\cite{Indyk04} for $\ell_p$ metrics presented as a stream of points. We describe this tree embedding below. We stress that our algorithm can be easily implemented in low space because it does not need to compute the entire embedding explicitly; for instance, the algorithm's initialization picks random coins, which determine the embedding but do not require any further computation. \paragraph{Initialization Step: Randomized Tree Embedding~\cite{DBLP:conf/focs/Bartal96,CCGGP98,Indyk04}} Assume without loss of generality that $\Delta \geq 1$ is an integral power of $2$, and let $L := 1 + \log \Delta$. Let $\{\mathcal{G}_i\}_{i=0}^{L}$ be a recursive partitioning of the grid $[2\Delta]^d$ into squares,\footnote{Strictly speaking, these squares are actually hypercubes (sometimes called cells or grids), but we call them squares for intuition.} as follows. Start with $\mathcal{G}_0$ being a trivial partitioning that has one part corresponding to the entire grid $[2\Delta]^d$, and for each $i \geq 0$, subdivide every square in $\mathcal{G}_{i}$ into $2^d$ squares of half the side-length, to obtain a partition $\mathcal{G}_{i + 1}$ of the entire grid $[2\Delta]^d$. Thus, every $\mathcal{G}_{i}$ is a partition into squares of side-length $2^{L-i}$. The recursive partitioning $\{\mathcal{G}_i\}_i$ naturally defines a rooted tree $T$, whose nodes are the squares inside all the $\mathcal{G}_i$'s, that is often called a \emph{quadtree decomposition} (even though every tree node has $2^d$ children rather than $4$). Finally, make the quadtree $T$ random by shifting the entire recursive partitioning by a vector $-v_{\textrm{shift}}$, where $v_{\textrm{shift}}$ is chosen uniformly at random from $[\Delta]^d$. (This is equivalent to shifting the dataset $[\Delta]^d$ by $v_{\textrm{shift}}$, which explains why we defined the recursive partitioning over an extended grid $[2\Delta]^d$.) Every node in $T$ has a \emph{level} (or equivalently, depth), where the root is at level $1$, and the level of every other node is one bigger than that of its parent node. The \emph{scale} of a tree node is the side-length of the corresponding square. Observe that leaves of $T$ have scale $2^0$ and thus correspond to squares that contain a single grid point; moreover, points $x\in [\Delta]^d$ correspond to distinct leaves in $T$. Define the weight of an edge in $T$ between a node $u$ at scale $2^i$ and its parent as $d^{\frac{1}{p}} \cdot 2^i$ (i.e., the diameter of $u$'s square). Define a \emph{tree embedding} of $[\Delta]^d$ by mapping every point $x \in [\Delta]^d$ to its corresponding leaf in $T$, and let the \emph{tree distance} between two points $x, y \in [\Delta]^d$, denoted $\dist_T(x, y)$, be the distance in $T$ between their corresponding leaves. The following lemma bounds the distortion of this randomized tree embedding. We remark that a better distortion of $O(d^{\max\{\frac{1}{p}, 1 - \frac{1}{p}\}} \log \Delta)$ may be obtained via a different technique that is less suitable for streaming~\cite{CCGGP98}.
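Before stating the distortion bound, we illustrate how the tree distance can be evaluated on the fly. The following Python sketch computes $\dist_T(x,y)$ directly from the shifted coordinates, using the convention above that the edge from a scale-$2^i$ node to its parent has weight $d^{1/p}\cdot 2^i$: it ascends the two leaf-to-root paths, adding edge weights at every level where the cells of $x$ and $y$ still differ. The helper names and the small parameter values are illustrative.
\begin{verbatim}
import random

def cell(point, side, shift):
    # Quadtree cell (integer tuple) of the given side length that
    # contains `point` after applying the random shift.
    return tuple((c + s) // side for c, s in zip(point, shift))

def tree_dist(x, y, d, p, shift):
    # Sum edge weights d^(1/p) * side over every level at which the
    # cells of x and y differ; factor 2 covers both leaf-to-LCA paths.
    total, side = 0.0, 1
    while cell(x, side, shift) != cell(y, side, shift):
        total += d ** (1.0 / p) * side
        side *= 2
    return 2 * total

Delta, d, p = 16, 2, 2
random.seed(0)
v_shift = [random.randrange(Delta) for _ in range(d)]
x, y = (3, 7), (4, 7)
print(tree_dist(x, y, d, p, v_shift))  # always >= the l_p distance
\end{verbatim}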
\begin{lemma}[{\cite[Fact 1]{Indyk04}}] \label{lem:tree_embeddings} Let $T$ be a randomized tree as above. Then for all $x,y \in [\Delta]^d$, \begin{align*} \dist_T(x,y) &\geq \dist(x,y) ; \\ \E[\dist_T(x,y)] &\leq O\left(d \log \Delta\right) \dist(x,y) . \end{align*} \end{lemma} \paragraph{Streaming Implementation of Randomized Tree Embedding} We emphasize that our definition of the quadtree $T$ is non-standard, as it contains the entire grid $[\Delta]^d$ as leaves (the standard approach is to recursively partition only squares that contain at least one point from the dataset $X$). The advantage of our approach is that the tree is defined obliviously of the dataset $X$ (e.g., of updates to $X$). In particular, the leaf-to-root path of a point $x \in [\Delta]^d$ is well-defined regardless of $X$ and can be computed on-the-fly (without constructing the entire tree $T$) using time and space $\poly(d \log \Delta)$, providing sufficient information for evaluating the tree distance. Our streaming algorithm samples such a tree $T$ as an initialization step, i.e., before the stream starts, which requires small space because it can be done implicitly by picking $\poly(d \log \Delta)$ random bits that describe the random shift vector $v_{\textrm{shift}}$. Next, we show in \Cref{lem:tree_embeddings_prob} that this initialization step succeeds with probability $0.99$, and on success, every distance $\dist(x,y)$ for $x,y\in X$ is well-approximated by its corresponding $\dist_T(x,y)$. In this case, the sampling of points $x$ with probability proportional to $q(x)$ can be replaced by sampling with probabilities that are derived from the tree metric. More specifically, the probability of sampling each $x\in X$ deviates from the desired probability $\frac{q(x)}{Q}$ by at most a factor of $\poly(d \log \Delta)$. We remark that the event of success does depend on the input $X$, but the algorithm does not need to know whether the initialization succeeded. \begin{lemma} \label{lem:tree_embeddings_prob} For $x \in X$, let $q_T(x) := \sum_{y \in X} \dist_T(x,y)$ and let $Q_T := \sum_{x \in X}q_T(x)$. Then \[ \Pr_T\left[\forall x \in X,\ \frac{q_T(x)}{Q_T} \ge \frac{1}{O\left(d \log \Delta\right)}\frac{q(x)}{Q}\right] \geq 0.99. \] \end{lemma} \begin{proof} Fix some $x \in X$. By \Cref{lem:tree_embeddings}, \begin{equation} \label{eqn:upper_bound_for_numerator} q_T(x) = \sum_{y \in X} \dist_T(x,y) \ge \sum_{y \in X} \dist(x,y) = q(x) \end{equation} and \[ \E\left[\sum_{y \in X}q_T(y)\right] = \E \left[\sum_{y,y' \in X} \dist_T(y,y')\right] \le O\left(d \log \Delta\right)\sum_{y,y' \in X} \dist(y,y'). \] By Markov's inequality, with high constant probability, \begin{equation} \label{eqn:sum_ub} \sum_{y \in X}q_T(y) \le O\left(d \log \Delta\right)\sum_{y,y' \in X} \dist(y,y'). \end{equation} We finish the proof by combining \eqref{eqn:sum_ub} and \eqref{eqn:upper_bound_for_numerator}. \end{proof} \paragraph{Sampling w.r.t.\ Tree Distance} In the remainder of the proof, we assume that the random tree $T$ was already picked and condition on its success as formulated in \Cref{lem:tree_embeddings_prob}. This lemma shows that it actually suffices to sample each $x$ with probability proportional to $q_T(x)$. Next, we provide in \Cref{fact:qt_x} a different formula for $q_T(x)$ that is based on $x$'s ancestors in the tree $T$, namely, on counting how many data points (i.e., from $X$) are contained in the squares that correspond to these ancestors. To this end, we need to set up some basic notation regarding $X$ and $T$.
\paragraph{The Input $X$ in the Tree $T$} Let $n := |X|$ be the number of input points at the end of the stream. For a tree node $v \in T$, let $X(v) \subseteq X$ be the set of points from $X$ that are contained in the square corresponding to $v$. For $x \in X$ and $i \geq 1$, let $\anc_i(x)$ be the level-$i$ ancestor of $x$ in $T$ (recalling that $x$ corresponds to a leaf). By definition, $\anc_{L + 1}(x) := x$. For $1 \leq i \leq L + 1$, let $\beta_i := d^{\frac{1}{p}} \cdot 2^{L + 1 - i}$, which is the length of the edge between a level-$i$ node and its parent (since the scale of a level-$i$ node is $2^{L + 1 - i}$). Due to the tree structure, we have the following representation of $q_T(x)$. \begin{fact} \label{fact:qt_x} For every $x \in X$, we have $q_T(x) = 2 \sum_{i=1}^{L + 1} \beta_i \cdot (n - |X(\anc_i(x))|)$. \end{fact} For each level $i$, let $h_i$ be a level-$i$ node whose corresponding square contains the most points from $X$, breaking ties arbitrarily. Next, we wish to identify a \emph{critical} level $k$; ideally, this is the last level going down from the root, i.e., the largest $i$, such that $|X(h_i)| \ge 0.6n$ (the constant $0.6$ is somewhat arbitrary). However, it is difficult to find this $k$ exactly in a streaming algorithm, and thus we use instead a level $\tilde{k}$ that satisfies a relaxed guarantee that only requires estimates of the different $|X(h_i)|$, as follows. Let us fix henceforth two constants $0.5 < \sigma^- \leq \sigma^+ \leq 1$. \begin{definition}[Critical Level] Level $1 \leq \tilde{k} < L + 1$ is called \emph{$(\sigma^-, \sigma^+)$-critical}, if $|X(h_{\tilde{k}})| \geq \sigma^- n$ and $|X(h_{\tilde{k} + 1})| \leq \sigma^+ n$. \end{definition} Suppose henceforth that $\tilde{k}$ is a $(\sigma^-, \sigma^+)$-critical level. (Such a critical level clearly exists, although its value need not be unique.) Since $|X(h_i)| \geq |X(h_{i + 1})|$ for every $i<\tilde{k}$ (because $h_i$ contains the most points from $X$ at level $i$), we know that $|X(h_i)| \geq \sigma^- n$ for every $i \leq \tilde{k}$ (not only for $i = \tilde{k}$), and $|X(h_i)| \leq \sigma^+ n$ for every $i > \tilde{k}$. \begin{fact} \label{fact:path} Each $h_{i}$ is the parent of $h_{i+1}$ for $1 \leq i \leq \tilde{k} - 1$, hence $(h_1, \ldots, h_{\tilde k})$ forms a path from the root of $T$. \end{fact} Next, we further ``simplify'' the representation of $q_T(x)$, by introducing an approximate version of it that requires even less information about $x$. Specifically, we introduce in \Cref{def:tilde_q} a sequence of $O(L)$ values that are independent of $x$, namely, one value $\tilde{q}_i$ for each level $i\leq \tilde{k}$, and then we show in \Cref{lem:estimator} that for every $x \in X$, we can approximate $q_T(x)$ by one of these $O(L)$ values, namely, by $\tilde{q}_i$ for a suitable level $i=\ell(x)$. \begin{definition}[Estimator for $q_T$] \label{def:tilde_q} For $1 \leq i \leq \tilde{k}$, define \[ \tilde{q}_i := n\beta_i + \sum_{j \leq i} \beta_j \cdot (n - |X(h_j)|). \] \end{definition} \paragraph{Relating $q_T$ and $\tilde{q}$} For $x\in X$, let $\ell(x)$ be the maximum level $1 \leq j \leq \tilde{k}$ such that $\anc_j(x) = h_j$. This is well-defined, because $j = 1$ always satisfies $\anc_j(x) = h_j$. The next lemma shows that $q_T(x)$ can be approximated by $\tilde{q}_i$ for $i = \ell(x)$. \begin{lemma} \label{lem:estimator} Let $\tilde{k}$ be a $(\sigma^-, \sigma^+)$-critical level. Then \[ \forall x\in X, \qquad \tilde{q}_{\ell(x)} = \Theta(1) \cdot q_T(x).
\] \end{lemma} \begin{proof} \begin{align} \frac{1}{2} q_T(x) &= \sum_{i=1}^{L+1}\beta_i \cdot (n - |X(\anc_i(x))|) \nonumber \\ &= \sum_{i \leq \ell(x)} \beta_i \cdot (n - |X(h_i)|) + \sum_{i > \ell(x)} \beta_i \cdot (n - |X(\anc_i(x))|) \label{eqn:anc_to_h} \\ &\in \sum_{i \leq \ell(x)} \beta_i \cdot (n - |X(h_i)|) + [\min\{ \sigma^-, 1 - \sigma^+ \}, 1] \cdot n \sum_{i > \ell(x)}\beta_i \label{eqn:use_mu} \\ &\in \sum_{i \leq \ell(x)} \beta_i\cdot (n - |X(h_i)|) + [\min\{ \sigma^-, 1 - \sigma^+ \}, 1] \cdot n \beta_{\ell(x)} \nonumber \\ &\in [\min\{ \sigma^-, 1 - \sigma^+ \}, 1] \cdot \tilde{q}_{\ell(x)}. \nonumber \end{align} In the above, \eqref{eqn:anc_to_h} follows from the fact that $\anc_i(x) = h_i$ for $i \leq \ell(x)$ (by the definition of $\ell(x)$ and the property that $(h_1, \ldots, h_{\tilde k})$ forms a path, from \Cref{fact:path}). \eqref{eqn:use_mu} follows from the definition of a $(\sigma^-, \sigma^+)$-critical level and the definition of $\ell$. \end{proof} The next fact shows that the sequence $\tilde{q}_1,\ldots,\tilde{q}_{\tilde k}$ is non-increasing. \begin{fact} \label{fact:qt_dec} $\tilde{q}_1 = \beta_1 n$, and for every $2 \leq i \leq \tilde k$, we have $\tilde{q}_{i} \leq \tilde{q}_{i - 1}$. \end{fact} \begin{proof} The claim for $i = 1$ is immediate, since $h_1$ is the root and thus $|X(h_1)| = n$. Now consider $i \geq 2$. Using $\beta_{i - 1} = 2\beta_i$, we have \[ \tilde{q}_{i - 1} - \tilde{q}_{i} = n(\beta_{i - 1} - \beta_i) - \beta_i \cdot (n - |X(h_i)| ) = \beta_i \cdot |X(h_i)| \geq 0, \] which verifies the fact. \end{proof} \paragraph{Alternative Sampling Procedure} Recall that level $\tilde k$ is assumed to be $(\sigma^-, \sigma^+)$-critical for fixed constants $0.5 < \sigma^- \leq \sigma^+ \leq 1$. We plan to sample $x \in X$ with probability proportional to $\tilde{q}_{\ell(x)}$, and by \Cref{lem:estimator} this only loses an $O(1)$ factor in the bound $\lambda$ needed for importance sampling (as in \Cref{lem:importance_sampling_algorithm}). For $1 \leq i \leq \tilde k$, define $X_i := \{ x \in X \mid \ell(x) = i \}$. Notice that $\{X_i\}_{i=1}^{\tilde k}$ forms a partition of $X$, and \begin{equation} \label{eqn:Xi} X_i = \begin{cases} X(h_{i}) \setminus X(h_{i+1}) & \text{if $1 \leq i \leq \tilde{k} - 1$;} \\ X(h_{\tilde k}) & \text{if $i = \tilde{k}$.} \end{cases} \end{equation} By definition, points in the same $X_i$ have the same $\tilde{q}_{\ell(x)}$, and thus also the same sampling probability. A natural approach to sampling a point from $X$ with the desired probabilities is to first pick a random $i\in[\tilde{k}]$ (non-uniformly) and then sample uniformly a point from the corresponding $X_i$. Unfortunately, it is impossible to sample uniformly from $X_i$ in low-space streaming (this is justified in \Cref{claim:lb_sampling_light}), and thus we shall sample instead from an ``extended'' set $X^{\mathrm{ext}}_i\supseteq X_i$, defined as follows. \begin{equation} \label{eqn:Xext} X^{\mathrm{ext}}_i := \begin{cases} X \setminus X(h_{i+1}) & \text{if $1 \leq i \leq \tilde{k} - 1$;} \\ X(h_{\tilde k}) & \text{if $i = \tilde{k}$.} \end{cases} \end{equation} The path structure of $\{h_i\}_{i}$ (\Cref{fact:path}) implies the following. \begin{fact} \label{fact:xext} For every $1 \leq i < \tilde{k}$, we have $X^{\mathrm{ext}}_i = X_1 \cup \ldots \cup X_i$. \end{fact} We describe in \Cref{alg:offline_sampling} a procedure for sampling $x \in X$ with probability proportional to $\tilde{q}_{\ell(x)}$, based on the above approach of picking a random $i\in[\tilde{k}]$ (from a suitable distribution) and then sampling uniformly a point from the corresponding $X^{\mathrm{ext}}_i$.
We then prove in \Cref{lem:prob} that this procedure samples from $X$ with probabilities proportional to $\tilde{q}_{\ell(x)}$, up to an $O(L)$ factor. \begin{remark} \label{rem:ChangeOrderOfSum} Sampling from the extended sets ($X^{\mathrm{ext}}_i$ instead of $X_i$) can significantly bias the sampling probabilities, because the ``contribution'' of a point $x\in X$ can increase by an unbounded factor. On the one hand, this can increase the sampling probability of that $x$, which is not a problem at all. On the other hand, it might increase the total contribution of all points (and thus decrease some individual sampling probabilities), but our analysis shows that this effect is bounded by an $O(L)$ factor. The intuition here is that $q(x)$ represents the sum of distances from $x$ to all other points $y\in X$, and we can rearrange their total $\sum_x q(x)$ by the ``other'' point $y\in X$; the crux now is that the contribution of each $y\in X$ increases by at most an $O(L)$ factor. \end{remark} \begin{algorithm}[ht] \caption{Alternative sampling procedure (offline)} \label{alg:offline_sampling} \begin{algorithmic}[1] \State draw a random $i^*$ where each $1 \leq i \leq \tilde{k}$ is picked with probability $r_i := \frac{|X^{\mathrm{ext}}_i| \tilde{q}_i}{\sum_{j=1}^{\tilde k} |X^{\mathrm{ext}}_j| \tilde{q}_j}$ \label{line:offline_i} \State draw $x \in X^{\mathrm{ext}}_{i^*}$ uniformly at random \label{line:offline_x} \State return $z^* = x$ as the sample, together with $p^* = \sum_{i = \ell(x)}^{\tilde{k}} \frac{r_i}{|X^{\mathrm{ext}}_i|} $ as its sampling probability \label{line:offline_ret} \end{algorithmic} \end{algorithm} \begin{lemma} \label{lem:prob} \Cref{alg:offline_sampling} samples every $x \in X$ with probability $\Pr[z^* = x] = \sum_{i = \ell(x)}^{\tilde{k}} \frac{r_i}{|X^{\mathrm{ext}}_i|}$, exactly the value that line~\ref{line:offline_ret} reports as $p^*$, and furthermore this probability is bounded from below by $\Pr[z^* = x] \geq \Omega\left(\frac{1}{L}\right) \frac{\tilde{q}_{\ell(x)}} { \sum_{y \in X} \tilde{q}_{\ell(y)} }$. \end{lemma} \begin{proof} Observe that $x \in X_{\ell(x)}$, and by \Cref{fact:xext}, this point $x$ can only be sampled for $i \geq \ell(x)$. Therefore, $\Pr[z^* = x] = \sum_{i=\ell(x)}^{\tilde{k}} \frac{r_i}{|X^{\mathrm{ext}}_i|} = p^*$. We bound this probability by \begin{equation} \label{eqn:prob} \Pr[z^* = x] = \sum_{i \geq \ell(x)} \frac{r_i}{|X^{\mathrm{ext}}_i|} = \sum_{i \geq \ell(x)} \frac{\tilde{q}_i} { \sum_{j=1}^{\tilde k}{|X^{\mathrm{ext}}_j|\tilde{q}_j} } = \frac{\sum_{i \geq \ell(x)} \tilde{q}_i} { \sum_{j=1}^{\tilde k}{|X^{\mathrm{ext}}_j|\tilde{q}_j} } \geq \frac{\tilde{q}_{\ell(x)}} { \sum_{j=1}^{\tilde k}{|X^{\mathrm{ext}}_j|\tilde{q}_j} }. \end{equation} Next, to bound the denominator ${ \sum_{j=1}^{\tilde k}{|X^{\mathrm{ext}}_j|\tilde{q}_j} }$, observe that $|X^{\mathrm{ext}}_j| = \sum_{i = 1}^{j} |X_i|$ for all $j<\tilde{k}$ (by \Cref{fact:xext}), and therefore \begin{align*} \sum_{j=1}^{\tilde k}{|X^{\mathrm{ext}}_j|\tilde{q}_j} &= |X_{\tilde k}|\tilde{q}_{\tilde k} + \sum_{j = 1}^{\tilde{k} - 1} \sum_{i = 1}^{j}|X_i| \tilde{q}_j = |X_{\tilde k}|\tilde{q}_{\tilde k} + \sum_{i = 1}^{\tilde{k} - 1}\sum_{j = i}^{\tilde{k} - 1} |X_i| \tilde{q}_j \leq |X_{\tilde k}|\tilde{q}_{\tilde k} + \tilde{k} \cdot \sum_{i = 1}^{\tilde{k} - 1}|X_i| \tilde{q}_i \\ &\leq (L + 1) \sum_{i=1}^{\tilde k}{ |X_i| \tilde{q}_i } = (L + 1) \sum_{x \in X}{\tilde{q}_{\ell(x)}}, \end{align*} where the first inequality is by the monotonicity of the $\tilde{q}_i$'s (\Cref{fact:qt_dec}).
Combining this with \eqref{eqn:prob}, the lemma follows. \end{proof} \paragraph{Implementing \Cref{alg:offline_sampling} in Streaming} To implement \Cref{alg:offline_sampling} in streaming, we first need a streaming algorithm that finds a critical level $\tilde{k}$ using space $\poly(d \log \Delta)$. We discuss this next. \paragraph{Finding $\tilde{k}$} For each level $i$, we draw $\poly(d \log \Delta)$ samples $S_i \subseteq X$ uniformly at random from $X$. We then count the number of samples that lie in each tree node (square) at level $i$, and let $m_i$ be the maximum count. We let $\tilde{k}$ be the largest level $i$ such that $\frac{m_i}{|S_i|} \geq 0.6$. By a standard application of the Chernoff bound, with probability at least $1 - 1/\poly(\Delta^d)$, this level $\tilde{k}$ is $(0.55, 0.65)$-critical. Moreover, this process can be implemented in streaming using space $\poly(d \log \Delta)$, by maintaining, for each level $i$, only $|S_i|=\poly(d \log \Delta)$ independent $\ell_0$-samplers (\Cref{lem:l0sampler}) on the domain $[\Delta]^d$. A similar approach can be used to $(1+\epsilon)$-approximate the size of $X(h_i)$ for every $i \leq \tilde{k}$, and also to sample uniformly from these sets, using space $\poly(\epsilon^{-1}d\log \Delta)$ and with failure probability $1 / \poly(\Delta^d)$ (by using \Cref{lem:l0estimator,lem:l0sampler}). \paragraph{Estimating and Sampling from $X \setminus X(h_i)$} We also need to estimate $\tilde{q}_i$ and $|X^{\mathrm{ext}}_i|$, and to sample uniformly at random from $X^{\mathrm{ext}}_i$, for every $i \leq \tilde{k}$. The case $i = \tilde{k}$ was already discussed, because $X^{\mathrm{ext}}_{\tilde k} = X(h_{\tilde k})$. It remains to consider $i < \tilde{k}$, in which case we need to $(1 \pm \epsilon)$-approximate the size of $X \setminus X(h_i)$, and also to sample uniformly at random from that set, and we can assume that $|X(h_i)| > 0.5 n$. We provide such a streaming algorithm in \Cref{lemma:streaming_light} below, which we prove in \Cref{sec:proof_streaming_light}. This lemma is stated in a more general form that may be of independent interest, where the input is a frequency vector $x\in \mathbb{R}^N$ (i.e., a stream of insertions and deletions of items from domain $[N]$) and access to a function $\mathcal{P}:[N] \to [N']$, for $N'\leq N$, that can be viewed as a partition of the domain into $N'$ parts. In our intended application, the domain $[N]$ will be the grid $[\Delta]^d$, and the partition $\mathcal{P}$ will be its partition into squares of a given level $i$; observe that it is easy to implement $\mathcal{P}$ as a function that maps each grid point to its level-$i$ square. Roughly speaking, the streaming algorithm in \Cref{lemma:streaming_light} samples uniformly from the support set $\supp(x) = \{ i\in[N]: x_i\neq 0\}$, but excluding indices that lie in the part of $\mathcal{P}$ that is heaviest, i.e., has the most nonzero indices, assuming it is sufficiently heavy. In our intended application, this method samples uniformly from the input $X\subset[\Delta]^d$ but excluding points that lie in the heaviest square (the square with the largest number of input points), i.e., it samples uniformly from $X \setminus X(h_i)$.
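As an aside, the level-finding step described above (``Finding $\tilde{k}$'') can be mimicked offline, as in the following Python sketch: for each level it draws uniform samples, estimates the heaviest cell's fraction, and keeps the largest level whose estimated fraction is at least $0.6$. In the actual streaming algorithm the uniform samples come from $\ell_0$-samplers; here we assume offline access to $X$, and the sample size and instance are purely illustrative.
\begin{verbatim}
import random
from collections import Counter

def level_cell(point, level, L, shift):
    # Level-`level` cell of `point`; the root (level 1) has side 2^L.
    side = 2 ** (L + 1 - level)
    return tuple((c + s) // side for c, s in zip(point, shift))

def find_critical_level(X, L, shift, samples_per_level=200):
    # Offline stand-in for the streaming search: for each level, estimate
    # the heaviest cell's fraction of X from uniform samples, and keep
    # the largest level whose estimated fraction is at least 0.6.
    k = 1
    for level in range(1, L + 2):
        S = [random.choice(X) for _ in range(samples_per_level)]
        m = max(Counter(level_cell(x, level, L, shift) for x in S).values())
        if m / len(S) >= 0.6:
            k = level
    return k

random.seed(1)
Delta, d = 16, 2
L = Delta.bit_length()  # equals 1 + log2(Delta) for a power of two
shift = [random.randrange(Delta) for _ in range(d)]
X = [(random.randrange(4), random.randrange(4)) for _ in range(500)]
print(find_critical_level(X, L, shift))
\end{verbatim}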
\begin{lemma}[Sampling from Light Parts] \label{lemma:streaming_light} There exists a streaming algorithm that, given $0 < \epsilon, \delta, \sigma < 0.5$, integers $N, N', M \geq 1$, a mapping $\mathcal{P} : [N] \to [N']$, and a frequency vector $x \in [-M, M]^N$ that is presented as a stream of additive updates, uses space $\poly(\epsilon^{-1}\sigma^{-1}\log(\delta^{-1}MN))$, and reports a sample $i^* \in [N] \cup \{\nil\}$ and a value $r^* \geq 0$. Let $X := \{ i \in [N] \mid x_i \neq 0\}$ be the support of $x$, and let $j_{\max} := \arg\max_{j \in [N']} |\mathcal{P}^{-1}(j) \cap X|$ be the index of the heaviest part of $\mathcal{P}$ with respect to $X$. If $X_{\mathrm{heavy}} := \mathcal{P}^{-1}(j_{\max}) \cap X$ satisfies $|X_{\mathrm{heavy}}| \geq (0.5 + \sigma) |X|$, then with probability at least $1 - \delta$, \begin{itemize} \item $r^* \in (1 \pm \epsilon) \cdot |X_{\mathrm{light}}|$ where $X_{\mathrm{light}} := X \setminus X_{\mathrm{heavy}}$, and \item unless $X_{\mathrm{light}}$ is empty, $i^* \in X_{\mathrm{light}}$, and moreover for all $i \in X_{\mathrm{light}}$, it holds that $\Pr[i^* = i] = \frac{1}{|X_{\mathrm{light}}|}$. \end{itemize} \end{lemma} In our application, we will apply \Cref{lemma:streaming_light} in parallel for every level $i$, with $N = \Delta^d$, i.e., the items being inserted and deleted are points in $[\Delta]^d$, and a mapping $\mathcal{P}$ defined by the level-$i$ squares (tree nodes), i.e., for $x \in [\Delta]^d$ we define $\mathcal{P}(x)$ as the level-$i$ node that contains $x$. We will set the failure probability to be $\delta = 1 / \poly(\Delta^{d})$ and a fixed $\sigma = 0.05$. This way, conditioning on the success of \Cref{lemma:streaming_light}, we can compute $\tilde{q}_i$ and $|X^{\mathrm{ext}}_i|$ within a $(1 \pm \epsilon)$ factor, and sample from $X^{\mathrm{ext}}_i$ uniformly. \paragraph{Concluding \Cref{lem:importance_sampling_algorithm}} In conclusion, our streaming algorithm initializes by sampling a randomly-shifted quadtree $T$, which defines a tree embedding, all in an implicit way. Then, assuming $T$ has been sampled and conditioning on its success, specifically the event of \Cref{lem:tree_embeddings_prob} (which holds with probability $0.99$), we use the streaming implementation of \Cref{alg:offline_sampling}, as outlined above. The resulting $z^*$ and $p^*$ are the return values. The error bounds on $z^*$ and $p^*$ and the bound $\lambda = \poly(d \log \Delta)$ follow from \Cref{lem:tree_embeddings_prob} and \Cref{lem:prob}, plus an additional error and failure probability introduced by streaming, which are bounded in the previous paragraphs. This finishes the proof. \subsection{Sampling from the Light Parts (Proof of \Cref{lemma:streaming_light})} \label{sec:proof_streaming_light} \paragraph{An Offline Algorithm} Notice that $\{ \mathcal{P}^{-1}(y) \}_{y \in [N']}$ defines a partition of $[N]$. In our proof, we interpret $\mathcal{P} = \{ P_i \}_i$ as such a partition. Let $P_{\max} := \mathcal{P}^{-1}(j_{\max})$ be the part of $\mathcal{P}$ that contains the most elements from $X$, so $X_{\mathrm{heavy}} = P_{\max} \cap X$. We start with an offline algorithm, summarized in \Cref{alg:sampling_light}.
\begin{algorithm}[ht] \caption{Sampling and estimating from the light part (offline)} \label{alg:sampling_light} \begin{algorithmic}[1] \State let $u \gets 2$, $s \gets \Theta(\log(N\delta^{-1}))$ \State let $\mathcal{H} \gets \{ h_1, \ldots, h_s \}$ be a collection of independent random hash functions, where each $h \in \mathcal{H}$ ($h : \mathcal{P} \to [u]$) satisfies $\forall P \neq P'$, $\Pr[h(P) = h(P')] \leq 1 / u$ \label{line:hash} \For{$t \in [s]$} \State for $j \in [u]$, let $B_j \gets \left(\bigcup_{P \in \mathcal{P} : h_t(P) = j}P\right) \cap X$ \label{line:Bj} \State let $j^* \gets \arg\max_{j} |B_j|$ \label{line:jstar} \State let $D_t \gets X \setminus \bigcup_{P \in \mathcal{P} : h_t(P) = j^*}{P}$ \label{line:Di} \EndFor \State compute $D_{\mathrm{all}} \gets \bigcup_{t \in [s]} D_t$ \label{line:Dmax} \State return a uniform sample $i^* \in D_{\mathrm{all}}$, and report $r^* := |D_{\mathrm{all}}|$ as the estimate for $|X_{\mathrm{light}}|$ \end{algorithmic} \end{algorithm} In the algorithm, we consider a set of $s = \Theta(\log(N \delta^{-1}))$ random hash functions $h_1, \ldots, h_s$ that randomly map each part in $\mathcal{P}$ to one of $u = 2$ buckets (as in line~\ref{line:hash}). Then, consider some $h_t$ for $t \in [s]$. Let $B_j$ ($j \in [u]$) be the elements from all parts that are mapped by $h_t$ to the bucket $j$ (in line~\ref{line:Bj}). We find $j^*$ as the bucket that contains the most elements from $X$ (in line~\ref{line:jstar}). Since we assume $|X_{\mathrm{heavy}}| \geq (0.5 + \sigma) |X| > 0.5 |X|$, we know the bucket $h_t(P_{\max})$ contains \emph{more} than $0.5 |X|$ elements from $X$ (recalling that $P_{\max} = \mathcal{P}^{-1}(j_{\max})$ is the part that contains the most elements from $X$), and this implies $h_t(P_{\max})$ must be the bucket that contains the most elements from $X$. Hence, \begin{equation} \label{eqn:jstart_max} j^* = h_t(P_{\max}). \end{equation} Next, we drop the elements that lie in the bucket $j^*$, and take the remaining elements as $D_t$ (in line~\ref{line:Di}). While $D_t$ certainly does not contain any element from $X_{\mathrm{heavy}}$ (by \eqref{eqn:jstart_max} and the definition of the $D_t$'s), $D_t$ is only a subset of $X_{\mathrm{light}}$. Hence, we take the union of all $D_t$'s (over $t \in [s]$), denoted as $D_{\mathrm{all}}$ (in line~\ref{line:Dmax}), which equals $X_{\mathrm{light}}$ with high probability. \paragraph{Analysis of $D_{\mathrm{all}}$} For every $i \in X_{\mathrm{light}}$, every $t \in [s]$, \[ \Pr[i \notin D_t] = \Pr[h_t(P_i) = h_t(P_{\max}) ] \leq \frac{1}{u} = \frac{1}{2}, \] where $P_i \in \mathcal{P}$ is the part that $i$ belongs to. Therefore, by the independence of the $h_t$'s, we know for every $i \in X_{\mathrm{light}}$, \[ \Pr[i \notin D_{\mathrm{all}}] = \Pr[\forall t \in [s], i \notin D_t] \leq \frac{1}{2^s} = \frac{\delta}{\poly(N)}. \] Taking a union bound over $i \in X_{\mathrm{light}}$, we have \[ \Pr[\exists i \in X_{\mathrm{light}}, i \notin D_{\mathrm{all}}] \leq \frac{\delta}{\poly(N)} |X_{\mathrm{light}}| \leq \delta. \] Hence, we conclude that \[ \Pr[D_{\mathrm{all}} = X_{\mathrm{light}}] \geq 1 - \delta. \] Conditioning on the event that $D_{\mathrm{all}} = X_{\mathrm{light}}$, we conclude that $r^* = |D_{\mathrm{all}}| = |X_{\mathrm{light}}|$, and that $\forall i \in X_{\mathrm{light}}$, $\Pr[i^* = i] = \frac{1}{|D_{\mathrm{all}}|} = \frac{1}{|X_{\mathrm{light}}|}$. \paragraph{Streaming Algorithm} It remains to give a streaming implementation for \Cref{alg:sampling_light}.
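Before doing so, the following Python sketch renders the offline procedure of \Cref{alg:sampling_light}; the hidden constant in $s = \Theta(\log(N\delta^{-1}))$ is set to $1$ for illustration, the pairwise-colliding hash is simulated lazily, and we assume the promised condition $|X_{\mathrm{heavy}}| > 0.5|X|$ holds. All names and the demo instance are illustrative.
\begin{verbatim}
import math
import random

def sample_light(X, part_of, delta=0.01):
    # Offline rendering of Algorithm 2: recover X_light, i.e., X minus
    # the heaviest part, assuming that part holds more than half of X.
    s = max(1, math.ceil(math.log2(len(X) / delta)))
    D_all = set()
    for _ in range(s):
        h = {}  # lazy random hash of part ids into u = 2 buckets
        bucket = lambda i: h.setdefault(part_of(i), random.randrange(2))
        sizes = [0, 0]
        for i in X:
            sizes[bucket(i)] += 1
        j_star = 0 if sizes[0] >= sizes[1] else 1  # bucket of heavy part
        D_all |= {i for i in X if bucket(i) != j_star}
    r_star = len(D_all)  # estimate of |X_light|
    i_star = random.choice(sorted(D_all)) if D_all else None
    return i_star, r_star

random.seed(0)
X = list(range(100))
part_of = lambda i: 0 if i < 60 else i  # part 0 holds 60 of 100 items
print(sample_light(X, part_of))         # i_star in {60..99}, r_star == 40
\end{verbatim}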
Before the stream starts, we initialize several streaming data structures. We start with building the hash functions $\mathcal{H}$, and this can be implemented using space $\poly(\log N)$, by using hash families of limited independence. Next, we maintain for every $t \in [s]$, for every bucket $j \in [u]$, an $\ell_0$-sampler $\mathcal{L}^{(t)}_j$ (\Cref{lem:l0sampler}) with failure probability $O(\frac{\delta}{us})$, as well as an $\ell_0$-norm estimator $\mathcal{K}^{(t)}_j$ (\Cref{lem:l0estimator}) with failure probability $O(\frac{\delta}{us})$ and error guarantee $\epsilon \sigma \leq \min\{\epsilon, \sigma\}$, both on domain $[N]$. The setup of the failure probabilities immediately implies that with probability $1 - \delta$, all data structures succeed simultaneously, and we condition on their success in the following argument. Since we need to combine the linear sketches $\mathcal{L}^{(t)}_j$'s in later steps, for every $t \in [s]$ and $j \in [u]$, we use the same random seeds among all $\ell_0$-samplers $\{\mathcal{L}^{(t)}_j\}$'s, so that they can be ``combined'' by simply adding up. We do the same for the $\mathcal{K}^{(t)}_j$'s. Another detail is that, strictly speaking, we need $O(1)$ independent ``copies'' of every $\mathcal{K}$ and $\mathcal{L}$, since we need to query each of them $O(1)$ times. As this only enlarges the space by an $O(1)$ factor, we omit this detail for the sake of presentation. Whenever an update for element $i \in [N]$ is received, we update $\mathcal{L}^{(t)}_{j_i}$ and $\mathcal{K}^{(t)}_{j_i}$ for every $t \in [s]$, where $j_i := h_t(P_i)$, and $P_i \in \mathcal{P}$ is the unique part that contains $i$. When the stream terminates, we proceed to generate the sample $i^* \in X_{\mathrm{light}}$ and the estimate $r^*$ for $|X_{\mathrm{light}}|$. For $t \in [s]$, $j \in [u]$, query $\mathcal{K}^{(t)}_j$ to obtain an estimate of $|B_j|$ (line~\ref{line:Bj}) within a $(1 \pm \epsilon\sigma)$ factor. Use these estimates to find $j^*$ (line~\ref{line:jstar}). Note that this $j^*$ is the same as the one computed using the exact $|B_j|$ values. To see this, the key observation is that $|X_{\mathrm{heavy}}| \geq ( 0.5 + \sigma) |X|$, while for every $P \in \mathcal{P} \setminus \{P_{\max}\}$ we have $|P \cap X| \leq (0.5 - \sigma) |X|$. Hence, to precisely find $j^*$, it suffices to distinguish between buckets of size at least $(0.5 + \sigma) |X|$ and buckets of size at most $(0.5 - \sigma)|X|$. Even with a $(1 \pm \epsilon \sigma)$ error (which is the error of our $\mathcal{K}$'s), this gap is still $\frac{0.5 + \sigma}{ 0.5 - \sigma} \cdot \frac{1 - \epsilon \sigma}{ 1 + \epsilon \sigma} > 1$, which is large enough. Next, compute $\mathcal{L}^{(t)} := \sum_{j \in [u] \setminus \{j^*\}}\mathcal{L}^{(t)}_j$ as the $\ell_0$-sampler that corresponds to $D_t$ (line~\ref{line:Di}), and obtain $\mathcal{K}^{(t)} := \sum_{j \in [u] \setminus \{j^*\}}\mathcal{K}^{(t)}_j$ similarly. We can do this since we use the same random seeds among the $\mathcal{L}^{(t)}_j$'s (and similarly for the $\mathcal{K}^{(t)}_j$'s). We further compute $\mathcal{L} := \sum_{t \in [s]} \mathcal{L}^{(t)}$ whose support corresponds to $D_{\mathrm{all}}$. Define $\mathcal{K} := \sum_{t \in [s]} \mathcal{K}^{(t)} $ similarly. The final return values $i^*$ and $r^*$ are given by querying $\mathcal{L}$ and $\mathcal{K}$. Note that on the success of the $\ell_0$-sampler $\mathcal{L}$, the probability that $i^* = i$ for each $i \in D_{\mathrm{all}}$ is exactly $\frac{1}{|D_{\mathrm{all}}|}$ (\Cref{lem:l0sampler}).
However, $r^*$ may deviate from $|D_{\mathrm{all}}|$ by a multiplicative $(1 \pm \epsilon)$ factor. In conclusion, the analysis of \Cref{alg:sampling_light} still goes through when using the estimated values as in the above procedure, except that one needs to rescale $\epsilon$ and $\delta$ by a constant factor, to compensate for the error and failure probability introduced by the streaming data structures. This finishes the proof of \Cref{lemma:streaming_light}.
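The seed-sharing trick used above, namely that linear sketches built with identical random seeds can be merged by coordinate-wise addition, is illustrated by the following toy Python sketch. The CountSketch-style structure below is merely a stand-in for the $\ell_0$-samplers and estimators (which we do not implement here), and all parameters are illustrative.
\begin{verbatim}
import random

class LinearSketch:
    # Toy CountSketch-style sketch: a linear map of the frequency
    # vector. Sketches sharing a seed merge by adding their tables.
    def __init__(self, n, width, seed):
        rng = random.Random(seed)
        self.pos = [rng.randrange(width) for _ in range(n)]
        self.sign = [rng.choice((-1, 1)) for _ in range(n)]
        self.table = [0] * width

    def update(self, i, delta):  # stream update: x_i += delta
        self.table[self.pos[i]] += self.sign[i] * delta

    def merged(self, other):     # valid only when seeds are shared
        out = LinearSketch(0, len(self.table), 0)
        out.pos, out.sign = self.pos, self.sign
        out.table = [a + b for a, b in zip(self.table, other.table)]
        return out

n, width, seed = 100, 16, 42
sk_a = LinearSketch(n, width, seed)
sk_b = LinearSketch(n, width, seed)
sk_ab = LinearSketch(n, width, seed)
for i, delta in [(3, 1), (7, 2), (3, -1), (50, 5)]:
    (sk_a if i < 10 else sk_b).update(i, delta)  # two sub-streams
    sk_ab.update(i, delta)                       # one sketch sees all
print(sk_a.merged(sk_b).table == sk_ab.table)    # True: merge = add
\end{verbatim}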
\section{Introduction} Normal state properties of iron-based superconductors have been crucial in understanding the mechanism of superconductivity, and they have been garnering increasing attention since the discovery of their high superconducting transition temperature $T_c$ \cite{firstreport}. Various problems with these materials, such as the anomalous non-Fermi-liquid-like normal-state transport properties\cite{Fermilq} and the possibility of a Bardeen-Cooper-Schrieffer (BCS)-Bose-Einstein condensation (BEC) crossover regime\cite{BCS-BEC01,BCS-BEC02,BCS-BEC03,BCS-BEC04,BCS-BEC05}, are particularly at the forefront of condensed matter physics. Among the several Fe-based superconductors, the FeTe$_{1-x}$Se$_x$ system has an advantage in addressing the aforementioned problems because its crystal structure is the simplest, i.e., it comprises only conducting FeX (X: Te or Se) layers. However, excess Fe is inevitably incorporated in the crystals, which conceals the nature of the anomalous physical properties in FeTe$_{1-x}$Se$_x$ systems. Hence, many efforts have been made to eliminate the obstructive excess Fe\cite{anneal01,anneal02,anneal03,anneal04}. In our previous research, we developed an annealing method that discards excess Fe by annealing single crystals in tellurium vapor (Te anneal)\cite{anneal05}. Using this Te-annealing method, the excess Fe was sufficiently eliminated without damaging the samples. With these high-quality samples, we performed magnetotransport measurements and inferred that the crossover from the incoherent to the coherent electronic state and the opening of the pseudogap occur at high temperatures\cite{MR}. Furthermore, via magnetoresistance measurements, we determined that the temperature $T_{scf}$ at which the superconducting fluctuations occur was 2.7 times larger than $T_c$, which is consistent with the behavior of the BCS-BEC crossover regime\cite{LTp}. The evolution from incoherent to coherent electronic states with increasing Se doping was also observed by angle-resolved photoemission spectroscopy (ARPES), which elucidates the close relationship between the coherent electronic state and the emergence of superconductivity\cite{coherent01,coherent02}. On the other hand, whether iron-based superconductors, especially FeSe and FeSe$_{1-x}$S$_x$, are in the BCS-BEC crossover regime is hotly debated in relation to the giant superconducting fluctuations\cite{BCS-BEC01,BCS-BEC02,BCS-BEC03,BCS-BEC04,BCS-BEC05}. Therefore, clarifying the doping ($x$)-temperature ($T$) phase diagram of Fe-based superconductors by using various measurements is important in elucidating the mechanism of superconductivity. Here, we measured the normal-state magnetic susceptibility of Te-annealed FeTe$_{1-x}$Se$_x$ ($x$ = 0.2, 0.3, and 0.4) and investigated the doping ($x$)-temperature ($T$) phase diagram. \section{Experiment} Single crystals of FeTe$_{1-x}$Se$_x$ were grown using the Bridgman method\cite{bridgeman} with their nominal compositions. As-grown crystals were cleaved into thin crystals with 1-mm thickness, and they were sealed into a Pyrex or quartz tube with pulverized Te. Then, the tube was heated for more than 400 h at 400 $^\circ$C. Annealed crystals were usually covered with a grayish accretion. However, after the accretion was carefully removed, a shiny and flat surface emerged. The superconducting transitions of the annealed crystals were very sharp ($\Delta T_c \leq 1$ K), thus indicating that the excess iron was completely removed.
Magnetic susceptibility was measured using a superconducting quantum interference device (SQUID) magnetometer (Quantum Design MPMS3) with the magnetic field applied parallel to the thin sample. \section{Results and Discussion} \subsection{Impurity phase of Fe$_3$O$_4$ in the Te-annealed samples} Figure 1(a) presents the temperature dependence of the magnetic susceptibility for as-grown FeTe$_{0.6}$Se$_{0.4}$. It exhibits Curie-Weiss-like behavior at low temperatures and a slight signature of superconductivity below 5 K. This Curie-Weiss-like behavior is attributed to the local moments of the excess Fe; in addition, superconductivity is destroyed by these local moments. Below 50 K, the difference in susceptibility at each magnetic field is evident. The inset of Figure 1(a) illustrates the magnetic field dependence of the magnetic moment ($M-H$ plot). It presents a linear field dependence at high temperatures, while a deviation from the linear field dependence was observed at 10 K. These results indicate that the excess Fe acts as local moments at high temperatures and that the ferromagnetic interaction of the excess Fe would develop below 50 K. Figure 1(b) presents the temperature dependence of the magnetization for Te-annealed FeTe$_{0.6}$Se$_{0.4}$ measured at 7 T. It exhibits an almost temperature-independent behavior, which implies that Pauli paramagnetism is dominant. The inset of Figure 1(b) presents an $M-H$ plot for Te-annealed FeTe$_{0.6}$Se$_{0.4}$. We can clearly observe ferromagnetism, which could have emerged from an impurity phase other than excess Fe. To extract the ferromagnetic component from the obtained susceptibility of the Te-annealed sample, we deduced the residual magnetic moment at 0 Oe, which is the $y$-intercept of the linear part of the $M-H$ plot. Using the temperature dependence of the magnetic moment at 7 T ($M_{7T}(T)$) and 1 T ($M_{1T}(T)$), the residual magnetization $M_{res}(T)$ was calculated by linear extrapolation to zero field, \begin{equation} M_{res}(T) = M_{1T}(T) - \frac{M_{7T}(T) - M_{1T}(T)}{6\,\mathrm{T}} \times 1\,\mathrm{T}. \end{equation} Figure 1(c) presents the calculated residual magnetic moment, which emerges from the impurity phase. As observed in the figure, a clear phase transition can be detected at approximately 120 K. This is considered to be the Verwey transition of magnetite (Fe$_3$O$_4$)\cite{Verwey}. Note that there are no signatures of a transition around 120 K in the original FeTe$_{0.6}$Se$_{0.4}$ data, which suggests that the inclusion of the impurity phase (Fe$_3$O$_4$) was negligible. Because the saturation magnetization of magnetite is known to be approximately 90 emu/g at room temperature, we can estimate the amount of the magnetite impurity phase as 2.46 $\times$ 10$^{-3}$ mg in 5.34 mg of FeTe$_{0.6}$Se$_{0.4}$ (0.046 \%). The amount of the impurity phase does not depend on Se doping, but on the position of the samples. Because this impurity phase is absent in the as-grown samples, it is considered to have been introduced during the annealing process. In the following analysis, this ferromagnetic component was subtracted from the raw data; however, our conclusions do not depend on the details of the subtraction. \subsection{Estimation of characteristic temperatures} Figure 2(a) illustrates the temperature dependence of the magnetic susceptibility of the samples with various Se doping. It exhibits a rather large susceptibility (4-9 $\times$ 10$^{-6}$ emu/gOe), which is 10 times larger than that of the high-$T_c$ cuprate Bi$_2$Sr$_2$CaCu$_2$O$_{8-\delta}$\cite{watanabe}.
The observed large susceptibility is consistent with a previous report\cite{sus}. The absolute values of the magnetic susceptibility increase drastically with decreasing Se content. Because Pauli paramagnetism would be dominant in the magnetic susceptibility of FeTe$_{1-x}$Se$_x$, the increase in the magnetic susceptibility implies an increase in the density of states (DOS) at the Fermi level. In fact, such an increase in the DOS was observed in a specific heat measurement, where the electronic specific-heat coefficient in the normal state $\gamma_n$ increases as Se decreases\cite{HC}. Because the carrier density would not change drastically (Te is an isovalent substitution), the increase in the effective mass due to strong electron correlations would be realized on approaching a quantum critical point between the antiferromagnetic and superconducting phases ($x \sim 0.07$). The overall temperature dependence of the magnetic susceptibility in FeTe$_{1-x}$Se$_x$ is as follows. With decreasing temperature, the magnetic susceptibility decreases below $T^{**}_{\chi}$ (indicated by red arrows). With further decreasing temperature, a slight upturn can be observed below $T^{*}_{\chi}$ (indicated by blue arrows). To estimate the characteristic temperatures $T^{*}_{\chi}$, $T^{**}_{\chi}$, and $T_{scf}$ in the magnetic susceptibility, we plotted the temperature derivative of the magnetic susceptibility vs temperature for $x$ = 0.2, 0.3, and 0.4 in Figures 2(b), 2(c), and 2(d), respectively. As observed in these figures, the temperature derivative changes its slope twice with decreasing temperature. Hence, we present solid straight lines that are linear extrapolations in each temperature range. $T^{*}_{\chi}$ and $T^{**}_{\chi}$ were defined from the intersection points of the extrapolation lines. Similarly, $T_{scf}$ was determined as the temperature at which the temperature derivative starts to deviate from the linear extrapolation line. The obtained temperatures are consistent with the observations of our magnetotransport measurements\cite{MR}. Figure 3 presents the $x-T$ phase diagram for FeTe$_{1-x}$Se$_x$, including the data from a previous study\cite{MR}. $T^{**}_{\chi}$ corresponds to $T^*_{\rho ab}$, at which $\rho_{ab}$ reaches its broad maximum. The decrease in the magnetic susceptibility below $T^{**}_{\chi}$ is considered to be due to the decrease in the DOS caused by the opening of the pseudogap. In strongly correlated iron chalcogenides, such as FeTe$_{1-x}$Se$_x$, the existence of an orbital-selective Mott phase (OSMP) and a related incoherent-to-coherent transition were theoretically proposed\cite{OSMP01,OSMP02}. As explained in a previous report\cite{MR}, $T^{**}_{\chi}$ would correspond to the transformation from the OSMP to the metallic state. Recently, a similar phase diagram was proposed using ARPES measurements\cite{PhaseDiagram}. When the states become coherent, the Fermi surfaces become well defined with some type of band hybridization, which would cause the pseudogap to open via interband nesting\cite{MR}. On the other hand, the other characteristic temperature $T^{*}_{\chi}$ appears to correspond to $T^*_{\rho c}$, below which $\rho_{c}$ exhibits a typical plateau. As mentioned in a previous report, the electron carriers would participate in charge transport below this temperature; hence, the magnetic susceptibility would increase due to the increase in the DOS\cite{MR}. As observed in Fig. 2(a), the magnetic susceptibility decreases again below $T_{scf}$ when the temperature is further decreased (indicated by green arrows).
We consider that this decrease in the magnetic susceptibility is attributed to diamagnetism originating from superconducting fluctuations. The estimated $T_{scf}$ values from Fig. 2(b)-(d) were $\sim$ 57 K for $x$ = 0.4, $\sim$ 45 K for $x$ = 0.3, and $\sim$ 31 K for $x$ = 0.2. Notably, they were 3.9, 3.2, and 2.4 times larger than the corresponding $T_c$ for $x$ = 0.4, $x$ = 0.3, and $x$ = 0.2, respectively. In the BCS-BEC crossover regime, the effect of superconducting fluctuations is expected to be enhanced, as Cooper pairs can be formed at higher temperatures than $T_c$\cite{BCS-BEC01}. These results indicate that the FeTe$_{1-x}$Se$_x$ system is indeed in the BCS-BEC crossover regime. \section{Conclusion} To estimate characteristic temperatures, such as $T^{**}_{\chi}$, $T^{*}_{\chi}$, and $T_{scf}$, we measured the magnetic susceptibility of high-quality Te-annealed FeTe$_{1-x}$Se$_x$. The absolute value of the magnetic susceptibility increased as the Se content decreased, which is considered to be due to the increase in the DOS as the system approaches a quantum critical point between the antiferromagnetic and superconducting phases. We observed a decrease in the magnetic susceptibility below $T^{**}_{\chi}$, which is considered to be due to the opening of the pseudogap. On the other hand, the increase in the magnetic susceptibility below $T^{*}_{\chi}$ is attributed to the increase in the DOS due to the participation of electron carriers. These results agree with our previous magnetotransport study\cite{MR}. Moreover, we inferred that the $T_{scf}$ values were substantially high, reaching 57, 45, and 31 K for $x$ = 0.4, 0.3, and 0.2, respectively. In particular, $T_{scf}$ was 3.9 times larger than $T_c$ for $x$ = 0.4. These high $T_{scf}$ values are consistent with the behavior of the BCS-BEC crossover regime. In this study, we determined that our Te-annealed samples contained a ferromagnetic impurity phase. Although the amount of this impurity phase is negligible, it may conceal the intrinsic behaviors of superconductivity at low fields. In the future, we have to improve the annealing process to completely eliminate the ferromagnetic impurity and reveal more detailed magnetic properties. \section*{Acknowledgment} This work was supported by JSPS KAKENHI Grant No. 20K03849 and the Hirosaki University Grant for Distinguished Researchers from the fiscal year 2017 to 2018. \newpage \section*{Figure captions} \begin{figure}[tbh] \begin{center} \end{center} \caption{(a): Temperature dependence of the magnetic susceptibility for the as-grown sample of $x$ = 0.4. (b): Temperature dependence of the magnetization for the Te-annealed sample of $x$ = 0.4 measured at 7 T. The blue line presents the data from which the ferromagnetic component of Fe$_3$O$_4$ was subtracted. Insets of Figs. 1(a) and 1(b): Magnetic field dependence of the magnetization. (c): Temperature dependence of the calculated residual magnetic moment, which indicates the magnetic moment of the impurity phase magnetite (Fe$_3$O$_4$).} \label{f1} \end{figure} \begin{figure}[tbh] \begin{center} \end{center} \caption{(a): Temperature dependence of the magnetic susceptibility of the Te-annealed samples with various Se doping measured at 7 T. (b), (c), (d): Temperature derivative of the magnetic susceptibility vs temperature for $x$ = 0.2, 0.3, and 0.4, respectively.
The solid straight lines represent linear extrapolations over certain temperature ranges, shown as guides to the eye.} \label{f2} \end{figure} \begin{figure}[tbh] \begin{center} \end{center} \caption{Characteristic temperatures $T^{*}_{\chi}$, $T^{**}_{\chi}$, and $T_{scf}$ vs Se concentration $x$ for Te-annealed FeTe$_{1-x}$Se$_x$, plotted together with data from our previous magnetotransport study\cite{MR}. The open circles of $T^{*}_{RH}$ are replotted from Ref. \cite{sun}.} \label{f3S} \end{figure} \newpage
\section{Introduction} Superconductivity is a prominent and extensively-studied quantum many-body phenomenon because of its fundamental importance, widespread occurrence in nature, and technological applications. One of the most active contemporary research directions in condensed matter physics is the superconductivity in magic-angle moir\'e graphene systems including magic-angle twisted bilayer graphene \cite{Cao2018_tbg1,Cao2018_tbg2,Yankowitz2019,Lu2019}, magic-angle twisted trilayer graphene \cite{Hao2021electric,Park2021tunable,Cao2021,Liu2021coulomb}, and magic-angle twisted graphene multilayers ($n>3$) \cite{Zhang2021ascendance,Park2021MAMG,Burg2022emergence}. The single-particle bands in such systems are tuned to be nearly flat \cite{Bistritzer,Li2019,Khalaf2019} such that many-body effects can become significant. It is worth mentioning that robust reproducible superconductivity has not been systematically established in other moir\'e systems, making magic-angle twisted graphene systems distinctive. In addition, the extensively studied regular monolayer graphene is not known to be superconducting (because the electron-phonon coupling \cite{Hwang2008,Efetov2010} is not significant enough to produce an observable $T_c$ for doped monolayer graphene), adding considerable excitement to the unexpected discovery of superconductivity in moir\'e magic-angle twisted graphene layers. Twisting, a moir\'e flat band, or a magic angle, however, is not an essential condition for superconductivity in graphene-based materials, as rhombohedral trilayer graphene (RTG) \cite{Zhou2021,Zhou2021_SC_RTG} and Bernal bilayer graphene (BBG) \cite{Zhou2021_BBG} also demonstrate robust superconducting behavior in recent experimental studies. There are two distinct superconducting phases in RTG, termed SC1 and SC2. The superconductivity in SC1 is suppressed by an in-plane magnetic field within the Pauli limit, consistent with a spin-singlet pairing; SC2 is likely to be a (spin-polarized) spin-triplet pairing since the superconductivity persists under a large in-plane magnetic field violating the Pauli limit. By contrast, superconductivity in BBG is rather mysterious: a sufficiently large in-plane magnetic field is required to induce a (spin-polarized) spin-triplet superconducting state in BBG. Since similar spin-singlet/spin-triplet superconductivity has been observed experimentally in magic-angle moir\'e graphene systems \cite{Cao2018_tbg1,Cao2018_tbg2,Yankowitz2019,Lu2019,Hao2021electric,Park2021tunable,Cao2021,Liu2021coulomb,Zhang2021ascendance,Park2021MAMG,Burg2022emergence}, a reasonable question is whether there exists a universal pairing mechanism for superconductivity in all graphene-based materials, with and without moir\'e structure. Although, in principle, it is possible that the observed superconductivity in various graphene multilayers, twisted and untwisted, arises from different underlying mechanisms, a simple Occam's razor consideration implies this to be unlikely, as it would be akin to different superconducting metals (e.g., Al, Pb, Sn, Nb) somehow having different underlying mechanisms. In this context, acoustic phonons are the most natural candidate for pairing since Cooper pairing in most superconductors in nature is caused by acoustic phonons, and in the untwisted systems with no moir\'e flat bands, the most obvious arguments in favor of strong-correlation-induced superconductivity become questionable.
In the current work, we develop a detailed theory for acoustic phonon-induced superconductivity in `moir\'eless' graphene multilayers, where no twist is involved between the layers. Before discussing a potential universal mechanism, it is important to emphasize that superconductivity in graphene-based materials is distinct from other systems (e.g., conventional metals) because of the valley and sublattice degrees of freedom. For example, electron-acoustic-phonon coupling in graphene has an enlarged $SU(2)\times SU(2)$ symmetry due to the approximate valley symmetry \cite{Wu2019_phonon}. As a result, the acoustic-phonon-mediated intervalley pairings have a singlet-triplet degeneracy, and the intrasublattice pairings are typically favored \cite{Wu2019_phonon,Chou2021correlation}. Therefore, it is natural to ask whether acoustic-phonon-mediated pairings can account for superconductivity in graphene-based materials \cite{Wu2019_phonon,Wu2020_TDBG}. Previously, we showed that acoustic-phonon-mediated superconductivity can explain qualitatively and semi-quantitatively the distinct superconducting phenomenology reported in RTG \cite{Chou2021_RTG_SC} and BBG \cite{Chou2021_BBG}. Since the band structures of RTG and BBG are simpler and better established compared with twisted moir\'e graphene systems, it is easier to make direct (semi-)quantitative comparisons between theory and experiment here. Besides the acoustic-phonon mechanism, which we consider, a number of alternative theoretical ideas focusing on inter-electron interactions have also been proposed for RTG \cite{Chatterjee2021,Ghazaryan2021,Dong2021,Cea2021,Szabo2021,You2021,Qin2022,Dai2022} and BBG \cite{Szabo2021BBG}. We mention here that the case for robust acoustic phonon mediated superconductivity in twisted graphene systems has already been made in the literature, based on the enhancement of the effective electron-phonon coupling in moir\'e systems by virtue of the suppression of the graphene Fermi velocity \cite{Wu2019_phonon,Wu2020_TDBG}, but the current work, by contrast, is specifically on moir\'eless graphene multilayers. In this work, we investigate in considerable detail the acoustic-phonon-mediated superconductivity in moir\'eless graphene multilayers including BBG, RTG, and ABCA-stacked tetralayer graphene \cite{Kerelsky2021moireless} (the latter has not yet been studied experimentally, so our results for it constitute a prediction). We incorporate the $\vex{k}\cdot\vex{p}$ band structure and the Coulomb repulsion in our phonon-induced theory of superconductivity. For RTG and ABCA tetralayer graphene, we find that robust observable superconductivity ($T_c>20$ mK) can be realized for a wide range of doping, even for doping away from the Van Hove singularity (VHS), while observable but rather fragile superconductivity is obtained only near the VHS in BBG. Thus, we predict the existence of a more generic doping-independent (and also more robust) superconductivity in RTG and ABCA than in BBG. Our work, while being in agreement with the existing experimental observations, also provides a number of falsifiable predictions based on electron-acoustic-phonon coupling incorporating Coulomb repulsion, and we believe, based on our finding a reasonable agreement between our theory and experiment, that acoustic phonons are the main mediators of superconductivity for graphene-based materials in general.
A unique feature of graphene is the fact that acoustic phonons can lead to both singlet and triplet superconductivity because of the enlarged $SU(2)\times SU(2)$ symmetry enabled by the valley degrees of freedom. The fact that graphene superconductivity has been observed both in moir\'e and in moir\'eless multilayers also argues in favor of acoustic phonon effects rather than correlation effects since the flatband enhancement of correlations is absent in the non-moir\'e untwisted multilayers. The rest of the paper is organized as follows: In Sec.~\ref{Sec:Model}, we introduce the $\vex{k}\cdot\vex{p}$ band model, the electron-acoustic-phonon coupling, and the Coulomb interaction. We discuss how to incorporate Coulomb repulsion in the theory of acoustic-phonon-mediated graphene superconductivity and present a simplified approach in Sec.~\ref{Sec:SC}. In Sec.~\ref{Sec:Results}, the main numerical results are presented and discussed in the context of experimental results. We conclude with a brief discussion in Sec.~\ref{Sec:Discussion}. A set of six appendices (A-F) complements the main text by providing various technical details used in our work. \section{Microscopic Model}\label{Sec:Model} Superconductivity is crucially dependent on density of states (DOS) and microscopic interactions (e.g., electron-phonon, electron-electron, etc). In this section, we discuss the single-particle band structures, and interactions used in this work. We focus on untwisted moir\'eless pristine BBG, RTG, and ABCA-stacked tetralayer graphene. \begin{figure}[t!] \includegraphics[width=0.45\textwidth]{ABCA.pdf} \caption{Lattice structure of ABCA-stacked tetralayer graphene. (a) Top view. For each layer, we illustrate a hexagon to specify the relative position in the $xy$ plane. 1A, 2A, 3A, and 4A (1B, 2B, 3B, and 4B) denote the sublattice A (B) in the layer 1, 2, 3, and 4 respectively. Note that the lattice points in the first layer and the fourth layer are at the same $xy$ positions. (b) The cross-section view. At K and $-$K points, the intra-layer hybridizations can be ignored, and the nearest neighbor inter-layer couplings generate dimerization in 1B-2A, 2B-3A, 3B-4A bonds (black dashed bonds). 1A and 3B sites are the low-energy sites in this simplified picture. The BBG (RTG) structure can be derived from ABCA tetralayer graphene with only the first 2 (3) layers. } \label{Fig:ABCA} \end{figure} \subsection{Single-particle band structure} We are interested, following the experimental systems, in the low-doping graphene multilayers in the presence of a displacement field, which induces a tunable band gap at charge neutrality point. The single-particle bands near the $K$ and $-K$ valleys can be described by $\vex{k}\cdot\vex{p}$ band models. Generally, the single-particle Hamiltonian is described by \begin{align}\label{Eq:H_lowE} \hat{H}_{n,0}=\sum_{\tau}\sum_{\vex{k}}\hat{\Psi}_{n,\tau}^{\dagger}(\vex{k})\hat{h}_{n,\tau}(\vex{k})\hat{1}_{s}\hat{\Psi}_{n,\tau}(\vex{k}), \end{align} where $\hat{h}_{n,\tau=\pm}(\vex{k})$ is a $2n\times 2n$ low-energy Hamiltonian near $\pm K$ valley, $n\ge 2$ is the number of layers, $\hat{1}_{s}$ is the identity matrix in the spin space, and $\hat{\Psi}_{n,\tau}(\vex{k})$ is a $4n$-component column vector with a valley quantum number $\tau$, made of the fermionic annihilation operator $\psi_{\tau\sigma l s}$ with sublattice $\sigma$, spin $s$, and layer $l$. 
The low-energy bands of the graphene multilayer systems considered here have large weight on the A sites of the top layer (1A) and the B sites of the bottom layer ($n$B). This property arises from the interlayer nearest-neighbor tunnelings, which tend to form dimerized bonds, as illustrated for ABCA tetralayer graphene in Fig.~\ref{Fig:ABCA}. One can obtain BBG (RTG) by considering just the first two (three) layers in Fig.~\ref{Fig:ABCA}. To gain some intuitive understanding, we construct an effective $2\times 2$ matrix given by \cite{Zhang2010} \begin{align}\label{Eq:h_two_band} \hat{h}_{n,+}'(\vex{k})\approx\left[\begin{array}{cc} \Delta_1 & C_n\left(\Pi_k^{\dagger}\right)^n\\ C_n\left(\Pi_k\right)^n & -\Delta_1 \end{array} \right], \end{align} where $2\Delta_1$ corresponds to the energy difference between the two low-energy sites induced by the displacement field, $C_n=v_0^n/\gamma_1^{n-1}$, $v_0$ is the graphene velocity, and $\gamma_1$ corresponds to the interlayer dimerization energy. The effective energy bands are described by $\mathcal{E}_{\vex{k}}'=\pm \sqrt{C_n^2|\vex{k}|^{2n}+\Delta_1^2}$, resulting in a divergent DOS $\rho(\mathcal{E})\propto |\mathcal{E}\pm \Delta_1|^{-1+1/n}$ near the band edges ($\pm \Delta_1$). Based on this heuristic estimate, we expect the DOS to get larger for a higher layer number ($n$). A more careful analysis should include additional hopping terms and crystal fields, which might substantially alter the results, for example by inducing VHSs away from the band edges. Regardless of these details, the low-energy bands are approximately layer and sublattice polarized. As a result, superconducting states with intralayer intersublattice pairings should be generically suppressed in the low-energy bands because one of the sublattices in each layer has higher energy. To obtain the low-energy band structure, we formally diagonalize the Hamiltonian in Eq.~(\ref{Eq:H_lowE}) as follows: \begin{align}\label{Eq:H_0_diagonalized} \hat{H}_0=\sum_{\tau=\pm}\sum_{b=1}^{2n}\sum_{s=\uparrow,\downarrow}\sum_{\vex{k}}\mathcal{E}_{\tau,b}(\vex{k})c^{\dagger}_{\tau b s}(\vex{k})c_{\tau b s}(\vex{k}), \end{align} where $\mathcal{E}_{\tau,b}(\vex{k})$ encodes the energy-momentum dispersion of the $b$th band in valley $\tau K$, and $c_{\tau b s}(\vex{k})$ is an electron annihilation operator of valley $\tau K$, the $b$th band, spin $s$, and momentum $\vex{k}$. The microscopic-basis operator $\psi_{\tau\sigma l s}$ and the band-basis operator $c_{\tau b s}$ obey $\psi_{\tau\sigma l s}(\vex{k})=\sum_b\Phi_{\tau b, \sigma l}(\vex{k})c_{\tau b s}(\vex{k})$, where $\Phi_{\tau b, \sigma l}(\vex{k})$ is the wavefunction of valley $\tau K$ and band $b$. In addition, the (spinless) time-reversal symmetry imposes further constraints: $\mathcal{E}_{+,b}(\vex{k})=\mathcal{E}_{-,b}(-\vex{k})$ and $\Phi_{+ b, \sigma l}(\vex{k})=\Phi_{- b, \sigma l}^*(-\vex{k})$. We use the $\vex{k}\cdot\vex{p}$ bands described in Appendix~\ref{App:Bands} and compute the DOS numerically for BBG, RTG, and ABCA tetralayer graphene, as shown in Fig.~\ref{Fig:DOS}. (See Appendix~\ref{App:Numerics} for a discussion of the numerical calculations.) Note that the DOS in BBG is much smaller than the DOS in RTG and ABCA tetralayer graphene. These differences in DOS imply that the screened Coulomb interaction might behave differently, since screening depends crucially on the DOS, as will be discussed in detail later. \begin{figure}[t!] \includegraphics[width=0.4\textwidth]{DOS.pdf} \caption{Density of states based on $\vex{k}\cdot\vex{p}$ models.
(a) BBG (b) RTG (c) ABCA. The numerical results are obtained on a $10^4\times 10^4$ momentum grid with a momentum spacing $\Delta k\approx 2\times 10^{-5}a_0^{-1}$. For a given $\mathcal{E}_F$, we determine $n_e$ based on the DOS profiles illustrated here. } \label{Fig:DOS} \end{figure} \subsection{Electron-phonon coupling} In graphene multilayers, the electron-optical-phonon couplings \cite{Wu2018} are generically suppressed because of the sublattice polarization in these systems \cite{Choi2019}. Thus, we focus on the in-plane longitudinal acoustic phonon, which is described by \begin{align} \hat{H}_{ph}=\sum_l\sum_{\vex{q}}\omega_{\vex{q}}a^{\dagger}_{l,\vex{q}}a_{l,\vex{q}}, \end{align} where $a_{l,\vex{q}}$ is the phonon annihilation operator of layer $l$ with momentum $\vex{q}$, $\omega_{\vex{q}}=v_{s}|\vex{q}|$ is the acoustic phonon dispersion, and $v_{s}$ is the sound velocity. For simplicity, we assume that the acoustic phonon modes are layer decoupled, i.e., the same as in monolayer graphene \cite{Hwang2008}. However, our qualitative results do not rely on this assumption. Within the well-known deformation-potential approximation, the electron-acoustic-phonon coupling \cite{Coleman2015introduction} is given by \begin{align}\label{Eq:H_ep} \hat{H}_{ep}=\frac{D}{\sqrt{\mathcal{A}}}\sum_{\vex{q},l}\sqrt{\frac{\hbar}{2\rho_m\omega_{\vex{q}}}}\left(-i\vex{q}\cdot\hat{e}_{\vex{q}}\right)\left(a_{l,\vex{q}}+a^{\dagger}_{l,-\vex{q}}\right)\hat{n}_{l}(-\vex{q}), \end{align} where $\hat{e}_{\vex{q}}^*=\hat{e}_{-\vex{q}}$ is the polarization vector, $\rho_m$ is the mass density of monolayer graphene, $D$ is the deformation potential, $\mathcal{A}$ is the area of the 2D system, and $\hat{n}_l(-\vex{q})=\sum_{\vex{k}}\sum_{\tau,\sigma,s}\psi^{\dagger}_{\tau,l,\sigma,s}(\vex{k})\psi_{\tau,l,\sigma,s}(\vex{k}-\vex{q})$. \subsection{Coulomb interaction} In addition to the electron-acoustic-phonon couplings, the electrons interact directly via the Coulomb repulsion, which is an important factor in determining the existence of superconductivity: no superconductivity would be possible if the repulsive Coulomb coupling overwhelmed the attractive interaction induced by acoustic phonons. We focus on the long-range component of the instantaneous Coulomb interaction, described by \begin{align}\label{Eq:H_C} \hat{H}_C=&\frac{1}{2\mathcal{A}}\sum_{\vex{q}}V_C(\vex{q})\sum_{l}\hat{n}_{l}(\vex{q})\sum_{l'}\hat{n}_{l'}(-\vex{q}) \end{align} where $V_C(\vex{q})$ encodes the Coulomb potential and $\hat{n}_l(\vex{q})=\sum_{\vex{k}}\sum_{\tau,\sigma,s}\psi^{\dagger}_{\tau,l,\sigma,s}(\vex{k})\psi_{\tau,l,\sigma,s}(\vex{k}+\vex{q})$. The long-range Coulomb potential given by Eq.~(\ref{Eq:H_C}) has an $SU(2)\times SU(2)$ symmetry, which results in a singlet-triplet degeneracy. However, the short-range contributions of the Coulomb potential might break the $SU(2)\times SU(2)$ symmetry down to a single $SU(2)$ symmetry \cite{Chatterjee2021}. In the experiments, the graphene multilayer system is sandwiched between two metallic plates which screen the Coulomb interaction. After solving the electrostatic problem using the image-charge approximation, we obtain \begin{align} V_{C}(\vex{q})=&\frac{2\pi e^2}{\epsilon |\vex{q}|}\tanh\left(\left|\vex{q}\right|d\right), \end{align} where $\epsilon$ is the dimensionless average background lattice dielectric constant and $d$ is the distance between the 2D system and the metallic plates.
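As a numerical illustration of the gate-screened potential above (a minimal sketch: $\epsilon=10$ and $d=20$nm anticipate the values used in Sec.~\ref{Sec:Results}, and $e^2/(4\pi\epsilon_0)\approx 1.44$ eV$\cdot$nm is the standard constant):

\begin{verbatim}
import numpy as np

E2 = 1.44  # e^2/(4*pi*eps0) in eV*nm

def V_C(q, eps=10.0, d=20.0):
    """Gate-screened Coulomb potential 2*pi*e^2/(eps*q)*tanh(q*d).
    q in 1/nm, d in nm; returns eV*nm^2."""
    return 2.0 * np.pi * E2 / (eps * q) * np.tanh(q * d)

for q in (1e-3, 1e-2, 1e-1, 1.0):
    bare = 2.0 * np.pi * E2 / (10.0 * q)  # without the gates
    print(f"q={q:g}/nm: V_C={V_C(q):8.3f} eV*nm^2 (bare {bare:9.3f})")
# q*d << 1: V_C saturates at 2*pi*e^2*d/eps; q*d >> 1: gates irrelevant
\end{verbatim}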
In addition to the gate screening, the large DOS in graphene multilayers results in significant intraband screening. To incorporate the intraband screening by the carriers themselves, we adopt the extensively used Thomas-Fermi approximation defined by \begin{align} \label{Eq:V_C_TF}V_{\text{TF}}(\vex{q},\mathcal{E}_F)=\frac{1}{\left[V_C(\vex{q})\right]^{-1}+\rho(\mathcal{E}_F)}=\frac{V_C(\vex{q})}{1+V_C(\vex{q})\rho(\mathcal{E}_F)} \end{align} where $\rho(\mathcal{E}_F)$ is the total DOS at the Fermi energy. The Thomas-Fermi approximation is the static limit of the random phase approximation, which is exact in the well-controlled limits of high density and/or many fermion flavors. When $V_{\text{C}}(\vex{q})\rho(\mathcal{E}_F)\gg 1$, $V_{\text{TF}}(\vex{q},\mathcal{E}_F)\approx 1/\rho(\mathcal{E}_F)$, which is independent of $\epsilon$ and $d$ (simply because, in this large-DOS limit, the screening by the carriers themselves dominates). In the graphene multilayers discussed here, the intraband process is the dominant mechanism for the screening of the Coulomb repulsion. We will discuss the interplay between phonon-mediated pairings and screened Coulomb repulsion next. \section{Phonon-Mediated Superconductivity incorporating Coulomb repulsion}\label{Sec:SC} To achieve a more quantitative understanding, it is customary to apply the Eliashberg theory \cite{Marsiglio2020eliashberg,Chubukov2020eliashberg} with the full frequency dependence of the problem, which is typically solved by intensive numerical methods. The frequency dependence is essential because the retarded effective attraction can overcome the instantaneous Coulomb repulsion even though the bare interaction is repulsive at all frequencies \cite{MorelAnderson1962,Coleman2015introduction,Marsiglio2020eliashberg,Chubukov2020eliashberg}. In the rest of this section, we present a simplified treatment without carrying out intensive numerics, incorporating both the acoustic-phonon attraction and the Coulomb repulsion, to solve for superconductivity in moir\'eless graphene multilayers. (The full numerical solution is presented in Appendix~\ref{App:Elia_numerics}.) We first discuss the effective BCS interaction and examine the retardation effect by comparing the phonon velocity with the estimated Fermi velocity. Then, we review the Eliashberg theory and present a simplified mean-field approach. We support our results by solving the Eliashberg theory numerically in Appendix \ref{App:Elia_numerics} and Fig.~\ref{Fig:Eliash}, where the qualitative agreement with our simplified, almost-analytical mean-field solution is shown. \subsection{BCS superconductivity and retardation} Electrons near the Fermi surface can attract each other via a phonon-mediated interaction. Such an attractive interaction can overcome the Coulomb repulsion and create Cooper pairs. This is the central idea of the BCS theory. To derive the phonon-mediated attraction, one starts with the electron-phonon couplings [given by Eq.~(\ref{Eq:H_ep})] and integrates out the phonon fields in the imaginary-time path integral.
The effective interaction is described by an action, \begin{align}\label{Eq:S_ph} \mathcal{S}_{\text{ph}}=-\frac{1}{2\beta\mathcal{A}}\sum_{\nu_n,\vex{q}}V_g(\nu_n,\vex{q})\sum_{l}\hat{n}_{l}(\nu_n,\vex{q})\hat{n}_{l}(-\nu_n,-\vex{q}), \end{align} where $\nu_n$ is the Matsubara frequency, $V_g(\nu_n,\vex{q})=g\frac{\omega_{\vex{q}}^2}{\omega_{\vex{q}}^2+\nu_n^2}$ is the phonon-mediated dynamical potential, $\omega_{\vex{q}}=v_s|\vex{q}|$, $v_s$ is the sound velocity, and $g=D^2/(\rho_mv_{s}^2)$ is the strength of the phonon-mediated attraction. The overall minus sign indicates the effective attraction mediated by acoustic phonons, and the effective attraction has an $SU(2)\times SU(2)$ symmetry, resulting in a singlet-triplet degeneracy in the pairing. To estimate $g$, we use $D=30$ eV, $\rho_m=7.6\times 10^{-8}$ g/cm$^2$ \cite{Efetov2010,Hwang2008}, and $v_s=2\times 10^6$ cm/s, obtaining $g\approx474$ meV$\cdot$nm$^2$ \cite{Wu2020_TDBG,Chou2021_RTG_SC}. Here, $D=30$ eV is based on the experimentally extracted value \cite{Efetov2010}, and it might be off by a factor of $2$ \cite{Wu2019_phonon,Hwang2008}. In this work, we use $g=g_0\equiv474$ meV$\cdot$nm$^2$ unless noted otherwise. Our qualitative results are independent of the choice of $D$ and $g$. To simplify the calculations, we adopt the single-band approximation, keeping only the band in which the Fermi energy $\mathcal{E}_F$ lies. This approximation is valid because the low-energy bands of the graphene multilayers are separated by a gap $\sim 2|\Delta_1|$ due to the applied displacement field, and the high-energy bands are at least $\sim 100$ meV away. The BCS channel of the phonon-mediated interaction [Eq.~(\ref{Eq:S_ph})] is given by \begin{align} \label{Eq:S_ph_b}\mathcal{S}_{\text{ph}}=\frac{-1}{\beta\mathcal{A}}\sum_{k,k'}V_g^{(b)}(k,k')\bar{c}_{+bs,k}\bar{c}_{-bs',-k}c_{-bs',-k'}c_{+bs,k'}, \end{align} where \begin{align} \label{Eq:V_ph_bar}V_g^{(b)}(k,k')=&g_{\vex{k},\vex{k}'}^{(b)}\frac{\omega_{\vex{k}-\vex{k}'}^2}{\omega_{\vex{k}-\vex{k}'}^2+\left(\omega_n-\omega_n'\right)^2},\\ \label{Eq:g_kk'}g_{\vex{k},\vex{k}'}^{(b)}=&g\sum_{\sigma,l}\left|\Phi_{+b;l\sigma}(\vex{k})\right|^2\left|\Phi_{+b;l\sigma}(\vex{k}')\right|^2, \end{align} $b$ is the index of the projected band, $c_{\tau b s,k}$ and $\bar{c}_{\tau b s,k}$ are the Grassmann variables representing the fermionic fields, $k=(\omega_n,\vex{k})$ denotes the frequency-momentum index (summation over the repeated spin indices $s,s'$ is implied), and $V_g^{(b)}(k,k')$ is the phonon-mediated BCS attractive potential after the single-band projection. Before we proceed, it is worthwhile to discuss the pairing symmetry in the low-energy bands of graphene multilayers. We consider only intervalley Cooper pairs here because $\mathcal{E}_{\tau,b}(\vex{k})\neq \mathcal{E}_{\tau,b}(-\vex{k})$ generically suppresses intravalley superconductivity \cite{Einenkel2011,Sun2021}. Following the classification scheme based on valley and sublattice degrees of freedom \cite{Wu2019_phonon,Wu2020_TDBG,Chou2021correlation,Chou2021_RTG_SC}, the intervalley pairing symmetry (i.e., $s$-, $p$-, $d$-, or $f$-wave) can be determined from the $\mathcal{C}_{3z}$ (threefold rotation about the hexagon center) and spin SU(2) symmetries. $s$-wave spin-singlet and $f$-wave spin-triplet pairings are intrasublattice; $p$-wave spin-triplet and $d$-wave spin-singlet pairings are intersublattice. For graphene multilayers, we find that the intralayer intersublattice pairings are strongly suppressed in the low-energy bands since one of the sublattices in each layer is at high energy.
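Before specializing to the relevant pairing channels, a quick unit check of the coupling strength quoted above (a one-line verification in SI units; physical constants are standard):

\begin{verbatim}
# g = D^2/(rho_m * v_s^2) with D = 30 eV, rho_m = 7.6e-8 g/cm^2,
# v_s = 2e6 cm/s, converted from J*m^2 to meV*nm^2
eV = 1.602e-19                          # J
D, rho_m, v_s = 30 * eV, 7.6e-7, 2.0e4  # J, kg/m^2, m/s
g = D**2 / (rho_m * v_s**2)             # J*m^2
print(g / (1e-3 * eV) * 1e18)           # -> ~474 meV*nm^2
\end{verbatim}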
Given this suppression, we focus only on the intralayer intrasublattice pairings, i.e., $s$-wave spin-singlet and $f$-wave spin-triplet pairings. In fact, $s$-wave spin-singlet and $f$-wave spin-triplet pairings are degenerate due to the $SU(2)\times SU(2)$ symmetry of the acoustic-phonon-mediated attraction. In the standard BCS approximation, the frequency dependence is suppressed completely. As such, $V_g^{(b)}(k,k')$ is reduced to $g_{\vex{k},\vex{k}'}^{(b)}$. Within the mean-field approximation, we derive the linearized gap equation as follows: \begin{align} \label{Eq:LGE_1}\Delta_{s's}(\vex{k})=&\frac{1}{\mathcal{A}}\sum_{\vex{k}'}g_{\vex{k},\vex{k}'}^{(b)}\frac{\tanh\left[\frac{\mathcal{E}_{+b}(\vex{k}')-\mathcal{E}_F}{2k_BT}\right]}{2\mathcal{E}_{+b}(\vex{k}')-2\mathcal{E}_F}\Delta_{s's}(\vex{k}'), \end{align} where $k_B$ is the Boltzmann constant, $\mathcal{E}_F$ is the Fermi energy, and the superconducting order parameter is defined by \begin{align} \label{Eq:Delta}\Delta_{s's}(\vex{k})=&\frac{1}{\mathcal{A}}\sum_{\vex{k}'}g_{\vex{k},\vex{k}'}^{(b)}\left\langle c_{-bs'}(-\vex{k}')c_{+bs}(\vex{k}')\right\rangle. \end{align} The transition temperature $T_c$ is determined by the highest $T$ such that Eq.~(\ref{Eq:LGE_1}) is satisfied. The obtained $T_c$ applies to both the $s$-wave spin-singlet and $f$-wave spin-triplet pairings because of the singlet-triplet degeneracy in the acoustic-phonon-mediated pairing. The validity of BCS theory relies on retardation, i.e., on the phonon velocity being smaller than the electron velocity. In such a case, the Migdal theorem applies, and vertex corrections can be ignored. However, the graphene multilayer systems contain VHSs in the low-energy bands, which can result in a small Fermi velocity, and our theory, which relies on the Migdal theorem, would break down if $v_s$ exceeded the Fermi velocity. To check this, we estimate the average Fermi velocity, $\bar{v}_F=2\sqrt{|n_e|}/(\hbar\sqrt{\pi}\rho)$, where $n_e$ is the carrier density and $\rho$ is the total DOS (incorporating spin and valley, assuming unpolarized states). In Fig.~\ref{Fig:vF}, we find that $\bar{v}_F$ is larger than the sound velocity $v_s$ (gray dashed line) at generic dopings, suggesting that the Migdal theorem and the BCS approximation hold generally in BBG, RTG, and ABCA tetralayer graphene. For doping densities with $\bar{v}_F<v_s$ (e.g., near the VHS), the non-adiabatic vertex corrections \cite{Cappelluti1996,Phan_2020} become important, and $T_c$ is generically suppressed by these vertex corrections, except deep in the anti-adiabatic limit \cite{Phan_2020}; close to the VHS, however, the vertex correction can increase $T_c$ \cite{Cappelluti1996}. We neglect all vertex corrections in the current work. For a fixed $|\Delta_1|$, one can see that $\bar{v}_F$ away from the VHS gets smaller for a larger $n$ (number of layers). This property is consistent with the effective two-band model description in Eq.~(\ref{Eq:h_two_band}), where the dispersion is approximately proportional to $|\vex{k}|^{2n}$ near the band edge. \begin{figure}[t!] \includegraphics[width=0.4\textwidth]{vF.pdf} \caption{Estimate of the average Fermi velocity $\bar{v}_F$ based on $\vex{k}\cdot\vex{p}$ bands. We use $\bar{v}_F=2\sqrt{|n_e|}/(\hbar\sqrt{\pi}\rho)$. (a) BBG (b) RTG (c) ABCA stacked tetralayer graphene.
} \label{Fig:vF} \end{figure} We numerically solve Eq.~(\ref{Eq:LGE_1}) and plot $T_c$ versus $n_e$ in Fig.~\ref{Fig:BCS} for BBG, RTG, and ABCA tetralayer graphene. The numerical parameters are provided in Appendix~\ref{App:Numerics}. The results show that an observable $T_c$ is produced for a wide range of doping in all systems, suggesting that acoustic phonons can induce superconductivity in these systems. We emphasize that $T_c$ is determined by a wide window of energy states near $\mathcal{E}_F$, not just the states precisely at $\mathcal{E}_F$ \cite{Lothman2017,Chou2021_RTG_SC}. Thus, the Fermi energy being precisely at the VHS is not crucial for the emergent superconductivity. Technically, this is because the kernel $\frac{\tanh\left[x/(2k_BT)\right]}{2x}$ in Eq.~(\ref{Eq:LGE_1}) has a finite width, falling off only as a power law in $x$ for $x\gg k_BT$. This is distinct from the Stoner-type instability, where the kernel is reduced to a Dirac delta function at $T=0$. Typically, the $T_c$ values predicted in Fig.~\ref{Fig:BCS}, without any Coulomb repulsion effects, overestimate the actual $T_c$ because Coulomb effects suppress $T_c$. To provide quantitative predictions, the Coulomb repulsion has to be incorporated. Next, we turn to a framework incorporating both the phonon-mediated attraction and the Coulomb repulsion. \begin{figure}[t!] \includegraphics[width=0.45\textwidth]{BCS.pdf} \caption{Numerical $T_c$ based on pure electron-acoustic-phonon pairing. (a) BBG (b) RTG (c) ABCA. We solve Eq.~(\ref{Eq:LGE_1}) with 5000 energy levels from a fine momentum grid with spacing $\Delta k \approx0.002 a_0^{-1}$. (a) is the same as Ref.~\cite{Chou2021_BBG}. (b) is slightly different from Ref.~\cite{Chou2021_RTG_SC} (at low doping) due to the finer momentum mesh and the way of determining $n_e$. } \label{Fig:BCS} \end{figure} \subsection{Eliashberg theory and renormalization of Coulomb interaction} To investigate the interplay between the phonon-mediated attraction and the direct Coulomb repulsion, the frequency dependence, which is ignored in BCS theory, should be taken into account. We review the celebrated Eliashberg theory \cite{Marsiglio2020eliashberg,Chubukov2020eliashberg} within the single-band approximation (projection onto the $b$th band) in the following. There are two sets of equations in Eliashberg theory: a self-consistent equation determining the Eliashberg self-energy and another self-consistent equation determining the order parameter. See Appendix~\ref{App:Elia_Th} for a derivation based on the path integral. The main results are summarized in the following. We focus on $T\approx T_c$, where the order parameter is infinitesimal. In such a situation, the Eliashberg self-energy is determined by \begin{align}\label{Eq:Eliashberg_Xi} i\Xi_{+s}(k')=&\frac{1}{\beta \mathcal{A}}\sum_{k}\frac{- W(k',k)}{-i\omega_n+\mathcal{E}_{+,b}(\vex{k})-\mathcal{E}_F+i\Xi_{+s}(k)}, \end{align} where $i\Xi_{+s}(k)$ is the Eliashberg self-energy of valley $+K$ and spin $s$, $W(k,k')=V^{(b)}_g(k,k')-V^{(b)}_{\text{TF}}(k,k')$, and $V^{(b)}_{\text{TF}}(k,k')$ denotes Eq.~(\ref{Eq:V_C_TF}) after projecting onto the $b$th band. The Eliashberg self-energy can be written as $i\Xi_{+s}(k)=\left(-Z_k+1\right)i\omega_n+\chi_k$, where $Z_k$ is the wavefunction renormalization and $\chi_k$ encodes the dispersion renormalization and the quasiparticle lifetime.
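Before writing down the gap equation, the retardation built into the kernel $V_g$ can be made explicit on the Matsubara axis (a minimal sketch; the conversion $\hbar v_s\approx 1.3\times 10^{-2}$ eV$\cdot$nm from $v_s=2\times 10^6$ cm/s is our own bookkeeping):

\begin{verbatim}
import numpy as np

def V_g(nu_n, q, g=0.474, hbar_vs=1.3e-2):
    """Phonon kernel g*w_q^2/(w_q^2 + nu_n^2); q in 1/nm, nu_n in eV,
    g in eV*nm^2, hbar_vs in eV*nm."""
    w_q = hbar_vs * q
    return g * w_q**2 / (w_q**2 + nu_n**2)

kB_T = 8.617e-5  # kB*T at T = 1 K, in eV
nu = 2.0 * np.pi * kB_T * np.arange(6)  # bosonic Matsubara frequencies
print(np.round(V_g(nu, q=0.1), 4))      # attraction decays with |nu_n|
\end{verbatim}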
Using the Eliashberg self-energy, the linearized gap equation is expressed as \begin{align}\label{Eq:LGE_Eliash} \Delta_{ss'}(k')=\frac{1}{\beta\mathcal{A}}\sum_{k}\frac{W(k',k)\Delta_{ss'}(k)}{\left(Z_k\omega_n\right)^2+\left[\mathcal{E}_{+,b}(\vex{k})-\mathcal{E}_F+\chi_k\right]^2}, \end{align} where we have ignored the infinitesimal $|\Delta_{ss'}(k)|$ term in the denominator. To simplify the calculations, we set $Z_k=1$ and ignore $\chi_k$. Equation (\ref{Eq:LGE_Eliash}) then becomes a frequency-dependent BCS gap equation given by \begin{align}\label{Eq:LGE_freq} \Delta_{ss'}(k')=\frac{1}{\beta\mathcal{A}}\sum_{k}\frac{W(k',k)\Delta_{ss'}(k)}{\omega_n^2+\left[\mathcal{E}_{+,b}(\vex{k})-\mathcal{E}_F\right]^2}. \end{align} This approximation is valid in the weak electron-phonon coupling limit, which certainly applies to the multilayer graphene systems under consideration in the current work. Our qualitative results do not rely on this assumption. Note that Eq.~(\ref{Eq:LGE_freq}) reduces to the frequency-independent BCS gap equation, Eq.~(\ref{Eq:LGE_1}), after suppressing the frequency dependence in $\Delta$ and $W$. \begin{figure}[t!] \includegraphics[width=0.45\textwidth]{Ladder_resum.pdf} \caption{Diagrammatic representation of the self-consistent ladder equation for the renormalization from high-energy states. The single wiggly lines denote the bare interaction $V$; the double wiggly lines denote the renormalized interaction $\tilde{V}$; the solid lines with arrows denote the electron propagators. Note that $k$, $k'$, and $p$ are in valley $K$ while $-k$, $-k'$, and $-p$ are in valley $-K$. } \label{Fig:Ladder} \end{figure} Solving the integral equation defined by Eq.~(\ref{Eq:LGE_Eliash}) or (\ref{Eq:LGE_freq}) is a highly challenging computational task since a large momentum mesh and a large frequency mesh are required. Our goal here is to map the frequency-dependent Eliashberg theory onto an effective frequency-independent BCS theory incorporating the so-called $\mu^*$ effect of the Coulomb repulsion. To derive the $\mu^*$ effect, we first assume that the phonon-mediated attraction is nonzero only around the Fermi level in the low-frequency regime ($|\omega_n|,|\omega_n'|<\omega_c$), while the Coulomb repulsion is essentially frequency independent. Now, we rewrite the frequency-dependent BCS gap equation [given by Eq.~(\ref{Eq:LGE_freq})] as follows: \begin{align}\label{Eq:LGE_two_regime} \Delta(k)=\frac{1}{\beta\mathcal{A}}\sum_{k'}\left[\chi_C(k;k')+\chi_{\text{ph}}(k;k')\right]\Delta(k'), \end{align} where \begin{align} \Delta(k)=&\Theta(\omega_c-|\omega_n|)\Delta_{<}(k)+\Theta(|\omega_n|-\omega_c)\Delta_{>}(k),\\ \chi_C(k;k')=&-V^{(b)}_{\text{TF}}(k,k')\frac{1}{\omega_n'^2+\left[\mathcal{E}_{+,b}(\vex{k}')-\mathcal{E}_F\right]^2},\\ \chi_{\text{ph}}(k;k')=&V^{(b)}_g(k,k')\frac{\Theta(\omega_c-|\omega_n|)\Theta(\omega_c-|\omega_n'|)}{\omega_n'^2+\left[\mathcal{E}_{+,b}(\vex{k}')-\mathcal{E}_F\right]^2}.
\end{align} If we assume that the high-frequency ($|\omega_n|>\omega_c$) gap function does not depend on $\omega_n$, i.e., $\Delta_>(k)=\Delta_{\infty}(\vex{k})$, Eq.~(\ref{Eq:LGE_two_regime}) reduces to two coupled equations as follows: \begin{align} \nonumber\Delta_<(k)=&\frac{1}{\beta\mathcal{A}}\sum_{k',|\omega_n'|<\omega_c}\left[\chi_C(k,k')+\chi_{\text{ph}}(k,k')\right]\Delta_{<}(k')\\ \label{Eq:Delta_<}&+\frac{1}{\beta\mathcal{A}}\sum_{k',|\omega_n'|>\omega_c}\chi_C(k,k')\Delta_{\infty}(\vex{k}'),\text{ for }|\omega_n|<\omega_c,\\ \nonumber\Delta_{\infty}(\vex{k})=&\frac{1}{\beta\mathcal{A}}\sum_{k',|\omega_n'|<\omega_c}\chi_C(k,k')\Delta_{<}(k')\\ \label{Eq:Delta_inf}&+\frac{1}{\beta\mathcal{A}}\sum_{k',|\omega_n'|>\omega_c}\chi_C(k,k')\Delta_{\infty}(\vex{k}'),\text{ for }|\omega_n|>\omega_c. \end{align} Formally, one can eliminate $\Delta_{\infty}(\vex{k})$ and derive an effective gap equation \cite{Marsiglio2020eliashberg} as follows: \begin{align}\label{Eq:LGE_eff} \Delta_<(k)=&\frac{1}{\beta\mathcal{A}}\sum_{k',|\omega_n'|<\omega_c}\left[\tilde{\chi}_C(k,k')+\chi_{\text{ph}}(k,k')\right]\Delta_{<}(k'), \end{align} where $\tilde{\chi}_C(k,k')$ encodes the Coulomb repulsion after integrating out the high-frequency degrees of freedom. Note that $\chi_{\text{ph}}(k,k')$ is unchanged during this process, as $\chi_{\text{ph}}(k,k')=0$ in the high-frequency regime. The renormalization from the high-energy states can also reduce the Coulomb repulsion in the BCS channel, which we treat by solving the self-consistent ladder equation \cite{Coleman2015introduction} shown in Fig.~\ref{Fig:Ladder}. The self-consistent ladder Dyson equation corresponds to an algebraic equation as follows: \begin{align}\label{Eq:DysonE} \tilde{V}(k',k)=V(k',k)-\frac{1}{\beta \mathcal{A}}\sum_{\substack{\nu_n,\vex{q},\\ \omega_c<|\nu_n|<\Lambda,\\ |\tilde{\mathcal{E}}_{\vex{q}}|<\Lambda}}\frac{\tilde{V}(k',q)V(q,k)}{\nu_n^2+\tilde{\mathcal{E}}_{\vex{q}}^2}, \end{align} where $\tilde{\mathcal{E}}_{\vex{q}}=\mathcal{E}_{+,b}(\vex{q})-\mathcal{E}_F$, $\tilde{V}$ is the renormalized interaction, and $V$ is the bare interaction. This is equivalent to deriving $\tilde{\chi}_C$ in Eq.~(\ref{Eq:LGE_eff}). If we ignore the momentum dependence of the screened Coulomb interaction and use $U_0(\mathcal{E}_F)\equiv V_{\text{TF}}(k_F;\mathcal{E}_F)$, the renormalized interaction is given by \begin{align}\label{Eq:U_R} U_R(\mathcal{E}_F)=\frac{U_0(\mathcal{E}_F)}{1+U_0(\mathcal{E}_F)\Gamma(\mathcal{E}_F;\omega_c;\Lambda)}, \end{align} where $\Gamma(\mathcal{E}_F;\omega_c;\Lambda)$ encodes the renormalization from the energies satisfying $\omega_c<|\mathcal{E}_{\tau b}(\vex{k})-\mathcal{E}_F|<\Lambda$, $\omega_c=2v_sk_F$, and $\Lambda$ is the energy cutoff. We discuss how to numerically evaluate $\Gamma$ for arbitrary band structures in Appendix~\ref{App:Gamma}. If we assume a constant DOS ($\rho_0$), the well-established $\mu^*$ formula \cite{Coleman2015introduction} is reproduced, \begin{align} \mu^*=\frac{\mu}{1+\mu\ln(\Lambda/\omega_c)}, \end{align} where $\mu^*=U_R(\mathcal{E}_F) \rho_0$ and $\mu=U_0(\mathcal{E}_F) \rho_0$ are the dimensionless renormalized and bare interactions, respectively. \subsection{BCS theory with effective attraction}\label{Sec:g_star} \begin{figure}[t!] \includegraphics[width=0.4\textwidth]{mu_star.pdf} \caption{Effective attraction $g^*$ in unpolarized normal states. (a) BBG (b) RTG (c) ABCA tetralayer graphene.
We choose the energy cutoff $\Lambda =\min(2|\Delta_1|,100\text{meV})$ and estimate $k_F$ by $\sqrt{4\pi |n_e|/f}$, where $f$ is the spin-valley degeneracy factor. } \label{Fig:mu_star} \end{figure} \begin{figure}[t!] \includegraphics[width=0.4\textwidth]{mu_star_SP.pdf} \caption{Effective attraction $g^*$ in spin-polarized normal states. (a) BBG (b) RTG (c) ABCA tetralayer graphene. We choose the energy cutoff $\Lambda =\min(2|\Delta_1|,100\text{meV})$ and estimate $k_F$ by $\sqrt{4\pi |n_e|/f}$, where $f$ is the spin-valley degeneracy factor. } \label{Fig:mu_star_SP} \end{figure} To achieve superconductivity, the phonon-mediated attraction must be stronger than the renormalized Coulomb repulsion in the low-energy regime, so that effective Cooper pairing may occur; the pairs then condense into the symmetry-broken superconducting BCS ground state. Equation~(\ref{Eq:U_R}) provides a quantitative estimate of the renormalized Coulomb repulsion within an energy window $[\mathcal{E}_F-\omega_c,\mathcal{E}_F+\omega_c]$. We can construct an effective BCS theory by replacing $g$ with the effective interaction $g^*=g-U_R(\mathcal{E}_F)$. We note that $g^*>0$ is a necessary condition for superconductivity, and the new gap equation is given by \begin{align} \label{Eq:LGE_mustar}\Delta_{s's}(\vex{k})=&\frac{1}{\mathcal{A}}\sum_{\vex{k}'}g_{\vex{k},\vex{k}'}^*\frac{\tanh\left[\frac{\mathcal{E}_{+b}(\vex{k}')-\mathcal{E}_F}{2k_BT}\right]}{2\mathcal{E}_{+b}(\vex{k}')-2\mathcal{E}_F}\Delta_{s's}(\vex{k}'), \end{align} where \begin{align} \label{Eq:g_kk'_mustar}g_{\vex{k},\vex{k}'}^{*}=&g^*\sum_{\sigma,l}\left|\Phi_{+b;l\sigma}(\vex{k})\right|^2\left|\Phi_{+b;l\sigma}(\vex{k}')\right|^2. \end{align} In Eq.~(\ref{Eq:LGE_mustar}), we have ignored the explicit frequency dependence and mapped the frequency-dependent gap equation [Eq.~(\ref{Eq:LGE_freq})] to an effective BCS (frequency-independent) gap equation incorporating the $\mu^*$ effect. We emphasize the obvious fact that superconductivity can only emerge if the effective interaction is attractive, i.e., $g > U_R(\mathcal{E}_F)$. Also, $T_c$ obviously depends on the relative strengths of the Coulomb repulsion $U_R(\mathcal{E}_F)$ and the phonon-induced attraction $g$. The effective BCS approach here allows us to predict $T_c$ quantitatively without solving the extremely numerically demanding frequency-dependent Eliashberg equations. Strictly speaking, the Coulomb repulsion has a different form of matrix element after projection to a single band. However, the difference is negligible because of the layer-sublattice polarization in the low-energy bands of graphene multilayers. We therefore stick to the current simplified approach, which also avoids possible large uncontrolled numerical errors in trying to solve the full frequency-dependent self-consistent Eliashberg theory in a brute-force computational approach. The value of $g^*$ depends on the ``isospin'' polarization of the normal states. We discuss the unpolarized (four-fold degenerate) normal states in Fig.~\ref{Fig:mu_star} and the spin-polarized (two-fold degenerate) normal states in Fig.~\ref{Fig:mu_star_SP}. The $g^*$ for unpolarized normal states is larger than the $g^*$ for spin-polarized normal states because the Thomas-Fermi (intraband) screening crucially depends on the DOS in graphene multilayers. For BBG, $g^*$ is positive only in the vicinity of the VHS, indicating that superconductivity is most likely found near the VHS \cite{Chou2021_BBG}.
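The DOS dependence of $g^*$ can be illustrated with a constant-DOS toy computation (all numbers below are illustrative stand-ins, not the band-structure values entering our figures; $\Lambda=60$ meV mimics $2|\Delta_1|$ at $\Delta_1=30$ meV, and $\omega_c=2$ meV a typical $2v_sk_F$):

\begin{verbatim}
import numpy as np

G0 = 0.474  # eV*nm^2, bare phonon-mediated attraction

def g_star(rho0, U0, Lam=0.06, omega_c=0.002, g=G0):
    """g* = g - U_R with U_R from the mu* (ladder) renormalization,
    for a constant DOS rho0 [1/(eV*nm^2)] and screened Coulomb U0
    [eV*nm^2]."""
    mu = U0 * rho0
    mu_star = mu / (1.0 + mu * np.log(Lam / omega_c))
    return g - mu_star / rho0

for rho0 in (0.5, 2.0, 8.0):       # BBG-like (small) to RTG-like (large)
    U0 = 1.0 / (1.0 / 3.0 + rho0)  # crude Thomas-Fermi 1/(V_C^-1 + rho0)
    print(f"rho0 = {rho0}: g* = {g_star(rho0, U0):+.3f} eV*nm^2")
# a larger DOS screens the repulsion more strongly and yields a larger g*
\end{verbatim}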
For RTG and ABCA tetralayer graphene, we find that $g^*>0$ for a wide range of dopings, suggesting that superconductivity can prevail for a wide range of doping \cite{Chou2021_RTG_SC}. It is interesting to ask if tuning the gate distance ($2d$) or the dielectric constant ($\epsilon$) can considerably modify $g^*$. For most dopings $n_e$, $g^*$ is not sensitive to $d$ or $\epsilon$ because the large DOS strongly screens the Coulomb interaction (and, therefore, any additional screening by the gate and the background dielectric constant is quantitatively unimportant). For the regime where $g^*\le 0.2$ eV$\cdot$nm$^2$, we find that a smaller $d$ (for $d<5$nm) and a larger $\epsilon$ can considerably increase $g^*$, implying an enhancement in $T_c$. We note that $g^*$ is not sensitive to $d$ for $d>5$nm. Similar conclusions were reported previously for RTG \cite{Ghazaryan2021} and for BBG \cite{Chou2021_BBG}. In the next section, we show our calculated superconducting $T_c$ in various cases and discuss the interplay between phonon-mediated attraction and Coulomb repulsion. \section{Numerical Results for superconducting $T_c$}\label{Sec:Results} In this section, we present our numerical results for the superconducting $T_c$ incorporating Coulomb repulsion. The $T_c$ is obtained by solving Eq.~(\ref{Eq:LGE_mustar}) numerically with a fine momentum mesh, as discussed in Appendix~\ref{App:Numerics}. The results here are qualitatively consistent with the phonon-mediated-attraction-only results (i.e., without the $\mu^*$ effect) for RTG and ABCA tetralayer graphene in Figs.~\ref{Fig:BCS}(b) and \ref{Fig:BCS}(c); in contrast, the Coulomb repulsion significantly suppresses the superconducting region in BBG [Fig.~\ref{Fig:BCS}(a)] because the DOS in BBG is comparatively small, making screening weaker and hence producing a relatively large $\mu^*$. A thorough study of BBG was presented by us in Ref.~\cite{Chou2021_BBG}. In this section, we focus on RTG and ABCA tetralayer graphene, especially in the experimentally relevant regimes for RTG. \subsection{Superconductivity from unpolarized normal states} \begin{figure}[t!] \includegraphics[width=0.45\textwidth]{Tc_with_Coulomb_new.pdf} \caption{Superconducting $T_c$ incorporating Coulomb repulsion, from unpolarized normal states. $\epsilon=10$ and $d=20$nm are used for all the data. (a) BBG (b) RTG (c) ABCA tetralayer graphene. } \label{Fig:Tc_e10_d20} \end{figure} We first discuss how superconductivity arises from four-fold degenerate unpolarized normal states. As discussed in the previous section, $g^*$ remains positive, indicating an attractive interaction, for a wide range of doping in all three systems. In Fig.~\ref{Fig:Tc_e10_d20}, we plot $T_c$ as a function of $n_e$ with varied $\Delta_1$ for all three systems. Figures~\ref{Fig:Tc_e10_d20}(b) and \ref{Fig:Tc_e10_d20}(c) show that observable superconductivity (say, $T_c> 20$mK) occurs in RTG and ABCA tetralayer graphene for a wide range of dopings, not just near the VHS. However, this is not true for BBG, as shown in Fig.~\ref{Fig:Tc_e10_d20}(a), where observable superconductivity exists only near the VHS, and the highest $T_c$ is about $0.3$K ($1.2$K) for hole (electron) doping. The very different results between BBG and the other cases can be understood as arising from the quantitative difference in the DOS \cite{Chou2021_BBG}, which results in the different $g^*$ in Fig.~\ref{Fig:mu_star} due to the quantitatively different $\mu^*$ effects.
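For readers wishing to reproduce such $T_c$ curves, the sketch below shows how a transition temperature follows from the linearized gap equation by bisection; it is our own simplification in which the form-factor kernel $g^*_{\vex{k},\vex{k}'}$ is replaced by a constant $g^*$ (the full calculation instead locates the temperature where the largest eigenvalue of the kernel matrix crosses one) and a toy parabolic band stands in for the $\vex{k}\cdot\vex{p}$ bands:

\begin{verbatim}
import numpy as np

KB = 8.617e-5  # eV/K

def Tc_linearized(E, EF, g_star, weight, T_lo=1e-4, T_hi=10.0):
    """Largest T [K] with 1 = g* sum_k w tanh(x_k/(2 kB T))/(2 x_k),
    x_k = E_k - EF; weight = (dk/(2*pi))**2 converts the grid sum
    to (1/A) sum_k. Scalar-kernel simplification of the gap equation."""
    x = E - EF
    x = np.where(np.abs(x) < 1e-9, 1e-9, x)  # tanh(x)/x is even in x
    lhs = lambda T: g_star * weight * np.sum(np.tanh(x / (2*KB*T)) / (2*x))
    if lhs(T_hi) > 1.0: return T_hi          # bracket failure (toy only)
    if lhs(T_lo) < 1.0: return 0.0           # no superconductivity
    for _ in range(60):                      # bisection: lhs decreases in T
        T_mid = 0.5 * (T_lo + T_hi)
        T_lo, T_hi = (T_mid, T_hi) if lhs(T_mid) > 1.0 else (T_lo, T_mid)
    return 0.5 * (T_lo + T_hi)

k = np.linspace(-1.0, 1.0, 201)              # 1/nm
KX, KY = np.meshgrid(k, k)
E = 0.05 * (KX**2 + KY**2)                   # toy parabolic band, eV
w = (k[1] - k[0])**2 / (2.0 * np.pi)**2
print(Tc_linearized(E.ravel(), EF=0.01, g_star=0.1, weight=w), "K")
\end{verbatim}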
\subsection{Superconductivity from spin-polarized normal states} For spin-polarized (two-fold degenerate) normal states, $\rho(\mathcal{E}_F)$ is half the value for an unpolarized state at the same $\mathcal{E}_F$, so the intraband screening is weaker, resulting in a smaller $g^*$. We plot $T_c$ as a function of $n_e$ with varied $\Delta_1$ for RTG and ABCA tetralayer graphene in Fig.~\ref{Fig:SP_Tc_e10_d20}. (The superconducting $T_c$ for BBG is simply too small to be visible on the same scale; it was reported previously \cite{Chou2021_BBG}.) As expected, we find that $T_c$ is smaller in Fig.~\ref{Fig:SP_Tc_e10_d20} than in Figs.~\ref{Fig:Tc_e10_d20}(b) and \ref{Fig:Tc_e10_d20}(c), due to the larger Coulomb repulsion caused by weaker screening. Despite the reduction in $T_c$, observable superconductivity still prevails for a range of dopings, qualitatively similar to the unpolarized case. Again, this is quite different from spin-polarized normal states in BBG, where any observable superconductivity is expected only near the VHS \cite{Chou2021_BBG}. In Ref.~\cite{Chou2021_BBG}, the highest $T_c$ is around $20$mK ($0.5$K) for hole (electron) doping. \begin{figure}[t!] \includegraphics[width=0.45\textwidth]{SP_Tc_with_Coulomb.pdf} \caption{$T_c$ for superconductivity incorporating Coulomb repulsion. $\epsilon=10$ and $d=20$nm are used for all the data. (a) RTG (b) ABCA tetralayer graphene. } \label{Fig:SP_Tc_e10_d20} \end{figure} \subsection{Superconductivity in RTG: Tuning Coulomb repulsion}\label{Sec:Tuning_Coulomb} \begin{figure}[t!] \includegraphics[width=0.39\textwidth]{RTG_d20_varying_e.pdf} \caption{$T_c$ for RTG with different dielectric constants $\epsilon$. We use $d=20$nm for all the data. (a) The SC1 regime corresponds to $\Delta_1=30$meV and unpolarized normal states. (b) The SC2 regime corresponds to $\Delta_1=20$meV and spin-polarized normal states. } \label{Fig:Tuning_e} \end{figure} \begin{figure}[t!] \includegraphics[width=0.4\textwidth]{RTG_e10_varying_d.pdf} \caption{$T_c$ for RTG with different gate distance parameters $d$. We use $\epsilon=10$ for all the data. (a) The SC1 regime corresponds to $\Delta_1=30$meV and unpolarized normal states. (b) The SC2 regime corresponds to $\Delta_1=20$meV and spin-polarized normal states. } \label{Fig:Tuning_d} \end{figure} Based on the acoustic-phonon mechanism, suppressing the Coulomb repulsion will boost $T_c$ because of the enhanced effective attraction near the Fermi surface. This can be achieved by decreasing the gate distance ($2d$) and increasing the effective dielectric constant ($\epsilon$). The main question is the quantitative change in $T_c$. In our previous work on BBG \cite{Chou2021_BBG}, we provided the evolution of $T_c$ with different values of $d$ and $\epsilon$. Here, we focus on the regime where superconductivity is observed in RTG \cite{Zhou2021_SC_RTG}. In the RTG experiment, both spin-singlet (coined SC1) and non-spin-singlet (coined SC2) superconducting states were observed \cite{Zhou2021_SC_RTG}. We assume $\Delta_1=30$ meV ($\Delta_1$ corresponds to the displacement field), $n_e\approx -1.9\times 10^{12}$cm$^{-2}$, and unpolarized normal states for the SC1 regime; we assume $\Delta_1=20$ meV, $n_e\approx -0.5\times 10^{12}$cm$^{-2}$, and spin-polarized normal states for the SC2 regime. First of all, we do not find an observable $T_c$ at $n_e\approx -1.9\times 10^{12}$cm$^{-2}$ for SC1 or $n_e\approx -0.5\times 10^{12}$cm$^{-2}$ for SC2.
This is likely a quantitative issue due to the parameters used in our theory (such as $g$, band parameters, $n_e$, etc.). Therefore, we investigate the regimes with observable $T_c$ close to the SC1 and SC2 dopings. In Fig.~\ref{Fig:Tuning_e}, $T_c$ is plotted for a few representative dielectric constants ($\epsilon$). A larger $\epsilon$ indeed enhances $T_c$, but the enhancement is not significant for states near the SC1 or SC2 regime. In Fig.~\ref{Fig:Tuning_d}, we vary the gate distance ($2d$) and plot the corresponding $T_c$. $T_c$ gets larger for a smaller $d$, but the enhancement is not substantial for $d>5$nm, consistent with our earlier finding for $g^*$. Note that $T_c$ remains essentially independent of $\epsilon$ or $d$ in regimes with $T_c>0.5$K. This can be understood from the associated large DOS in those regimes, where the Coulomb repulsion is strongly screened by the graphene carriers themselves, essentially independently of $\epsilon$ or $d$. \subsection{Superconductivity in RTG: Varying $g$}\label{Sec:New_G} Based on our theory with $g=g_0=474$ meV$\cdot$nm$^2$, we cannot reproduce an observable $T_c$ in the SC1 ($n_e\approx -1.9\times 10^{12}$cm$^{-2}$) or SC2 regime ($n_e\approx -0.5\times 10^{12}$cm$^{-2}$). This is most likely a quantitative issue: the value of $g$ is not precisely known, since the deformation potential coupling carries substantial uncertainty \cite{Hwang2008,Wu2019_phonon}, and $T_c$ is quite sensitive to $g$. To investigate this issue, we vary the value of $g$ and plot the corresponding $T_c$ in Fig.~\ref{Fig:New_G}. We find that a $T_c$ comparable to the experiment \cite{Zhou2021_SC_RTG} can be reproduced with an enhanced phonon-mediated attraction of $1.4g_0$. Since $g=D^2/(\rho_m v_s^2)$, a slightly larger $D$ and/or a slightly smaller $v_s$ can result in a larger $g$. An interesting finding here is that our predicted $T_c$ for SC1 is slightly smaller than the $T_c$ for SC2, while SC1 is found to be stronger than SC2 in the RTG experiment \cite{Zhou2021_SC_RTG}. We discuss possible explanations in Sec.~\ref{Sec:Discussion}. \begin{figure}[t!] \includegraphics[width=0.375\textwidth]{New_G.pdf} \caption{Superconductivity with different values of the phonon-mediated attraction $g$. $\epsilon=10$ and $d=20$nm are used for all the data, and $g_0=474$ meV$\cdot$nm$^2$. (a) The SC1 regime corresponds to $\Delta_1=30$meV and assumes unpolarized normal states. (b) The SC2 regime corresponds to $\Delta_1=20$meV and assumes spin-polarized normal states. } \label{Fig:New_G} \end{figure} \section{Discussion}\label{Sec:Discussion} We study acoustic-phonon-mediated superconductivity in untwisted graphene multilayers (BBG, RTG, and ABCA tetralayer graphene), including the effects of direct Coulomb repulsion. The $SU(2)\times SU(2)$ symmetric acoustic-phonon-mediated attraction naturally favors intrasublattice pairings in untwisted graphene multilayers, making $s$-wave spin-singlet and $f$-wave spin-triplet pairings dominant and degenerate. We develop a simplified, but quantitatively predictive, theory incorporating both the phonon-mediated attraction and the direct Coulomb repulsion. Within the mean-field approximation, we reproduce the recently observed superconductivity phenomenology in BBG and RTG, and we further predict the existence of superconductivity in ABCA tetralayer graphene, which should be experimentally investigated.
Our theory captures the qualitative and semi-quantitative features of the experiments \cite{Zhou2021_SC_RTG,Zhou2021_BBG}, suggesting that superconductivity in untwisted graphene multilayers is likely due to acoustic phonons. To understand why and how the acoustic-phonon mechanism can explain the BBG \cite{Zhou2021_BBG} and RTG \cite{Zhou2021_SC_RTG} experiments, one has to take into account the Coulomb repulsion, which suppresses the predicted $T_c$, leading to agreement with experiment. Because the BBG bands generate a smaller DOS, resulting in weaker screening, the Coulomb repulsion suppresses superconductivity for most doping densities except near the VHS. On the other hand, the large DOS of RTG efficiently screens the Coulomb repulsion and results in observable superconducting states for a wide range of doping. Thus, our results provide natural explanations of the BBG and RTG experiments without any fine-tuning or arbitrary data fitting, as we explain in the following. In the BBG experiment, a sufficiently large in-plane magnetic field, which suppresses a competing order, is required to observe superconductivity \cite{Zhou2021_BBG}. Based on our theory, observable superconductivity (i.e., $T_c>20$mK) can happen only near the VHS. The applied in-plane magnetic field likely suppresses the competing order, which, if present, preempts superconductivity, and spin-triplet superconductivity manifests itself in the absence of the competing order. In the RTG experiment, superconductivity is observed away from the VHS without a magnetic field, and spin-singlet (spin-triplet) superconductivity emerges from unpolarized (spin-polarized) normal states \cite{Zhou2021_SC_RTG}. Our theory can naturally explain these results because the $SU(2)\times SU(2)$ symmetry of the acoustic-phonon-mediated attraction favors $s$-wave spin-singlet and $f$-wave spin-triplet pairings \cite{Wu2019_phonon,Chou2021_RTG_SC}. The $s$-wave spin-singlet is usually the dominant pairing for unpolarized normal states because subleading pairing mechanisms (e.g., optical phonons \cite{Wu2018}) can also enhance the $s$-wave spin-singlet pairing. For spin-polarized normal states, spin-singlet pairings are suppressed, and the $f$-wave spin-triplet pairing becomes the leading pairing instability. In RTG, the absence of superconductivity near the VHS, or in the regime with large DOS, is due to competing correlation-induced instabilities arising from interactions. Note that the Stoner-type instability is very sensitive to the value of the DOS at $\mathcal{E}_F$, but this is not true for the superconducting instability \cite{Lothman2017}. As a result, observable superconducting states can be found away from the VHS in the RTG experiment. An interesting question is whether the predicted superconductivity is robust against disorder or scattering introduced at the sample boundary (e.g., Refs.~\cite{Wakabayashi2009electronic} and \cite{Walter2018}). While intervalley scattering can be suppressed in clean devices near perfect charge neutrality \cite{Akhmerov2008}, charge impurities at edges can cause intervalley scattering \cite{Wakabayashi2009electronic}. The $s$-wave spin-singlet superconductivity is robust against weak charge (but not magnetic) disorder, as shown by Anderson's theorem. However, the $f$-wave spin-polarized spin-triplet (valley-singlet) pairing, which we predict for SC2 in RTG and for superconductivity in BBG, is fragile in the presence of intervalley scattering.
This is mathematically analogous to the suppression of $s$-wave spin-singlet pairing due to spin-flip scattering \cite{Fulde1966}. To examine the role of intervalley scattering, we first estimate the coherence lengths of SC2 in Ref.~\cite{Zhou2021_SC_RTG}. We obtain the BCS coherence length $\xi_{\text{BCS}}=\frac{\hbar v_F}{\pi \Delta_0}\approx 1.38\mu$m (using $T_c=50$mK and $v_F=5\times10^4$m/s) and the Ginzburg-Landau coherence length $\xi_{\text{GL}}=\sqrt{\frac{h/(2e)}{2\pi B_{c,\perp}}}\approx0.57\mu$m (using $B_{c,\perp}=1$mT). The distance between nearby contacts is around $2\mu$m, which is not significantly larger than the coherence lengths, suggesting that scattering at the boundary might affect the superconductivity. Assuming an intervalley scattering time $\tau_s$ and following Ref.~\cite{Sau2012}, we obtain an expression for the pairing potential strength ($\tilde\Delta$) perturbed by intervalley scattering (at second order) as follows (see Appendix~\ref{App:Intervalley} for derivations): \begin{align}\label{Eq:Intervalley} |\tilde\Delta(\tau_s,\Delta)|=|\Delta|\left[1-\frac{1}{2^{1/3}}\left(\frac{\mathcal{C}}{|\Delta|\tau_s}\right)^{2/3}\right], \end{align} where $\Delta$ is the pairing potential without intervalley scattering and $\mathcal{C}$ is a constant encoding the average over the Fermi surface and the DOS at $\mathcal{E}_F$. Equation~(\ref{Eq:Intervalley}) describes the pair-breaking effect due to weak intervalley scattering in spin-polarized spin-triplet superconductivity. Assuming that the intervalley scattering is strong at the edges, the intervalley scattering time is set by the sample size ($L$), i.e., $\tau_s\sim L/v_F$. This results in the quantity $|\Delta|\tau_s\sim L/\xi_{\text{BCS}}$. Thus, the superconductivity can survive in devices that are larger than the coherence length. This can also be understood as superconductivity in the presence of pair-breaking edge disorder, where the superconducting order parameter goes to zero at the edges and is expected to revive on the scale of $\xi$, which can occur if the system is larger than the coherence length. The results here also apply qualitatively to superconductivity in BBG. Since the pairing glue comes from phonons in our theory, suppressing the Coulomb repulsion (i.e., increasing $\epsilon$ or decreasing $d$) generically enhances the superconducting $T_c$. As we discussed in Secs.~\ref{Sec:g_star} and \ref{Sec:Tuning_Coulomb}, changing $\epsilon$ and $d$ might not result in a significantly different $T_c$ in RTG because the Coulomb repulsion is in the strong-screening regime. In particular, the gate distance dependence is quite weak for $d>5$nm. (A similar conclusion was drawn in Ref.~\cite{Ghazaryan2021}.) For BBG, $T_c$ is more sensitive to the values of $\epsilon$ and $d$, as we pointed out previously in Ref.~\cite{Chou2021_BBG}. This can be understood from the smaller DOS in BBG, such that the intraband screening does not fully suppress the dependence on $d$ or $\epsilon$. The possible enhancement of $T_c$ by reducing gate/dielectric screening is a testable prediction of our theory. Another important question is whether the electron-phonon coupling constant is correctly estimated in our theory, since the deformation potential $D$ is not precisely known \cite{Hwang2008,Wu2019_phonon}. In Sec.~\ref{Sec:New_G}, we find that $g=1.4g_0$ can reproduce a $T_c$ comparable to SC1 and SC2 in RTG.
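Two quick numerical cross-checks for the estimates in this section (minimal sketches; the weak-coupling relation $\Delta_0\approx 1.764\,k_BT_c$ is an assumption we impose, and the constants are standard):

\begin{verbatim}
import numpy as np

hbar, kB, e, h = 1.0546e-34, 1.381e-23, 1.602e-19, 6.626e-34  # SI

# coherence lengths of SC2 (Tc = 50 mK, vF = 5e4 m/s, B_c,perp = 1 mT)
Delta0 = 1.764 * kB * 0.050          # weak-coupling BCS gap (assumption)
xi_BCS = hbar * 5e4 / (np.pi * Delta0)
xi_GL = np.sqrt((h / (2 * e)) / (2 * np.pi * 1e-3))
print(f"xi_BCS = {xi_BCS*1e6:.2f} um, xi_GL = {xi_GL*1e6:.2f} um")

# parameter shifts realizing g = 1.4*g0, using g = D^2/(rho_m*v_s^2)
print(f"D -> {30*np.sqrt(1.4):.1f} eV at fixed v_s, or "
      f"v_s -> {2e6/np.sqrt(1.4):.3g} cm/s at fixed D")
\end{verbatim}

The required $D\approx 35$ eV lies well within the factor-of-$2$ uncertainty of the deformation potential quoted in Sec.~\ref{Sec:SC}.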
In the RTG experiments, $T_{\text{BKT}}\approx 100$mK for SC1 and $T_{\text{BKT}}< 50$mK for SC2 were reported \cite{Zhou2021_SC_RTG}. However, our theory with $g=1.4g_0$ gives $T_c\approx80$mK for SC1 and $T_c\approx 100$mK for SC2. This raises a puzzle, as our predicted $T_c$'s are in the opposite order to the experimental results. The discrepancy might be explained by the fragile nature of non-spin-singlet superconductivity (SC2), which can easily be suppressed by disorder or intervalley scattering (e.g., scattering at the sample boundary) in the experiments. Another possible explanation is that there exists a subleading pairing mechanism (such as optical phonons \cite{Wu2018} or other interaction-induced pairings \cite{Chatterjee2021,Ghazaryan2021,Dong2021,Cea2021,Szabo2021,You2021}) that contributes to SC1 but not to SC2. Regardless of the explanation, the acoustic-phonon-mediated pairing is still the dominant pairing glue. We leave this puzzling discrepancy as an open question for future studies, which will also require the availability of more experimental results. Now, we discuss a number of predictions based on the acoustic-phonon theory. An interesting prediction of our theory is that a sufficiently large in-plane magnetic field can destroy the $s$-wave spin-singlet pairing, whereupon the $f$-wave equal-spin pairing becomes the leading superconducting instability \cite{Chou2021_RTG_SC}. In addition, it is possible that an applied in-plane magnetic field can induce new superconducting phases in RTG by suppressing the competing ordered states, similar to the field-induced superconductivity in BBG. Our theory predicts observable superconductivity not just for hole doping but also for electron doping in BBG, RTG, and ABCA tetralayer graphene, and we find that a larger $|\Delta_1|$ generally enhances the superconducting $T_c$ for electron doping. One important consequence of strong electron-acoustic-phonon coupling is that the resistivity should be linear in $T$ for $T>T_{\text{onset}}$ and follow a $T^4$ law for $T<T_{\text{onset}}$ \cite{Hwang2008}. We estimate that $T_{\text{onset}}$ is above 10K-20K \cite{Hwang2008,Min2011} for both BBG and RTG. The electron-phonon coupling parameter extracted from such a linear-in-$T$ resistivity should be approximately consistent with the observed $T_c$ \cite{Min2011,Wu2019_phonon,Li2020,Polshyn2019,Cao2020PRL,Chu2021phonons}. The same would be true for spin- or valley-fluctuation-mediated superconductivity. In the RTG experiment \cite{Zhou2021_SC_RTG} (BBG experiment \cite{Zhou2021_BBG}), a linear-in-$T$ resistivity is not seen for $T\le 20$K ($T\le 1.5$K), where the highest measured temperature appears to be below our estimated $T_{\text{onset}}$. Based on our theory, there should then be a phonon-induced linear-in-$T$ resistivity for $T>20$K above the superconducting state. This should be investigated experimentally by extending the conductivity measurements to the $T=10$K$-50$K regime. Finally, we comment on a universal theory for superconductivity in graphene-based materials (including twisted and untwisted materials). It is likely that the electron-acoustic-phonon mechanism accounts for superconductivity in all graphene-based materials, given that acoustic phonons can explain the distinct superconductivity phenomenology in BBG \cite{Zhou2021_BBG}, in RTG \cite{Zhou2021_SC_RTG}, and in twisted bilayer graphene \cite{Wu2019_phonon}.
In addition, several experiments on magic-angle moir\'e graphenes show that superconductivity is robust \cite{Lu2019,Saito2020independent,Stepanov2020untying,Liu2021tuning,Lewandowski2021,Shavit2021}, i.e., it can exist without any nearby correlated insulating states, hence arguing against a correlation-induced mechanism. Thus, it is natural to suspect that superconductivity and correlated states most likely come from different origins \cite{Chou2019,Wu2018,Lian2019,Wu2019_phonon}, and the acoustic-phonon mechanism can explain the superconductivity phenomenology. We emphasize that the interactions are still essential, as they can induce competing orders, suppressing and preempting superconductivity. Our qualitative picture is that all graphene superconductivity is induced by acoustic phonons, while electron-electron interactions may produce strongly correlated non-superconducting phases that compete with, and occasionally suppress, the superconducting phase. In summary, we present a systematic theory, incorporating electron-phonon couplings and Coulomb repulsion, for superconductivity in untwisted graphene multilayers. We obtain $T_c$ in reasonable agreement with the experimental observations, and we speculate that acoustic phonons are responsible for superconductivity in graphene-based materials in general. \begin{acknowledgments} We thank Anton Akhmerov and Matt Foster for useful discussions. This work is supported by the Laboratory for Physical Sciences (Y.-Z.C. and S.D.S.), by JQI-NSF-PFC (Y.-Z.C.), and NSF DMR1555135 (J.D.S.). F.W. is supported by startup funding of Wuhan University. \end{acknowledgments}
\section{Background\label{sec:background}} \input{images/restrictions/restrictions} We develop an algorithm which grows sparse roadmaps over fiber bundles to efficiently exploit the structure of high-dimensional planning problems. As background for this task, we review the topics of optimal motion planning, multilevel abstractions (modelled using fiber bundles) and sparse roadmaps. \subsection{Optimal Motion Planning} Let $X$ be an $n$-dimensional state space and let $\x_I$ and $\x_G$ be two states in $X$ which we call the initial and the goal state. To each state space, we associate a metric function $d: X \times X \rightarrow \R$ and a constraint function $\phi: X \rightarrow \{0,1\}$ which evaluates to zero if a state is feasible and to one otherwise. The state space thus splits into two components, the constraint-free subspace $\X_{\text{free}} = \{x \in X \mid \phi(x) = 0\}$ and its complement. We define the optimal motion planning problem as the tuple $A = (\X_{\text{free}}, \x_I, \x_G, J)$, which requires us to design an algorithm to find a continuous path from $\x_I$ to $\x_G$ while (1) staying exclusively inside $\X_{\text{free}}$ and (2) minimizing the cost functional $J$ which maps paths in $\X_{\text{free}}$ to real numbers. We define a motion planning algorithm (a planner) as a mapping from $A$ to a path through $\X_{\text{free}}$. A planner can have different desirable properties. First, we would like a planner to be \emph{probabilistically complete}, meaning the probability of finding a solution path, if one exists, approaches one as time goes to infinity. Second, we would like a planner to be \emph{asymptotically near-optimal}, meaning the probability of finding a path whose cost is at most $\epsilon$ worse than that of the optimal solution path (under the cost functional $J$) approaches one as time goes to infinity. Third, we would like a planner to be \emph{asymptotically sparse}, meaning the probability of adding new nodes and edges converges to zero as time goes to infinity \cite{dobson_2014}. \subsection{Multilevel Motion Planning} Because state spaces are often too high-dimensional to plan in, we use multilevel abstractions which we model using fiber bundles \cite{steenrod_1951, lee_2003}. A fiber bundle is a tuple $(X, B, F, \pi)$, consisting of a bundle space $X$, a base space $B$, a fiber space $F$ and a projection mapping $\pi$ from $X$ to $B$. We assume that both the state space and the base space have associated constraint functions $\phi$ and $\phi_B$ and that the projection mapping $\pi$ is admissible w.r.t. the constraint functions, i.e., $\phi_B(\pi(x)) \leq \phi(x)$ for any $x$ in $X$ \cite{Orthey2019}. The admissibility condition ensures that we preserve feasible solution paths under projection. While we exclusively use product spaces in this work, we model them using fiber bundles since they provide a useful vocabulary (restrictions and sections) and since they are required for extensions to task-space projections. Our approach uses the following three concepts. First, we define fibers over a base element $b$ in $B$ as $F(b) = \{x \in X\mid \pi(x) = b\}$, which is the set of points in $X$ projecting onto $b$. Please see Fig. \ref{fig:restriction:fiber} for an example of a fiber on the torus $T^2 = S^1 \times S^1$ with base space $S^1$. We additionally define the method $\textsc{Lift}: B \times F \rightarrow X$, which takes a base element $b$ and a fiber element $f$ in $F(b)$ to the bundle space. In the case of product spaces, we can define $\textsc{Lift}(b,f) = (b,f)$.
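Before introducing the remaining two concepts, the projection and lift operations for a product-space bundle can be made concrete as follows (a minimal sketch with our own naming conventions, not code from our implementation):

\begin{verbatim}
import random
from typing import Callable, List, Tuple

State = Tuple[float, ...]

def project(x: State, dim_base: int) -> State:
    """pi: X -> B for a product bundle X = B x F: drop fiber coordinates."""
    return x[:dim_base]

def lift(b: State, f: State) -> State:
    """Lift(b, f) = (b, f), as defined above for product spaces."""
    return tuple(b) + tuple(f)

def fiber(b: State, sample_fiber: Callable[[], State], n: int = 3) -> List[State]:
    """F(b): states of X projecting onto b, here drawn by sampling F."""
    return [lift(b, sample_fiber()) for _ in range(n)]

# example: torus T^2 = S^1 x S^1 with base S^1
sample_circle = lambda: (random.uniform(0.0, 6.2832),)
print(fiber((1.0,), sample_circle))  # three bundle states over b = 1.0
\end{verbatim}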
Second, we define path restrictions over a base path $p: I \rightarrow B$ as $r(p) = \{x \in X \mid \pi(x) \in p[I]\}$, whereby $I$ is the unit interval and $p[I]$ is the image of the base path in $B$. Please see Fig. \ref{fig:restriction:path}. Third, we define graph restrictions over a graph $G_B = (V_B, E_B)$ on $B$ as $r(G_B) = \{x \in X \mid \pi(x) \in e[I], e \in E_B\}$, whereby $V_B$ is the set of vertices in $B$, $E_B$ is the set of edges in $B$ and $e[I]$ is the image of an edge on the base space. Fig. \ref{fig:restriction:graph} provides a visualization of a graph restriction (individual edge restrictions are drawn at different distances from the torus for better visualization). For more details, please see \cite{Orthey2020IJRR} or \cite{steenrod_1951}. \subsection{Sparse Roadmaps} To grow a sparse roadmap, we use the algorithm by Dobson and Bekris \cite{dobson_2014}. The sparse roadmap planner is similar to probabilistic roadmaps \cite{Kavraki1996, Karaman2011}, but uses a visibility region $\ensuremath{\delta}$, which consists of all feasible states in the hypersphere of radius $\ensuremath{\delta}$ around a state, to prune samples. To implement the pruning step, we add a new feasible sample if and only if it fulfills a sparseness condition. The sparseness condition consists of four elementary tests \cite{dobson_2014}. First, we test for coverage, meaning we add the sample if it does not lie in the visibility region of any sample in the graph. Second, we test for connectivity, meaning we add the sample if it lies in multiple visibility regions which belong to disconnected components of the sparse graph. Third, we test for interfaces, meaning we add the sample if it lies in multiple visibility regions which are not yet connected by an edge. Fourth and finally, we test for shortcuts, meaning we add the sample if it provides proof of a shorter path through the free state space. We terminate the algorithm if we either find a feasible path or fail $M$ consecutive times to add a sample to the sparse roadmap. For more details please see \cite{dobson_2014}. The sparse roadmap planner is probabilistically complete and asymptotically near-optimal \cite{dobson_2014} and depends on the following parameters. First, the visibility region parameter $\ensuremath{\delta}$, which is usually a fraction of the measure of the state space. Second, the maximum number of consecutive failures $M$. $M$ is important in the analysis of the algorithm because it provides a probabilistic estimate of the fraction of free state space covered, which is given by $1-\frac{1}{M}$ \cite{simeon_2002}. As an example, if we stop with $M=100$, our probabilistic estimate of the free state space covered is $99\%$. Finally, we have an additional parameter for the shortcut test, which provides a trade-off between optimality and efficiency \cite{dobson_2014}. \section{Conclusion} We presented the sparse multilevel roadmap planner (SMLR\xspace), which we believe to be the first algorithm to generalize sparse roadmap spanners \cite{dobson_2014} to fiber bundles \cite{Orthey2020IJRR}, which are models of multilevel abstractions. Our algorithm exploits multilevel abstractions using the notion of restriction sampling with visibility regions. We have shown SMLR to be asymptotically near-optimal and asymptotically sparse by showing that restriction sampling produces a dense sampling sequence.
In evaluations, we showed SMLR to efficiently and correctly terminate on feasible and infeasible problems, even when those problems have narrow passages, intricate geometries, or state spaces with up to $34$ degrees of freedom. \section{Evaluation\label{sec:evaluation}} \input{src/evaluations_table} \input{images/evaluations/scenarios} To evaluate SMLR\xspace, we compare its performance on eight scenarios against the algorithms SPARS and SPARS2 from the open motion planning library (OMPL). SPARS and SPARS2 are, to our knowledge, the only algorithms in OMPL which can terminate on infeasible scenarios without timing out. To ensure a fair comparison, we set the parameters of SMLR, SPARS, and SPARS2 all to $M=1000$, $\ensuremath{\delta} = 0.25\mu$ with $\mu$ being the measure of the state space (thereby removing effects stemming from different parameter values). For SMLR, we use the parameter $\eta = 1000$, which designates how fast we expand the graph visibility region for restriction sampling. While we would like our algorithm to correctly declare an infeasible problem as infeasible, we also need to make sure that the algorithm does not show false negatives, i.e., declare a feasible problem to be infeasible. To ensure correctness, we always use two similar scenarios, one which is feasible and one which is infeasible. For all scenarios, we run each algorithm $10$ times with a time limit of $60$s. Our setup is an 8GB RAM, 4-core 2.5GHz laptop running Ubuntu 16.04. \subsection{6-dimensional Bugtrap} Our first scenario is the classical narrow-passage Bugtrap scenario, where a cylindrical robot (the bug, with $6$ degrees of freedom (dof)) has to escape a spherical object with a narrow exit (the trap), as shown in Fig.~\ref{fig:scenarios}. We use two versions, a feasible one with a bug which barely fits through the exit, and an infeasible one where the bug does not fit. As a simplification, we use an inscribed sphere, which we describe using the fiber bundle $SE(3) \rightarrow \R^3$. We show the results in Table~\ref{table:evaluation}. While SMLR can solve (on average) both scenarios in $4.37$ and $2.47$s, respectively, both SPARS and SPARS2 time out after $60$s. \subsection{6-dimensional Drone} In the second scenario, we use a free-floating drone with $6$-dof. The drone has to traverse a room which is separated by a net. In the first version of the problem, we make the mesh of the net wide enough to let the drone fly through (the feasible problem). In the second version, we make the net finely woven to prevent the drone from passing (the infeasible problem). As a simplification, we use a sphere at the center of the drone. We model this situation with the fiber bundle $SE(3) \rightarrow \R^3$. For the feasible scenario, all three planners solve the problem, with SPARS2 taking $0.16$s, SMLR taking $0.23$s, and SPARS taking $0.37$s. In the infeasible scenario, only SMLR solves the problem in $0.72$s, while SPARS and SPARS2 both time out. \subsection{7-dimensional KUKA LWR} In the third scenario, we use a fixed-base KUKA LWR robot with $7$-dof, which has to transport a windshield through a gap in a wall (Fig.~\ref{fig:scenarios}). We create two versions, a feasible one with the gap in the wall and an infeasible one where we close the gap. As a simplification, we use a projection onto the first two links of the manipulator arm, which we describe using the fiber bundle $\R^7 \rightarrow \R^3$. With our algorithm SMLR, we can solve both scenarios in $1.42$s and $5.34$s.
For the feasible scenario, SPARS requires $33.66$s (but times out in $4$ cases) and SPARS2 requires $34.86$s (but times out in $3$ cases). Both SPARS algorithms time out for the infeasible scenario in all runs. \subsection{34-dimensional PR2} In the fourth scenario, we use the mobile-base PR2 robot with $34$-dof, which has to enter a room with a small opening as shown in Fig.~\ref{fig:scenarios}. We again use two versions, a feasible one with the opening and an infeasible one where we close the opening. As a simplification, we use two projections: first we remove the arms of the robot, and second we project onto the mobile base. We model this situation by the fiber bundle sequence $\R^{34} \rightarrow \R^{7} \rightarrow \R^{2}$. Our algorithm SMLR requires $9.25$s to solve the feasible scenario (but times out in $1$ case) and it requires $0.32$s to terminate on the infeasible scenario. Neither SPARS nor SPARS2 solves any of the runs within the given time limit. \section{Introduction} Sparse roadmaps \cite{dobson_2014} are essential in motion planning tasks to reduce model complexity and to terminate motion planning in finite time, thereby providing (probabilistic) infeasibility proofs. Such infeasibility proofs are essential if we want to use a motion planner as a building block for larger action skeletons \cite{Kaelbling2011} or symbolic planning systems \cite{Toussaint2018}. However, sparse roadmaps often operate on the full state space of the robot(s), thereby taking too much time to converge, which often makes them inapplicable to higher-dimensional systems. To address this problem, we propose to use sparse roadmaps \cite{dobson_2014} in conjunction with multilevel abstractions of the state space \cite{Orthey2020IJRR}. By exploiting multilevel abstractions, which we model using fiber bundles \cite{steenrod_1951}, we can often terminate the algorithm significantly faster than state-of-the-art sparse roadmap planners operating on the full state space. While multi-resolution roadmaps exist \cite{Ichnowski2019, Saund2020}, we are not aware of any algorithm that computes sparse roadmaps over multilevel abstractions. We therefore believe ours to be the first algorithm to combine both concepts into one concise algorithm. Let us summarize our contributions as follows. \begin{enumerate} \item We present the Sparse MultiLevel Roadmap planner (SMLR\xspace), which generalizes sparse roadmaps \cite{dobson_2014} to efficiently exploit fiber bundle structures \cite{Orthey2020IJRR}. \item We evaluate SMLR\xspace on eight challenging feasible and infeasible motion planning problems involving high-dimensional state spaces with up to $34$ degrees of freedom (dof). \end{enumerate} \section{Sparse Multilevel Roadmaps} \input{src/pseudocode} Let $(\x_I, \x_G, X_1,\ldots,X_K)$ be a fiber bundle sequence with $\x_I$ and $\x_G$ being start and goal state. Our task is to generalize the sparse roadmap planner \cite{dobson_2014} to fiber bundle sequences by growing $K$ graphs $(G_1, \ldots, G_K)$ on the bundle spaces $(X_1,\ldots,X_K)$, whereby we grow the $k$-th graph using restriction sampling \cite{Orthey2020IJRR} of the $(k-1)$-th graph. We call our algorithm the sparse multilevel roadmap planner (SMLR). SMLR depends on three parameters: the two parameters $\ensuremath{\delta}$ and $M$ from sparse roadmaps, and the additional parameter $\eta$, which we detail later. We show the algorithm in Alg.~\ref{alg:smlr}; an illustrative sketch of its main loop is given in the listing below.
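As an illustration only (not our reference implementation; \textsc{SparseGraph} and all helper functions are hypothetical names), the main loop of Alg.~\ref{alg:smlr} can be sketched in Python as follows:

\begin{verbatim}
import heapq, time

def smlr(bundle_spaces, M=1000, time_limit=60.0):
    # Sketch of the SMLR main loop. SparseGraph, section_test,
    # restriction_sampling and add_conditional are hypothetical helpers;
    # add_conditional applies the four sparseness tests reviewed in the
    # Background section.
    graphs = [SparseGraph(X) for X in bundle_spaces]
    queue = []                              # max-heap via negated keys
    for cur in range(len(bundle_spaces)):
        heapq.heappush(queue, (-1.0, cur))  # push current space, importance 1
        section_test(graphs, cur)           # try the path restriction first
        t0 = time.time()
        while not (graphs[cur].has_solution()
                   or graphs[cur].failures > M
                   or time.time() - t0 > time_limit):
            _, top = heapq.heappop(queue)   # space with largest importance
            x = restriction_sampling(graphs, top)
            add_conditional(graphs[top], x)
            importance = 1.0 / (graphs[top].failures + 1)  # criterion below
            heapq.heappush(queue, (-importance, top))
        if not graphs[cur].has_solution():
            return None                     # probabilistic infeasibility proof
    return graphs[-1].solution_path()
\end{verbatim}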
We start by creating a priority queue (Line \algref{alg:smlr}{alg:smlr:priorityqueue}), which orders bundle spaces according to an importance criterion $i$, which we detail later. We sort the queue such that the space with the maximum value is on top. We then iterate over the bundle spaces from $X_1$ to $X_K$ (Line \algref{alg:smlr}{alg:smlr:forcur}) and push the current space onto the priority queue with an importance of $1$ (Line \algref{alg:smlr}{alg:smlr:pushcur}). We then execute a section test (Line \algref{alg:smlr}{alg:smlr:section}), where we search for a feasible solution over the path restriction of the solution path (if any) on the previous bundle space $X_{\text{cur}-1}$. The \textsc{SectionTest} method helps to overcome narrow passages, but is not essential for the understanding of this paper; we use it as a black box within SMLR. Please see our previous publication \cite{Orthey2020TRO} for more information. We then grow the roadmaps $(G_1,\ldots,G_{\text{cur}})$ as long as the planner termination condition (PTC) of the current bundle space $\X_{\text{cur}}$ is not fulfilled (Line \algref{alg:smlr}{alg:smlr:while}). In our case, we terminate if a solution is found or if we reach either the infeasibility criterion or a time limit. Inside the while loop, we take the top bundle space $\X_{\text{top}}$ with the largest importance value (Line \algref{alg:smlr}{alg:smlr:poptop}) and sample a random point using \textsc{RestrictionSampling} (Line \algref{alg:smlr}{alg:smlr:restrictionsampling}). We then add the point to the graph with \textsc{AddConditional} (Line \algref{alg:smlr}{alg:smlr:addconditional}) if it fulfills the sparseness condition \cite{dobson_2014}, which we detail in Sec.~\ref{sec:background}. Finally, we recompute the importance of the bundle space (Line \algref{alg:smlr}{alg:smlr:importance}) and push the space back onto the queue (Line \algref{alg:smlr}{alg:smlr:pushtop}). The two methods \textsc{RestrictionSampling} and \textsc{ComputeImportance} are further detailed in the next two subsections. To facilitate understanding, we first give a brief overview of each. First, in \textsc{RestrictionSampling}, we restrict sampling on the bundle space by using information from the graph on its base space. We differ from dense roadmaps by using the visibility region of the sparse graph, which depends on the visibility range $\ensuremath{\delta}$. Second, in \textsc{ComputeImportance}, we use the sampling density of the sparse graph together with the number of consecutive failures to estimate the importance of the bundle space and thereby its position in the priority queue. Next, we discuss each method in more detail and provide an analysis of the algorithm. \subsection{Restriction Sampling with Visibility Regions} Let $X_k$ be a bundle space with graph $G_k$, and let $X_{k-1}$ be its base space with graph $G_{k-1}$. To grow the graph $G_k$, we use the framework of restriction sampling \cite{Orthey2020IJRR}. In restriction sampling, we sample states on $X_k$ by uniformly sampling from the graph restriction of $G_{k-1}$ (see Sec.~\ref{sec:background}), as sketched in the listing below. To give guarantees on asymptotic optimality, we would need the vertices of $G_{k-1}$ to become dense in the free state space. To avoid using a dense graph for sampling \cite{dobson_2014} while giving guarantees on asymptotic near-optimality, we opt to exploit the graph visibility region.
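The following minimal sketch (our own illustration; all helper names are hypothetical) shows plain restriction sampling, without the visibility-region bias that we introduce next:

\begin{verbatim}
import random

def restriction_sampling_plain(bundle_space, base_graph):
    # Plain restriction sampling: draw a uniform sample from the graph
    # restriction of the base graph, then sample the fiber and lift.
    if base_graph is None:                    # no base space: uniform sampling
        return bundle_space.sample_uniform()
    u, v = random.choice(base_graph.edges)    # random edge of the base graph
    b = base_graph.interpolate(u, v, random.random())
    f = bundle_space.sample_fiber(b)          # random element of F(b)
    return bundle_space.lift(b, f)            # Lift: B x F -> X
\end{verbatim}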
The visibility region of a graph $G$ is the set $V(G, \ensuremath{\delta}) = \{x \in X \mid d(x,e[I]) \leq \ensuremath{\delta} \text{ for some } e \text{ in } G\}$, whereby $d$ is the metric on $X$, $e$ is an edge from $G$, and $e[I]$ is the image of the edge in $X$. \begin{wrapfigure}{r}{0.5\linewidth} \includegraphics[width=0.95\linewidth]{images/graphvisibility.pdf} \caption{Visibility region $V(G, \ensuremath{\delta})$ of a graph $G$.\label{fig:visibilityregion}} \end{wrapfigure} To sample the graph visibility region, we use the restriction sampling algorithm depicted in Alg. \ref{alg:restriction_sampling}. The algorithm requires an existing base graph $G_{k-1}$ (Line \algref{alg:restriction_sampling}{alg:restriction_sampling:exists}), then samples a random state on a random edge (Line \algref{alg:restriction_sampling}{alg:restriction_sampling:sampleedge}). Sampling the entire visibility region directly would be too uninformative. We thus use a smoothly varying parameter $\visRegion_{\text{bias}} \in [0,\ensuremath{\delta}]$, which first restricts sampling to the sparse graph ($\visRegion_{\text{bias}}=0$) and then smoothly increases in each iteration until it covers the whole visibility region ($\visRegion_{\text{bias}}=\ensuremath{\delta}$). This situation is visualized in Fig.~\ref{fig:visibilityregion}. To control the rate of change of $\visRegion_{\text{bias}}$, we use the parameter $\eta$. In particular for narrow passages, it is often crucial to sample directly on the graph restriction. We thus sample the visibility region (Line \algref{alg:restriction_sampling}{alg:restriction_sampling:uniformnear}) only in a certain percentage of cases, depending on $\visRegion_{\text{bias}}$. Once a base element is chosen, we sample a corresponding fiber space element (Line \algref{alg:restriction_sampling}{alg:restriction_sampling:samplefiber}), lift the state (Line \algref{alg:restriction_sampling}{alg:restriction_sampling:lift}) and return it (Line \algref{alg:restriction_sampling}{alg:restriction_sampling:return}). If no base graph exists, we revert to a uniform sampling of the space (Line \algref{alg:restriction_sampling}{alg:restriction_sampling:nobase}). \subsection{Importance and Ordering of Bundle Spaces} To grow sparse multilevel roadmaps, we need to decide which roadmap on which level we should grow next, i.e.~we need an ordering of bundle spaces. In prior work \cite{Orthey2020IJRR}, we advocated the use of an exponential importance criterion $i(X_k) = 1/(|V_k|^{1/{n_k}}+1)$, with $|V_k|$ being the number of vertices of the graph $G_k$ on $X_k$ and $n_k$ being the dimensionality of $X_k$, which was motivated by the sampling density of the graph, which is proportional to $|V_k|^{1/n_k}$ \cite{Hastie2009}. However, sampling density is not a good criterion for sparse roadmaps, because we care more about the coverage of the free space. To account for the coverage of the free space, we advocate an importance criterion using $M_k$, the number of consecutive sample failures. The number $M_k$ provides an estimate of the free space coverage, namely the percentage $1-\frac{1}{M_k}$ \cite{Simeon2000}. The higher $M_k$, the less often we should sample $X_k$. We thus formulate the importance criterion as \begin{equation} i(X_k) = \frac{1}{M_k+1}.\label{eq:importance} \end{equation} Note that we stop the algorithm only if $M_k > M$ \emph{and} $X_k$ is the current bundle space $\X_{\text{cur}}$. Since $i(X_k)$ eventually converges to zero, we ensure that every bundle space up to $k$ is chosen infinitely many times.
This is an important requirement for providing the asymptotic guarantees of the algorithm. \subsection{Analysis of Algorithm} To prove SMLR to be asymptotically near-optimal and asymptotically sparse, we need to prove that restriction sampling with visibility regions is dense in the free state space of the last bundle space $X$. Since the importance criterion in Eq.~\eqref{eq:importance} eventually converges to zero, we can ensure that we produce an infinite sampling sequence on the free state space $\X_{\text{free}}$. Therefore, when using sparse roadmap spanners \cite{dobson_2014} to grow the roadmap on $X$, we retain all of their properties, which include asymptotic near-optimality and asymptotic sparseness. At the same time, we might reduce the number of vertices considerably. Let us prove that restriction sampling with visibility regions is dense in the \emph{free} state space $\X_{\text{free}}$ on the fiber bundle $(X,B,F,\pi)$. This argument can be applied recursively to prove the same for fiber bundle sequences \cite{Orthey2020IJRR}. Note that we use the set-theoretic definition of dense, which states that a set $A$ is dense in a space $X$ if the intersection of $A$ with any non-empty open subset $U$ of $X$ is non-empty \cite{munkres_1974}. \begin{theorem} Restriction sampling with visibility regions on $X$ produces a sampling sequence $A = \{x_m\}$, which is dense in $\X_{\text{free}}$. \end{theorem} \begin{proof} Let $U$ be an arbitrary open set in $\X_{\text{free}}$. Since $\pi$ is admissible, the projection $\pi(U)$ of $U$ onto $B$ is an open subset of the free base space \cite{Orthey2019}. Since uniform sampling on $B$ with visibility regions will eventually cover the free base space \cite{Simeon2000}, $\pi(U)$ will be a subset of the visibility region of the graph on $B$. As the number of samples goes to infinity, we revert to uniform sampling of the graph restriction and will thus sample $\pi(U)$ infinitely many times. By sampling the fiber over $\pi(U)$, we thus eventually obtain a sample $x$ in $U$. Since $U$ was arbitrary, the sequence is dense in $\X_{\text{free}}$. \end{proof} \section{Related Work} We review two aspects of (sampling-based) motion planning \cite{lavalle_2006}. First, we discuss multilevel motion planning, where we plan over multiple levels of abstraction. Second, we discuss sparse roadmaps on general state spaces. We investigate both topics in detail in Sec. \ref{sec:background}. \subsection{Multilevel Motion Planning} To efficiently solve high-dimensional motion planning problems, we can use the framework of multilevel motion planning \cite{Ferbach1997, Sekhavat1998, Reid2020, Vidal2019, Orthey2020IJRR}, where (admissible) lower-dimensional projections are used to simplify the state space of a robot. We can construct multilevel abstractions either manually \cite{Reid2019, Orthey2019} or learn them from data \cite{Ichter2019, Brandao2020}. Our approach is complementary, in that we assume a multilevel abstraction to be given and concentrate on computing sparse roadmaps over those abstractions. Once we fix a multilevel abstraction, we can utilize classical motion planning algorithms to exploit it. A popular choice is the rapidly-exploring random tree algorithm \cite{Kuffner2000}, which we can generalize to selectively bias samples towards regions informed by lower-dimensional abstractions \cite{Ichter2019, Orthey2019} or workspace information \cite{Rickert2014}.
While such algorithms often show speed-ups of two to three orders of magnitude \cite{Rickert2014, Tonneau2018}, they usually lack guarantees on asymptotic optimality \cite{Karaman2011}. There are, however, two planners which provide those guarantees. First, the quotient-space roadmap planner (QMP*) \cite{Orthey2020IJRR, Orthey2018}, which generalizes the probabilistic roadmap planner (PRM*) \cite{Karaman2011}. Second, the hierarchical bi-directional fast marching tree (HBFMT*) \cite{Reid2019, Reid2020}, which generalizes the fast marching trees algorithm (FMT*) \cite{Janson2015}. While both guarantee asymptotic optimality \cite{Orthey2020IJRR, Reid2020}, they either support only Euclidean spaces \cite{Reid2020} or rely on dense roadmaps \cite{Orthey2020IJRR}. Our approach differs significantly, in that we are the first to compute sparse roadmaps over general multilevel abstractions, while providing guarantees on asymptotic near-optimality. \subsection{Sparse Roadmaps} The history of sparse roadmaps essentially begins with the pioneering work by Sim{\'e}on et al. \cite{Simeon2000}, who were the first to prune states based on visibility regions. With visibility regions, we try to find a minimal set of states from which the full state space is visible, similar to the concept of guards in the art gallery problem \cite{Orourke1987}. However, visibility roadmaps often sacrifice path quality. As remedies, we can introduce cycles \cite{Schmitzberger2002, Nieuwenhuisen2004} or use edge visibility \cite{jaillet_2008} to improve path quality. While cycles and edge visibility can improve path quality, there are no guarantees on optimality. This changed with the advent of near-optimal sparse roadmaps \cite{Marble2013}. Starting from dense asymptotically optimal roadmaps \cite{Karaman2011}, we can use graph spanners to sparsify a dense roadmap while providing guarantees on path quality. We can achieve this by removing either edges \cite{Marble2013, Wang2015} or edges and vertices \cite{Salzman2014}. Computing dense roadmaps before sparsification is, however, computationally expensive. Later work introduced incremental sparse graph spanners, with which we can remove the dependence on dense roadmaps altogether \cite{dobson_2014}. Our work is complementary to sparse graph spanners, in that we also use incremental sparse graph spanners \cite{dobson_2014}. We differ, however, in building not one, but multiple sparse roadmaps on different abstraction levels. When using sparse roadmaps, we often face the problem of explicitly defining a visibility or connection radius to set the sparseness of the graph. To handle this trade-off between optimality and efficiency, we can create multi-resolution roadmaps \cite{Du2020}. Multi-resolution roadmaps are sets of roadmaps which differ in how sparse they are. To vary roadmap sparsity, we can change the connection radius \cite{Saund2020} or selectively remove edges, either evenly distributed \cite{Ichnowski2019} or based on a reliability criterion \cite{Murray2020}. To exploit those multi-resolution roadmaps, we can plan on the sparsest roadmap first and selectively refine the roadmap whenever we hit an obstacle \cite{Saund2020}. Such a strategy is efficient, because solutions on sparser roadmaps act as admissible heuristics for planning \cite{Aine2016, Du2020}.
While multi-resolution roadmaps exist on the same state space, our approach is complementary, in that we create sparse multilevel roadmaps on different state spaces, whereby each state space represents a relaxed planning problem.
\section{Introduction} The motivation for our work comes from the celebrated conjecture of Witten \cite{W} saying that the intersection theory on the Deligne--Mumford moduli spaces $\overline{\mathcal{M}}_{g,n}$ is governed by the KdV hierarchy. The conjecture was first proved by Kontsevich \cite{K}. One of the key ingredients in Kontsevich's argument is a certain Hermitian matrix model, which is now known as the {\em Kontsevich matrix model}. The latter provides both a tau-function solution to the KdV hierarchy and a Feynman diagram expansion which can be related to the intersection theory on the moduli spaces $\overline{\mathcal{M}}_{g,n}$. Witten's conjecture can be generalized in various ways, such as Gromov--Witten theory \cite{W} and Fan--Jarvis--Ruan--Witten (FJRW) theory \cite{FJRW}. It is natural to ask to what extent the Kontsevich matrix model can be generalized too. There are only two cases in which the answer to the above question is known to be positive, that is, Gromov--Witten theory of the projective line $\mathbb{P}^1$ and FJRW theory of the simple singularities of type $A$. The case of a simple singularity of type $D$ is arguably the next on the list and this is exactly the construction carried out in this paper. We expect that there exists a matrix model for the FJRW invariants corresponding to the remaining simple singularities $E_N$ ($N=6,7,8$). However, at this point the construction of such a model looks quite challenging. The main issue from our point of view is that the fermionic realizations of the basic representations of the affine Lie algebras of type $E_N$ ($N=6,7,8$) are not known. In the rest of this introduction let us concentrate on stating our results. \subsection{$(h,2)$-reduction of the 2-component BKP hierarchy} The explicit form of the Kac--Wakimoto hierarchies of type D was determined by ten Kroode and van de Leur \cite{KvL}. We are interested in the principal case, that is, the Kac--Wakimoto hierarchy corresponding to the conjugacy class of the Coxeter transformation. It turns out that the principal Kac--Wakimoto hierarchy of type D is a reduction of the so-called 2-component BKP hierarchy (see \cite{LWZ} and Corollary 1 in \cite{CM}). Let \beq\new\begin{array}{c}\nonumber \mathbf{t}_a:=(t_{a,m})_{m\in \raise-1pt\hbox{$\mbox{\Bbbb Z}$}_{\rm odd}^+}, \quad a=1,2 \end{array}\eeq be two sequences of formal variables. We denote by $\mathbb{Z}_{\rm odd}$ the set of all odd integers and by $\mathbb{Z}_{\rm odd}^+$ the set of all positive odd integers. Suppose that $h:=2N-2>0$ is an even integer. A formal power series \beq\new\begin{array}{c}\nonumber \tau(\mathbf{t}_1,\mathbf{t}_2)\in \mathbb{C}[\![\mathbf{t}_1,\mathbf{t}_2]\!] \end{array}\eeq is said to be a {\em tau-function} of the $(h,2)$-reduction of the 2-component BKP hierarchy if the following Hirota bilinear equations hold: \beq\new\begin{array}{c}\n \Omega_m(\tau\otimes \tau)=0,\quad m\geq 0. \end{array}\eeq Here $\Omega_m$ is the following bi-linear operator acting on $\mathbb{C}[\![\mathbf{t}_1,\mathbf{t}_2]\!] 
^{\otimes 2}$ \beq\new\begin{array}{c}\nonumber \operatorname{Res}_{z=0} \frac{dz}{z} \Big( z^{hm} \Gamma_1(\mathbf{t},z) \otimes \Gamma_1(\mathbf{t},-z) - z^{2m} \Gamma_2(\mathbf{t},z) \otimes \Gamma_2(\mathbf{t},-z) \Big), \end{array}\eeq where \beq\new\begin{array}{c}\nonumber \Gamma_a(\mathbf{t}, z) := \exp\Big( \sum_{m\in \raise-1pt\hbox{$\mbox{\Bbbb Z}$}_{\rm odd}^+} t_{a,m} z^m\Big) \exp\Big(- \sum_{m\in \raise-1pt\hbox{$\mbox{\Bbbb Z}$}_{\rm odd}^+} 2\partial_{t_{a,m}} \frac{z^{-m}}{m}\Big) \end{array}\eeq are {\em vertex operators}. Our main interest is in tau-functions satisfying the so-called {\em string equation} \beq\new\begin{array}{c}\label{str_eqn} L_{-1}\, \tau=0,\quad \quad L_{-1} := -\mathbf{i}\, \frac{\partial}{\partial t_{1,1}} + \sum_{a=1,2}\sum_{m\in \raise-1pt\hbox{$\mbox{\Bbbb Z}$}_{\rm odd}} : J^a_m J^a_{-m- h_a}:, \end{array}\eeq where $\mathbf{i}:=\sqrt{-1}$, $h_1:=h:=2N-2$, $h_2=2$, \beq\new\begin{array}{c}\label{Jam} J^a_m := 2\frac{\partial}{\partial t_{a,m}},\quad J^a_{-m} := m t_{a,m},\quad m\in \mathbb{Z}_{\rm odd}^+, \end{array}\eeq and the normal ordering means that the annihilation operators, i.e., all $J^a_m$ with $m>0$, should be applied first. The existence of a tau-function satisfying the string equation was proved by Vakulenko \cite{V}. Cheng and Milanov proved \cite{CM} that the total descendant potential of the $D_N$-singularity is also a solution of the $(h,2)$-reduction satisfying the string equation. Vakulenko did not discuss the uniqueness of his construction, while Cheng and Milanov proved that the total descendant potential also satisfies a dilaton constraint and that the string equation and the dilaton constraint uniquely determine the tau-function. Our first result is that the dilaton constraint is redundant, that is, the following theorem holds: \begin{theorem}\label{t1} There exists a unique tau-function of the $(h,2)$-reduction of the 2-component BKP hierarchy satisfying the string equation (\ref{str_eqn}). \end{theorem} This result is a direct analog of the well-known statement for the $A_N$ singularities (see, for example, \cite{KS}). The problem of characterizing tau-functions of the Drinfeld--Sokolov hierarchies via the string equation was studied in a recent paper by Cafasso and Wu \cite{CaWu}. We expect that Theorem \ref{t1} can also be derived from their results. Theorem \ref{t1} is very important for identifying tau-functions of the 2-component BKP hierarchy that have different origins. In fact, the unique tau-function specified by Theorem \ref{t1} arises in at least three other ways: as the total descendant potential of the $D_N$-singularity, as a tau-function of the principal Kac--Wakimoto hierarchy of type $D_N$, and as the generating function of FJRW-invariants of the Berglund--H\"ubsch dual singularity $D_N^T$. We refer to Section \ref{sec:geometry} for more details. Let us point out that our matrix model will be obtained by using the theory of the 2-component BKP hierarchy. Section \ref{sec:geometry} is written only for readers who might be interested in the applications of our result to geometry. In particular, the FJRW theory might provide a geometric approach to our model similar to Kontsevich's argument in \cite{K}. In other words, our matrix model could be viewed as a motivation to look for a combinatorial model for the virtual fundamental cycle in the FJRW theory of the Berglund--H\"ubsch dual singularity $D_N^T$.
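As a quick sanity check on the shape of these vertex operators, note that the annihilation half of $\Gamma_a(\mathbf{t},z)$ acts on polynomial tau-functions as the shift $t_{a,m}\mapsto t_{a,m}-2z^{-m}/m$. The following small computation (our own illustration in Python/sympy, truncated to three odd time variables) verifies this on a test polynomial:

\begin{verbatim}
import sympy as sp

z = sp.symbols('z')
ms = [1, 3, 5]                       # truncate to a few odd time variables
t = {m: sp.symbols('t%d' % m) for m in ms}
tau = t[1]**2 * t[3] + t[5]          # an arbitrary test polynomial

# Apply exp(-sum_m (2/m) z^{-m} d/dt_m) as an operator series; on a
# degree-3 polynomial the series terminates after finitely many terms.
D = lambda g: sum(sp.Rational(-2, m) * z**(-m) * sp.diff(g, t[m]) for m in ms)
series, term = tau, tau
for k in range(1, 6):
    term = D(term) / k
    series += term

shifted = tau.subs({t[m]: t[m] - 2*z**(-m)/m for m in ms}, simultaneous=True)
assert sp.simplify(series - shifted) == 0
\end{verbatim}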
In the rest of this paper, except for Section \ref{S_3}, we will be working with the unique tau-function $\tau(\mathbf{t}_1,\mathbf{t}_2)$ specified by Theorem \ref{t1}. We sometimes also denote it by $\tau^{\rm CM}(\mathbf{t}_1,\mathbf{t}_2)$ in order to emphasize that we follow the normalization of Cheng--Milanov \cite{CM}. The precise identification between $\tau^{\rm CM}(\mathbf{t}_1,\mathbf{t}_2)$, the total descendant potential of the $D_N$-singularity, and the generating function of FJRW-invariants of the Berglund--H\"ubsch dual $D_N^T$ is given in Section \ref{sec:BKP-D}. \subsection{Matrix model}\label{sec:mm} Let us introduce two non-degenerate diagonal matrices \beq\new\begin{array}{c}\nonumber Z_a={\rm diag}\,(z_{a,1},\dots,z_{a,N_a}),\quad a=1,2, \end{array}\eeq such that $\operatorname{Arg}(z_{1,i}) =\tfrac{\pi}{2(h+1)}$, $\operatorname{Arg}(z_{2,j}) =0$, that is, $z_{2,j}\in \mathbb{R}_{> 0}$, and $N_1+N_2$ is an even number. We will refer to the substitution \beq\new\begin{array}{c}\nonumber t_{a,m} = -\frac{2}{m} \, \hbox{Tr }(Z_a^{-m}) = -2 \sum_{i=1}^{N_a} \frac{(z_{a,i})^{-m}}{m} \end{array}\eeq as the {\em Miwa parametrization}, while the formal series $\tau(Z_1,Z_2):=\tau(\mathbf{t}_1,\mathbf{t}_2)|_{t_{a,m}:= -\tfrac{2}{m}\operatorname{Tr}(Z_a^{-m})}$ will be called the tau-function in the Miwa parametrization. Let us denote by ${\mathcal H}_N$ the linear space of all $N\times N$ Hermitian matrices, and by ${\mathcal H}_N^+$ the space of all positive definite $N\times N$ Hermitian matrices. The space $\mathcal{H}_N$ is equipped with a canonical measure \begin{align} \nonumber \left[d X\right] & := p_*\left( \Delta_{N}(x)^2 \left[d U\right] \, \prod_{i=1}^{N} d x_i \right), \end{align} where for a diagonal matrix $a=\operatorname{diag}(a_1,\dots,a_N)$ we denote by $\Delta_N(a):=\prod_{1\leq i<j\leq N} (a_j-a_i)$ the Vandermonde determinant, $\left[d U\right] $ is the Haar measure on the unitary group $U(N)$, and $p_*$ is the pushforward operation, i.e., integration along the fiber, with respect to the proper surjective map $p:U(N)\times \mathbb{R}^N \to \mathcal{H}_N$, $p(U,x):= U {\rm diag}\,(x_1,x_2,\dots,x_N)U^\dagger$. For our purposes, it is more convenient to work with the following measure: \begin{align} \nonumber \widetilde{\left[d X\right]} & :=\frac{\left[d X\right]}{\sqrt{\det\left(X\otimes I_N+I_N\otimes X\right)}}, \end{align} where $I_N$ is the identity matrix of size $N$. Let us also introduce the interaction term \beq\new\begin{array}{c}\nonumber S(X,Y)=\det\left(\frac{X\otimes I_{N_2}-I_{N_1}\otimes Y}{X\otimes I_{N_2}+I_{N_1}\otimes Y}\right), \end{array}\eeq and the potential \beq\new\begin{array}{c}\label{eq_Wpot} W(X,Z)= \frac{{\bf i}^h}{h+1}X^{h+1}-XZ. \end{array}\eeq The main result of this paper can be stated as follows. \begin{theorem}\label{t2} Under the above notation, the tau-function in the Miwa parametrization coincides with the asymptotic expansion of a matrix integral: \beq\new\begin{array}{c} \nonumber \tau(Z_1,Z_2)\sim \frac{e^{\frac{\mathbf{i} h}{h+1} \hbox{Tr } Z_1^{h+1}}}{{\cal N}} \int _{e^{\frac{(h+2)\pi}{2(h+1)}\mathbf{i}}{\mathcal H}_{N_1}} \widetilde{\left[d X\right]} \, e^{\hbox{Tr } W(X,Z_1^h) } \int _{{\mathcal H}_{N_2}^+} \widetilde{\left[d Y\right]}\, e^{\hbox{Tr } W(Y,Z_2^2)} S(X,Y), \end{array}\eeq where the normalization factor $\mathcal{N}$ is given explicitly by \eqref{formula-N}.
\end{theorem} The asymptotic equality in Theorem \ref{t2} is interpreted via the formal expansion of the matrix integrals according to the steepest descent method; see Section \ref{sec:di} for more details. Establishing the analytic properties of our matrix integral seems to be a separate project, so we do not pursue it in this paper. Let us just make a comment about a possible relation to the notion of {\em strongly asymptotically developable} functions (see \cite{Ma} for some background). We expect that the matrix integral in Theorem \ref{t2}, after choosing appropriate integration cycles, is analytic and strongly asymptotically developable as $z=(z_{a,j})\in (\mathbb{P}^1)^{N_1+N_2}$ tends to the divisor $\prod_{a,j} z_{a,j}=\infty$ in an appropriate multi-sector and that $\tau(Z_1,Z_2)$ is the formal series of the corresponding asymptotic expansion. Note, however, that since the interaction term $S(X,Y)$ is a meromorphic function, the multivariable asymptotics that we need are slightly more complicated than the ones in \cite{Ma}. \begin{remark} The symmetry algebra of the $BKP$ hierarchy is a certain version of the central extension of the Lie algebra $B_\infty$. Because of that, it might be a bit surprising that the topological solution of the 2-component $BKP$ hierarchy is described in terms of a Hermitian (rather than orthogonal) matrix integral. \end{remark} \begin{remark} While the integrals over $X$ and $Y$ look quite similar, they are essentially different. The integral over $X$ is an asymptotic expansion in the vicinity of the critical point $X=\mathbf{i} Z_1$. This is typical for the generalized Kontsevich models, in particular for the integral for the $A_N$ singularity, which can be described as a perturbative expansion of a Gaussian integral. The integral over $Y$ is an asymptotic expansion in the vicinity of the point $Y=0$. It is essentially a matrix Laplace transform and, in a certain sense, is much simpler. \end{remark} \subsection{Organization of the paper} In Section \ref{sec:geometry} we give a precise identification of our tau-function with the total descendant potential of the simple singularity of type $D_N$ and the generating function of FJRW-invariants of the Berglund--H\"ubsch dual singularity $D_N^T$. The reader not interested in the applications to geometry could skip this section, because it is not logically connected with the rest of the paper. In Section \ref{S_3} we introduce the main tools of this paper, i.e., the formalism of neutral fermions, and prove that in the Miwa parametrization every tau-function of the 2-component BKP hierarchy can be written as a ratio of two Pfaffians, where the numerator is given by a Pfaffian of a matrix whose entries are given by 2-point correlators. This formula is a direct analog of the well-known determinant description of the KP tau-function in the Miwa parametrization. In Section \ref{sec:Gr} we give a proof of Theorem \ref{t1}. Our strategy is similar to that of \cite{KS}: first, we recall the Grassmannian description of the 2-component BKP hierarchy. The string equation yields a certain ordinary differential equation that can be solved in terms of certain steepest descent integrals. Then we prove that the subspace corresponding to the point of the BKP Grassmannian parametrizing the tau-function of interest has a basis which can be reconstructed uniquely from the steepest descent asymptotics of the one-dimensional integrals.
Finally, in Section \ref{sec:mi} we extend the methods from Section \ref{sec:Gr} in order to obtain formulas for the entries of the Pfaffian matrix from Section \ref{S_3} in terms of asymptotic expansions of certain double integrals. This is done in Proposition \ref{kern_basis}, which could be viewed as the key step in proving Theorem \ref{t2}. Let us also point out that in order to prove Proposition \ref{kern_basis} we had to use a certain symmetry of the tau-function (see Lemma \ref{le:symm}), which in turn is established via Theorem \ref{t1}. In other words, Theorem \ref{t1} plays an important role in the proof of Theorem \ref{t2}! In the last section we make some further remarks and outline some directions for further investigation. {\bf Acknowledgements.} The work of A.A. is partially supported by IBS-R003-D1 and by RFBR grant 18-01-00926. The work of T.M. is partially supported by JSPS Grant-In-Aid (Kiban C) 17K05193 and by the World Premier International Research Center Initiative (WPI Initiative), MEXT, Japan. T.M. would like to thank Emil Horozov and Mikhail Kapranov for useful conversations on matrix models and multivariable asymptotics. A.A. and T.M. would like to thank anonymous referees for the suggested improvements. \section{Geometric interpretation of the tau-function}\label{sec:geometry} The goal of this section is to give the precise identification between the tau-function from Theorem \ref{t1}, the total descendant potential of a simple singularity of type $D$, and the generating function of the FJRW invariants of type $D^T$, where $D^T$ denotes the so-called Berglund--H\"ubsch (BH) dual. \subsection{FJRW-invariants}\label{sec:FJRW-inv} The BH dual singularity $D_N^T$ is given by the polynomial $W(x,y):=x^{N-1} y +y^2$. Note that this polynomial is not a singularity of type $D_N$. According to \cite{FJRW}, the intersection theory on the moduli space of $W$-spin curves corresponding to the potential $W(x,y)= x^{N-1} y +y^2$ yields a Cohomological Field Theory (CohFT) on an $N$-dimensional vector space \beq\new\begin{array}{c}\nonumber \mathcal{H}_W:=\mathbb{C} e_0 \bigoplus \bigoplus_{i=1}^{N-1} \mathbb{C} e_{2i-1}, \end{array}\eeq that has the following properties. Let us assign degrees \beq\new\begin{array}{c}\nonumber \operatorname{deg} (e_0)=\frac{N-2}{2N-2},\quad \operatorname{deg} (e_{2i-1}) = \frac{i-1}{N-1} \quad (1\leq i\leq N-1). \end{array}\eeq \begin{enumerate} \item[(i)] {\em Dimension constraint}: if $\alpha_i\in \mathcal{H}_W$ are homogeneous elements, then the correlator \begin{equation}\label{correlator} \left < \alpha_1\psi^{k_1},\dots,\alpha_m\psi^{k_m}\right >^{\rm FJRW}_{g,m} \end{equation} is non-zero only if \beq\new\begin{array}{c}\nonumber \sum_{i=1}^m (\operatorname{deg} (\alpha_i) + k_i) = 3g-3 + m + D(1-g), \end{array}\eeq where $D:=1-\tfrac{1}{N-1}$ is the {\em conformal dimension} of the CohFT. Let us point out that $D=1-\tfrac{2}{h}$, where $h=2N-2$ is the Coxeter number for the root system of type $D_N$. \item[(ii)] {\em Euler characteristic constraint}: Put $\Theta^0:=(0,0)$, $\Theta^{2i-1} = (\tfrac{2i-1}{h},\tfrac{1}{2})$ ($1\leq i\leq N-1$).
Then the correlator (\ref{correlator}) is non-zero only if \beq\new\begin{array}{c}\nonumber \frac{1}{h} (2g-2+m) - (\Theta^{\alpha_1}_1 +\cdots + \Theta^{\alpha_m}_1) \in \mathbb{Z}, \\ \frac{1}{2} (2g-2+m) - (\Theta^{\alpha_1}_2 +\cdots + \Theta^{\alpha_m}_2) \in \mathbb{Z}, \end{array}\eeq where $\Theta^\alpha_s$ for a homogeneous element $\alpha\in \mathbb{C} e_i$ stands for the $s$-th component of $\Theta^i$. \item[(iii)] Genus-0 three-point correlators (see \cite{FJRW}, Section 5.2.2): a) All 3-point correlators $\left < \alpha_1,\alpha_2,\alpha_3\right >^{\rm FJRW}_{0,3}$ satisfying the dimension constraint $\operatorname{deg} (\alpha_1)+\operatorname{deg} (\alpha_2)+\operatorname{deg} (\alpha_3)=D$ are equal to 1 unless one of the insertions $\alpha_i$ lies in $\mathbb{C} e_0$. b) The only non-zero 3-point correlator $\left < \alpha_1,\alpha_2,\alpha_3\right >^{\rm FJRW}_{0,3}$ involving an insertion $\alpha_i\in \mathbb{C}e_0$ is \beq\new\begin{array}{c}\nonumber \left < e_0, e_0,e_1\right >^{\rm FJRW}_{0,3} = -\frac{1}{N-1}. \end{array}\eeq \item[(iv)] The 4-point correlator (see \cite{FJRW}, Section 6.3.7): \beq\new\begin{array}{c}\nonumber \left < e_3, e_3, e_{2N-5},e_{2N-3}\right >^{\rm FJRW}_{0,4} = \frac{1}{h}. \end{array}\eeq \item[(v)] $e_1$ is the unit of the CohFT. \end{enumerate} Fan--Jarvis--Ruan prove the following reconstruction result (see \cite{FJRW}, Theorem 6.2.10, part (3)): properties (iii) and (iv) uniquely determine the CohFT, that is, the correlators (\ref{correlator}) are uniquely determined from (iii), (iv), and the axioms of a CohFT. Let us recall the total descendant potential of the FJRW theory, that is, the following generating function of FJRW-invariants: \beq\new\begin{array}{c}\nonumber \mathcal{D}^{\rm FJRW}(\hbar,\mathbf{t}^{\rm FJRW}):= \exp\left( \sum_{g,\kappa} \hbar^{g-1} \left < e_{i_1} \psi^{k_1},\dots , e_{i_m} \psi^{k_m} \right >_{g,m} \frac{ t^{\rm FJRW}_{k_1,i_1}\cdots t^{\rm FJRW}_{k_m,i_m} }{m!} \right), \end{array}\eeq where the sum is over all integers $g\geq 0$ and over all sequences $\kappa=((k_1,i_1),\dots,(k_m,i_m))$ of pairs. \subsection{Mirror symmetry}\label{Mirror} Let us recall the mirror symmetry result of Fan--Jarvis--Ruan (see \cite{FJRW}, Theorem 6.1.3, part (3)). Let \beq\new\begin{array}{c}\nonumber f(x_1,x_2,x_3):= x_1^2 x_2-x_2^{N-1} + x_3^2 \end{array}\eeq be the $D_N$ singularity. Note that $f$ is quasi-homogeneous: if we assign degrees $\tfrac{N-2}{2N-2}, \tfrac{1}{N-1},$ and $\tfrac{1}{2}$ to $x_1, x_2,$ and $x_3$, respectively, then $f$ is homogeneous of degree 1. Therefore, the {\em local algebra} \beq\new\begin{array}{c}\nonumber H_f:=\mathbb{C}[x_1,x_2,x_3]/(\partial_{x_1}f,\partial_{x_2}f,\partial_{x_3}f) \end{array}\eeq is naturally a graded vector space. Let us also recall the so-called {\em Grothendieck residue} on $H_f$. Note that the determinant of the Hessian matrix $\operatorname{Hess}(f):=\operatorname{det}(\partial^2f/\partial x_i\partial x_j)$ is a homogeneous polynomial of degree $D=\tfrac{N-2}{N-1}$. The maximal possible degree of a homogeneous subspace in the local algebra is $D$. The corresponding homogeneous subspace is one-dimensional and hence spanned by the class $[\operatorname{Hess}(f) ]$. Here if $\psi\in \mathbb{C}[x_1,x_2,x_3]$, then $[\psi]$ denotes the equivalence class of $\psi$ in the local algebra.
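As a concrete illustration (our own Python/sympy check, not part of the original argument), one can verify for small $N$ that the class of $\operatorname{Hess}(f)$ in the local algebra agrees with the class $[-4N x_1^2]$ computed below, by reducing modulo a Gr\"obner basis of the Jacobian ideal:

\begin{verbatim}
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
N = 4                                          # the D_4 case
f = x1**2*x2 - x2**(N - 1) + x3**2

jac = [sp.diff(f, v) for v in (x1, x2, x3)]    # Jacobian ideal generators
G = sp.groebner(jac, x1, x2, x3, order='grevlex')

hess = sp.hessian(f, (x1, x2, x3))
hess_class = G.reduce(hess.det())[1]           # normal form in the local algebra
assert sp.expand(hess_class - G.reduce(-4*N*x1**2)[1]) == 0
print(hess_class)                              # -48*x2**2 = -4*N*(N-1)*x2**(N-2)
\end{verbatim}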
Given an element $\psi$ in the local algebra, let us define the Grothendieck residue $\operatorname{Res}(\psi)$ by the formula $N \psi_D= \operatorname{Res}(\psi) [\operatorname{Hess}(f)]$, where $\psi_D$ is the homogeneous component of $\psi$ of maximal degree. Alternatively, the Grothendieck residue coincides with the multidimensional residue \beq\new\begin{array}{c}\nonumber \operatorname{Res}(\psi) = \operatorname{Res}\ \frac{\psi dx_1 \wedge dx_2\wedge dx_3}{\partial_{x_1}(f)\partial_{x_2}(f) \partial_{x_3}(f)}, \end{array}\eeq that is, the LHS above is by definition $(2\pi \mathbf{i})^{-3}\times$ the integral of the meromorphic 3-form along an appropriate toroidal cycle $|\partial_{x_1}(f)|=|\partial_{x_2}(f)|= |\partial_{x_3}(f)|=\epsilon$. A direct computation yields $[\operatorname{Hess}(f) ]=[-4N x_1^2]$. Let us fix a basis of $H_f$ represented by the monomials $\phi_i(x)=x_2^{i-1}$ ($1\leq i\leq N-1$) and $\phi_N(x)=2x_1$. Then the residue pairing takes the form \beq\new\begin{array}{c}\nonumber \operatorname{Res}(\phi_i \phi_j) = -\frac{1}{2h}\delta_{i+j,N},\quad 1\leq i,j\leq N-1, \\ \operatorname{Res}(\phi_i\phi_N) = -\delta_{i,N},\quad 1\leq i\leq N. \end{array}\eeq Using Saito's theory of primitive forms, we can define a Frobenius structure on the space of miniversal deformations of $f$ with primitive form $\omega=dx_1\wedge dx_2\wedge dx_3$. Givental's higher genus reconstruction yields a total descendant potential \beq\new\begin{array}{c}\nonumber \mathcal{D}^{\rm SG}(\hbar,\mathbf{t}^{\rm SG}) =\exp \left( \sum_{g,\kappa} \hbar^{g-1} \left < \phi_{i_1}\psi^{k_1},\dots, \phi_{i_m}\psi^{k_m}\right >_{g,m}^{SG} \frac{ t^{\rm SG}_{k_1,i_1}\cdots t^{\rm SG}_{k_m,i_m} }{m!}\right). \end{array}\eeq Put $c:=\tfrac{\mathbf{i}}{\sqrt{2h}} \, 2^{-(N-2)/h}$ and let us define the following map: \beq\new\begin{array}{c}\n \operatorname{Mir}: H_f \to \mathcal{H}_W \\ \phi_i:= x_2^{i-1}\mapsto 2^{\tfrac{i-1}{N-1}} \, e_{2i-1}\quad (1\leq i\leq N-1) \\ \phi_N:= 2x_1\mapsto -h\mathbf{i}\, 2^{\tfrac{N-2}{2N-2}} \, e_{0}, \end{array}\eeq where $\mathbf{i}:=\sqrt{-1}$. Mirror symmetry between the FJRW-invariants and the SG-invariants can be stated as follows: \beq\new\begin{array}{c}\label{FJRW=SG} c^{2g-2} \left < \phi_{i_1}\psi^{k_1},\dots,\phi_{i_m}\psi^{k_m} \right >^{\rm SG}_{g,m} = \left < \operatorname{Mir}(\phi_{i_1})\psi^{k_1},\dots, \operatorname{Mir}(\phi_{i_m})\psi^{k_m} \right >^{\rm FJRW}_{g,m}. \end{array}\eeq Let us sketch the proof of the above formula. The idea is to first establish two special cases. The remaining identities follow from the special cases and the axioms of a CohFT according to the reconstruction theorem of Fan--Jarvis--Ruan (see \cite{FJRW}, Theorem 6.2.10, part (3)). The first special case is to prove \eqref{FJRW=SG} when $g=0$, $m=3$, and $k_1=\cdots =k_m=0$. This, however, is straightforward, because the explicit formulas for the FJRW invariants of this type are already known (see property (iii) in Section \ref{sec:FJRW-inv}), while for the SG-invariants we have \beq\new\begin{array}{c}\nonumber \left < \phi_i,\phi_j,\phi_k\right >^{\rm SG} = \operatorname{Res}(\phi_i\phi_j\phi_k) \end{array}\eeq and the residue pairing is easy to compute. We leave the details as an exercise. The second special case is the identity involving the SG-correlator $\left < x_2, x_2,x_2^{N-3},x_2^{N-2} \right >_{0,4}^{\rm SG}$.
In that case the RHS of \eqref{FJRW=SG} is \beq\new\begin{array}{c}\nonumber 2^{\frac{2}{N-1}+\frac{N-3}{N-1}+\frac{N-2}{N-1}} \left < e_3, e_3, e_{2N-5}, e_{2N-3}\right >^{\rm FJRW}_{0,4} = 2^{1 +\frac{N-2}{N-1}} h^{-1} = -c^{-2} h^{-2}, \end{array}\eeq where we used the formula from property (iv) in Section \ref{sec:FJRW-inv}. Therefore, the identity follows from the following lemma: \begin{lemma}\label{le:4pt-cor} The 4-point correlator \beq\new\begin{array}{c}\nonumber \left < x_2, x_2, x_2^{N-3},x_2^{N-2}\right >_{0,4}^{\rm SG} = -\frac{1}{h^2}. \end{array}\eeq \end{lemma} \begin{proof} Let $f_t(x)=f(x)+t x_2$ be a deformation of $f$. The 4-point correlator can be computed by \beq\new\begin{array}{c}\label{4pt-cor} \partial_t\left.\Big( \left < x_2,x_2^{N-3},x_2^{N-2} \right >^{\rm SG}_{0,3}(t)\Big)\right|_{t=0}, \end{array}\eeq where the correlator involved is a deformation of $\left < x_2,x_2^{N-3},x_2^{N-2} \right >^{\rm SG}_{0,3}$ constructed via the flat (or Frobenius) structure of Saito and Givental's higher genus reconstruction. On the other hand, \beq\new\begin{array}{c}\nonumber \left < x_2,x_2^{N-3},x_2^{N-2} \right >^{\rm SG}_{0,3}(t) = \operatorname{Res}\, \frac{ x_2\cdot x_2^{N-3}\cdot x_2^{N-2} }{ \partial_{x_1}(f_t) \partial_{x_2}(f_t) \partial_{x_3}(f_t) }\, dx = \frac{t}{N-1}\, \operatorname{Res}\, \frac{ x_2^{N-2} \, dx }{ \partial_{x_1}(f_t) \partial_{x_2}(f_t) \partial_{x_3}(f_t) } , \end{array}\eeq where $dx:=dx_1 \wedge dx_2\wedge dx_3$ and we used that $x_2^{2(N-2)}= \tfrac{1}{N-1} x_2^{N-2} t$ in the local algebra of $f_t$. Since the Grothendieck residue of $x_2^{N-2}$ is $-\tfrac{1}{2h}$, the above formula and (\ref{4pt-cor}) yield the formula that we had to prove. \end{proof} In terms of the generating functions, we get \beq\new\begin{array}{c}\nonumber \mathcal{D}^{\rm FJRW}(\hbar,\mathbf{t}^{\rm FJRW} ) = \mathcal{D}^{\rm SG} (\hbar c^2, \mathbf{t}^{\rm SG}), \end{array}\eeq where the formal variables are related by the following linear change: \beq\new\begin{array}{c}\label{change:FJRW-SG} t^{\rm FJRW}_{k,0} =-h\mathbf{i}\, 2^{\tfrac{N-2}{2N-2}} t^{\rm SG}_{k,N},\quad t^{\rm FJRW}_{k,2i-1} = 2^{\tfrac{i-1}{N-1}} t^{\rm SG}_{k,i},\quad (1\leq i\leq N-1) . \end{array}\eeq \subsection{The 2-BKP hierarchy and the total descendant potential}\label{sec:BKP-D} Let us explain the identification of $\mathcal{D}^{\rm SG}(\hbar,\mathbf{t}^{\rm SG})$ with a tau-function of the 2-BKP hierarchy. This is done in three steps: construction of Hirota Bilinear Equations (HBEs) for the total descendant potential $\mathcal{D}^{\rm SG}(\hbar,\mathbf{t}^{\rm SG})$, identifying the HBEs with the HBEs of the principal Kac--Wakimoto hierarchy, and finally identifying the Kac--Wakimoto hierarchy with the $(h,2)$-reduction of the 2-BKP hierarchy. These steps are considered respectively in \cite{GM}, \cite{FGM}, and \cite{CM}. Let us recall the construction of HBEs from \cite{GM}. Recall that the set of vanishing cycles in the Milnor lattice $H_2(f^{-1}(1),\mathbb{Z})$ of the $D_N$-singularity $f$ is a root system of type $D_N$. Therefore, there exists an orthonormal basis $v_i$ ($1\leq i\leq N$) of $H_2(f^{-1}(1),\mathbb{Q})$ with respect to the intersection pairing, such that the set of vanishing cycles is given by $\pm(v_i \pm v_j)$.
Furthermore, the Milnor lattice can be embedded in the dual vector space of the local algebra $H_f$ via the following period map: \beq\new\begin{array}{c}\nonumber \Pi:H_2(f^{-1}(1),\mathbb{Z})\to H_f^*,\quad \left <\Pi(\alpha),\phi_i\right >:= \frac{1}{2\pi} \int_\alpha \phi_i(x) \frac{dx}{df}, \end{array}\eeq where $\tfrac{dx}{df}$ is a holomorphic 2-form $\eta$ defined in a tubular neighborhood of $f^{-1}(1)$, such that $dx=df\wedge \eta$. Although the form $\eta$ is not unique, its restriction to $f^{-1}(1)$ is unique and it determines a holomorphic 2-form on the Milnor fiber $f^{-1}(1)$. For each $\alpha\in H_2(f^{-1}(1),\mathbb{Z})$ we have a multi-valued analytic map $I_\alpha^{(-1)}: \mathbb{C}\setminus\{0\}\to H_f$, such that \beq\new\begin{array}{c}\nonumber (I^{(-1)}_\alpha(\lambda),\phi_i):=\frac{1}{2\pi} \int_{\alpha\subset f^{-1}(\lambda)} \phi_i(x) \frac{dx}{df},\quad \forall i=1,2,\dots, N, \end{array}\eeq where $(\ ,\ )$ is the Grothendieck residue pairing. Due to homogeneity, we have the following simple formula \beq\new\begin{array}{c}\nonumber I^{(-1)}_\alpha(\lambda) = \sum_{i=1}^N \lambda^{m_i/h} \, \left <\Pi(\alpha),\phi_i\right >\, \phi^i, \end{array}\eeq where $\{\phi^i\}$ is the basis of $H_f$ dual to $\{\phi_i\}$ with respect to the residue pairing and \beq\new\begin{array}{c}\nonumber m_i:=\begin{cases} 2i-1 & \mbox{ if } 1\leq i\leq N-1 \\ N-1 & \mbox{ if } i=N. \end{cases} \end{array}\eeq coincide with the Coxeter exponents of the $D_N$ root system. Let us choose an eigenbasis $H_i$ ($1\leq i \leq N$) for the Coxeter transformation, satisfying $(H_i|H_j)=h\delta_{i,j^*}$, where $(\ |\ )$ is the invariant bilinear form of the $D_N$-root system and the involution ${}^*$ is defined by \beq\new\begin{array}{c}\nonumber i^*=\begin{cases} N-i & \mbox{ for } 1\leq i\leq N-1, \\ N & \mbox{ for } i=N. \end{cases} \end{array}\eeq More precisely, we choose $H_i$ to be the solutions to the following system of equations: \beq\new\begin{array}{c}\nonumber v_i=\frac{\sqrt{2}}{h} \Big( \eta^{m_1 i}H_1 +\cdots + \eta^{m_{N-1}i} H_{N-1}\Big),\quad 1\leq i\leq N-1, \\ v_N=\frac{1}{\sqrt{h}} H_N, \end{array}\eeq where $\eta=e^{2\pi\mathbf{i}/h}$. The Coxeter transformation $\sigma$ corresponds to analytic continuation around $\lambda=0$, that is, the analytic continuation of $I^{(-1)}_\alpha(\lambda)$ around $\lambda=0$ is $I^{(-1)}_{\sigma(\alpha)}(\lambda)$. Therefore, \beq\new\begin{array}{c}\nonumber I^{(-1)}_{H_i}(\lambda) = \frac{\mathbf{i}}{\sqrt{2}} \frac{\lambda^{m_i/h}}{m_i/h} \rho_i \phi^i,\quad 1\leq i\leq N-1,\\ I^{(-1)}_{H_N}(\lambda) = \mathbf{i}\, \sqrt{h} \frac{\lambda^{m_N/h}}{m_N/h} \rho_N \phi^N, \end{array}\eeq where $\rho_i$ ($1\leq i\leq N$) are some non-zero constants. \begin{lemma}\label{le:ci} There exists a choice of the orthonormal basis $v_i$ ($1\leq i\leq N$) such that the constants $\rho_i=-\mathbf{i} \xi^{m_i}$ ($1\leq i\leq N$), where $\xi=e^{\pi\mathbf{i}/h}$. \end{lemma} \begin{proof} The period map image of the Milnor lattice of the $D_N$-singularity is computed explicitly in \cite{MZ}, Proposition 5. In order to quote the result, we have to switch to different linear coordinates on $\mathbb{C}^3$, that is, put $x_1= \xi^{-1}\, X_1$, $x_2=\xi^2 \, X_2$, $x_3=X_3$. Then $f= X_1^2 X_2+X_2^{N-1}+X_3^2$.
According to \cite{MZ}, Proposition 5, there exists an orthonormal basis $v_k$ ($1\leq k\leq N$) of $H_2(f^{-1}(1);\mathbb{Q})$, such that, \beq\new\begin{array}{c}\nonumber I^{(-1)}_{v_k}(\lambda) = \frac{\lambda^{\theta+1/2}}{\Gamma(\theta+3/2)} \, \xi\, \Psi(v_k). \end{array}\eeq Here the extra factor $\xi$ (compared to formula (1) in \cite{MZ}) comes from the relation $dx_1dx_2dx_3=\xi dX_1dX_2dX_3$, the linear operator $\theta:H_f\to H_f$ defined by $\theta(\Phi_i) := \Big(\tfrac{m_{N-i}}{h}-\tfrac{1}{2}\Big)\Phi_i$ is the so-called {\em Hodge grading operator}, \beq\new\begin{array}{c}\nonumber \Psi(v_k) = 2\sum_{i=1}^{N-1} \eta^{m_i k} \Gamma(m_i/h) \Phi_{N-i},\quad (1\leq k\leq N-1), \end{array}\eeq and $\Psi(v_N)=-\mathbf{i} \Gamma(m_N/h)\Phi_N$, where $\Phi_i:=X_2^{i-1}$ ($1\leq i\leq N-1$) and $\Phi_N:=2X_1$. Let us point out that, compared to \cite{MZ}, here we changed the sign of $v_N$ and that the notation $v_k$ in \cite{MZ} corresponds to $\Psi(v_k)$ here. We get the following formulas: \beq\new\begin{array}{c}\nonumber I^{(-1)}_{v_k}(\lambda) = 2\sum_{i=1}^{N-1} \eta^{m_i k} \, \frac{\lambda^{m_i/h}}{m_i/h}\, \xi\, \Phi_{N-i} = 2\sum_{i=1}^{N-1} \eta^{m_i k} \, \frac{\lambda^{m_i/h}}{m_i/h}\, \xi^{-m_{N-i}}\, \phi_{N-i} \quad (1\leq k\leq N-1) \end{array}\eeq and $I^{(-1)}_{v_N}(\lambda) = -2\mathbf{i} \lambda^{1/2}\, \xi\, \Phi_N = -2\mathbf{i} \lambda^{1/2} \phi_N$. On the other hand, we have \beq\new\begin{array}{c}\nonumber I^{(-1)}_{v_k}(\lambda) = \frac{\sqrt{2}}{h} \sum_{i=1}^{N-1} \eta^{m_i k} \frac{\mathbf{i}}{\sqrt{2}} \frac{\lambda^{m_i/h}}{m_i/h} \rho_i \phi^i = 2\sum_{i=1}^{N-1} \eta^{m_i k} \frac{\lambda^{m_i/h}}{m_i/h} \rho_i (-\mathbf{i} )\phi_{N-i}, \end{array}\eeq for $1\leq k\leq N-1$ and $I^{(-1)}_{v_N}(\lambda) = 2\mathbf{i} \lambda^{1/2} \rho_N \phi^N = -2\mathbf{i} \lambda^{1/2} \rho_N \phi_N$, where we used that $\phi^i=-2h \phi_{N-i} $ ($1\leq i\leq N-1$) and $\phi^N=-\phi_N$. Comparing the two formulas for $I^{(-1)}$, we get $\rho_i=\mathbf{i} \xi^{-m_{N-i}}=\mathbf{i}\xi^{-h+m_i} = -\mathbf{i} \xi^{m_i}$ for $1\leq i\leq N-1$ and $\rho_N=1$. \end{proof} Let us define the higher order periods $I_\alpha^{(n)}(\lambda) := \partial_\lambda^{n+1}I_\alpha^{(-1)}(\lambda)$. The vertex operators in singularity theory (see \cite{GM}) are defined by \beq\new\begin{array}{c}\nonumber \Gamma^\alpha(\lambda) =\exp\Big( \sum_{k=0}^\infty I^{(-k-1)}_\alpha(\lambda) (-z)^{-k-1}\Big)\sphat \exp\Big( \sum_{k=0}^\infty I^{(k)}_\alpha(\lambda) (-z)^{k}\Big)\sphat, \end{array}\eeq where the quantization rules are \beq\new\begin{array}{c}\nonumber (\phi^i(-z)^{-k-1})\sphat := q^{\rm SG}_{k,i}/\sqrt{\hbar},\quad (\phi_i (-z)^k)\sphat := (-1)^{k+1} \sqrt{\hbar} \partial_{q^{\rm SG}_{k,i}}. \end{array}\eeq The formal variables $q^{\rm SG}_{k,i}$ are related to $t^{\rm SG}_{k,i}$ by the {\em dilaton shift}: $t^{\rm SG}_{k,i} = q^{\rm SG}_{k,i}+ \delta_{k,1}\delta_{i,1}$. Note that $i=1$ is the index of the basis vector $\phi_1=1$, that is, the identity in the local algebra $H_f$. 
Using the explicit formulas for the periods, we get \beq\new\begin{array}{c}\nonumber I^{(-k-1)}_{v_i}(\lambda) = \frac{\mathbf{i}}{h}\, \sum_{s=1}^{N-1} \frac{ (\eta^i \lambda^{1/h})^{m_s+k h}}{ \tfrac{m_s}{h} \left(\tfrac{m_s}{h} +1\right)\cdots \left( \tfrac{m_s}{h} +k\right)}\, \rho_s \phi^s, \end{array}\eeq \beq\new\begin{array}{c}\nonumber I^{(k)}_{v_i}(\lambda) = 2\mathbf{i}\, \sum_{s=1}^{N-1} (-1)^{k+1} \tfrac{m_s}{h} \left(\tfrac{m_s}{h} +1\right)\cdots \left( \tfrac{m_s}{h} +k-1\right)\, (\eta^i \lambda^{1/h})^{-m_s-k h}\, \rho_{s^*}\phi_s, \end{array}\eeq where $1\leq i\leq N-1$. Similarly, \beq\new\begin{array}{c}\nonumber I^{(-k-1)}_{v_N}(\lambda) = \mathbf{i}\, \frac{ \lambda^{\tfrac{1}{2}+k} }{ \tfrac{1}{2} \left(\tfrac{1}{2} +1\right)\cdots \left( \tfrac{1}{2} +k\right)}\, \rho_N \phi^N \end{array}\eeq and \beq\new\begin{array}{c}\nonumber I^{(k)}_{v_N}(\lambda) = \mathbf{i}\, (-1)^{k+1} \tfrac{1}{2} \left(\tfrac{1}{2} +1\right)\cdots \left( \tfrac{1}{2} +k-1\right)\, \lambda^{-\tfrac{1}{2}-k}\, \rho_N\phi_N. \end{array}\eeq According to Givental--Milanov \cite{GM}, the total descendant potential $\tau:=\mathcal{D}^{\rm SG}(\hbar;\mathbf{q}^{\rm SG}) $ is a solution to the following HBEs: \begin{align}\nonumber \operatorname{Res}_{\lambda=\infty} \frac{d\lambda}{\lambda} \left( \sum_{\alpha\in R} a_\alpha \Gamma^{\alpha}(\lambda)\otimes \Gamma^{-\alpha}(\lambda) \right) \ \tau\otimes \tau = \frac{N(h+1)}{12h}\, \tau\otimes \tau + & \\ \nonumber +\frac{1}{h}\sum_{k=0}^\infty\sum_{i=1}^N (m_i+k h) \Big(q_{k,i}^{\rm SG} \otimes 1 - 1\otimes q_{k,i}^{\rm SG} \Big) \Big(\frac{\partial}{\partial q_{k,i}^{\rm SG} }\otimes 1 - 1\otimes \frac{\partial}{\partial q_{k,i}^{\rm SG} }\Big) \tau\otimes \tau, \end{align} where $R$ is the set of all vanishing cycles and the coefficients $a_\alpha$ are given by some explicit formulas, which will not be needed here. Moreover, according to Givental \cite{Gi}, the total descendant potential satisfies the Virasoro constraints, in particular the {\em string equation} $L_{-1} \mathcal{D}^{\rm SG}(\hbar;\mathbf{q}^{\rm SG})=0$, where \beq\new\begin{array}{c}\nonumber L_{-1}:=-\frac{1}{4h\hbar} \sum_{i=1}^{N-1} q^{\rm SG}_{0,i} q^{\rm SG}_{0,i^*} - \frac{1}{2\hbar} q^{\rm SG}_{0,N} q^{\rm SG}_{0,N} + \sum_{k,i} q^{\rm SG}_{k+1,i} \partial_{q^{\rm SG}_{k,i}}. \end{array}\eeq Following Frenkel--Givental--Milanov \cite{FGM}, we can identify the above HBEs with the HBEs of the principal Kac--Wakimoto hierarchy (of type D). Let us recall the definitions and work out the precise identification. Put $[1,N]$ for the set of integers $\{1,2,\dots,N\}$ and let $E_+ = [1,2]\times \mathbb{Z}^+_{\rm odd}$. Let $i:E_+\to [1,N]$ be the function defined by $i(2,l)=N$ for all $l\in \mathbb{Z}^+_{\rm odd}$, while $i(1,l)$ is defined to be the unique integer $i\in [1,N-1]$ such that $l\equiv 2i-1 \ ({\rm mod}\ h)$. Let $y_e$ ($e\in E_+$) be a set of formal variables. Let $m:E_+\to \mathbb{Z}_{>0}$ be the function defined by $m(1,l) = l$ and $m(2,l) = l (N-1)$. The numbers $m(e)$ ($e\in E_+$) coincide with the so-called {\em exponents} of the affine Kac--Moody Lie algebra (of type $D$).
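For concreteness, the following small Python sketch (our own illustration) tabulates the index functions $i(e)$ and $m(e)$ just defined and lists the resulting exponents for $N=4$, i.e., $h=6$:

\begin{verbatim}
N = 4
h = 2*N - 2

def i_of(a, l):
    # i(2,l) = N; i(1,l) is the unique i in [1, N-1] with l = 2i-1 (mod h).
    if a == 2:
        return N
    return next(i for i in range(1, N) if (l - (2*i - 1)) % h == 0)

def m_of(a, l):
    # m(1,l) = l and m(2,l) = l(N-1).
    return l if a == 1 else l*(N - 1)

exps = sorted(m_of(a, l) for a in (1, 2) for l in range(1, 16, 2)
              if m_of(a, l) <= 16)
print(exps)   # [1, 3, 3, 5, 7, 9, 9, 11, 13, 15, 15]
\end{verbatim}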
The HBEs of the principal Kac--Wakimoto hierarchy of type D have the form \begin{align}\nonumber \operatorname{Res}_{\zeta=\infty} \frac{d\zeta}{\zeta} \left( \sum_{\alpha\in R} a_\alpha \Gamma^{\alpha}(\zeta)\otimes \Gamma^{-\alpha}(\zeta) \right) \ \tau\otimes \tau = \frac{N(h+1)}{12h}\, \tau\otimes \tau+ & \\ \nonumber +\frac{1}{h} \sum_{e\in E_+} m(e) \Big(y_e\otimes 1 - 1\otimes y_e \Big) \Big(\frac{\partial}{\partial y_e }\otimes 1 - 1\otimes \frac{\partial}{\partial y_e}\Big) \tau\otimes \tau, \end{align} where the coefficients $a_\alpha$ are the same as the coefficients $a_\alpha$ in the HBEs in singularity theory (this is one of the main results in \cite{FGM}) and the vertex operators \beq\new\begin{array}{c}\nonumber \Gamma^\alpha(\zeta) := \exp\Big( \sum_{e\in E_+} (\alpha | H_{i(e)^*}) y_e \zeta^{m(e)}\Big) \exp\Big( \sum_{e \in E_+} (\alpha | H_{i(e)}) \partial_{y_e} \frac{\zeta^{-m(e)}}{m(e)}\Big). \end{array}\eeq Recall that we have $(v_i|H_s) = \sqrt{2}\eta^{-i m_s}$, $(v_i|H_N)=(v_N|H_i)=0$ for $1\leq i, s\leq N-1$ and $(v_N|H_N)=\sqrt{h}.$ Therefore, the vertex operators of the Kac--Wakimoto hierarchy can be computed explicitly. Comparing with the vertex operators in singularity theory, we get that they coincide under the substitution $\lambda=\zeta^h$ and an appropriate rescaling of the formal variables. More precisely, if $m=2l+1$ ($l\geq 0$) is an odd integer, then there exist unique integers $k$ and $s$ such that \beq\new\begin{array}{c}\nonumber l+1= (N-1)k +s,\quad k\geq 0,\quad 1\leq s\leq N-1. \end{array}\eeq Then the substitution that identifies the vertex operators takes the following form: \beq\new\begin{array}{c}\label{y_1m} y_{1,2l+1} = \frac{\mathbf{i} \rho_s}{h\sqrt{2\,\hbar} }\, \frac{ q^{\rm SG}_{k,s} }{ \tfrac{m_s}{h} \left(\tfrac{m_s}{h} +1\right)\cdots \left(\tfrac{m_s}{h} +k\right)} \end{array}\eeq and \beq\new\begin{array}{c}\label{y_2m} y_{2,2l+1} = \frac{\mathbf{i} \rho_N}{\sqrt{h\, \hbar} }\, \frac{ q^{\rm SG}_{l,N} }{ \tfrac{1}{2} \left(\tfrac{1}{2} +1\right)\cdots \left(\tfrac{1}{2} +l\right)}. \end{array}\eeq As we already mentioned in the introduction, the Kac--Wakimoto hierarchy can be identified with the $(h,2)$-reduction of the 2-component BKP hierarchy. The identification consists of expressing the generators of the principal Heisenberg algebra, which in the Kac--Wakimoto representation are given by \beq\new\begin{array}{c}\nonumber H_{i(e),m(e)}:=\frac{\partial}{\partial y_e},\quad H_{i(e)^*,-m(e)}:= m(e) y_e,\quad e\in E_+, \end{array}\eeq in terms of the operators $J^a_m$ defined by (\ref{Jam}). We have (see \cite{CM}, Section 1.4 for more details) \beq\new\begin{array}{c}\nonumber H_{i,m} = \frac{1}{\sqrt{2}} \, J^1_m,\quad i\in [1,N-1],\quad m\equiv 2i-1\ ({\rm mod}\ h) \end{array}\eeq and \beq\new\begin{array}{c}\nonumber H_{N,m(N-1)} = \sqrt{\frac{N-1}{2}}\, J^2_m. \end{array}\eeq Therefore, the following formulas \beq\new\begin{array}{c}\label{KW-BKP} t_{1,m}:= \sqrt{2}\, y_{1,m},\quad t_{2,m}:=\sqrt{h}\, y_{2,m},\quad m\in \mathbb{Z}^+_{\rm odd} \end{array}\eeq provide an identification between the Kac--Wakimoto hierarchy and the $(h,2)$-reduction of the 2-component BKP.
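Let us record a small consistency check of this substitution. Writing $m=2l+1$ with $l+1=(N-1)k+s$ as above and recalling that $m_s=2s-1$ for $1\leq s\leq N-1$, we get \beq\new\begin{array}{c}\nonumber m = 2(N-1)k+2s-1 = hk+m_s, \end{array}\eeq so under $\lambda=\zeta^h$ the power $\zeta^{m(1,m)}=\zeta^{hk+m_s}$ multiplying $y_{1,m}$ in the Kac--Wakimoto vertex operator matches the power $\lambda^{\tfrac{m_s}{h}+k}$ multiplying $q^{\rm SG}_{k,s}$ in $I^{(-k-1)}_{v_i}$, as it should.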
Recalling the substitution \eqref{y_1m}--\eqref{y_2m}, we get that the total descendant potential $\mathcal{D}^{\rm SG}(\hbar,\mathbf{q}^{\rm SG})$ can be identified with a solution of the $(h,2)$-reduction of the 2-component BKP via the following substitutions: \beq\new\begin{array}{c}\label{t_1m} q_{1,2l+1} = \frac{\mathbf{i} \rho_s}{h \sqrt{\hbar} }\, \frac{ q^{\rm SG}_{k,s} }{ \tfrac{m_s}{h} \left(\tfrac{m_s}{h} +1\right)\cdots \left(\tfrac{m_s}{h} +k\right)} \end{array}\eeq and \beq\new\begin{array}{c}\label{t_2m} q_{2,2l+1} = \frac{\mathbf{i} \rho_N}{ \sqrt{\hbar} }\, \frac{ q^{\rm SG}_{l,N} }{ \tfrac{1}{2} \left(\tfrac{1}{2} +1\right)\cdots \left(\tfrac{1}{2} +l\right)}. \end{array}\eeq Here we denoted the dynamical variables of 2-BKP by $q_{a,2l+1}$, because we also want to work out how the dilaton shift transforms under the substitution. In order to do this, let us identify the standard dynamical variables $t_{1,2l+1}$ and $t_{2,2l+1}$ of 2-BKP with $t^{\rm SG}_{k,i}$ by the same substitutions (\ref{t_1m}) and (\ref{t_2m}), that is, replace $q$ by $t$ in both formulas. In the new variables the dilaton shift, i.e., the relation between $q_{a,m}$ and $t_{a,m}$, takes the form \beq\new\begin{array}{c}\nonumber t_{a,m}=q_{a,m} + \delta_{a,1} \delta_{m,2N-1}\, \frac{\mathbf{i} \rho_1}{\sqrt{\hbar}}\, \frac{h}{h+1}. \end{array}\eeq The string operator in the variables $t_{a,m}$ takes the form \beq\new\begin{array}{c}\nonumber L_{-1}= -\frac{\mathbf{i} \rho_1}{\sqrt{\hbar}} \, \partial_{t_{1,1}}+ \sum_{i=1}^{N-1} \frac{(2i-1)(2i^*-1)}{4h} t_{1,2i-1} t_{1,2i^*-1} + \frac{1}{8} t_{2,1} t_{2,1} +\\ +\sum_{m\in \raise-1pt\hbox{$\mbox{\Bbbb Z}$}^{\rm odd}_+ } \Big( \tfrac{m+h}{h} t_{1,m+h}\partial_{t_{1,m}}+ \tfrac{m+2}{2} t_{2,m+2}\partial_{t_{2,m}} \Big). \end{array}\eeq In order to work in the setting of \cite{CM}, we have to set $\sqrt{\hbar}=\rho_1$, that is, the unique tau-function $\tau^{\rm CM}(\mathbf{t}_1,\mathbf{t}_2)$ of the 2-component BKP hierarchy satisfying $L_{-1}\tau^{\rm CM}=0$ coincides with $\mathcal{D}^{\rm SG}(\rho_1^2, \mathbf{t}^{\rm SG})$. Recalling the mirror symmetry result of Fan--Jarvis--Ruan, we get that \beq\new\begin{array}{c}\nonumber \tau^{\rm CM}(\mathbf{t}_1,\mathbf{t}_2) = \mathcal{D}^{\rm FJRW}(\rho_1^2/c^2,\mathbf{t}^{\rm FJRW}). \end{array}\eeq Note that $\rho_1^2/c^2= 2^{2-\tfrac{1}{N-1}}\, h \, \eta$. Combining the linear changes \eqref{change:FJRW-SG} and \eqref{t_1m}--\eqref{t_2m} with Lemma \ref{le:ci}, we get that the 2-BKP time variables are related to the FJRW-variables via the following linear substitutions: \begin{align} \nonumber t_{1, hk+2i-1} = & \ \phantom{-}\frac{\mathbf{i}}{h}\, (2^{-1/h} \, \xi)^{m_i-1}\, \frac{t^{\rm FJRW}_{k,2i-1} }{\tfrac{m_i}{h} \left(\tfrac{m_i}{h}+1\right) \cdots \left(\tfrac{m_i}{h}+k \right)}\quad (1\leq i\leq N-1) \\ \nonumber t_{2, 2k+1} = & -\frac{1}{h}\, (2^{-1/h}\, \xi )^{m_N-1}\, \frac{t^{\rm FJRW}_{k,0}}{\tfrac{m_N}{h} \left(\tfrac{m_N}{h}+1\right) \cdots \left(\tfrac{m_N}{h}+k \right)}, \end{align} where $\xi=e^{\pi\mathbf{i}/h}$. \section{Free fermions and Pfaffians}\label{S_3} In this section we use the neutral free fermion description of the BKP hierarchy, introduced in \cite{DJKM1,DJKM2}, to derive a Pfaffian expression for the tau-function of the 2-component BKP hierarchy in the Miwa parametrization. Let us recall the setup from Section 1.3 in \cite{CM}.
Namely, let $\phi_a(k)$, $a=1,2$, $k\in \mathbb{Z}$ be a set of neutral fermions satisfying the anti-commutation relations \beq\new\begin{array}{c}\nonumber \phi_a(k)\phi_b(l)+\phi_b(l)\phi_a(k)=(-1)^k \delta_{a,b}\delta_{k,-l}. \end{array}\eeq The fermionic Fock space $\mathcal{F}$ is generated from a vacuum vector $|0\rangle$ by the action of the above fermions, subject only to the constraints \beq\new\begin{array}{c}\nonumber \phi_a(k)|0\rangle=0\quad (k<0),\quad (\phi_1(0)+\mathbf{i}\phi_2(0))|0\rangle =0. \end{array}\eeq The vector space $\mathcal{F}$ is equipped with a positive definite Hermitian form $H(\ ,\ )$, uniquely determined by the properties: $H(|0\rangle,|0\rangle)=1$ and $\phi_a(k)^\dagger=(-1)^k\phi_a(-k)$. Here $T^\dagger$ denotes the Hermitian conjugate of $T$, which satisfies \beq\new\begin{array}{c}\nonumber H(T^\dagger U,V)=H(U,T V),\quad \forall U,V\in \mathcal{F}. \end{array}\eeq If $T\in \operatorname{End}(\mathcal{F})$ is a linear operator, then we define \beq\new\begin{array}{c}\nonumber \langle v_1| T |v_2\rangle:= H(v_1,T(v_2)). \end{array}\eeq \subsection{Boson--Fermion correspondence} Put \beq\new\begin{array}{c}\label{Jam:fermionic} J^a_m=\sum_{k\in \raise-1pt\hbox{$\mbox{\Bbbb Z}$}} (-1)^k :\phi_a(-k-m)\phi_a(k) :, \end{array}\eeq where the fermionic normal ordering is defined by $: ab :=ab-\langle 0| ab|0\rangle$. These operators satisfy the following commutation relations: \beq\new\begin{array}{c}\nonumber [J^a_k,J^b_l]=2k\delta_{k,-l}\delta_{a,b}. \end{array}\eeq Put \beq\new\begin{array}{c}\nonumber \phi_a(z) := \sum_{k\in \raise-1pt\hbox{$\mbox{\Bbbb Z}$}} \phi_a(k) z^k. \end{array}\eeq Then we have \beq\new\begin{array}{c}\label{nf-vo} \phi_a(z)=Q_a e^{\sum_{m\in \raise-1pt\hbox{$\mbox{\Bbbb Z}$}^+_{\rm odd}} J_{-m}^a \frac{z^m}{m}} e^{-\sum_{m\in \raise-1pt\hbox{$\mbox{\Bbbb Z}$}^+_{\rm odd}} J_{m}^a \frac{z^{-m}}{m}}, \end{array}\eeq where $Q_a:\mathcal{F}\to \mathcal{F}$ is the linear operator defined by \beq\new\begin{array}{c}\label{Q-action} Q_a\phi_a(k)=\phi_a(k)Q_a,\quad Q_a\phi_{3-a}(k)=-\phi_{3-a}(k)Q_a,\quad Q_a|0\rangle = \phi_a(0)|0\rangle. \end{array}\eeq The operator $Q_1Q_2$ has eigenvalues $\pm \mathbf{i}/2$. Let $\mathcal{F}_0$ be the eigensubspace corresponding to eigenvalue $\mathbf{i}/2$. Let $\mathbf{t}=(\mathbf{t}_1,\mathbf{t}_2)$ be a pair of two sequences of formal variables of the form $\mathbf{t}_a=(t_{a,1},t_{a,3},\dots)$. The Boson--Fermion isomorphism $\mathcal{F}_0\cong \mathbb{C}[\![\mathbf{t}]\!]$ can be defined as follows: \beq\new\begin{array}{c}\label{eq_BF} v\in \mathcal{F}_0\mapsto \tau(v,\mathbf{t}):= \langle 0| \exp\Big( \frac{1}{2}\sum_{a=1,2} \sum_{m\in \raise-1pt\hbox{$\mbox{\Bbbb Z}$}^+_{\rm odd}} t_{a,m} J^a_m \Big) |v\rangle. \end{array}\eeq We have \beq\new\begin{array}{c} mt_{a,m} \tau(v,\mathbf{t}) = \tau(J^a_{-m} v,\mathbf{t}), \\ 2\partial_{t_{a,m}}\tau(v,\mathbf{t}) = \tau(J^a_mv,\mathbf{t}),\\ \Gamma_a(\mathbf{t}, z)\tau(v,\mathbf{t}) = \tau(Q_a^{-1} \phi_a(z)v, \mathbf{t}), \label{bf-iso-vo} \end{array}\eeq where $J^a_m$ are the fermionic operators (\ref{Jam:fermionic}) and \beq\new\begin{array}{c}\label{vo:bosonic} \Gamma_a(\mathbf{t}, z) = \exp\Big(\sum_{m\in \raise-1pt\hbox{$\mbox{\Bbbb Z}$}^+_{\rm odd}} t_{a,m} z^m\Big) \exp\Big(-\sum_{m\in\raise-1pt\hbox{$\mbox{\Bbbb Z}$}^+_{\rm odd}} 2\frac{\partial}{\partial t_{a,m}} \frac{z^{-m}}{m} \Big) \end{array}\eeq are vertex operators. Note that the third formula in (\ref{bf-iso-vo}) is a consequence of the preceding two and of (\ref{nf-vo}).
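As a simple illustration of this formalism, the vacuum two-point functions can be computed directly from the anti-commutation relations and the constraint $(\phi_1(0)+\mathbf{i}\phi_2(0))|0\rangle=0$: for $|z|>|w|$ one finds \beq\new\begin{array}{c}\nonumber \langle 0| \phi_a(z)\phi_a(w)|0\rangle = \frac{1}{2}+\sum_{l=1}^\infty (-1)^l \Big(\frac{w}{z}\Big)^l,\qquad \langle 0| \phi_1(z)\phi_2(w)|0\rangle = \frac{\mathbf{i}}{2}, \end{array}\eeq where only the $k=0$ modes contribute to the mixed correlator. The first series is the expansion of $\frac{1}{2}\frac{z-w}{z+w}$ in the region $|z|>|w|$; correlators of this kind will reappear below in the computation of the tau-function in the Miwa parametrization.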
The fermionic definition of the 2-component BKP (2-BKP) hierarchy and its $(h_1,h_2)$-reduction is given in terms of the following set of bilinear operators: \beq\new\begin{array}{c}\nonumber \Omega_m:= \sum_{a=1,2}\sum_{k\in\raise-1pt\hbox{$\mbox{\Bbbb Z}$}} (-1)^k \phi_a(k)\otimes \phi_a(-k-m h_a),\quad m\in \mathbb{Z}. \end{array}\eeq Namely, an element $\tau\in \mathcal{F}_0$ is said to be a {\em tau-function} of the 2-BKP hierarchy if $\Omega_0(\tau\otimes \tau)=0$. An element $\tau\in \mathcal{F}_0$ is said to be a tau-function of the $(h_1,h_2)$-reduction of 2-BKP if $\Omega_m(\tau\otimes \tau)=0$ for all $m\in \mathbb{Z}_{\geq 0}$. We will be interested in the case when $h_1=h=2N-2$ and $h_2=2$, where $N\geq 2$, that is, the $(h,2)$-reduction. \begin{remark} The 2-component BKP hierarchy can also be described in terms of 1-component neutral fermions. The expression for the tau-function in this case is analogous to the expression for the 2D Toda lattice hierarchy in terms of 1-component charged fermions; namely, for the 1-component neutral fermions one has \beq\new\begin{array}{c}\nonumber \tau(\mathbf{t})=\langle 0| \exp\Big( \frac{1}{2} \sum_{m\in \raise-1pt\hbox{$\mbox{\Bbbb Z}$}^+_{\rm odd}} t_{1,m} J_m \Big) G \exp\Big( \frac{1}{2} \sum_{m\in \raise-1pt\hbox{$\mbox{\Bbbb Z}$}^+_{\rm odd}} t_{2,m} J_{-m} \Big) |0\rangle, \end{array}\eeq where $G$ is the corresponding group element. This representation allows one to express this tau-function as the square of a 2D Toda tau-function \cite{LO}. It would be interesting to find a 2D Toda tau-function corresponding to the main object of this paper, that is, the $D_N$ singularity solution of 2-BKP. \end{remark} \subsection{Miwa parametrization} Suppose that $v\in \mathcal{F}_0$ is arbitrary and let $\tau(v,\mathbf{t})$ be the function corresponding to $v$ via the Boson--Fermion correspondence (\ref{eq_BF}). Then the tau-function in the Miwa parametrization $\tau (Z_1,Z_2):=\left.\tau(v,\mathbf{t})\right|_{t_{a,m} = -\frac{2}{m} \, \hbox{Tr }(Z_a^{-m}) }$ takes the form \beq\new\begin{array}{c}\nonumber \tau (Z_1,Z_2) = \langle 0| : \Gamma_1(z_{1,1})\cdots \Gamma_1(z_{1,N_1}) \Gamma_2(z_{2,1})\cdots \Gamma_2(z_{2,N_2}):|v\rangle, \end{array}\eeq where the normal ordering puts all $J^a_m$ with positive $m$ to the right of all $J^a_m$ with negative $m$, and, slightly abusing notation, we denote by \beq\new\begin{array}{c}\nonumber \Gamma_a(z):=Q_a^{-1}\phi_a(z)= \exp\Big( \sum_{m\in \mathbb{Z}_{\rm odd}^+ } J^a_{-m} \frac{z^m}{m} \Big) \exp\Big(- \sum_{m\in \mathbb{Z}_{\rm odd}^+ } J^a_{m} \frac{z^{-m}}{m} \Big) \end{array}\eeq the image of the vertex operator (\ref{vo:bosonic}) under the Boson--Fermion isomorphism (cf. (\ref{bf-iso-vo})). Let \beq\new\begin{array}{c}\nonumber \tilde{K}(z,w):=\frac{z-w}{z+w}, \end{array}\eeq and \beq\new\begin{array}{c}\nonumber K(z,w):=\iota_{|z|>|w|}\tilde{K}(z,w), \end{array}\eeq where $\iota_{|z|>|w|}$ is the operation of Laurent series expansion in the region $|z|>|w|$. Using the OPE formula \beq\new\begin{array}{c}\nonumber \Gamma_a(z)\Gamma_b(w) = K(z,w)^{\delta_{a,b}}: \Gamma_a(z)\Gamma_b(w) : \end{array}\eeq for $|z_{1,1}|>|z_{1,2}|>\dots > |z_{1,{N_1}}|$ and $|z_{2,1}|>|z_{2,2}|>\dots > |z_{2,{N_2}}|$, we get \beq\new\begin{array}{c}\nonumber \tau (Z_1,Z_2) =\frac{ \langle 0| \Gamma_1(z_{1,1})\cdots \Gamma_1(z_{1,N_1}) \Gamma_2(z_{2,1})\cdots \Gamma_2(z_{2,N_2})|v\rangle}{ \prod_{a=1,2}\prod_{1\leq i<j\leq N_a} K(z_{a,i},z_{a,j})}.
\end{array}\eeq Using that $\Gamma_a(z)=Q_a^{-1}\phi_a(z)$ and recalling the definition (\ref{Q-action}) of the operators $Q_a$, we get \begin{equation} \begin{split}\nonumber \tau (Z_1,Z_2) & = \frac{\langle 0| Q_1^{-1} \phi_1(z_{1,1})\cdots Q_1^{-1} \phi_1(z_{1,N_1}) Q_2^{-1} \phi_2(z_{2,1})\cdots Q_2^{-1} \phi_2(z_{2,N_2}) |v\rangle}{ \prod_{a=1,2}\prod_{1\leq i<j\leq N_a} K(z_{a,i},z_{a,j})} =\\ &=(-1)^{N_1N_2}\, \frac{\langle 0| \phi_1(z_{1,1})\cdots \phi_1(z_{1,N_1}) \phi_2(z_{2,1})\cdots \phi_2(z_{2,N_2}) |Q_1^{-N_1}Q_2^{-N_2} v\rangle}{ \prod_{a=1,2}\prod_{1\leq i<j\leq N_a} K(z_{a,i},z_{a,j})}. \end{split} \end{equation} The definition (\ref{Q-action}) also implies that the operators $Q_a$ ($a=1,2$) satisfy the following relations: \beq\new\begin{array}{c}\nonumber Q_1^2=Q_2^2=1/2,\quad Q_1Q_2+Q_2Q_1=0. \end{array}\eeq Therefore \beq\new\begin{array}{c}\nonumber Q_1^{-N_1}Q_2^{-N_2} = Q_1^{-N_1-N_2} Q_1^{N_2} (2Q_2)^{N_2} = (-1)^{N_2(N_2-1)/2} 2^{(N_1+N_2)/2} (2Q_1Q_2)^{N_2}. \end{array}\eeq Note that $(-1)^{N_2(N_2-1)/2} = \mathbf{i}^{N_2^2-N_2}$. By definition, $\mathcal{F}_0$ is the eigensubspace of $2Q_1Q_2$ with eigenvalue $\mathbf{i}$. The above identity yields \beq\new\begin{array}{c}\nonumber Q_1^{-N_1}Q_2^{-N_2} v = \mathbf{i}^{N_2^2} 2^{(N_1+N_2)/2} v. \end{array}\eeq Therefore, in the Miwa variables the tau-function takes the form \beq\new\begin{array}{c}\label{Miwa-tau} \tau(Z_1,Z_2) = B(Z_1,Z_2) \, \langle 0| \phi_1(z_{1,1})\cdots \phi_1(z_{1,N_1}) \phi_2(z_{2,1})\cdots \phi_2(z_{2,N_2}) |v\rangle, \end{array}\eeq where \beq\new\begin{array}{c}\nonumber B(Z_1,Z_2) = \frac{\mathbf{i}^{-N_1^2}\, 2^{(N_1+N_2)/2}}{ \prod_{a=1,2}\prod_{1\leq i<j\leq N_a} K(z_{a,i},z_{a,j})}, \end{array}\eeq or \beq\new\begin{array}{c}\label{eq_B} B(Z_1,Z_2)^{-1}= \langle 0| \phi_1(z_{1,1})\cdots \phi_1(z_{1,N_1}) \phi_2(z_{2,1})\cdots \phi_2(z_{2,N_2}) |0\rangle. \end{array}\eeq \subsection{Pfaffian Wick's theorem} The first step in the proof of Theorem \ref{t2} is to express the tau-function of the 2-BKP hierarchy in terms of Pfaffians. For this purpose we need the following Pfaffian version of Wick's theorem, which is a direct neutral fermion analog of Wick's theorem for charged free fermions (see, e.g., \cite{AZ}). The idea of the proof is also similar to that in the charged fermion (KP hierarchy) case. Suppose that $v\in \mathcal{F}_0$ is a solution to the bilinear equation $\Omega_0(v\otimes v)=0$. According to van de Leur and Kac \cite{KL}, there is a linear operator $G\in \operatorname{GL}(\mathcal{F})$ with $v=G|0\rangle$, such that $\Omega_0(G\otimes G)=(G\otimes G)\Omega_0$. In particular, we get \beq\new\begin{array}{c}\nonumber \sum_{a=1,2}\sum_{k\in \raise-1pt\hbox{$\mbox{\Bbbb Z}$}} (-1)^k \langle U|\phi_a(k) G |V\rangle \langle U'|\phi_a(-k) G |V'\rangle = \sum_{a=1,2}\sum_{k\in \raise-1pt\hbox{$\mbox{\Bbbb Z}$}} (-1)^k \langle U|G\phi_a(k) |V\rangle \langle U'|G\phi_a(-k) |V'\rangle \end{array}\eeq for any $U,U',V,V' \in \mathcal{F}_0$. Following \cite{AZ} we call this identity the {\em basic bilinear condition}. Below we assume that $\langle 0|v\rangle\neq 0$. Let \beq\new\begin{array}{c}\nonumber v_i=\phi_{b_i}(z_i)=\sum_{k\in \raise-1pt\hbox{$\mbox{\Bbbb Z}$}} \phi_{b_i}(k) z_i^k,\quad 1\leq i\leq 2n,\quad b_i\in \{1,2\}, \end{array}\eeq be a set of $2n$ fermionic fields. \begin{proposition}\label{prop:pfaffian} Suppose that $v=G|0\rangle$ is a solution to $\Omega_0(v\otimes v)=0$.
Then a) The following identity holds \beq\new\begin{array}{c}\nonumber \sum_{a=1,2} \sum_{k\in \raise-1pt\hbox{$\mbox{\Bbbb Z}$}} (-1)^k \langle 0|v_{2n} \phi_a(k) |v\rangle \, \langle 0|v_1\cdots v_{2n-1} \phi_a(-k) |v\rangle = 0. \end{array}\eeq b) The following recursion holds \beq\new\begin{array}{c}\nonumber \langle 0|v_1\cdots v_{2n}|v\rangle = \sum_{i=1}^{2n-1} (-1)^{i-1} \frac{\langle 0|v_i v_{2n}|v\rangle}{\langle 0|v\rangle}\, \langle 0|v_1\cdots v_{i-1} v_{i+1} \cdots v_{2n-1}|v\rangle . \end{array}\eeq c) The following formula holds $$ \frac{\langle 0|v_1\cdots v_{2n}|v\rangle }{\langle 0|v\rangle} = \operatorname{Pf}\Big( (2\theta(j-i)-1)\frac{\langle 0| v_i v_j|v\rangle}{\langle 0|v\rangle} \Big)_{1\leq i,j\leq 2n}, $$ where $$ \theta(m)= \begin{cases} 1 & \mbox{ if } m>0,\\ \frac{1}{2} & \mbox{ if } m=0, \\ 0 & \mbox{ if } m<0, \end{cases} $$ is the Heaviside function. \end{proposition} \begin{proof} a) Note that $v_i^\dagger:=\phi_{b_i}(-z_i^{-1})$ is the Hermitian conjugate of $v_i$. Let us apply the basic bilinear condition to $$ U=v_{2n}^\dagger |0\rangle,\quad U'=v_{2n-1}^\dagger\cdots v_1^\dagger|0\rangle,\quad V'=V=|0\rangle. $$ Its LHS is then exactly the sum in part a), so we just need to check that the RHS of the basic bilinear identity is $0$. If $k>0$ then $\phi_a(-k)V' =0$. If $k<0$ then $\phi_a(k)V=0.$ Therefore, only the terms with $k=0$ do not vanish, i.e., $$ \langle U|G\phi_1(0)|0\rangle\, \langle U'|G\phi_1(0)|0\rangle + \langle U|G\phi_2(0)|0\rangle\, \langle U'|G\phi_2(0)|0\rangle . $$ The above expression vanishes because $\phi_1(0)|0\rangle = -\mathbf{i}\phi_2(0)|0\rangle$. b) The idea is to use the identity proved in part a) and move the fermions $\phi_a(k)$ and $\phi_a(-k)$ to the left side of the corresponding correlator using the anti-commutation relations \beq\new\begin{array}{c}\nonumber v_i \phi_a(k)+\phi_a(k) v_i = (-1)^k\delta_{a,b_i} z_i^{-k}. \end{array}\eeq Let us first do this with the first correlator, i.e., replace \beq\new\begin{array}{c}\nonumber v_{2n} \phi_a(k)=-\phi_a(k) v_{2n} + (-1)^k\delta_{a,b_{2n}} z_{2n}^{-k}. \end{array}\eeq We get that the following two sums are equal: \beq\new\begin{array}{c}\label{sum-1} \sum_{a=1,2} \sum_{k\in\raise-1pt\hbox{$\mbox{\Bbbb Z}$}} (-1)^k \langle 0|\phi_a(k) v_{2n}|v\rangle \, \langle 0|v_1\cdots v_{2n-1} \phi_a(-k)|v\rangle \end{array}\eeq and \beq\new\begin{array}{c}\label{sum-2} \sum_{a=1,2}\sum_{k\in \raise-1pt\hbox{$\mbox{\Bbbb Z}$}} \delta_{a,b_{2n}}\, z_{2n}^{-k}\, \langle 0|v\rangle\, \langle 0| v_1\cdots v_{2n-1} \phi_a(-k) |v\rangle= \langle 0|v\rangle\, \langle 0| v_1\cdots v_{2n-1} v_{2n} |v\rangle . \end{array}\eeq Let us split the sum (\ref{sum-1}) into $2n$ parts according to the RHS of the identity \beq\new\begin{array}{c}\nonumber v_1\cdots v_{2n-1} \phi_a(-k) = -\phi_a(-k) v_1\cdots v_{2n-1} + \sum_{i=1}^{2n-1} (-1)^{k+i-1}\, \delta_{a,b_i} \, z_i^k\, v_1\cdots v_{i-1} v_{i+1}\cdots v_{2n-1} . \end{array}\eeq The first part of the sum is \beq\new\begin{array}{c}\nonumber -\sum_{a=1,2}\sum_{k\in\raise-1pt\hbox{$\mbox{\Bbbb Z}$}} (-1)^k \langle 0|\phi_a(k) v_{2n} |v\rangle \langle 0|\phi_a(-k) v_1\cdots v_{2n-1} |v\rangle =0, \end{array}\eeq where the terms with $k\neq 0$ vanish because either $\phi_a(k)$ or $\phi_a(-k)$ annihilates the vacuum, while the remaining terms with $k=0$ add up to $0$ thanks to the identity $\phi_1(0)|0\rangle =-\mathbf{i} \phi_2(0)|0\rangle$.
The remaining parts have the form \beq\new\begin{array}{c}\nonumber \sum_{a=1,2}\sum_{k\in \raise-1pt\hbox{$\mbox{\Bbbb Z}$}} (-1)^{i-1} \delta_{a,b_i} \, z_i^k\, \langle 0|\phi_a(k) v_{2n} |v\rangle \langle 0| v_1\cdots v_{i-1} v_{i+1}\cdots v_{2n-1} |v\rangle, \end{array}\eeq so the sum (\ref{sum-1}) turns into \beq\new\begin{array}{c}\nonumber \sum_{i=1}^{2n-1} (-1)^{i-1} \langle 0|v_i v_{2n} |v\rangle \langle 0| v_1\cdots v_{i-1} v_{i+1}\cdots v_{2n-1} |v\rangle. \end{array}\eeq Comparison with (\ref{sum-2}) completes the proof of part b). c) Let $A$ be the $2n\times 2n$ skew-symmetric matrix whose upper-triangular entries are defined by \beq\new\begin{array}{c}\nonumber a_{ij} :=\frac{\langle 0| v_i v_j|v\rangle}{\langle 0|v\rangle},\quad 1\leq i<j\leq 2n . \end{array}\eeq We argue by induction on $n$. For $n=1$ the matrix has the form \beq\new\begin{array}{c}\nonumber A= \begin{bmatrix} 0 & a_{12} \\ -a_{12} & 0 \end{bmatrix} \end{array}\eeq and its Pfaffian is $a_{12}$. For $n>1$, we use that \beq\new\begin{array}{c}\nonumber \operatorname{Pf}(A) = \sum_{i=1}^{2n-1} (-1)^{i-1} a_{i,2n} \operatorname{Pf}(A_{i,2n}), \end{array}\eeq where $A_{i,j}$ denotes the matrix obtained from $A$ by removing both $i$-th and $j$-th rows and columns. Our inductive assumption implies that \beq\new\begin{array}{c} \nonumber \operatorname{Pf}(A_{i,2n}) = \frac{\langle 0| v_1 \cdots v_{i-1} v_{i+1}\cdots v_{2n-1}|v\rangle }{ \langle 0 |v\rangle}. \end{array}\eeq It remains only to recall part b). \end{proof} \subsection{Pfaffian formula for the tau-function} Let us apply part c) of Proposition \ref{prop:pfaffian} to compute the tau-function (\ref{Miwa-tau}). Let \beq\new\begin{array}{c}\nonumber \label{phi_to_gamma} \phi_{a,b}(z,w) := \frac{1}{2} \left.\frac{ \Gamma_a(z)\Gamma_b(w)\tau(\mathbf{t}) }{ \tau(\mathbf{t}) } \right|_{\mathbf{t}=0} \end{array}\eeq for $a\leq b$. We get \beq\new\begin{array}{c}\label{tau-Pf} \tau(Z_1,Z_2)=\tau(0)\, B(Z_1,Z_2)\, \operatorname{Pf}(\Phi(Z_1,Z_2)), \end{array}\eeq where \beq\new\begin{array}{c}\nonumber \Phi(Z_1,Z_2)= \begin{bmatrix} \Phi^{11} & \Phi^{12}\\ \Phi^{21} & \Phi^{22} \end{bmatrix}. \end{array}\eeq Here $\Phi^{aa}$ ($a=1,2$) are skew-symmetric matrices whose upper triangular entries are defined by \beq\new\begin{array}{c}\label{phiaa} \Phi^{aa}_{i,j} =\phi_{a,a}(z_{a,i},z_{a,j}), \quad 1\leq i<j\leq N_a, \end{array}\eeq $\Phi^{21}=-(\Phi^{12} )^T$, and the entries of $\Phi^{12}$ are defined by \beq\new\begin{array}{c}\label{phi12} \Phi^{12}_{i,j} ={\bf i} \phi_{1,2}(z_{1,i},z_{2,j}), \quad 1\leq i\leq N_1, \quad 1\leq j\leq N_2. \end{array}\eeq The factor $B(Z_1,Z_2)^{-1}$ in the formula (\ref{tau-Pf}) can also be expressed as a Pfaffian. To derive such an expression it is enough to apply part c) of Proposition \ref{prop:pfaffian} to (\ref{eq_B}). Let $\Phi_0(Z_1,Z_2)$ be the matrix corresponding to the vacuum, that is, to $\tau(\mathbf{t})=1$. Then, for $|z_{1,1}|>|z_{1,2}|>\dots > |z_{1,{N_1}}|$ and $|z_{2,1}|>|z_{2,2}|>\dots > |z_{2,{N_2}}|$ we have \beq\new\begin{array}{c}\label{tau_assum} \tau(Z_1,Z_2)=\tau(0)\, \frac{\operatorname{Pf}(\Phi(Z_1,Z_2))}{\operatorname{Pf}(\Phi_0(Z_1,Z_2))}. \end{array}\eeq If $\mathbf{t}=(\mathbf{t}_1,\mathbf{t}_2)$, where $\mathbf{t}_a=(t_{a,m})_{m\in\raise-1pt\hbox{$\mbox{\Bbbb Z}$}^{\rm odd}_{>0}}$ ($a=1,2$), then we denote by $\tau(\mathbf{t}_1-[z_1^{-1}],\mathbf{t}_2-[z_2^{-1}])$ the function obtained from $\tau(\mathbf{t}):=\tau(\mathbf{t}_1,\mathbf{t}_2)$ via the translation $t_{a,m}\mapsto t_{a,m}-2z_a^{-m}/m$. 
Then \begin{equation}\label{eq_phi} \begin{split} \phi_{1,1}(z,w) & =K(z,w) \frac{\tau(-[z^{-1} ]-[w^{-1}],0)}{2\tau(0)}, \\ \phi_{1,2}(z,w) & = \frac{\tau(-[z^{-1} ],-[w^{-1}])}{2\tau(0)}, \\ \phi_{2,2}(z,w) & =K(z,w) \frac{\tau(0,-[z^{-1} ]-[w^{-1}])}{2\tau(0)}. \end{split} \end{equation} Let \begin{align}\label{tildephi} \nonumber \tilde{\phi}_{1,1}(z,w) & := \frac{1}{2}\frac{z-w}{z+w}\, \frac{\tau(-[z^{-1} ]-[w^{-1}],0)}{\tau(0)}, \\ \tilde{\phi}_{1,2}(z,w) & := \frac{1}{2}\frac{\tau(-[z^{-1} ],-[w^{-1}])}{\tau(0)}, \\ \nonumber \tilde{\phi}_{2,2}(z,w) & := \frac{1}{2}\frac{z-w}{z+w}\, \frac{\tau(0,-[z^{-1} ]-[w^{-1}])}{\tau(0)}, \end{align} so that ${\phi}_{a,b}(z,w) = \iota_{|z|>|w|} \tilde{\phi}_{a,b}(z,w) $. The following proposition describes the difference between $\phi_{a,a}$ and $\tilde{\phi}_{a,a}$: \begin{proposition}\label{prop_exp} \beq\new\begin{array}{c}\nonumber \tilde{\phi}_{a,a}(z,w)- \frac{1}{2}\, \frac{z-w}{z+w} \in {\mathbb C}[\![z^{-1},w^{-1}]\!], \end{array}\eeq moreover, the difference vanishes when $|z|=|w|=\infty$. \end{proposition} \begin{proof} Note that a tau-function of the 2-BKP hierarchy depends only on odd variables $t_{a,2k+1}$. So, for $a= b$ the ratio of the tau-functions on the RHS of (\ref{tildephi}) lies in ${\mathbb C}[\![z^{-1}+w^{-1},z^{-3}+w^{-3},\dots]\!]$, and every term that involves the shift of at least one of the variables $t_{a,2k+1}$ is proportional to \beq\new\begin{array}{c} \nonumber \frac{z-w}{z+w}\left(\frac{1}{z^{2k+1}}+\frac{1}{w^{2k+1}}\right)=\left(\frac{1}{w}-\frac{1}{z}\right)\left(\frac{1}{z^{2k}}-\frac{1}{z^{2k-1}w}+\dots+\frac{1}{w^{2k}}\right). \end{array}\eeq Therefore, the only term singular at $z=-w$ comes from the constant term in the tau-function. Moreover, the RHS of this equation vanishes when $|z|=|w|=\infty$. \end{proof} It is obvious that ${\phi}_{1,2}(z,w)=\tilde{\phi}_{1,2}(z,w).$ Thus, we have \begin{corollary}\label{cor_phiphi} \beq\new\begin{array}{c}\n \tilde{\phi}_{a,b}(z,w)- \frac{1}{2}\delta_{a,b}\tilde{K}(z,w)={\phi}_{a,b}(z,w)-\frac{1}{2} \delta_{a,b} K(z,w). \end{array}\eeq \end{corollary} Now we can relax the assumptions $|z_{1,1}|>|z_{1,2}|>\dots > |z_{1,{N_1}}|$ and $|z_{2,1}|>|z_{2,2}|>\dots > |z_{2,{N_2}}|$. Therefore, for an arbitrary tau-function of the 2-BKP hierarchy in the Miwa parametrization, we have the following Pfaffian formula: \begin{proposition}\label{tau_as_Pf} Let $\tilde{\Phi}$ be the matrix defined in the same way as $\Phi$, except that in the definitions (\ref{phiaa})--(\ref{phi12}) of the entries the $\phi$'s are replaced by the $\tilde{\phi}$'s. Then \beq\new\begin{array}{c}\nonumber \tau(Z_1,Z_2)=\tau(0)\frac{2^{(N_1+N_2)/2} \operatorname{Pf}(\tilde{\Phi}(Z_1,Z_2))} {{\bf i}^{N_1^2} \prod_{a=1,2}\prod_{1\leq i<j\leq N_a} \frac{z_{a,i}-z_{a,j}}{z_{a,i}+z_{a,j}}}. \end{array}\eeq \end{proposition} \begin{proof} It is easy to show that the numerator vanishes when $z_{a,i}=z_{a,j}$ for some $a\in \{1,2\}$ and $i\neq j$. Thus the RHS is in ${\mathbb C}[\![z_{1,1}^{-1},\dots,z_{1,N_1}^{-1},z_{2,1}^{-1},\dots,z_{2,N_2}^{-1}]\!]$. Moreover, for $|z_{1,1}|>|z_{1,2}|>\dots > |z_{1,{N_1}}|$ and $|z_{2,1}|>|z_{2,2}|>\dots > |z_{2,{N_2}}|$ it coincides with (\ref{tau_assum}), which completes the proof.
\end{proof} \section{Grassmannian point for the simple singularity of type D}\label{sec:Gr} In this section we recall the Grassmannian description of the 2-BKP hierarchy \cite{Sh} and construct an integral description of the point of the BKP Grassmannian corresponding to the tau-function that governs the simple singularity of type D. \subsection{BKP Grassmannian} We follow the notation from \cite{CM}, Section 1. Let $V=\mathbb{C}(\!(z^{-1})\!) \oplus \mathbb{C}(\!(z^{-1})\!)$ be the vector space of formal Laurent series in $z^{-1}$ with coefficients in $\mathbb{C}^2$. For $f(z)=(f_1(z),f_2(z)) \in V$ and $g(z)=(g_1(z),g_2(z)) \in V$ put \beq\new\begin{array}{c}\nonumber (f(z),g(z)) := \sum_{i=1,2} \operatorname{Res}_{z=0} f_i(z)g_i(-z)\frac{dz}{z}. \end{array}\eeq Note that $(\ ,\ )$ is a non-degenerate symmetric bilinear pairing on $V$. Let us define \begin{align}\nonumber U_0 & = \mathbb{C}(e_1+\mathbf{i} e_2) +\mathbb{C}[z]z e_1 + \mathbb{C}[z]z e_2,\\ \nonumber V_0 & = \mathbb{C}(e_1-\mathbf{i} e_2) + \mathbb{C}[\![z^{-1}]\!]z^{-1} e_1 + \mathbb{C}[\![z^{-1}]\!]z^{-1} e_2, \end{align} where $e_1=(1,0)$ and $e_2=(0,1)$ is the standard basis of $\mathbb{C}^2$. Both $U_0$ and $V_0$ are maximally isotropic subspaces and we have a direct sum decomposition $V=V_0\oplus U_0$. Let $\pi:V\to U_0$ be the projection along $V_0$. The big cell $\operatorname{Gr}_2^{(0)}$ of the 2-BKP Grassmannian is the set of all linear subspaces $U\subset V$ satisfying the following two conditions: \begin{enumerate} \item[(i)] $\pi|_U:U\to U_0$ is an isomorphism. \item[(ii)] $U$ is a maximally isotropic subspace. \end{enumerate} Recall that a subspace $U\subseteq V$ is said to be {\em isotropic} if $(u_1,u_2)=0$ for all $u_1,u_2\in U$. If $U$ is a maximal element in the set of all isotropic subspaces of $V$, then $U$ is called maximally isotropic. Suppose now that $\tau(\mathbf{t})\in \mathbb{C}[\![\mathbf{t}]\!]$ is a formal power series such that $\tau(0)\neq 0$. Then we define \beq\new\begin{array}{c}\label{wave-1} \Psi(\mathbf{t},z) := \Psi^{(1)}(\mathbf{t},z) e_1 +\mathbf{i} \Psi^{(2)}(\mathbf{t},z) e_2 \quad \in \quad V[\![\mathbf{t}]\!], \end{array}\eeq where \beq\new\begin{array}{c}\label{wave-2} \Psi^{(a)}(\mathbf{t},z) =\frac{\Gamma_a(\mathbf{t},z) \tau(\mathbf{t})}{\tau(\mathbf{t})}. \end{array}\eeq Let $U_\tau\subset V$ be the subspace spanned by the coefficients of the Taylor series expansion of $\Psi(\mathbf{t},z)$ at $\mathbf{t}=0$. According to Shiota (see \cite{Sh}, Section 3.1), the formal power series $\tau(\mathbf{t})$ is a tau-function of the 2-BKP hierarchy if and only if $U_\tau\in \operatorname{Gr}_2^{(0)}$. Moreover, the map $\tau\mapsto U_\tau$ is a one-to-one correspondence between the tau-functions of the 2-BKP hierarchy and the points of $\operatorname{Gr}_2^{(0)}$. If $\tau(\mathbf{t})$ is a tau-function of the 2-BKP hierarchy, then the corresponding $\Psi(\mathbf{t},z)$ defined by (\ref{wave-1})--(\ref{wave-2}) is called the {\em wave function}. The main goal of this section is to prove Theorem \ref{t1} and to construct the corresponding point in the Grassmannian $\operatorname{Gr}_2^{(0)}$ in terms of steepest descent asymptotics of certain integrals. Our proof of Theorem \ref{t1} is based on the notion of {\em Kac--Schwarz operators} for a given $U\in \operatorname{Gr}_2^{(0)}$, that is, differential operators $a$ such that \beq\new\begin{array}{c}\nonumber a \, U \subset U.
\end{array}\eeq Such operators were first introduced in \cite{KS}, where they proved to be very convenient for the investigation of the solutions of the KP hierarchy associated with the simple singularities of type A. \subsection{Wave function and quantum spectral curve} Let us reformulate the statement of Theorem \ref{t1} in terms of the Grassmannian $\operatorname{Gr}_2^{(0)}$. Following \cite{CM}, Section 1, let us introduce two operators $a$ and $b$ ($a=\ell_{-1}$ in the notation of \cite{CM}) \beq\new\begin{array}{c}\nonumber a=(a_1,a_2), \end{array}\eeq where \beq\new\begin{array}{c}\label{aoper} a_1:=-{\bf i}z+z^{-h}\left(\frac{z}{h}\frac{\partial}{\partial z}-\frac{1}{2}\right),\\ a_2:=\frac{1}{2z^2}\left(z\frac{\partial}{\partial z}-1\right), \end{array}\eeq are first-order differential operators, and \beq\new\begin{array}{c}\nonumber b=(z^h,z^2) \end{array}\eeq acts by multiplication. We have the following proposition. \begin{proposition}\label{prop:ref_t1} Let $U\in \operatorname{Gr}_2^{(0)}$ be a subspace corresponding to a tau-function $\tau$ of the 2-BKP hierarchy. Then a) $\tau$ is a tau-function of the $(h,2)$-reduction if and only if $b \, U \subset U$. b) $\tau$ satisfies the string equation (\ref{str_eqn}) if and only if $a \, U\subset U$. \end{proposition} Part a) is Corollary 1b in \cite{CM} and part b) is Lemma 9 in \cite{CM}. Therefore, in order to prove Theorem \ref{t1} we have to prove that there exists a unique subspace $U\in \operatorname{Gr}_2^{(0)}$ such that $a \, U\subset U$ and $b \, U\subset U$, that is, such that $a$ and $b$ are Kac--Schwarz operators for $U$. In fact, we will construct an explicit basis of $U$ in terms of steepest descent asymptotics of certain integrals. To begin with, note that $a$ and $b$ satisfy the canonical commutation relation \beq\new\begin{array}{c}\nonumber [a,b]=1. \end{array}\eeq Let $\Psi=\Psi^{(1)}e_1+{\bf i} \Psi^{(2)} e_2 \in U$ be such that \beq\new\begin{array}{c}\label{psi12} \Psi^{(a)}(z)=1+O(z^{-1}). \end{array}\eeq To describe it, let us introduce a degree grading on the space of differential operators $\mathbb{C}[z,z^{-1}][\partial_z]$ such that \beq\new\begin{array}{c}\nonumber \operatorname{deg} z^{-1} = \operatorname{deg} \frac{\partial}{\partial z}=-1. \end{array}\eeq Then \beq\new\begin{array}{c}\nonumber a^{h+1}=\left((-{\bf i}z)^{h+1}+\frac{h+1}{h}(-{\bf i})^h z\frac{\partial}{\partial z}+\dots,\dots \right), \end{array}\eeq where by $\dots$ we denote the terms of negative degree. The Kac--Schwarz operator \beq\new\begin{array}{c}\nonumber A:=ba+\frac{1}{2}-{\bf i}^h a^{h+1}=\left(-z\frac{\partial}{\partial z}+\dots,\frac{z}{2}\frac{\partial}{\partial z}+\dots\right) \end{array}\eeq does not contain terms of positive degree; therefore \beq\new\begin{array}{c}\nonumber A \Psi=O(z^{-1})e_1+O(z^{-1})e_2. \end{array}\eeq The LHS belongs to $U$, the RHS belongs to $V_0$, and since by definition $\pi|_U$ is an isomorphism, we conclude that \beq\new\begin{array}{c}\label{qsceq} A \Psi=0. \end{array}\eeq We refer to (\ref{qsceq}) as the {\emph {quantum spectral curve}} equation. \begin{lemma}\label{lemma1} Suppose that $a$ and $b$ are Kac--Schwarz operators for some subspace $U\in {\rm Gr}\,_2^{(0)}$ and let $\Psi(\mathbf{t},z)$ be the corresponding wave function. The quantum spectral curve equation (\ref{qsceq}) has a solution in $V$, unique up to normalization. When normalized, this solution has the asymptotics (\ref{psi12}) and coincides with $\Psi(0,z)$.
\end{lemma} \begin{proof} Let us prove the uniqueness. Suppose that $\Psi(z)\in V$ is a solution to the quantum spectral curve equation. Then the leading term of the Laurent series expansion is a non-vanishing constant. Substituting the Laurent series expansion in $z^{-1}$ of $\Psi(z)$ into (\ref{qsceq}) and comparing the coefficients in front of the powers of $z$, we get a recursion which uniquely determines the coefficients of $\Psi(z)$. By definition both $\Psi(z)$ and $\Psi(0,z)$ belong to $U$. Note that their projections via $\pi:V\to U_0$ coincide (with $e_1+\mathbf{i} e_2$). Since $U\in {\rm Gr}\,_2^{(0)}$, the projection $\pi|_{U}$ is an isomorphism, so $\Psi(z)=\Psi(0,z)$. \end{proof} Put $f(x)=x^{2h+2}-(h+1) x^2$. Let $u_a=f(\xi_a)$, where $\xi_1=1$ and $\xi_2=0$ are two critical points of $f$, that is, $u_1=-h$, $u_2=0$. Then the components of the wave function can be identified with the steepest descent asymptotics of the following integrals: \beq\new\begin{array}{c}\label{oi} \Psi^{(a)} \sim c_a \sqrt{\frac{\lambda_a}{\pi}} \int_{\gamma_a} e^{\lambda_a (f(x)-u_a) } dx,\quad \lambda_a\to \infty,\,\,\,\,\,\,\,\,\, a \in \{1,2\}. \end{array}\eeq Here $\lambda_a(z):=\tfrac{\mathbf{i} z^{h_a+h_a/h}}{h+1}$, that is, \beq\new\begin{array}{c}\nonumber \lambda_1(z):=\tfrac{\mathbf{i} z^{h+1}}{h+1},\,\,\,\,\,\, \lambda_2(z):=\tfrac{\mathbf{i} z^{2+\frac{2}{h} }}{h+1}. \end{array}\eeq The contours $\gamma_a$ ($a=1,2$) are chosen as follows. Let us denote by $\mathbb{D}_a\subset \mathbb{C}$ the disk with center at the critical point $\xi_a$ and a sufficiently small radius, so that the Morse lemma applies, i.e., there exists a holomorphic coordinate $X_a(x)$ in $\mathbb{D}_a$ such that $f(x)=u_a-\tfrac{X_a(x)^2}{2}$ for all $x\in \mathbb{D}_a$. Let \beq\new\begin{array}{c}\nonumber \mathbb{D}_a^-:=\{ x\in \partial \mathbb{D}_a\ |\ \operatorname{Re}(\lambda_a (f(x)-u_a))<0 \}, \end{array}\eeq where $\partial \mathbb{D}_a$ denotes the boundary of $\mathbb{D}_a$. Using the Morse coordinate $X_a$ it is easy to see that $\mathbb{D}^-_a$ consists of two disconnected arcs. Let us choose the integration path $\gamma_a$ to be a path in $\mathbb{D}_a$ whose endpoints are on $\mathbb{D}_a^-$. The asymptotic expansion of the integral in (\ref{oi}) depends only on the homology class of $\gamma_a$ in $H_1(\mathbb{D}_a,\mathbb{D}_a^-;\mathbb{Z}) \cong \mathbb{Z}$. This fact is easy to prove by modifying the standard argument of the steepest descent method (see \cite{Sha}, Chapter 5, Section 16). In fact, there is a more general theory of asymptotic expansions which applies to our case -- see Chapter 16 in \cite{AGV} for more details. Let us choose $\gamma_a$ to be such that its homology class is a $\mathbb{Z}$-basis of $H_1(\mathbb{D}_a,\mathbb{D}_a^-;\mathbb{Z})$. Later on (see Lemma \ref{le:Ip-de} below) we will have to work with asymptotic expansions of integrals of the form \beq\new\begin{array}{c}\label{oi-pole} \int_{\gamma_a} e^{\lambda_a (f(x)-u_a)} \varphi(x) dx, \end{array}\eeq where $\varphi(x)\in \mathbb{C}[x^2,x^{-2}]$. Note that the integrand of (\ref{oi-pole}) is a meromorphic form on $\mathbb{C}$ with a possible pole at $x=0$ and that its residue at $x=0$ is $0$. If $a=1$, then the asymptotic expansion is obtained via the standard theory. In the case $a=2$, since $\varphi(x)$ might have a pole at $0\in \mathbb{D}_2$, we have to be a little bit more careful.
It turns out that the usual asymptotic estimates work, that is, if $\gamma_2$ does not contain $0$, then there is a well defined asymptotic expansion which depends only on the homology class of $\gamma_2$ in $H_1(\mathbb{D}_2,\mathbb{D}_2^-;\mathbb{Z})$. Let us agree that $\gamma_2$ is a contour that does not pass through $x=0$ and whose homology class in $H_1(\mathbb{D}_2,\mathbb{D}_2^-;\mathbb{Z})$ is a $\mathbb{Z}$-basis. We only need to specify the orientations of $\gamma_a$. Here and below the fractional powers of $\lambda$ are defined via the principal branch of the logarithm, e.g., $\sqrt{\lambda}:=e^{\tfrac{1}{2}\log \lambda}$. We fix the normalization constants \beq\new\begin{array}{c}\label{c12} c_1=\mathbf{i}\sqrt{2h(h+1)}, \,\,\,\,\,\, c_2=\sqrt{h+1} \end{array}\eeq and the orientation of the contours $\gamma_a$ to be such that the asymptotic expansions have the form (\ref{psi12}). \begin{remark} It is possible to replace the local contours $\gamma_a$ in (\ref{oi}) with global ones in such a way that the asymptotic expansion does not change. Let us give an example of global contours asymptotically equivalent to the local ones. Let $\lambda_a$ be positive for $a \in \{1,2\}$. Then one can replace $\gamma_1$ with a path which goes from the sector $\left(\frac{\pi}{4(h+1)},\frac{3\pi}{4(h+1)}\right)$ to the sector $\left(-\frac{3\pi}{4(h+1)},-\frac{\pi}{4(h+1)}\right)$ through the point $x=1$. The second contour $\gamma_2$ can be replaced by a path which goes from the sector $\left(\pi-\frac{3\pi}{4(h+1)},\pi-\frac{\pi}{4(h+1)}\right)$ to the sector $\left(\frac{\pi}{4(h+1)},\frac{3\pi}{4(h+1)}\right)$ and contains a half circle $C_\epsilon=\{ x=\epsilon e^{\mathbf{i} \theta}\ :\ -\pi<\theta <0\}$ with a sufficiently small radius $\epsilon$. We will not use the global contours in this paper. They might be important if one is interested in constructing an analytic matrix model. Unfortunately, we could not establish the analyticity of our matrix model due to complications in the asymptotic expansions of certain double integrals -- see formula (\ref{Phi_ab}) below. \end{remark} \begin{lemma}\label{lemma33} We have $ \Psi^{(a)}\in{\mathbb Q}[\![({\bf i}z^{h+1})^{-a}]\!] $ for $a=1,2$. \end{lemma} \begin{proof} Since $\Psi^{(1)}$ and $\Psi^{(2)}$ are steepest descent asymptotics of integrals, we can identify them with formal perturbative expansions of Gaussian integrals, that is, \begin{align}\nonumber \Psi^{(1)} & =\frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty dy\, e^{-\frac{y^2}{2}-\frac{2}{h\alpha^2}\sum_{j=3}^{2h+2}\frac{(2h+1)!}{j!(2h+2-j)!}\left(\frac{\alpha y}{2}\right)^{j}},\\ \nonumber \Psi^{(2)} & =\frac{1}{\sqrt{2\pi}}\int_{-\infty}^\infty dy \, e^{-\frac{y^2}{2}+\frac{{\bf i}^h}{(h+1)(2z^2)^{h+1}}y^{2h+2}}, \end{align} where the RHS of the first formula is interpreted as a formal power series in $\alpha^2={\bf i}z^{-h-1}/h$. The statement of the lemma follows from the above formulas. \end{proof} Let us find the first coefficients of the expansion for $a=1$: \beq\new\begin{array}{c}\nonumber \Psi^{(1)}=1+(h+2)(2h+1)\left(\frac{1}{24}\alpha^2+\frac{1}{1152}(2h^2+53h+50)\alpha^4-\right.\\ \left.-\frac{1}{414720}(556h^4-1972h^3-41853h^2-76492h-36164)\alpha^6+O(\alpha^8)\right), \end{array}\eeq while for $a=2$ we easily find an expression for all coefficients: \beq\new\begin{array}{c}\nonumber \Psi^{(2)}=1+\sum_{k=1}^\infty \left(\frac{{\bf i}^h}{(h+1)(2z^2)^{h+1}}\right)^k\frac{(2hk+2k-1)!!}{k!}.
\end{array}\eeq For brevity, let us introduce the differential operator \beq\new\begin{array}{c}\label{do:g} g:=-\frac{{\bf i}^h a_2^{h+1}}{h+1}. \end{array}\eeq We will also write $g_z$ when we want to emphasize that $g$ acts on functions in $z$. It is easy to see that \beq\new\begin{array}{c}\label{psi_as_aop} \Psi^{(2)}=e^{g}\cdot 1, \end{array}\eeq that is, $\Psi^{(2)}(z)=e^{g_z}\cdot 1.$ \subsection{Higher vectors}\label{sec_hi} In this section we explicitly describe a basis for the point $U\in {\rm Gr}\,^{(0)}_2$. Put \beq\new\begin{array}{c} \nonumber \Psi_k:=({\bf i}a)^{k} \Psi. \end{array}\eeq Note that this definition makes sense for all $k \in {\mathbb Z}$. Indeed, the operator $a_1^{-1}$ is well defined on ${\mathbb C}(\!(z^{-1})\!)$, and the operator $a_2^{-1}$ is well defined on ${\mathbb C}(\!(z^{-2})\!)$ by $a_2^{-1} z^{2m}=\frac{2}{2m+1}z^{2m+2}$ for $m \in {\mathbb Z}$; by Lemma \ref{lemma33}, $\Psi^{(2)}(z) \in {\mathbb Q}[\![z^{-2}]\!]$. \begin{lemma}\label{le:Ip-de} Let $I_p(\lambda)$ be the asymptotic expansion of the integral $\int_\gamma e^{\lambda f(x)} x^{2p} dx$, where $\gamma=\gamma_1$ or $\gamma_2$ and $p\in \mathbb{Z}$. Then \beq\new\begin{array}{c}\nonumber \Big(\lambda \partial_\lambda +\tfrac{2p+1}{2h+2}\Big) I_p(\lambda) = -\lambda h I_{p+1}(\lambda). \end{array}\eeq \end{lemma} The proof is a direct computation with integration by parts. The above lemma yields the following formulas: \beq\new\begin{array}{c}\label{vir-psi_2} \Psi_k^{(a)} \sim c_a \sqrt{\frac{\lambda_a}{\pi}} \int_{\gamma_a} e^{\lambda_a (f(x)-u_a) } (z^{h_a/h} x^2)^{k} dx,\quad \lambda_a\to \infty. \end{array}\eeq Note that the RHS of formula (\ref{vir-psi_2}) makes sense also for $k<0$. \begin{lemma}\label{lemma_45} $\Psi_{k}(z)=\Psi^{(1)}_{k}(z) e_1 + \mathbf{i} \Psi^{(2)}_{k}(z) e_2 \in U$ for all $k\in {\mathbb Z}$. \end{lemma} \begin{proof} For $k\geq 0$ the statement follows from the definition of the Kac--Schwarz operators. To prove the statement for negative $k$, let us introduce the Kac--Schwarz operator \beq\new\begin{array}{c}\nonumber c:=b-({\bf i} a)^h. \end{array}\eeq Using the quantum spectral curve equation (\ref{qsceq}) and the commutation relation $[a,b]=1$, we conclude that $\Psi_{-1}= ({\bf i}a)^{-1} \Psi =-2{\bf i} c\,\Psi$, thus $\Psi_{-1}\in U$. From the same commutation relation it immediately follows that \beq\new\begin{array}{c}\nonumber \Psi_{-k-1}(z) = -\frac{2\mathbf{i}}{2k+1} \, c \Psi_{-k} \in U. \end{array}\eeq \end{proof} \begin{corollary} For $k \in {\mathbb Z}$ we have \beq\new\begin{array}{c}\nonumber A\Psi_k =-k \Psi_k. \end{array}\eeq \end{corollary} \begin{lemma}\label{lemma36} For $k\in {\mathbb Z}$ the components of $\Psi_{k}(z)=\Psi^{(1)}_{k}(z) e_1 + \mathbf{i} \Psi^{(2)}_{k}(z) e_2$ have the following leading order terms: \beq\new\begin{array}{c}\nonumber \Psi_k^{(1)}=z^k \left(1+O(z^{-1})\right),\\ \Psi_k^{(2)}=\frac{(2k-1)!!}{(2{\bf i}z^2)^k}\left(1+O(z^{-1})\right), \end{array}\eeq where for negative $k$ we define $(2k-1)!!:=\frac{(-1)^k}{(2|k|-1)!!}$. \end{lemma} \begin{proof} The statement follows from (\ref{aoper}) and (\ref{psi12}). \end{proof} It is also easy to see that \beq\new\begin{array}{c}\n \Psi_k^{(1)}\in z^{k}{\mathbb Q}[\![{\bf i}z^{-h-1}]\!],\\ \Psi_k^{(2)}\in \left({\bf i }z^2\right)^{-k}{\mathbb Q}[\![z^{-2h-2}]\!]. \end{array}\eeq The vectors $\Psi_k$ do not generate all of $U$. Indeed, $U$ must contain an element of the form \beq\new\begin{array}{c}\nonumber \Phi_1=O(z^{-1})e_1+\mathbf{i} z(1+O(z^{-1}))e_2.
\end{array}\eeq This element cannot be expressed as a linear combination of the $\Psi_k$. However, it has a simple form and can be determined explicitly. Indeed, since $a$ is a Kac--Schwarz operator, we have $a\, \Phi_1 \in U$. Note that \beq\new\begin{array}{c}\nonumber a\, \Phi_1 = O(1)e_1+O(z^{-2})e_2. \end{array}\eeq We claim that $a\, \Phi_1=0$. Indeed, the projection of $a\, \Phi_1$ along $V_0$ is proportional to $e_1+\mathbf{i} e_2$ and since it belongs to $U$, we get that $a\, \Phi_1$ is proportional to $\Psi$. Comparing the powers of $z$ in the $e_2$-components of $a\, \Phi_1$ and $\Psi$, we get that the proportionality coefficient must be $0$, that is, $a\,\Phi_1 =0$ as claimed. The differential equation $a\,\Phi_1 =0$ can be solved explicitly: \beq\new\begin{array}{c}\nonumber \Phi_1=\mathbf{i} z e_2. \end{array}\eeq For $k>0$ let us define \beq\new\begin{array}{c}\label{Phik} \Phi_{k}=b^{k-1} \Phi_{1}= \mathbf{i} z^{2k-1} e_2 \in U. \end{array}\eeq From this equation, recalling Lemmas \ref{lemma1} and \ref{lemma_45}, we get that \beq\new\begin{array}{c}\label{basisexp} U={\rm span}\,\{\{\Psi_{k}, k\in {\mathbb Z}\}, \{\Phi_k, k>0\} \}. \end{array}\eeq Thus we have proved Theorem \ref{t1}. \begin{remark} Let us note that \beq\new\begin{array}{c}\nonumber z \sim \frac{\sqrt{\bf i}}{\pi}\, c_2\, \lambda_2^{1/2} \int_{\gamma_2} e^{\lambda_2 f(x) } (z^{1/h} x)^{-1} dx,\quad \lambda_2\to \infty, \end{array}\eeq so that the basis vectors (\ref{Phik}) can be generated by the asymptotic expansion of the integrals \\ $c_2 \lambda_2^{1/2} \int_{\gamma_2} e^{\lambda_2 f(x) } (z^{1/h} x)^{-2k-1} dx$. Thus, all basis vectors (\ref{basisexp}) can be described by asymptotic expansions of integrals over the contours $\gamma_a$. \end{remark} From (\ref{psi_as_aop}) and the definition of $\Psi^{(2)}_m$ it immediately follows that, for $m \in {\mathbb Z}$, \beq\new\begin{array}{c}\label{psik_as_aop} \Psi^{(2)}_m=e^{g}\cdot \frac{(2m-1)!!}{(2{\bf i}z^2)^m}\\ =\frac{1}{(2{\bf i}z^2)^m}\left((2m-1)!!+\sum_{k=1}^\infty \left(\frac{{\bf i}^h}{(h+1)(2z^2)^{h+1}}\right)^k\frac{(2(m+hk+k)-1)!!}{k!}\right). \end{array}\eeq \section{Matrix integral}\label{sec:mi} From now on $U\in {\rm Gr}\,_2^{(0)}$ will denote the unique subspace invariant under the action of the operators $a$ and $b$. Let $\tau(\mathbf{t})=\tau(\mathbf{t}_1,\mathbf{t}_2)$ and $\Psi(\mathbf{t},z)$ be respectively the corresponding $\tau$-function and the corresponding wave function. Our goal is to express the tau-function $\tau(Z_1,Z_2)$ in the Miwa variables in terms of the steepest descent asymptotics of a certain matrix integral. \subsection{Integral kernel for the Pfaffian entries} Our next goal is to express the entries of the Pfaffian matrix $\tilde{\Phi}(Z_1,Z_2)$ in terms of asymptotics of integrals. The computation of the entries of the Pfaffian matrix amounts to computing the three types of two-point functions (\ref{tildephi}). In this section we will find integral representations for their expansions (\ref{eq_phi}). \begin{lemma}\label{le:symm} The tau-function has the following symmetry: $\tau(\mathbf{t}_1,\mathbf{t}_2)=\tau(\mathbf{t}_1,-\mathbf{t}_2)$. \end{lemma} \begin{proof} Indeed, it is easy to check that if $\tau(\mathbf{t}_1,\mathbf{t}_2)$ is a tau-function, then $\tau(\mathbf{t}_1,-\mathbf{t}_2)$ is also a tau-function.
The Virasoro constraints are invariant under the inversion $\mathbf{t}_2\mapsto -\mathbf{t}_2$ (note that due to the dilaton shift, the other inversion $\mathbf{t}_1\mapsto -\mathbf{t}_1$ does not preserve the Virasoro constraints). The uniqueness of the reduced tau-function satisfying the Virasoro constraints implies that $\tau(\mathbf{t}_1,\mathbf{t}_2)=\tau(\mathbf{t}_1,-\mathbf{t}_2)$. \end{proof} The symmetry of the tau-function from this lemma immediately yields the following symmetry of the two-point function $\phi_{2,2}$: \begin{corollary}\label{cor:symm} The following symmetry holds: $ \phi_{2,2}(-w,-z)=\phi_{2,2}(w,z). $ \end{corollary} We also have: \begin{lemma}\label{lem_K_reg} Let $g$ be the differential operator \eqref{do:g}. Then \begin{equation} \begin{split}\nonumber e^{g_z+g_w}\, \frac{zw}{z^2-w^2}&= \frac{zw}{z^2-w^2},\\ e^{g_z+g_w}\, \frac{z^2+w^2}{z^2-w^2} - \frac{z^2+w^2}{z^2-w^2} &\in {\mathbb C}[\![z^{-2},w^{-2}]\!]. \end{split} \end{equation} Moreover, the difference in the last line vanishes as $|z|=|w|=\infty$. \end{lemma} \begin{proof} Since \beq\new\begin{array}{c}\nonumber \frac{z-w}{z+w}=\frac{z^2+w^2}{z^2-w^2} -2 \frac{zw}{z^2-w^2}, \end{array}\eeq and the operator $a_2$ is even, to prove the statement it is enough to show that \begin{equation} \begin{split}\nonumber \left(a_2^{h+1}+\tilde{a}_2^{h+1}\right)\frac{z-w}{z+w}\in {\mathbb C}[\![z^{-2},w^{-2}]\!], \end{split} \end{equation} where $\tilde{a}_2$ is the operator $a_2$ acting on functions of the variable $w$. Since $h$ is even, the statement of the lemma follows from the identities $a_2^{h+1}+\tilde{a}_2^{h+1}=(a_2^{h}-a_2^{h-1}\tilde{a}_2+\dots+\tilde{a}_2^{h})(a_2+\tilde{a}_2)$ and \beq\new\begin{array}{c}\nonumber \left(a_2+\tilde{a}_2\right)\frac{z-w}{z+w}=\frac{1}{2w^2}-\frac{1}{2z^2}. \end{array}\eeq \end{proof} Put \beq\new\begin{array}{c}\nonumber r_m:=(-1)^m \theta(m), \end{array}\eeq where $\theta(m)$ is the Heaviside function, so that \beq\new\begin{array}{c}\nonumber K(z,w)=2 \sum_{m=0}^\infty r_m \left(\frac{w}{z}\right)^m. \end{array}\eeq The identities in the next proposition are very interesting: they will play a key role in the construction of our matrix model and indicate the special role of the basis (\ref{basisexp}). \begin{proposition}\label{kern_basis} The following formulas hold: \begin{align} \nonumber \phi_{1,b}(z,w) & = \sum_{k=0}^\infty r_k \Psi_{-k}^{(1)}(z)\Psi_{k}^{(b)}(w), \\ \nonumber \phi_{2,2}(w,z) & = \sum_{k=0}^\infty r_k \Psi_{-k}^{(2)}(z)\Psi_{k}^{(2)}(w)-\sum_{k=0}^\infty \left(\frac{z}{w}\right)^{2k+1}. \end{align} \end{proposition} \begin{proof} Recalling the definition of the wave function (\ref{wave-1})--(\ref{wave-2}), we get the following formulas for $b\in\{1,2\}$: \begin{align}\nonumber \phi_{1,b}(w,z) & = \Psi^{(b)}(\mathbf{t}^\circ,z)\Psi^{(1)}(w)/2 ,\\ \nonumber \phi_{b,2}(z,w) &=\Psi^{(b)}(\tilde{\bf t}^\circ,z)\Psi^{(2)}(w)/2, \end{align} where $\mathbf{t}^\circ$ and $\tilde{\mathbf{t}}^\circ$ are defined by $t^\circ_{1,m}= -2w^{-m}/m$, $t^\circ_{2,m}=0$ and $\tilde{t}^\circ_{1,m}=0$, $\tilde{t}^\circ_{2,m}= -2w^{-m}/m$. We introduce \begin{align}\nonumber X_1(w,z) & =\phi_{1,1}(w,z)e_1+\mathbf{i} \phi_{1,2}(w,z) e_2,\\ \nonumber X_2(w,z) & =\phi_{1,2}(z,w)e_1 + \mathbf{i}\phi_{2,2}(w,z) e_2. \end{align} Viewing $w$ as a parameter, we get that \beq\new\begin{array}{c}\nonumber X_a(w,z) \in U,\quad a=1,2.
\end{array}\eeq Using that $U$ is spanned by $\Psi_k$ and $\Phi_k$, recalling the asymptotics given by (\ref{Phik}) and Lemma \ref{lemma36}, and having in mind that $\phi_{1,2}(z,w)\in{\mathbb C}[\![z^{-1},w^{-1}]\!]$, we get that $X_a(w,z)$ can be decomposed as follows: \beq\new\begin{array}{c}\label{Xdeco} X_1(w,z)=\sum_{k=0}^\infty \alpha_{k}(w) \Psi_k(z),\\ X_2(w,z)=\sum_{k=0}^\infty\beta_{k}(w) \Psi_{-k}(z) +\sum_{k=1}^\infty\gamma_k(w) \Phi_k(z), \end{array}\eeq for some Laurent series $\alpha_k,\beta_k,\gamma_k\in \mathbb{C}(\!(w^{-1})\!)$. In particular, \beq\new\begin{array}{c}\label{phibeta} \phi_{2,2}(w,z)=\sum_{k=0}^\infty\beta_{k}(w) \Psi_{-k}^{(2)}(z)+\sum_{k=1}^\infty\gamma_k(w) z^{2k-1}. \end{array}\eeq Comparing it with Proposition \ref{prop_exp} and having in mind that $\Psi_{-k}^{(2)}$ is even, $\Psi_{-k}^{(2)}(-z)=\Psi_{-k}^{(2)}(z)$, we immediately conclude that \beq\new\begin{array}{c}\nonumber \gamma_k(w)=-w^{1-2k}. \end{array}\eeq Let us consider \beq\new\begin{array}{c}\nonumber M(w,z)=\sum_{k=0}^\infty\beta_{k}(w) \Psi_{-k}^{(2)}(z) - \frac{1}{2}e^{g_z+g_w}\, \iota_{|w|>|z|} \frac{w^2+z^2}{w^2-z^2}. \end{array}\eeq From Proposition \ref{prop_exp} and Lemma \ref{lem_K_reg} it follows that \beq\new\begin{array}{c}\nonumber \phi_{2,2}(w,z) - \frac{1}{2} e^{g_z+g_w}\, K(w,z) =M(w,z), \end{array}\eeq and that $M(w,z)\in{\mathbb C}[\![z^{-2},w^{-2}]\!]$ with vanishing constant term. Let us show that $M(w,z)=0$. Let us act on $M$ by the operator $e^{-g_z}$. From (\ref{psik_as_aop}) we have \beq\new\begin{array}{c}\nonumber e^{-g_z}M(w,z)=\sum_{k=0}^\infty\beta_{k}(w) (-1)^k \frac{(2{\bf i}z^2)^k}{(2k-1)!!} - \frac{1}{2}e^{g_w}\, \iota_{|w|>|z|} \frac{w^2+z^2}{w^2-z^2}. \end{array}\eeq Hence $e^{-g_z}M(w,z)\in{\mathbb C}[\![z^{2},w^{-2}]\!]$ with trivial constant term. At the same time $e^{-g_z}M(w,z)\in{\mathbb C}[\![z^{-2},w^{-2}]\!]$; hence $e^{-g_z}M(w,z)=0$. The kernel of the operator $e^{-g_z}$ on ${\mathbb C}[\![z^{-2}]\!]$ is trivial, therefore $M(w,z)=0$, and \beq\new\begin{array}{c}\label{eq_phi_gr} \phi_{2,2}(z,w) = \frac{1}{2} e^{g_z+g_w}\, K(z,w). \end{array}\eeq Comparing its expansion with (\ref{phibeta}) we conclude that $\beta_k(w)=r_k\Psi_{k}^{(2)}(w)$, and from (\ref{Xdeco}) it follows that $\alpha_k(w)=r_k\Psi_{-k}^{(1)}(w)$. This completes the proof. \end{proof} \begin{corollary}\label{Cor22} \beq\new\begin{array}{c}\nonumber \tilde{\phi}_{2,2}(z,w)=\frac{1}{2} e^{g_z+g_w} \frac{z-w}{z+w}. \end{array}\eeq \end{corollary} \begin{proof} The statement follows from Corollary \ref{cor_phiphi}, (\ref{eq_phi_gr}) and Lemma \ref{lem_K_reg}. \end{proof} Substituting the integral representations (\ref{vir-psi_2}) of the basis elements, we get for $b\in\{1,2\}$ the following formulas: \begin{equation} \begin{split} \label{2int_formal} \phi_{1,b}(z,w) & =\frac{c_1 c_b}{2\pi} \sqrt{\lambda \mu} \iint_{\gamma_1\times \gamma_b} e^{\lambda (f(x)-u_1) + \mu (f(y)-u_b)} K(z x^2,w^{2/h_b} y^2) dx dy, \\ \phi_{2,2}(w,z) & = \frac{c_2^2}{2\pi} \sqrt{\lambda \mu} \iint_{\gamma_2\times \gamma_2} e^{\lambda f(x) + \mu f(y)} K(z^{2/h}x^2,w^{2/h}y^2) dx dy - \sum_{k=0}^\infty \left(\frac{z}{w}\right)^{2k+1}, \end{split} \end{equation} where $\lambda=\lambda_a(z)$ and $\mu=\lambda_b(w)$, in agreement with the choice of the contours of integration. \subsection{Double integrals}\label{sec:di} In this section we derive double integral expressions for $\widetilde{\phi}_{a,b}$.
Note that in \eqref{2int_formal} we first expand in the powers of $z^{-1}$ and then apply the steepest descent method. It turns out that if we apply the steepest descent method directly, without expanding in $z^{-1}$, then we obtain exactly the two-point functions $\widetilde{\phi}_{a,b}$, including the extra term in the last equation of (\ref{2int_formal}). We will be interested in asymptotic expansions near the critical points $\xi_1=1$ and $\xi_2=0$ of $f(x)=x^{2h+2}-(h+1)x^2.$ The Taylor series expansion of $f(x)$ at $x=\xi_a$ has the form \beq\new\begin{array}{c}\nonumber f(x) = u_a - X^2 + O(X^3),\quad X:= c_a (x-\xi_a), \end{array}\eeq where the constants $c_a$ are the same as in \eqref{c12}. We define a formal series expansion $\Phi_{a,b}(\lambda,\mu)$ of the following double integrals, \beq\new\begin{array}{c}\label{Phi_ab} \frac{c_a c_b}{2\pi} \sqrt{\lambda\mu} \iint_{\gamma_a(\lambda)\times \gamma_b(\mu)} e^{\lambda (f(x)-u_a) + \mu(f(y)-u_b) } \tilde{K}( \lambda^{\frac{1}{h+1}} x^2,\mu^{\frac{1}{h+1}} y^2) dx dy, \end{array}\eeq where $\gamma_a(\lambda):= \xi_a + \tfrac{1}{c_a\sqrt{\lambda}} \mathbb{R}$ is the contour used in the steepest descent method. Let us restrict the values of the parameter $\lambda$ as follows: for the contour $\gamma_1(\lambda)$ we require $\lambda\in \mathbb{R}_{<0}$, while for $\gamma_2(\lambda)$ we require $\lambda\in \mathbf{i} \mathbb{R}_{>0}$. For such a choice, for every pair of contours $\gamma_a(\lambda)$ and $\gamma_b(\mu)$ the sum $\lambda^{1/(h+1)}+\mu^{1/(h+1)}$ does not vanish. There are two cases. First, if $(a,b)\neq (2,2)$, then the integrand is regular on $\gamma_a\times \gamma_b$. Using the Taylor series expansion in a neighborhood of the critical point $(x,y)=(\xi_a,\xi_b)$, we get that the integrand in \eqref{Phi_ab} has an expansion of the form \beq\new\begin{array}{c}\nonumber\label{exp-Phi_ab} \frac{1}{2\pi}\sqrt{\lambda\mu} e^{-\lambda X^2 -\mu Y^2}\, \sum_{k,l=0}^\infty a_{k,l}^{a,b} (\lambda,\mu) X^k Y^l dX dY. \end{array}\eeq Here $X=c_a(x-\xi_a)$, $Y=c_b(y-\xi_b)$, and $a_{k,l}^{a,b}$ is a polynomial expression in $\lambda^{1/(h+1)}$, $\mu^{1/(h+1)}$, and $(\lambda^{1/(h+1)}+\mu^{1/(h+1)})^{-1}$ (for $a=b=1$) or $\lambda^{-1/(h+1)}$ (for $a=1$, $b=2$). Then $\Phi_{a,b}(\lambda,\mu)$ is defined by integrating termwise the above expansion over $\mathbf{i}\mathbb{R}\times \mathbf{i}\mathbb{R}$. Since \beq\new\begin{array}{c}\nonumber \frac{1}{2\pi}\sqrt{\lambda\mu} \iint_{(\mathbf{i}\raise-1pt\hbox{$\mbox{\Bbbb R}$})^2} e^{-\lambda X^2 -\mu Y^2} X^k Y^l dX dY = \begin{cases} 0 & \mbox{ if $k$ or $l$ is odd},\\ \frac{1}{2\pi} \Gamma(\tfrac{k+1}{2}) \Gamma(\tfrac{l+1}{2}) \lambda^{-k/2}\mu^{-l/2} & \mbox{otherwise}, \end{cases} \end{array}\eeq we get \beq\new\begin{array}{c}\label{phi_1b} \Phi_{a,b}(\lambda,\mu) = \frac{1}{2\pi} \sum_{k,l=0}^\infty a_{2k,2l}^{a,b} (\lambda,\mu) \Gamma(k+\tfrac{1}{2}) \Gamma(l+\tfrac{1}{2}) \lambda^{-k} \mu^{-l}. \end{array}\eeq Let us point out that the above formal procedure applies in the single variable case too. In particular, except for the case $a=2$ and $k<0$, the local cycles $\gamma_a$ in (\ref{vir-psi_2}) can be replaced with the infinite cycles $\gamma_a(\lambda_a)$. However, whenever we do this we lose the analyticity of the integrals, so we have to interpret them formally via the steepest descent expansion. With this remark in mind, let us consider the expression \eqref{phi_1b} with $a=1$.
We claim that it coincides with $\widetilde{\phi}_{1,b}(z,w)$, where $\lambda=\tfrac{\mathbf{i} z^{h+1}}{h+1}$ and $\mu=\tfrac{\mathbf{i} w^{h_b+h_b/h}}{h+1}$. Indeed, $\widetilde{\phi}_{1,b}(z,w)$ is a series of the same type as \eqref{phi_1b}. According to formula \eqref{2int_formal}, both series coincide after expanding each coefficient in the powers of $z^{-1}$. Therefore, from Corollary \ref{cor_phiphi} we get $\Phi_{a,b}(\lambda, \mu)=\widetilde{\phi}_{a,b}(z,w)$. Suppose now that $a=b=2$. The integration kernel has a singularity at $(x,y)=(\xi_2,\xi_2) = (0,0)$, but, as we will now see, the singularity is integrable. The integrand in \eqref{Phi_ab} can be written as \begin{align}\label{Phi_22} \frac{c_2^2}{2\pi} \, \sqrt{\lambda\mu}\, e^{-(h+1)(\lambda x^2 + \mu y^2)} \, e^{\lambda x^{2h+2} + \mu y^{2h+2}} \frac{ \lambda^{1/(h+1)} x^2- \mu^{1/(h+1)} y^2 }{ \lambda^{1/(h+1)} x^2+ \mu^{1/(h+1)} y^2} dx dy. \end{align} Note that the polynomial $\lambda x^{2h+2} + \mu y^{2h+2}$ is divisible by $ \lambda^{1/(h+1)} x^2+ \mu^{1/(h+1)} y^2$, that is, by the denominator of the integral kernel. Therefore, if we expand the exponential $e^{\lambda x^{2h+2} + \mu y^{2h+2}}= 1+O(\lambda x^{2h+2} + \mu y^{2h+2})$, then only the constant term is not divisible by the denominator, and \eqref{Phi_22} can be expanded as follows: \beq\new\begin{array}{c}\label{exp-Phi_22} \frac{1}{2\pi} \, \sqrt{\lambda\mu}\, e^{-(\lambda X^2 + \mu Y^2)}\left( \frac{ \lambda^{1/(h+1)} X^2- \mu^{1/(h+1)} Y^2 }{ \lambda^{1/(h+1)} X^2+ \mu^{1/(h+1)} Y^2} + \sum_{k,l=0}^\infty a_{k,l}^{2,2} (\lambda,\mu) X^k Y^l \right) dX dY. \end{array}\eeq Here $X:=c_2(x-\xi_2)=\sqrt{h+1} x$, $Y:=c_2(y-\xi_2) = \sqrt{h+1} y$, and $a_{k,l}^{2,2}$ is a polynomial in $\lambda^{1/(h+1)}$ and $\mu^{1/(h+1)}$. \begin{lemma}\label{le:int_sing} For $z,w>0$ \beq\new\begin{array}{c}\nonumber\label{eq_di} \frac{1}{2}\frac{z-w}{z+w}= \frac{zw}{2\pi}\iint_{\raise-1pt\hbox{$\mbox{\Bbbb R}$}_+^2} \frac{x-y}{x+y}e^{-w^2x-z^2y} \frac{dx dy}{\sqrt{xy}} . \end{array}\eeq \end{lemma} \begin{proof} Let us switch to polar coordinates $x=r\cos \theta$, $y=r\sin\theta$. The integral takes the form \begin{align}\nonumber & \frac{zw}{2\pi} \, \int_0^{\pi/2} \int_0^\infty e^{-(w^2 \cos\theta +z^2 \sin\theta) r} \frac{\cos\theta -\sin\theta}{\cos\theta+\sin\theta} \frac{dr d\theta}{\sqrt{\cos \theta \, \sin\theta}} = & \\ \nonumber & = \frac{zw}{2\pi} \, \int_0^{\pi/2} \frac{\cos\theta -\sin\theta}{\cos\theta+\sin\theta} \frac{1}{w^2 \cos\theta +z^2 \sin\theta} \frac{d\theta}{\sqrt{\cos\theta\, \sin\theta}}. & \end{align} Finally, using the substitution $u=\sqrt{\tan \theta}$, we get \beq\new\begin{array}{c}\nonumber \frac{z w}{\pi}\, \int_0^{\infty} \frac{1-u^2}{(1+u^2)(w^2+ z^2 u^2)}\, du. \end{array}\eeq The above integral is straightforward to compute, and it is equal to $\tfrac{1}{2}\tfrac{z-w}{z+w}$, as claimed. \end{proof} Now we are ready to prove the following lemma. \begin{lemma}\label{l_57} The following identity holds: \beq\new\begin{array}{c}\nonumber \widetilde{\phi}_{a,b}(z,w)= (-1)^{\delta_{a+b,4}}\Phi_{a,b}(\lambda,\mu), \quad 1\leq a\leq b \leq 2. \end{array}\eeq \end{lemma} \begin{proof} It remains to prove that $\widetilde{\phi}_{2,2}(z,w) = -\Phi_{2,2}(\lambda,\mu)$.
Let us consider the integrand in the last line of (\ref{2int_formal}) in the variables $X$ and $Y$: \beq\new\begin{array}{c}\nonumber \frac{1}{2\pi} \, \sqrt{\lambda\mu}\, e^{-(\lambda X^2 + \mu Y^2)}\left( 2 \sum_{k=0}^\infty r_k \left(\frac{ \lambda^{1/(h+1)} X^2}{ \mu^{1/(h+1)} Y^2} \right)^k+ \sum_{k,l=0}^\infty a_{k,l}^{2,2} (\lambda,\mu) X^k Y^l \right) dX dY. \end{array}\eeq Comparing it to (\ref{exp-Phi_22}) and using Corollary \ref{cor_phiphi}, we see that the statement of the lemma is equivalent to the identity \beq\new\begin{array}{c}\nonumber \frac{1}{2\pi} \, \sqrt{\lambda\mu}\, \iint_{\left(e^{-\mathbf{i} \pi/4}\raise-1pt\hbox{$\mbox{\Bbbb R}$}\right)^2} e^{-(\lambda X^2 + \mu Y^2)} \frac{ \lambda^{1/(h+1)} X^2- \mu^{1/(h+1)} Y^2 }{ \lambda^{1/(h+1)} X^2+ \mu^{1/(h+1)} Y^2}\, dXdY = \frac{1}{2} \,\frac{w-z}{w+z}. \end{array}\eeq This identity follows from Lemma \ref{le:int_sing} after the substitution $x=\mathbf{i} |\lambda|^{1/(h+1)} X^2$, $y=\mathbf{i} |\mu|^{1/(h+1)} Y^2$. \end{proof} \subsection{Change of variables} To make contact with the standard form of the generalized Kontsevich matrix integrals \cite{AM,KMMMZ,K,W,IZ}, let us change in (\ref{vir-psi_2}) the integration variables, ${\bf i}z^{h_a/h}x^2\mapsto y$, and the parameters, $\lambda_a\mapsto \lambda_a(z):= \tfrac{\mathbf{i} z^{h_a+h_a/h}}{h+1}$, \beq\new\begin{array}{c}\nonumber \lambda_a(f(x)-u_a) \mapsto -z^{h_a} y +\frac{{\bf i}^h y^{h+1}}{h+1} +\delta_{a,1} \frac{{\bf i} h z^{h+1}}{h+1}. \end{array}\eeq Let \beq\new\begin{array}{c}\nonumber \chi_a(y,z):=\sqrt{\frac{h_a z^{h_a}}{2}} e^{ -z^{h_a} y + \frac{{\bf i}^h y^{h+1}}{h+1} + \delta_{a,1} \frac{{\bf i} h z^{h+1}}{h+1}} \end{array}\eeq and \beq\new\begin{array}{c}\nonumber d \mu_a(y,z):=\chi_a(y,z) \frac{d y}{\sqrt{y}}. \end{array}\eeq Let us discuss the transformation of the integration contours. To begin with, since we are interested only in the asymptotic expansion, let us replace the locally defined contour $\gamma_a$ in \eqref{vir-psi_2} with the infinite contour $\gamma_a(\lambda_a)$. In the case $a=1$, we have $\lambda_1\in \mathbb{R}_{<0}$, so that $\gamma_1(\lambda_1)=\mathbb{R}$. Let us choose the solution $z$ of the equation $\lambda_1=\lambda_1(z)$ such that $\operatorname{Arg}(z)=\tfrac{\pi}{2(h+1)}$. Then the image of the integration path $\gamma_1(\lambda_1)$ under the change $y=\mathbf{i} z x^2$ is the ray $\mathbf{i} z \mathbb{R}_{\geq 0}$, independent of $z$. We are interested only in the asymptotic expansion of the integral in the vicinity of the critical point $y=\mathbf{i} z$, hence we define the integration path $\gamma_1^*$ for the new variable $y$ to be $\gamma^*_{1}:=\mathbf{i} z \mathbb{R}$. For the second contour, we have $\gamma_2(\lambda_2) =e^{-\pi \mathbf{i}/4} \mathbb{R}$. In this case $\lambda_2\in \mathbf{i} \mathbb{R}_{>0}$, so the equation $\lambda_2=\lambda_2(z)$ has a unique solution $z\in \mathbb{R}_{>0}$. The image of $\gamma_2(\lambda_2)$ under the change $y=\mathbf{i} z^{2/h} x^2$ is $\gamma^*_2:=\mathbb{R}_{\geq 0}$. Note that, unlike the case of the other contour, here the change of variables yields a 2:1 covering $\gamma_2(\lambda_2)\to \gamma^*_2$. The reason for our choice in this case comes from the fact that the symmetry $x\mapsto -x$ of $f(x)$ preserves the critical point $\xi_2=0$. Therefore, all integrals below involving integration along $\gamma_2$ gain an extra factor of 2 when written as integrals along $\gamma_2^*$.
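The effect of this change of variables on the exponent of the integrand can be checked symbolically for a concrete even value of $h$; a small script, assuming Python with SymPy (here $h=4$, with $h_1=h$, $h_2=2$ and the critical values $u_1=f(1)=-h$, $u_2=f(0)=0$):
\begin{verbatim}
import sympy as sp

h = 4                                     # h = 2N - 2 must be even
x, y, z = sp.symbols('x y z')
f = x**(2*h + 2) - (h + 1)*x**2
for a, h_a, u_a in [(1, h, -h), (2, 2, 0)]:
    lam = sp.I * z**(h_a + sp.Rational(h_a, h)) / (h + 1)
    # the change i z^{h_a/h} x^2 -> y, i.e. x^2 = y / (i z^{h_a/h})
    lhs = sp.expand((lam * (f - u_a)).subs(
        x**2, y / (sp.I * z**sp.Rational(h_a, h))))
    rhs = (-z**h_a * y + sp.I**h * y**(h + 1) / (h + 1)
           + (a == 1) * sp.I * h * z**(h + 1) / (h + 1))
    print(a, sp.simplify(lhs - rhs))      # prints 0 for both contours
\end{verbatim}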
First of all, note that the asymptotics (\ref{vir-psi_2}) can be written uniformly as \beq\new\begin{array}{c}\nonumber\label{vir-psi_a} \Psi^{(a)}_k(z) \sim \frac{\mathbf{i}^{2-a}}{\sqrt{\pi}} \int_{\gamma_a^*} (-{\bf i}y)^k d \mu _a(y,z), \quad z\to \infty, \end{array}\eeq where $k\in \mathbb{Z}$ for $a=1$ and $k\geq 0$ for $a=2$. Furthermore, let us make the associated change of variables in the double integrals \eqref{Phi_ab} with $1\leq a\leq b\leq 2$. We get that \eqref{Phi_ab} is transformed into \beq\new\begin{array}{c}\nonumber \frac{\epsilon_{a,b}}{2\pi} \iint_{\gamma_a^*\times \gamma_b^*} \frac{x-y}{x+y} d \mu_a(x,z) d \mu_b(y,w), \end{array}\eeq where $\epsilon_{1,1}=-1$, $\epsilon_{1,2}=\mathbf{i}$, and $\epsilon_{2,2}=1$. Thus, using Lemma \ref{l_57}, we have proved \begin{lemma}\label{doubleint} Suppose that $1\leq a\leq b\leq 2$. Then \beq\new\begin{array}{c}\nonumber \widetilde{\phi}_{a,b}(z,w)\sim-\frac{\mathbf{i}^{a-b}}{2\pi} \iint_{\gamma_a^*\times \gamma_b^*} \frac{x-y}{x+y}\, d \mu_a(x,z) d \mu_b(y,w) ,\quad z,w\to \infty. \end{array}\eeq \end{lemma} Let us make two remarks about Lemma \ref{doubleint}. First, let us recall that the meaning of the asymptotic equality $\sim$ is that the steepest descent method expansion of the integral on the RHS of $\sim$ coincides with the formal series on the LHS of $\sim$. Second, note that $\epsilon_{a,b}=-\mathbf{i}^{a-b}(-1)^{\delta_{a+b,4}}$, so the sign by which $\epsilon_{a,b}$ and $-\mathbf{i}^{a-b}$ differ matches precisely the sign by which $\phi_{a,b}(z,w)$ and $\Phi_{a,b}(\lambda,\mu)$ differ. \subsection{Matrix integral} The goal of this section is to prove Theorem \ref{t2}. Let us consider a version of de Bruijn's Pfaffian theorem \cite{B}. Put \beq\new\begin{array}{c}\nonumber A_{i,j}=\iint_{\gamma_i\times \gamma_j} R(x,y)\phi_i(x)\phi_j(y) dx dy \end{array}\eeq for $1\leq i<j\leq 2n$ and $A_{j,i}=-A_{i,j}$ otherwise. Here $\gamma_i$ are some contours. Then \begin{lemma}\label{le:Bruijn} If $R(x,y)=-R(y,x)$, then \beq\new\begin{array}{c}\nonumber \operatorname{Pf} (A)= \int_{\gamma_1}\dots\int_{\gamma_{2n}} \operatorname{Pf} (R) \prod_{i=1}^{2n} \phi_i (x_i )dx_1\dots dx_{2n}, \end{array}\eeq where $\operatorname{Pf} (R)$ is the Pfaffian of the matrix with entries $R(x_i,x_j)$ ($1\leq i,j\leq 2n$). \end{lemma} The lemma follows immediately from the definition of the Pfaffian. Let us now recall Proposition \ref{tau_as_Pf}. The entries of the matrix $\tilde{\Phi}(Z_1,Z_2)$ are described by Lemma \ref{doubleint}, that is, the matrix is divided into 4 blocks and the $(i,j)$-entry in the $(a,b)$-block is given by the asymptotic expansion \beq\new\begin{array}{c}\nonumber \tilde{\Phi}^{ab}_{i,j}\sim -\frac{1}{2\pi}\iint_{\gamma_{a}^*\times \gamma_{b}^*} \frac{x-y}{x+y}\, d \mu_a(x,z_{a,i}) d \mu_b(y,z_{b,j}). \end{array}\eeq Note that the power of $\mathbf{i}$ of the expression in Lemma \ref{doubleint} is in agreement with the extra factor of $\mathbf{i}$ in the definition of the $(1,2)$-block in formula (\ref{phi12}). Recall that $N_1+N_2$ is even, and let us apply de Bruijn's formula from Lemma \ref{le:Bruijn}. We get \beq\new\begin{array}{c} \nonumber \operatorname{Pf}(\tilde{\Phi}(Z_1,Z_2))\sim \frac{1}{(-2\pi)^{\frac{N_1+N_2}{2}}} \int_{{(\gamma_1^*)^{N_1}}}\int_{{(\gamma_2^*)^{N_2}}} \operatorname{Pf} (\tilde{K})\, \prod_{j=1}^{N_1}d \mu_1 (x_j,z_{1,j}) \prod_{j=N_1+1}^{N_1+N_2}d \mu _2(x_j,z_{2,j-N_1}). \end{array}\eeq Recall that $\tilde{K}_{i,j}=\tilde{K}(x_i,x_j)=\frac{x_i-x_j}{x_i+x_j}$.
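The Schur Pfaffian evaluation that we are about to use is easy to test numerically for small even sizes; a minimal check of $\operatorname{Pf}(\tilde{K})=\prod_{i<j}\frac{x_i-x_j}{x_i+x_j}$ for four points, assuming Python with NumPy:
\begin{verbatim}
import numpy as np
from itertools import combinations

def pfaffian(A):
    # recursive expansion along the first row; fine for small matrices
    m = A.shape[0]
    if m == 0:
        return 1.0
    keep = lambda j: [k for k in range(m) if k not in (0, j)]
    return sum((-1)**(j - 1) * A[0, j]
               * pfaffian(A[np.ix_(keep(j), keep(j))])
               for j in range(1, m))

x = np.array([3.0, 1.5, 0.8, 0.3])
K = (x[:, None] - x[None, :]) / (x[:, None] + x[None, :])
rhs = np.prod([(x[i] - x[j]) / (x[i] + x[j])
               for i, j in combinations(range(len(x)), 2)])
print(pfaffian(K), rhs)  # the two values agree
\end{verbatim}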
Using the Schur Pfaffian formula \beq\new\begin{array}{c} \nonumber \operatorname{Pf} (\tilde{K})=\prod_{i<j}^{N_1+N_2}\frac{x_i-x_j}{x_i+x_j}, \end{array}\eeq and the skew-symmetry with respect to the permutations of $x_i$ and $x_j$ for $i \neq j$, we get \beq\new\begin{array}{c} \label{Pf-asymp} \operatorname{Pf}(\tilde{\Phi}(Z_1,Z_2))\sim\frac{1}{(-2\pi)^{\frac{N_1+N_2}{2}}N_1!N_2!}\int_{{(\gamma_1^*)^{N_1}}}\, \int_{{(\gamma_2^*)^{N_2}}} Q\, \prod_{j=1}^{N_1}\frac{dx_j}{\sqrt{x_j}} \prod_{j=1}^{N_2} \frac{dy_j}{\sqrt{y_j}}, \end{array}\eeq where we have denoted $y_j:=x_{N_1+j}$ and \beq\new\begin{array}{c}\label{Q_integr} Q:=\Delta^*_{N_1}(x) \Delta^*_{N_2}(y) \det_{i,j=1}^{N_1} \chi_1(x_i,z_{1,j}) \det_{i,j=1}^{N_2}\chi_2(y_i,z_{2,j}) \prod_{i=1}^{N_1}\prod_{j=1}^{N_2} \frac{x_i-y_j}{x_i+y_j}. \end{array}\eeq Here \beq\new\begin{array}{c} \nonumber \Delta^*_N(x):=\prod_{i<j}^N\frac{x_i-x_j}{x_i+x_j}. \end{array}\eeq Note that \beq\new\begin{array}{c}\nonumber \det_{i,j=1}^{N_1} \chi_1(x_i,z_{1,j})= \det \sqrt\frac{h Z_1^h}{2} \, e^{\operatorname{Tr}\left( h \lambda_1(Z_1) + \frac{\mathbf{i}^h x^{h+1} }{h+1} \right)}\, \det_{i,j=1}^{N_1} e^{-x_i z_{1,j}^h} \end{array}\eeq and \beq\new\begin{array}{c}\nonumber \det_{i,j=1}^{N_2} \chi_2(y_i,z_{2,j})= \det Z_2 \, e^{\operatorname{Tr}\left( \frac{ \mathbf{i}^h y^{h+1} }{ h+1} \right)}\, \det_{i,j=1}^{N_2} e^{-y_i z_{2,j}^2}, \end{array}\eeq where $\lambda_1(Z_1)=\tfrac{\mathbf{i} Z_1^{h+1}}{h+1}$, $x={\rm diag}\, (x_1,x_2,\dots,x_{N_1}),$ and $y={\rm diag}\,(y_1,y_2,\dots,y_{N_2})$. Let us recall the Harish-Chandra--Itzykson--Zuber integral formula for unitary matrices \beq\new\begin{array}{c}\nonumber \int_{U(N)} \left[d U\right] e^{-\hbox{Tr } U A U^\dagger B}= C(N) \frac{\det_{i,j=1}^N e^{-a_i b_j}}{\Delta_N (a) \Delta_N (b)} \end{array}\eeq where $C(N)$ depends only on $N$, $ \left[d U\right]$ is the Haar measure on $U(N)$, and $\Delta_N (a) =\prod_{i<j}(a_j-a_i)$ is the Vandermonde determinant. Using the above formula to rewrite the determinants in (\ref{Q_integr}) as integrals over unitary groups, we get \beq\new\begin{array}{c} \nonumber Q= \det \sqrt\frac{h Z_1^h}{2} \det Z_2 e^{h \hbox{Tr }\lambda_1(Z_1)+\frac{{\bf i}^h}{h+1}\left(\sum_{j=1}^{N_1} x_i^{h+1}+\sum_{j=1}^{N_2} y_i^{h+1}\right)}\\ \times\frac{1}{C(N_1)C(N_2)} \int_{U(N_1)} \left[d U_1\right] e^{-\hbox{Tr } U_1 x U_1^\dagger Z_1^h} \int_{U(N_2)} \left[d U_2\right] e^{-\hbox{Tr } U_2 y U_2^\dagger Z_2^2}\\ \times \Delta_{N_1}(x) \Delta_{N_1}(Z_1^h ) \Delta_{N_2}(y) \Delta_{N_2}(Z_2^2 ) \Delta^*_{N_1}(x) \Delta^*_{N_2}(y) \prod_{i=1}^{N_1}\prod_{j=1}^{N_2} \frac{x_i-y_j}{x_i+y_j}. 
\end{array}\eeq Substituting the above expression for $Q$ in \eqref{Pf-asymp} and recalling the definition of ${\mathcal H}_N$ and ${\mathcal H}_N^+$ equipped respectively with the measures $\widetilde{\left[d X\right]}$ and $\widetilde{\left[d Y\right]} $ (see Section \ref{sec:mm}), we get that $\operatorname{Pf}(\tilde{\Phi}(Z_1,Z_2))$ coincides with the asymptotic expansion near $X={\bf i}Z_1$ and $Y= 0$ of the following two-matrix integral \beq\new\begin{array}{c} \nonumber \widetilde{C}(h,N_1,N_2)\ \Delta_{N_1}(Z_1^h ) \ \Delta_{N_2}(Z_2^2 ) \ (\det Z_1)^{h/2} \ \det Z_2 \\ \times e^{h \hbox{Tr }\lambda_1(Z_1)}\ \int _{e^{\frac{(h+2)\pi}{2(h+1)}\mathbf{i}}{\mathcal H}_{N_1}} {\widetilde{\left[d X\right]} \, e^{\hbox{Tr } W(X,Z_1^h) } } \int _{{\mathcal H}_{N_2}^+}{\widetilde{\left[d Y\right]}\, e^{\hbox{Tr } W(Y,Z_2^2)}} S(X,Y), \end{array}\eeq where $W$ is given by (\ref{eq_Wpot}), $\widetilde{C}(h,N_1,N_2)$ is a numerical constant depending only on $h$, $N_1$, and $N_2$, and we used the formula \beq\new\begin{array}{c}\nonumber \det(X\otimes I_N+I_N\otimes X) = \prod_{i=1}^{N} 2x_i \ \prod_{1\leq i<j\leq N} (x_i+x_j)^2, \end{array}\eeq where $I_N$ is the $N\times N $ unit matrix, and $X={\rm diag}\,(x_1,\dots,x_N)$ is a diagonal matrix. We also have \beq\new\begin{array}{c}\nonumber S(X,Y)=\prod_{i=1}^{N_1}\prod_{j=1}^{N_2} \frac{x_i-y_j}{x_i+y_j} = \det \left(\frac{ X\otimes I_{N_2}-I_{N_1}\otimes Y}{ X\otimes I_{N_2}+I_{N_1}\otimes Y} \right). \end{array}\eeq The formula stated in Theorem \ref{t2} follows with the normalization factor \beq\new\begin{array}{c}\label{formula-N} \mathcal{N}= C(h,N_1,N_2)\, \frac{ \Delta_{N_1}^*(Z_1)\Delta_{N_2}^*(Z_2) }{ \Delta_{N_1}(Z_1^h ) \ \Delta_{N_2}(Z_2^2 ) \ (\det Z_1)^{h/2} \ \det Z_2 }, \end{array}\eeq where the numerical constant $C(h,N_1,N_2)= \mathbf{i}^{N_1^2} 2^{-(N_1+N_2)/2}/ \widetilde{C}(h,N_1,N_2)$. \section{Examples and further comments} \ 1) We conjecture that an extended open version of the $D_N$ generating function can be obtained by a simple deformation of the measure given by \beq\new\begin{array}{c}\nonumber \widetilde{\left[d X\right]}\, e^{n \hbox{Tr } \log X}, \end{array}\eeq where $n$ is a formal parameter. It would be interesting to compare such a matrix integral with the results of Basalaev and Buryak \cite{BB}. 2) Let $\mathcal{D}(\hbar,\mathbf{t})$ be the total descendent potential of the $D_N$ singularity. Theorem \ref{t2} gives a matrix integral for the total descendent potential in the Miwa variables for $\hbar=\rho_1^2 = - e^{2\pi\mathbf{i}/h}$, where $h=2N-2$. On the other hand, using the dilation equation and the $L_0$-constraint \cite{CM}, or the mirror symmetry from Section \ref{Mirror} and the dimension constraint for FJRW-invariants from Section \ref{sec:FJRW-inv}, one can show that to recover the $\hbar$ dependence it is enough to rescale the $t$ variables \beq\new\begin{array}{c} \nonumber \left.{\mathcal D}(\hbar, {\bf t}^{\rm SG} )= {\mathcal D}(\rho_1^2, {\bf t}^{\rm SG} ) \right|_{ t_{k,s}\mapsto (\sqrt{\hbar}/\rho_1)^{\frac{h}{h+1}\left( k+\operatorname{deg}(\phi_s)-1\right) } t_{k,s}}. \end{array}\eeq Since $\tau^{\rm CM}(\mathbf{t})= {\mathcal D}(\rho_1^2, {\bf t}^{\rm SG} ),$ where the relation between $\mathbf{t}^{\rm SG}=(t_{k,s}^{\rm SG})$ and $\mathbf{t}=(t_{a,m})$ is given by \eqref{y_1m}--\eqref{KW-BKP}, the total descendent potential can be expressed in terms of the tau-function as follows: \beq\new\begin{array}{c}\nonumber \left.
{\mathcal D}(\hbar, {\bf t}^{\rm SG} )= \tau( {\bf t} ) \right|_{ t_{a,m}\mapsto (\sqrt{\hbar}/\rho_1)^{ \frac{mh}{h_a(h+1)}-1 } t_{a,m} }. \end{array}\eeq Let us define the Miwa parametrization of the total descendent potential by \beq\new\begin{array}{c}\nonumber {\mathcal D}(\hbar, Z_1, Z_2):= \left.{\mathcal D}(\hbar, {\bf t}^{\rm SG} )\right|_{ t_{a,m} = -\frac{2}{m} \frac{\sqrt{\hbar}}{\rho_1} \hbox{Tr } Z_a^{-m}}. \end{array}\eeq Then we have ${\mathcal D}(\hbar, Z_1, Z_2)= \tau( (\sqrt{\hbar}/\rho_1)^{-\frac{1}{h+1}} Z_1, (\sqrt{\hbar}/\rho_1)^{-\frac{h}{2(h+1)}}Z_2)$. Thus the total descendent potential in the Miwa parametrization can be identified with the asymptotic expansion of the following integral: \beq\new\begin{array}{c}\nonumber \frac{ e^{\frac{\rho_1h}{\sqrt{\hbar} } \hbox{Tr }\lambda_1(Z_1)} }{ (\sqrt{\hbar}/\rho_1)^{\frac{N_1^2}{2}+\frac{N_2^2}{2}} {\mathcal N} } \int _{\rho_1^{-\frac{1}{h+1}}e^{\frac{(h+2)\pi}{2(h+1)}\mathbf{i}}{\mathcal H}_{N_1}} \widetilde{\left[d X\right]} \, e^{\frac{\rho_1}{\sqrt{\hbar} }\hbox{Tr } W(X, Z_1^h)} \int _{\rho_1^{-\frac{1}{h+1}} {\mathcal H}_{N_2}^+} \widetilde{\left[d Y\right]}\, e^{\frac{\rho_1}{\sqrt{\hbar} }\hbox{Tr } W(Y,Z_2^2)}S(X,Y), \end{array}\eeq where the above integral is obtained from the matrix integral in Theorem \ref{t2} via rescaling $Z_a\mapsto Z_a (\sqrt{\hbar}/\rho_1)^{-\frac{h}{h_a(h+1)}}$ and changing the integration variables via $X:= (\sqrt{\hbar}/\rho_1)^{-\frac{1}{h+1}} X'$ and $Y:=(\sqrt{\hbar}/\rho_1)^{-\frac{1}{h+1}} Y' .$ 3) The FJRW invariants are known to be rational numbers. Using the identification \eqref{FJRW=SG} between the SG-correlators and the FJRW-correlators and the Euler characteristic constraint (ii) for the FJRW invariants (see Section \ref{sec:FJRW-inv}), we get that the coefficients of the total descendent potential $\mathcal{D}(\hbar,\mathbf{t}^{\rm SG})$, that is, the coefficients in front of the monomials in $\hbar$ and $\mathbf{t}^{\rm SG}$, are rational numbers. Specializing $\hbar=\rho_1^2$ and using the Euler characteristic constraint again, it is easy to see that the tau-function in the Miwa parametrization satisfies the following condition: $\tau(\mathbf{i} Z_1,Z_2)\in \mathbb{Q}[\![Z_1^{-1}, Z_2^{-1}]\!]$. 4) The normalization factor $\mathcal{N}$ in Theorem \ref{t2} can be represented by the following matrix integral: \beq\new\begin{array}{c} \nonumber {\cal N}:= \frac{1}{\prod_{i,j=1}^{N_1}(\mathbf{i} z_{1,i}+\mathbf{i} z_{1,j})^\frac{1}{2}} \int _{e^{\frac{(h+2)\pi}{2(h+1)}\mathbf{i}}{\mathcal H}_{N_1}} \left[d X\right] \, e^{-\frac{\mathbf{i}}{2}\hbox{Tr } \sum_{k=0}^{h-1}XZ_1^kXZ_1^{h-k-1}} \int _{{\mathcal H}_{N_2}^+} \widetilde{\left[d Y\right]}\, e^{-\hbox{Tr } YZ_2^2 } . \end{array}\eeq 5) If $N_a=0$ for $a=1$ or $a=2$, the tau-function $\tau(Z_1,Z_2)$ reduces to a tau-function of the 1-component BKP hierarchy. These tau-functions can be described by one-matrix models: $\bullet$ $N_1=0$ \beq\new\begin{array}{c}\nonumber \tau(0,Z_2)\sim \frac{\int _{{\mathcal H}_{N_2}^+} \widetilde{\left[d Y\right]}\, e^{\hbox{Tr } W(Y,Z_2^2) } }{ \int _{{\mathcal H}_{N_2}^+} \widetilde{\left[d Y\right]}\, e^{-\hbox{Tr } YZ_2^2 } }.
\end{array}\eeq $\bullet$ $N_2=0$ \beq\new\begin{array}{c} \nonumber \tau(Z_1,0)\sim \frac{ e^{h \hbox{Tr }\lambda_1(Z_1)}\int _{e^{\frac{(h+2)\pi}{2(h+1)}\mathbf{i}}{\mathcal H}_{N_1}} \widetilde{\left[d X\right]} \, e^{\hbox{Tr } W(X,Z_1^h) } }{\frac{1}{\prod_{i,j=1}^{N_1}(\mathbf{i} z_{1,i}+\mathbf{i} z_{1,j})^\frac{1}{2}} \int _{e^{\frac{(h+2)\pi}{2(h+1)}\mathbf{i}}{\mathcal H}_{N_1}} \left[d X\right] \, e^{-\frac{\mathbf{i}}{2}\hbox{Tr } \sum_{k=0}^{h-1}XZ_1^kXZ_1^{h-k-1}} }. \end{array}\eeq To describe the corresponding point of the Sato Grassmannian for $N_2=0$ it is not enough to restrict the operators $a$ and $b$ to the first component of the 2-BKP Grassmannian. Namely, it is easy to see that in this case $b_1$ is not a Kac--Schwarz operator. However, $a_1$, $a_1 b_1$, $a_1 b_1^2-3/2 b_1$ are Kac--Schwarz operators. We claim that these three operators generate the Kac--Schwarz algebra in this case. 6) The measure $\widetilde{\left[d X\right]}$ is a natural measure for the so-called $O(1)$ matrix model introduced by Kostov \cite{Kos} and related to the Bures ensemble (see, e.g., \cite{FK}). The denominator can be simplified with an auxiliary Hermitian matrix integral \beq\new\begin{array}{c}\nonumber\label{auxil} \frac{1}{\sqrt{\det\left(X\otimes I_N+I_N\otimes X\right)}}=\int_{{\mathcal H}_N} [d A] e^{-\hbox{Tr } X A^2}. \end{array}\eeq Similarly, we can rewrite the interaction term $S(X,Y)$ as a Gaussian integral. The integrals in Theorem \ref{t2} can be rewritten using the above integral as follows: \beq\new\begin{array}{c} \nonumber \int \left[d X\right] \int \left[d A\right] e^{-\hbox{Tr }\left(XZ ^{h_i} -\frac{{\bf i}^h}{h+1}X^{h+1}+X A^2\right)}= \int \left[d X\right] \int \left[d A\right] e^{-\hbox{Tr }\left(XZ ^{h_i} +2\frac{{\mathbf{i}}^{h/2}}{\sqrt{h+1}} A X^{h/2+1}+X A^2\right)}, \end{array}\eeq where we made a change of the $A$ matrix variable, $A \mapsto A+\frac{{\mathbf{i}}^{h/2}}{\sqrt{h+1}}X^{h/2}$. This allows us to simplify the integral. In particular, for $h=2$ the integral over $X$ is Gaussian and can be computed explicitly. If we ignore the coefficients for the moment, then the potential of the last integral is of the form \beq\new\begin{array}{c}\nonumber x z^{h_i} + \frac{1}{N} a x^{N} +xa^2 =\int (z^{h_i}+W(x,a)) dx, \end{array}\eeq where $W(x,y)=x^{N-1}y+y^2$ is the potential corresponding to the singularity $D^T_N$; see Section \ref{sec:FJRW-inv}. We expect that the potentials of the matrix integrals for other FJRW theories, in particular for the $E_N$ case, can be obtained from the corresponding polynomials $W$ of $E_N^T$ in a similar way. 7) It may be interesting to consider the matrix integral from Theorem \ref{t2} for negative values of $h$, and therefore negative values of $N$. For the matrix model in the $A_N$ case they correspond, in particular, to the Brezin--Gross--Witten model, which was recently shown by Norbury \cite{N} to describe interesting intersection theory on moduli spaces of Riemann surfaces. We expect that a suitable version of the matrix integral in Theorem \ref{t2} for negative values of $h$ is also related to some interesting enumerative geometry invariants. 8) We expect that the complete set of W-constraints for the $D_N$ singularities, earlier derived in \cite{BM}, can be obtained from the invariance of the matrix integral with respect to arbitrary holomorphic changes of the integration variables.
The computation should be similar to the derivation of the Virasoro constraint for the cubic Kontsevich integral \cite{KMMMZ}, but more involved. Moreover, these constraints should also follow from the Kac--Schwarz description and the boson-fermion correspondence. It would be interesting to solve these constraints in terms of the cut-and-join operators, similarly to the $A_N$ case described in \cite{A,Z}. 9) The Kontsevich matrix integral was derived by Kontsevich \cite{K} using a diagram technique, which can be related to the Strebel differentials and a cell decomposition of the moduli space. Here one can invert the logic and expand the matrix integral, obtaining a combinatorial model in terms of ribbon graphs. The diagram interpretation of the coefficients in the generating series should lead to a combinatorial interpretation of the FJRW invariants in the $D_N$-singularity case. 10) The obtained construction can be used to derive Kontsevich-type integrals for other interesting tau-functions of BKP and multicomponent BKP hierarchies. 11) Our matrix integral should have a ``discrete'' counterpart, which is expected to also be given by a certain type of 2-BKP tau-functions with a natural neutral fermion description. In particular, correlators involving neutral fermions were also used by Harnad--van de Leur--Orlov to construct five types of tau-functions $Z_i(\mu,\mathbf{t},\overline{\mathbf{t}})$ of 2-BKP (see \cite{HvLO}, Section 3). The third type, that is, $i=3$ in the notation of Section 3 in \cite{HvLO}, in the case when $a^c(\mathbf{z})=1$, seems to be comparable to our matrix model. Note, however, that the total descendent potential and the tau-function $Z_3(\mu, \mathbf{t},\overline{\mathbf{t}})$ have a completely different nature, so they seem to be unrelated. In the case of the total descendent potential, the tau-function in the Miwa parametrization is expressed in terms of integrals \eqref{Pf-asymp} via their stationary phase asymptotic expansions. On the other hand, the tau-function $Z_3(\mu,\mathbf{t},\overline{\mathbf{t}})$ does not involve the Miwa parametrization; it is an infinite sum of multiple integrals $I_3(N,\mathbf{t},\overline{\mathbf{t}}) $ of all possible dimensions $N$. If we change to the Miwa variables, then the integrals $I_3(N,\mathbf{t},\overline{\mathbf{t}}) $ (see \cite{HvLO}, Section 3) become integrals of hypergeometric type and they are quite different from \eqref{Pf-asymp}. Nevertheless, it would be interesting to find out whether our tau-function has a relation to the models of \cite{HvLO}. Such a relation could allow us to give an alternative proof of Theorem \ref{t2} based on Givental's higher-genus reconstruction formalism. We expect that the integrals of the two types can be related by a multi-scaling limit, similar to the relation between ``discrete'' and ``continuous'' matrix integral descriptions of the $A_N$ case. 12) The Schur Q-functions constitute a natural basis for the expansion of the BKP tau-functions. It would be interesting to find the expansion of the tau-function for the simple singularity of type D. \section{Conflict of interest statement} On behalf of all authors, the corresponding author states that there is no conflict of interest.
\section{Introduction} We consider the problem of mimicking a complicated processing sequence by learning some representation of the joint probability density function (pdf) that couples the outcomes of the sequence to its inputs. In geophysics, our field of application, processing sequences are usually based on workflows that represent a combination of algorithms and user-provided information to achieve a given task. Wave equation and signal processing are classical components of the algorithms, and geological priors are often part of user-provided information \cite{Yilmaz2001}. Several geophysical processing sequences aim at removing undesired, very structured events in the geophysical data \cite{Yilmaz2001}, like the ``ghost'' events illustrated in Fig.~\ref{fig:figure1}. Learning an efficient representation that mimics such sequences can bring value, for example to take the best of various existing workflows, increase turnaround or obtain a processing guide. Deep Neural Networks (DNNs) provide a flexible tool to parameterize a function that predicts outcomes from inputs. Many recent studies have explored using DNNs to mimic geophysical processing sequences; see for instance Refs. \cite{Alwon2018, Picetti2018, Halpert2018, Si2019, Zhang2020}. To train a DNN to predict outcomes from inputs, we may consider methods inspired by the Generative Adversarial Network (GAN) framework \cite{goodfellow2014generative}, in particular Conditional GAN (CGAN) \cite{Mirza2014,Isola2017}. Indeed, CGAN can deal with joint pdfs (contrary to the original GAN formulation that deals only with single-parameter pdfs), the originality being that the discriminator becomes conditioned on the input data \cite{Ledig2017,Halpert2018,Picetti2018,Zhang2020}. However, in the common context of a deterministic processing sequence, where only one outcome is generated when the sequence is applied to an input \cite{Alwon2018, Picetti2018, Halpert2018, Si2019, Zhang2020}, using a simple $L_p$-norm-based loss for the training usually gives good results \cite{Goodfellow2016}. So, can CGAN be pertinent even in the deterministic case? It has been observed that combining CGAN with an $L_p$ loss may help to improve the results further; see e.g. Ref. \cite{Ledig2017} for natural image processing and Refs. \cite{Halpert2018,Zhang2020} for geophysical processing. Surprisingly, our Wasserstein CGAN-based trainings \cite{Fabbri2017} on deterministic geophysical processing sequences, like the ``deghosting'' (or ghost removal) sequence \cite{Wang2013}, did not produce a real improvement in our tests compared to the use of an $L_p$ loss; see Fig.~\ref{fig:figure2}. In this paper, we propose a theoretical analysis of this aspect. First, we recall why $L_p$ losses should perform well in the deterministic prediction case. Then, we point out from the Wasserstein point of view what CGAN should bring compared to an $L_p$ loss, taking the opportunity to discuss the Wasserstein CGAN (W-CGAN) foundations. Our analysis gives a first explanation of why CGAN may perform more poorly than expected, and also leads to a proposal of an adversarial way to train a content loss that we call ``Content CGAN'' (C-CGAN); it gave better results on our data, as illustrated in Fig.~\ref{fig:figure2}. For completeness, we start all our theoretical considerations from the non-deterministic prediction case, where multiple outcomes related to one input are possible, and then take the deterministic limit.
\begin{figure}[ht] \centering \includegraphics[width=1.00\linewidth]{Figures/Fig1-1.jpg} \caption{ Marine seismic data acquisition. The pressure wavefield generated by an airgun is reflected in the subsurface, then comes back to the surface and is recorded along a cable pulled by a boat. Billions of data points over thousands of square kilometers are recorded. A particularity of the geophysical data is that they consist of very structured and continuous events, corresponding to discontinuities (layers) in the subsurface. The greyscale represents the polarity of the wavefield (black: positive, white: negative). Blue-highlighted events have reflected off the water surface and are called ``ghosts''; they look like ``duplicate'' events with reverse polarity; they interfere with the other events and must be removed by a ``deghosting'' sequence for some further applications. } \label{fig:figure1} \end{figure} \begin{figure}[ht] \centering \includegraphics[width=1.0\linewidth]{Figures/Fig1-2.jpg} \caption{ The DNN training data consists of 200 randomly extracted input ``images'' of size $564 \times 551 \times 1$, representing 0.001\% of the total field data, together with corresponding output images generated by a conventional deghosting sequence. On the left, a conventional deghosting result is shown on test data (chosen ``far'' from the training data). On the right, various DNN predictions from the input test data, shown before full training convergence to highlight the differences (40 training epochs). Ghost residuals (blue arrows) can be observed using an $L_p$ loss, and adding W-CGAN does not improve the result. Adding our C-CGAN more satisfyingly removes the ghost residuals (white arrows). However, the main benefit of C-CGAN seems here to be to accelerate the training, as at full convergence (150 epochs) the difference between C-CGAN and $L_p$ becomes much smaller. } \label{fig:figure2} \end{figure} \section{Notations} \label{sec:Notations} $\mathcal{X}$ denotes the input data (image) space and $\mathcal{Y}$ the output data (image) space. ${P}_{Y,X}=P_X P_{Y|X}$ denotes the joint pdf associated with the (possibly non-deterministic) processing sequence we wish to mimic. $P_X$ is the marginal pdf that describes the distribution of the input data; realizations of the random variable $X\sim{P}_{X}$ are denoted by $\tilde{X}\in \mathcal{X}$. $P_{Y|X}$ is the conditional pdf that describes the outcomes of the processing sequence related to a given input; realizations of the random variable $Y\sim P_{Y|X}$ are denoted by $\tilde{Y}\in \mathcal{Y}$. $G_\theta^Z:\mathcal{X}\rightarrow \mathcal{Y}$ represents a prediction function parameterized by a model $\theta$, here a DNN, where the latent space random variable $Z\sim P_Z$ gives the flexibility to produce multiple outcomes related to a given input (for the non-deterministic prediction case). $\theta$ is to be optimized so that the $Z$-realizations of $G_\theta^Z(X)$ tend to mimic the realizations of $Y\sim P_{Y|X}$. $\mathbb{E}$ denotes the expectation over a specified random variable. The deterministic prediction limit can be taken by considering both: \begin{itemize}[leftmargin=1cm ,parsep=0cm,itemsep=0cm,topsep=0cm] \item The ``empirical'' joint pdf, for instance $ {P}_{{Y},{X}}(\tilde Y,\tilde X) \rightarrow \frac{1}{N_D}\sum_{i=1}^{N_D} \delta(\tilde Y-\tilde Y_i)\delta(\tilde X-\tilde X_i) $. $\{\tilde X_i,\tilde Y_i;i=1..N_D\}$ denotes a set of input and output data realizations.
\item $G_\theta^Z$ independent of $Z$, so that a unique outcome is predicted by the DNN for each input. \end{itemize} \section{Which processing sequences are suitable for the use of an \texorpdfstring{$L_p$}{Lp} loss?} \label{sec:more_det} $L_p$-based losses are defined for $p\ge 1$ by \begin{eqnarray} \label{eq:dist3} C^p_{}(G_\theta^Z) &=& \mathbb{E}_{Z\sim {P}_{Z}} \mathbb{E}_{(Y,X)\sim {P}_{{Y}, X}} ||Y-G_\theta^Z(X)||_{L_p}^p , \end{eqnarray} where the output image space $L_p$ norm is defined by \begin{eqnarray} ||\tilde{Y}-\tilde{Y}^{(2)}||_{L_p}^p = \int_{\Omega} \Big| \tilde{Y}(y)-\tilde{Y}^{(2)}(y) \Big|^p d\mu(y) , \quad \forall (\tilde{Y},\tilde{Y}^{(2)})\in \mathcal{Y}\times\mathcal{Y} . \label{eq:norm_pixels} \end{eqnarray} $\mathcal{Y}$ here represents the real $L^p(\Omega)$ space (functions with integrable moments of order $p$). Each $\tilde{Y}\in \mathcal{Y}$ represents an image indexed by the positions $y$ in a ``pixel space'' $\Omega$ that is measurable with respect to the measure $\mu$. $\Omega$ usually represents the (discrete) pixel grid and $\mu$ the counting measure. However, our considerations generalize to continuous spaces $\Omega$ by taking the Lebesgue measure for $\mu$. Training aims to minimize $C^p(G_\theta^Z)$, eq. (\ref{eq:dist3}), with respect to $\theta$. As $C^p$ measures a similarity between one realization of ${Y}$ and one realization of $G^Z_\theta(X)$, the trained prediction function $G_\theta^Z$ will tend to become independent of $Z$ and output some ``average'' of all outcomes related to one input. Indeed, we can easily compute the optimum for $p=2$: $G_\theta^Z(X)\approx\mathbb{E}_{Y\sim {P}_{{Y}|{X}}}Y$, and for $p=1$: $G_\theta^Z(X)\approx\mathbb{M}_{Y\sim {P}_{{Y}|{X}}}Y$ where $\mathbb{M}$ denotes the median (see Ref. \cite{Goodfellow2016} section 6.2.1.2). In the non-deterministic prediction case, if the multiple outcomes are related to structured events, training with an $L_p$ loss is obviously not suitable as it would tend to produce blurry predictions (due to the ``averaging''). However, if the multiple outcomes are related to zero-``average'' noise, an $L_p$ loss is suitable and would even tend to produce denoised predictions. Of course, an $L_p$ loss is also suitable for the deterministic prediction case; a denoising effect can still occur if each of the single outcomes is affected by zero-``average'' noise. Note that $p=1$ (the median) is more robust to outliers than $p=2$ but harder to train. $p=1.5$ has been chosen in Fig.~\ref{fig:figure2}, representing a current compromise in geophysics \cite{Yilmaz2001}. This being established, what could CGAN bring compared to an $L_p$ loss in the deterministic case? Let us first discuss the CGAN foundations in the general non-deterministic case, from the Wasserstein point of view and complementarily to Ref. \cite{Fabbri2017}, and then analyze the deterministic limit. \section{Wasserstein CGAN for processing sequences} \label{sec:Wasserstein} \subsection{Non-deterministic prediction case} \label{sec:PAN} For notational purposes, let us consider the ``parameterized'' conditional pdf ${P}_{Y|X}^{(par)}$ whose realizations correspond to those of $G^Z_\theta(X)$. In other words, for any function $D$, ${P}_{Y|X}^{(par)}$ is defined so that \begin{eqnarray} \label{eq:pdf_G} \mathbb{E}_{Y^{(par)}\sim P_{Y|X}^{(par)}} D(Y^{(par)}) = \mathbb{E}_{Z\sim P_Z} D(G^Z_\theta(X)) , \end{eqnarray} where we keep the superscript $^{(par)}$ to make explicit which random variable is related to the parameterized pdf.
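In code terms, eq. (\ref{eq:pdf_G}) simply states that expectations under ${P}_{Y|X}^{(par)}$ are Monte Carlo averages over the latent variable $Z$; a minimal sketch, assuming PyTorch (the latent dimension and the generator interface are illustrative assumptions):
\begin{verbatim}
import torch

def expectation_under_p_par(D, G_theta, X, n_samples=128, latent_dim=64):
    # Monte Carlo version of eq. (pdf_G): average D(G_theta^Z(X)) over Z~P_Z
    vals = []
    for _ in range(n_samples):
        Z = torch.randn(X.shape[0], latent_dim)  # one latent draw per input
        vals.append(D(G_theta(Z, X)))
    return torch.stack(vals).mean(dim=0)
\end{verbatim}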
Note that imposing a Gaussian parameterization on ${P}^{(par)}_{{Y}|{X}}$ and using cross-entropy (XE) as a loss leads to eq. (\ref{eq:dist3}), as recalled in Appendix \ref{app:XE}. This allows us to understand from another point of view the conclusions of \S \ref{sec:more_det}: $L_p$ losses are suited when the outcomes follow Gaussian statistics, and are not suited when the Gaussian assumption is too simplistic (as is often the case with structured outcomes). We now wish to define a similarity measure between the two joint pdfs ${P}_{Y,X}$ and ${P}_{Y,X}^{(par)}=P_X{P}_{Y|X}^{(par)}$, without having to consider any parameterization like the Gaussian one. XE is not adapted, and Wasserstein distances \cite{Villani2003} seem like a natural choice. We propose the following Wasserstein-based formulation suited to joint pdfs with the same marginal ($p\ge 1$ and $r\ge 1$): \begin{eqnarray} \label{eq:W1_pred} JW_{L_p}({P}_{Y,X},P_{Y,X}^{(par)}) &=& \mathbb{E}_{X\sim {P}_{X}} W_{L_p}({P}_{Y|X},P_{Y|X}^{(par)}) \\ W_{L_p}({P}_{Y|X},P_{Y|X}^{(par)}) &=& \Big( \inf_{{\Pi}^X_{Y,Y^{(par)}}} \mathbb{E}_{({Y}, Y^{(par)})\sim {\Pi}^X_{{Y}, Y^{(par)}}} ||{Y}- Y^{(par)}||_{L_p}^r \Big)^\frac{1}{r} \nonumber . \end{eqnarray} ${P}_{Y|X}$ and $ {P}_{Y|X}^{(par)}$ are considered as single-parameter pdfs for each realization of $X$, and $W_{L_p}$ represents a $r$-Wasserstein distance between ${P}_{Y|X}$ and $P_{Y|X}^{(par)}$ for any $L_p$-norm choice in the output image space \cite{Villani2003}. The infimum is taken over all joint pdfs ${\Pi}^X_{{Y}, Y^{(par)}}$ with marginals ${P}_{Y|X}$ and $P_{Y|X}^{(par)}$, $X$ being considered as a parameter. Then, the expectation over all realizations of $X$ is taken to obtain $JW_{L_p}$, representing a distance between the joint pdfs ${P}_{Y,X}$ and $P_{Y,X}^{(par)}$, as demonstrated in Appendix \ref{app:Wasserstein3}. Switching to the dual formulation and taking $r=1$ allows us to simplify the second line of eq. (\ref{eq:W1_pred}) into the Kantorovitch-Rubinstein (KR) formulation (see \cite{Villani2003} section 1.2 and \cite{arjovsky2017wasserstein}) \begin{eqnarray} \label{eq:W2_pred} W_{L_p}({P}_{Y|X},P_{Y|X}^{(par)}) &=& \sup_{||D_X||^{ }_{Lip_{L_p}}\le 1} \Big[ \mathbb{E}_{Y\sim P_{Y |X}} D_X(Y) - \mathbb{E}_{Y^{(par)}\sim P_{Y|X}^{(par)}} D_X(Y^{(par)}) \Big] . \end{eqnarray} Note that the ``discriminator'' $D_X:\mathcal{Y}\rightarrow \mathbb{R}$ is parameterized by the input data and is constrained to be 1-Lipschitz for the $L_p$-norm, i.e. $||D_X||^{ }_{Lip_{L_p}}\le 1$. The corresponding ``Lipschitz norm'' is defined by $ ||D_X||^{ }_{Lip_{L_p}}=\sup_{\tilde Y\ne \tilde Y^{(2)}} \frac{|D_X(\tilde Y)-D_X(\tilde Y^{(2)})|}{||\tilde Y- \tilde Y^{(2)} ||^{ }_{L_p}}, \forall (\tilde Y,\tilde Y^{(2)}) \in \mathcal{Y}\times \mathcal{Y}$ \cite{Villani2003,arjovsky2017wasserstein}. As demonstrated in Appendix \ref{app:Wasserstein1}, if $D_X(\tilde Y)$ is differentiable with respect to $\tilde Y$, the Lipschitz norm simplifies into \begin{eqnarray} \label{eq:LipX} ||D_X||^{ }_{Lip_{L_p}} = \sup_{\tilde Y\in \mathcal{Y}} \Big|\Big| \frac{\partial D_X(\tilde Y)}{\partial \tilde Y} \Big|\Big|_{L_q} \quad\text{with}\quad 1/p+1/q=1 . \end{eqnarray} Inserting eq. (\ref{eq:pdf_G}) into eq. (\ref{eq:W2_pred}) to come back to $G^Z_\theta$, we finally obtain the following more tractable form \begin{eqnarray} \label{eq:W2_pred2} JW_{L_p}(G^Z_\theta) = \mathbb{E}_{X\sim {P}_{X}} \sup_{||D_X||^{ }_{Lip_{L_p}}\le 1} \Big[ \mathbb{E}_{Y\sim P_{Y |X}} D_X(Y) - \mathbb{E}_{Z\sim P_Z} D_X(G^Z_\theta(X)) \Big] .
\end{eqnarray} Eqs. (\ref{eq:LipX}) and (\ref{eq:W2_pred2}) provide an adversarial training framework for predictive tasks: $JW_{L_p}$ contains a supremum principle on $D_X$, but the result is to be minimized with respect to $\theta$ during the training. The discriminator's parameterization by the input data $\tilde X\in \mathcal{X}$ establishes the relation with CGAN \cite{Mirza2014,Isola2017} and its Wasserstein counterpart \cite{Fabbri2017}, which is known. However, we underline some formal points that were not discussed in previous works to our knowledge: \begin{itemize}[leftmargin=1cm ,parsep=0cm,itemsep=0cm,topsep=0cm] \item The discriminator's dependency on the input data can possibly be strong and discontinuous, leading in the general case to one different discriminator per input data point in eq. (\ref{eq:W2_pred2}). Of course, this would be inefficient numerically and is usually not necessary (especially when images lie in low-dimensional manifolds and do not vary rapidly). However, the considerations in this section lead to some clarification on the possibility of using a different discriminator architecture per group of input data with similar properties within W-CGAN if needed. \item Eq. (\ref{eq:LipX}) provides the generalization to $L_{p\ne 2}$-norms of the derivative-based Lipschitz constraint of Ref. \cite{Gulrajani2017}. \item We established that $JW_{L_p}$ represents a distance between two joint pdfs with the same marginal. \item Appendix \ref{app:Wasserstein3b} establishes the link with the scheme obtained starting from a Wasserstein distance between joint pdfs without the same marginals \cite{Courty2017}. \end{itemize} We mentioned in \S \ref{sec:more_det} that training with an $L_p$ loss, eq. (\ref{eq:dist3}), compares one realization of $P_{Y |X}$ to one $Z$-realization of $G^Z_\theta(X)$, which leads to ``averaging''. Training with $JW_{L_p}$, eq. (\ref{eq:W2_pred2}), compares all realizations of $P_{Y |X}$ to all $Z$-realizations of $G^Z_\theta(X)$, i.e. $G^Z_\theta$ can learn to mimic the realizations of $P_{Y |X}$ and no ``averaging'' occurs. This fundamental difference would help to produce unblurred results in the non-deterministic case, when multiple outcomes are related to structured events. In the deterministic prediction case, however, what $JW_{L_p}$ would bring compared to an $L_p$ loss is unclear. This is what we discuss now. \subsection{Advantages of Wasserstein CGAN in the deterministic case?} \label{sec:PAN_det} To take the deterministic prediction limit, we use the method mentioned in \S \ref{sec:Notations}. Eq. (\ref{eq:W2_pred2}) becomes \begin{eqnarray} \label{eq:W2_pred2_determ} JW_{L_p}(G_\theta) = \sum_{i=1}^{N_D} \sup_{||D_{\tilde X_i}||^{ }_{Lip_{L_p}}\le 1} \Big[ D_{\tilde X_i}(\tilde Y_i) - D_{\tilde X_i}(G_\theta(\tilde X_i)) \Big] , \end{eqnarray} where $\tilde X_i$ and $\tilde Y_i$ denote input and output data pairs. We use a discriminator architecture of the form \begin{eqnarray} \label{eq:discrim_architec} && D_{\tilde X_i}(\tilde Y) = \int_{\Omega} F(\tilde X_i,\tilde Y)(y) d\mu(y) , \end{eqnarray} where $F(\tilde X_i,\tilde Y)\in\mathcal{Y}$ is parameterized by a convolutional DNN without striding and an additional last ``layer'' simply represents a sum over the ``pixels''. The last layer is equivalent to a global average pooling \cite{Isola2017, Radford2016UnsupervisedRL} and has the advantage of making the discriminator DNN model independent of the size of the data (i.e. the model can be used for any data size).
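As an illustration of eq. (\ref{eq:discrim_architec}), a minimal sketch of such a size-independent discriminator, assuming PyTorch; conditioning $F$ on $\tilde X$ by channel concatenation is the simple choice we also adopt in \S \ref{sec:CCGAN_results}, and the channel width is an illustrative assumption:
\begin{verbatim}
import torch
import torch.nn as nn

class Discriminator(nn.Module):
    # D_X(Y) = sum over pixels of F(X, Y), with F a convolutional
    # DNN without striding, so the model works for any image size
    def __init__(self, width=32):
        super().__init__()
        self.F = nn.Sequential(
            nn.Conv2d(2, width, 3, padding=1),   # 2 channels: concat(X, Y)
            nn.LeakyReLU(0.2),
            nn.Conv2d(width, width, 3, padding=1),
            nn.LeakyReLU(0.2),
            nn.Conv2d(width, 1, 3, padding=1),   # back to the image space
        )

    def forward(self, X, Y):
        FXY = self.F(torch.cat([X, Y], dim=1))   # F(X, Y) lies in Y-space
        return FXY.sum(dim=(1, 2, 3))            # the last "layer": a sum
\end{verbatim}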
This architecture allows for an interpretation of what the discriminator learns, since $F(\tilde X_i,\tilde Y)$ lies in the output image space. Also, in our tests on geophysical processing tasks, it led to the highest Wasserstein distance estimates (or supremum values) when training the discriminator. So it is the architecture we choose. Just to gain insight, we first consider a linear parameterization $F(\tilde X_i,\tilde Y)(y)=\alpha(\tilde X_i)(y)\tilde Y(y)$, where $\alpha$ is a function of $\tilde X_i$ parameterized by a DNN. Inserting this in eqs. (\ref{eq:LipX}) and (\ref{eq:W2_pred2_determ}), we obtain\footnote{ We have $JW_{L_p} \rightarrow \sum_{i=1}^{N_D} \sup_{||\alpha(\tilde X_i)||_{L_q}\le 1} \int_{\Omega} \alpha(\tilde X_i)(y) \Big( \tilde Y_i(y) - G_\theta(\tilde X_i)(y) \Big) d\mu(y)$, which necessarily leads to the supremum argument $\alpha^{sup}(\tilde X_i)(y)=|\alpha^{sup}(\tilde X_i)(y)|\times\text{sign}\Big(\tilde Y_i(y)-G_\theta(\tilde X_i)(y)\Big)$ and a saturation of the constraint. Note that the dependency of $\alpha$ on $\tilde X_i$ is sufficient to define the sign, as only one $\tilde Y_i$ is associated with $\tilde X_i$ in the deterministic case. \label{eq:foot-lin} } \begin{eqnarray} \label{eq:W2_ours_pred3-simple} \widehat{JW}_{L_p}(G_\theta) = \sum_{i=1}^{N_D} \sup_{||\alpha(\tilde X_i)||_{L_q}= 1} \int_{\Omega} |\alpha(\tilde X_i)(y)|\times \Big| \tilde Y_i(y) - G_\theta(\tilde X_i)(y) \Big| d\mu(y) . \end{eqnarray} In the deterministic prediction limit, the $L_p$-based loss defined by eqs. (\ref{eq:dist3}) and (\ref{eq:norm_pixels}) becomes \begin{eqnarray} C^p(G_\theta) &=& \sum_{i=1}^{N_D} \int_{\Omega} \Big| \tilde Y_i(y) - G_\theta(\tilde X_i)(y) \Big|^p d\mu(y) . \label{eq:Lp_emppdf2} \end{eqnarray} Compared to $C^1$, i.e. eq. (\ref{eq:Lp_emppdf2}) with $p=1$, we observe that $\widehat{JW}_{L_p}$ adds learnt positive weights $|\alpha(\tilde X_i)|$ with unit $L_q$ norm. In other words, with a linear parameterization for $F$ in the deterministic case, W-CGAN ``only'' adds automatic learning of optimal data-dependent variance-like weights compared to the $L_1$ loss. In this simple case, these weights can be demonstrated to lead to\footnote{ By definition of the ``dual norm'' (\cite{Brezis1983} chapter I, \cite{Rudin1991} chapter IV), applied to the first equation in footnote \ref{eq:foot-lin}. }: $\widehat{JW}_{L_p}=( C^p )^{1/p}$, i.e. to a $\widehat{JW}_{L_p}$ that is equivalent to the $L_p$-based loss. The takeaway is that W-CGAN, eq. (\ref{eq:W2_pred2_determ}), should ``at least'' learn an $L_p$-based loss or reweighting. What about more involved parameterizations for $F$ in eq. (\ref{eq:discrim_architec}), using for instance a convolutional DNN and non-linear activations? This will produce more involved transformations of $\tilde Y_i$ and $G_\theta(\tilde X_i)$ than a simple reweighting. Indeed, $\tilde Y_i\rightarrow F(\tilde X_i,\tilde Y_i)$ and $G_\theta(\tilde X_i)\rightarrow F(\tilde X_i,G_\theta(\tilde X_i))$ would then correspond to a postprocessing of the outputs. The supremum principle in eq. (\ref{eq:W2_pred2_determ}) allows learning the postprocessing that makes $JW_{L_p}$ the most sensitive to the differences between the prediction and the output data, i.e. one that should concentrate on the least matched events.
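In practice, the Lipschitz constraint of eq. (\ref{eq:LipX}) can be imposed softly, generalizing the gradient penalty of Ref. \cite{Gulrajani2017} to the $L_q$ norm; a minimal sketch, assuming PyTorch (for brevity the penalty is evaluated at the samples themselves rather than along interpolations, and the weight is a hyper-parameter):
\begin{verbatim}
import torch

def lipschitz_penalty(D, X, Y, q=1.01, weight=10.0):
    # penalize || dD_X(Y)/dY ||_{L_q} away from 1, cf. eq. (LipX);
    # q close to 1 corresponds to a large p in the image-space norm
    Y = Y.detach().requires_grad_(True)
    out = D(X, Y).sum()
    (grad,) = torch.autograd.grad(out, Y, create_graph=True)
    qnorm = grad.abs().pow(q).sum(dim=(1, 2, 3)).pow(1.0 / q)
    return weight * ((qnorm - 1.0) ** 2).mean()
\end{verbatim}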
In configurations where adding such a postprocessing would not affect the relative ``positions'' of most of the minima in the loss valley, the main effect should be to improve the training convergence and deal better with the amplitude and noise present in the output data. These situations should tend to occur, among other cases, when the postprocessings do not dramatically affect the gross data amplitudes hierarchy. This is a first element for interpreting why W-CGAN trainings on deterministic geophysical processing sequences may not produce a real improvement. The case of Fig.~\ref{fig:figure2} possibly falls in this category, which will be further discussed in \S \ref{sec:CCGAN_results}. Another element is that the method contains free parameters. Two of these are related to the Lipschitz constraint: $q$ (or $p$) in eq. (\ref{eq:LipX}), for which $q\approx 1$ represented a good value, and a weight to impose the constraint using for instance the method of Ref.~\cite{Gulrajani2017}. Also, like in Refs.~\cite{Ledig2017,Halpert2018,Zhang2020}, we observed $JW_{L_p}$ has to be combined with an $L_{p'}$ loss to give correct results, thus an additional weight is needed. We chose $p'=1.5$ and tuned the weight so that $JW_{L_p}$ and $L_{1.5}$ losses contribute equally, to obtain the results in Fig.~\ref{fig:figure2}. As these hyper-parameters are data dependent, this may explain why W-CGAN did not produce a systematic improvement in our tests. The question of how best to tune the parameters for any kind of data is important but goes beyond the scope of this paper and is left for a future study. \subsection{Content CGAN: An adversarial way to train a content loss} \label{sec:CCGAN} Another difficulty with W-CGAN is that it is not feasible to resolve exactly the supremum principle in $JW_{L_p}$, eq. (\ref{eq:W2_pred2_determ}), at each iteration. This can lead to slowness in the training and possibly sometimes to ``localized'' instabilities, which may contribute to explaining some poor results. We propose to tackle a part of this specific problem by a heuristic reformulation of eq. (\ref{eq:W2_pred2_determ}). Note that the linearized case result of \S \ref{sec:PAN_det} can equivalently be recovered by first imposing the following form on $F$ \begin{eqnarray} \label{eq:W2_content_000} F(\tilde X_i,\tilde Y)(y)\rightarrow F(\tilde X_i,\tilde Y)(y)\times\text{sign}\Big( F(\tilde X_i,\tilde Y_i)(y)-F(\tilde X_i,G_\theta(\tilde X_i))(y)\Big) , \end{eqnarray} and then making the linearized approximation (recall footnote \ref{eq:foot-lin}; note that the argument of the $\text{sign}$ depends on $\tilde Y_i$, not on $\tilde Y$, which is important for the Lipschitz norm, eq. (\ref{eq:LipX})). Keeping this form for any non-linear parameterization of $F(\tilde X_i,\tilde Y)$ and inserting eq. (\ref{eq:W2_content_000}) in eq. (\ref{eq:W2_pred2_determ}), we obtain \begin{eqnarray} \label{eq:W2_content} \overline{JW}_{L_p}(G_\theta) = \sum_{i=1}^{N_D} \sup_{ ||D_{\tilde X_i}||^{ }_{Lip_{L_p}} \le 1 } \int_{\Omega} \Big| F(\tilde X_i,\tilde Y_i)(y) - F(\tilde X_i,G_\theta(\tilde X_i))(y) \Big| d\mu(y) , \end{eqnarray} where $D_{\tilde X_i}$ is defined through eqs. (\ref{eq:discrim_architec}) and (\ref{eq:W2_content_000}). Eq. (\ref{eq:W2_content}) looks like an $L_1$-based content loss \cite{Ledig2017} that is adversarially trained, simultaneously with the $G_\theta$ training.
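In code terms, eq. (\ref{eq:W2_content}) is an $L_1$ distance between learnt postprocessings of the output data and of the prediction; a minimal sketch, assuming PyTorch (here $F$ is a two-argument callable returning the image-space field of eq. (\ref{eq:discrim_architec}); it is trained to maximize this quantity under the Lipschitz constraint while $G_\theta$ is trained to minimize it):
\begin{verbatim}
import torch

def ccgan_loss(F, X, Y_true, Y_pred):
    # |F(X, Y_true) - F(X, Y_pred)| summed over pixels, cf. eq. (W2_content);
    # positive by construction, even if the supremum is imperfectly resolved
    diff = (F(X, Y_true) - F(X, Y_pred)).abs()
    return diff.sum(dim=(1, 2, 3)).mean()
\end{verbatim}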
A good content loss should tend to maximize the differences between a prediction that has been ``postprocessed'' (through the DNN $F$) and the corresponding similarly postprocessed output data. This is achieved by the supremum principle in eq. (\ref{eq:W2_content}), where the Lipschitz constraint provides robustness (to avoid singularities...). This heuristic reasoning leads to our ``Content CGAN'' (C-CGAN) loss. Among other advantages, the C-CGAN loss always remains positive, even if the supremum principle is not well resolved at some iteration, whereas the W-CGAN loss, eq. (\ref{eq:W2_pred2_determ}), might not. \subsection{DNN architectures and results} \label{sec:CCGAN_results} In the results presented in Fig.~\ref{fig:figure2}, the DNN inputs a ghosted image $\tilde X$ and predicts a deghosted image $G_\theta(\tilde X)$. The prediction function $G_\theta$ architecture is Unet-inspired \cite{Ronneberger2015}. The $F$ function architecture, which defines the discriminator through eq. (\ref{eq:discrim_architec}), is Denet-inspired \cite{Remez2017}. For the $\tilde X$-dependency of $F$, we found it sufficient to concatenate $\tilde X$ to the first Denet layer; however, this would certainly deserve a specific study for the reason underlined in \S \ref{sec:more_det}. As mentioned in \S \ref{sec:PAN_det}, we used $q\approx 1$ in the Lipschitz norm, eq. (\ref{eq:LipX}), and combined W-CGAN (or C-CGAN) with the $L_{1.5}$ loss, so that both contribute equally. Fig.~\ref{fig:figure2} shows a result at 40 epochs, before full convergence (150 epochs). C-CGAN gave better results than W-CGAN or the $L_{1.5}$ loss alone on our data. However, the main benefit of C-CGAN here is to accelerate the training. At full convergence we observed that the differences between C-CGAN and $L_{1.5}$ become much smaller, possibly for the reason outlined in \S \ref{sec:PAN_det}. Fig.~\ref{fig:figure3} proposes a way to demystify what the discriminator has learnt and interpret the interest of C-CGAN. First, through $F(\tilde X_i,\tilde Y_i)(y)$, defined in the output image space $\mathcal{Y}$. The figure shows that C-CGAN learns to concentrate on the most important events, i.e. around the ghosts; this is satisfying and helps explain why C-CGAN achieves better deghosting more rapidly. Second, we consider the so-called ``adjoint-input'' $\frac{\partial LOSS(G)}{\partial G(y)}\big|_{G=G_\theta(\tilde X_i)}$, which is back-propagated in $G_\theta$ to compute how to update $\theta$ \cite{Lecun1988}. The adjoint-input is also defined in the $\mathcal{Y}$ space and allows for a visualization of the areas where the events should be better predicted after the update. Fig.~\ref{fig:figure3} shows that the $L_{1.5}$ loss adjoint-input tends to ``put the weight'' on all areas, regardless of their relative importance, and thus does not concentrate specifically on the ghost areas. The W-CGAN adjoint-input also tends to concentrate on many areas, helping to explain why it brought no improvement in Fig.~\ref{fig:figure2}. The texture of the latter adjoint-input may seem atypical; we verified that the optimization of eq. (\ref{eq:W2_pred2_determ}) and of $G_\theta$ converged without instability, but further analysis regarding the hyper-parameters mentioned in \S \ref{sec:PAN_det} is underway. The C-CGAN adjoint-input, however, learned to concentrate more specifically around the ghost areas, helping to explain why it converges more rapidly towards an acceptable solution in Fig.~\ref{fig:figure2}.
Note, however, that the C-CGAN adjoint-input does not exhibit a very strong change in the amplitudes hierarchy compared to the $L_{1.5}$ adjoint-input, helping to explain why the main effect of C-CGAN here would be to improve the training convergence (recall \S \ref{sec:PAN_det}). \begin{figure}[h] \centering \includegraphics[width=0.96\linewidth]{Figures/Fig3.jpg} \caption{ Deghosting task and the same data as in Fig.~\ref{fig:figure2}. This figure illustrates what the discriminator learns from the $F(\tilde X,\tilde Y)$ (top) and adjoint-input (bottom) points of view. We observe that W-CGAN and $L_{1.5}$ tend to ``put a weight'' on many areas, regardless of their relative importance, while C-CGAN tends to learn to concentrate more on the important areas for deghosting, i.e. around the blue arrows. } \label{fig:figure3} \end{figure} \section{Conclusion and future work} We proposed a theoretical analysis of the CGAN framework for predictive tasks. We took the opportunity to establish the CGAN foundations from the Wasserstein point of view, and pointed out what CGAN should bring compared to an $L_p$ loss in the deterministic prediction case. We discussed that W-CGAN may perform more poorly than expected when the corresponding data-space ``postprocessings'' would not affect the relative ``positions'' of most of the minima in the loss valley (for instance when they do not dramatically affect the gross data amplitudes hierarchy), or due to a difficulty with automatically tuning the corresponding hyper-parameters. Another difficulty is that the W-CGAN loss represents a distance only if the supremum principle is perfectly resolved numerically; our C-CGAN formalism helps to keep positivity and gave better results on our data. This first analysis is still to be confirmed by further studies. It is certainly data dependent (geophysical data being specific, with very structured and continuous events). Among other things, understanding ``physically'' how to tune the CGAN hyperparameters for any kind of data is important in the deterministic as well as in the non-deterministic prediction cases; this will be the subject of a future study. \section*{Broader Impact} Learning an efficient representation that mimics involved processing sequences can bring value in a general industrial context, not only in geophysics. The goal can be to take the best of various existing workflows, increase turnaround or obtain a processing guide. \begin{ack} The authors are grateful to CGG and Lundin for the permission to publish this work. The authors are indebted to Nicolas Salaun, Samuel Gray, Thibaut Allemand, Gilles Lambar\'e, Mathieu Chambefort and Stephan Cl\'emen\c con for enlightening discussions and collaboration. This work was fully funded by CGG. \end{ack} \small \bibliographystyle{ieeetr}
\section{Introduction} Quantum channel discrimination (QCD)~\cite{kitaev_quantum_1997,acin_optimal_2001,acin_statistical_2001,sacchi_entanglement_2005,sacchi_optimal_2005,wang_unambiguous_2006} is an important task in quantum computing~\cite{shor_polynomial-time_1997,lloyd_quantum_1999} and quantum communication~\cite{bennett_quantum_2014,bennett_entanglement-assisted_2002,shi_practical_2020}. Quantum channels model the input and output relation of quantum states in physical processes~\cite{nielsen_quantum_2011,hayashi_quantum_2006,holevo_probabilistic_2011}. Various applications in quantum sensing~\cite{pirandola_advances_2018} can be reduced to QCD problems. An important case of QCD is finding a target channel within a sequence of background channels. In this case, we have a sequence of channels and know that all but one of them (the background channels) are identical, whilst one of them (the target channel) is different. The goal is to figure out which channel is the target channel, by probing the sequence of channels with quantum states a set number of times. This task is known as channel-position finding (CPF) \cite{zhuang_entanglement-enhanced_2020}. It is important to note that there are relevant scenarios in quantum sensing where the transmissivity stays the same for all channels while the noise background differs. This is a scenario with a passive signature, meaning that different levels of noise can be detected at the output of the channels even in the absence of input signals. In this setting, the model of CPF becomes a problem of environment localization, where the aim is to optimally identify the position of a different (target) environment with respect to standard background environments affecting an ensemble of modes. Motivated by this observation, we study CPF among bosonic Gaussian channels~\cite{weedbrook_gaussian_2012} with the same transmissivity (or gain) but different environments, establish the ultimate performance of this problem, and identify the regime of parameters where we can have quantum advantage over the classical benchmark based on coherent states. More precisely, we use channel simulation and stretching techniques~\cite{pirandola_fundamental_2017,pirandola_ultimate_2017,pirandola_fundamental_2019} to find the minimum fidelity between the outputs of two Gaussian channels that have the same transmissivity, $\tau$, but give rise to different induced noises, $\nu$. This minimization is carried out over all quantum inputs. We then use this minimum fidelity to find upper and lower bounds on the minimum discrimination error in finding the position of a target channel within a sequence of channels, for a fixed number of probes sent through each channel of the sequence. These bounds are on the minimum discrimination error for all possible adaptive, quantum protocols. We also find the minimum fidelity between two channel outputs (for channels with the same value of $\tau$ but different values of $\nu$) where the minimization is carried out over classical input states (mixtures of coherent states). We use this fidelity to find a lower bound on the minimum discrimination error for all possible classical protocols. Our quantum and classical bounds hold for all phase-insensitive, Gaussian channels (thermal loss channels, thermal amplifier channels and additive noise channels)~\cite{holevo_one-mode_2007,weedbrook_gaussian_2012}. By comparing these bounds, we are able to prove quantum advantage for the general problem of environment localization.
In particular, we find a condition on the sequence of channels that guarantees quantum advantage if the number of probes sent through the sequence of channels is large enough. Furthermore, we also design an explicit protocol, based on entangled states, photon counting and maximum-likelihood estimation which is able to beat any classical strategy. We apply our bounds (and the explicit protocol) to a number of discrimination tasks. We consider thermal imaging to find a warmer pixel in a colder background, eavesdropper localization to find the channel that an eavesdropper is interfering with and the problem of finding the least noisy frequency in a multi-mode cable. \section{Results} Our main results are upper and lower bounds on the error probability of environment localization. To establish the bounds, in Section~\ref{sec:channel simulation}, we present a method of channel simulation which allows the reduction of arbitrary adaptive protocols to quantum operations on a sequence of Choi states. From there, fidelity-based bounds can be derived for the error probability, which are calculated explicitly for Gaussian channels in Section~\ref{sec:fidelity}. In Section~\ref{sec:classical limits}, we use similar techniques to bound the error of classical protocols. In Section~\ref{sec:quantum advantage}, we then establish a region in which we can analytically prove that the task shows a quantum advantage. We present a concrete receiver design and discrimination protocol in Section~\ref{sec: bounds protocols} and thereby provide numerical bounds on the error, in both the classical and quantum cases. These bounds are often tighter than our fidelity-based, analytic bounds, and so we are often able to demonstrate quantum advantage at a lower number of probes than is required for our analytic bounds. Finally, we apply our bounds to several examples, in Section~\ref{sec:applications}, and demonstrate the quantum advantage. \subsection{Channel simulation} \label{sec:channel simulation} Consider a sequence of $m$ one-mode, phase-insensitive, Gaussian channels, where $m-1$ of the channels are identical ``background'' channels and one of the channels is a target channel. The target channel has the same transmissivity, $\tau$, as the background channels, but a different induced noise, $\nu$ (note that we consider a generalized transmissivity which may take values between zero and infinity). Suppose we want to identify the target channel and can do so by probing the sequence of channels using some adaptive protocol that involves sending $M$ transmissions through the sequence of channels (each transmission consists of sending a one-mode state through every channel in the sequence). We do not impose any energy bound on the transmissions. We would like to bound the minimum probability of error in identifying the target channel, with the minimization carried out over all possible adaptive protocols. The structure of the most general adaptive protocol can be considered to be a quantum comb \cite{laurenza_channel_2018,chiribella_quantum_2008}. A schematic of a possible setup is given in Fig.~\ref{fig:setup}, which shows a sequence of three thermal loss channels with the same transmissivity, $\tau$. Two of these channels are background channels (with environmental noise $\bar{n}_B$) and one of the channels is the target channel (with environmental noise $\bar{n}_T$). 
At each channel use, we are allowed to send an input state through the sequence of channels, and this input state may be dependent on the previous channel outputs. Each channel is represented by a beamsplitter interaction with a thermal mode, and all of the beamsplitters have the same transmissivity, but the thermal mode with which the input modes interact is different for the target and background channels. \begin{figure}[ptb] \centering \includegraphics[width=0.7\linewidth]{setup_diagram}\caption{An example of the setup in the thermal loss case. Each thermal loss channel can be represented by a beamsplitter that mixes the input mode with an environmental thermal state. Thermal loss channels are parametrized by the transmissivity of the beamsplitter and the average photon number, $\bar{n}$, of the thermal state. We consider a sequence of thermal loss channels for which the beamsplitters all have the same transmissivity, $\tau$. One of the channels has a thermal state with a different average number of photons from the others; this is the target channel. The average number of photons in the thermal state of the target channel is denoted $\bar{n}_T$, whilst the average number of photons in the thermal state of the background channel is denoted $\bar{n}_B$. The task is to locate the target channel; in the case of this setup, it is the middle channel.} \label{fig:setup} \end{figure} Any pair of one-mode, phase-insensitive, Gaussian channels with the same transmissivity is jointly teleportation covariant, using the Braunstein-Kimble (BK) protocol \cite{braunstein_teleportation_1998}. This means that both channels can be simulated using the same teleportation protocol, but with different resource states. In fact, using the BK protocol, a valid resource state for channel simulation is the asymptotic Choi matrix of the channel~\cite{choi_completely_1975,jamiolkowski_linear_1972,belavkin_radon-nikodym_1986}. The Choi matrix of a channel is the output state when part of a maximally entangled state is passed through the channel. For bosonic systems, the maximally entangled state $\Phi$ is the limit for infinite squeezing of a sequence of two-mode squeezed vacuum (TMSV) states~\cite{weedbrook_gaussian_2012} $\Phi^{a}$, i.e., $\Phi=\lim_{a} \Phi^{a}$, where $a$ is the level of squeezing and each $\Phi^{a}$ has covariance matrix (CM) \begin{align} V_{\mathrm{in}}^a=\begin{pmatrix} a \mathbb{I} &\sqrt{a^2-\frac{1}{4}}\mathbb{Z}\\ \sqrt{a^2-\frac{1}{4}}\mathbb{Z} &a \mathbb{I} \end{pmatrix}.\label{eq:TMSV CM} \end{align} Therefore, the Choi matrix $\sigma_{\mathcal{E}}$ of a bosonic channel $\mathcal{E}$ is defined as the infinite-squeezing limit of a sequence of states $\{ \sigma^{a}_{\mathcal{E}} \}$ where the generic element is given by a TMSV state partially propagated through the channel, i.e., $\sigma^{a}_{\mathcal{E}}:=\mathcal{I}\otimes \mathcal{E}(\Phi^{a})$. In the following, when we work with an asymptotic Choi matrix $\sigma_{\mathcal{E}}$ we implicitly mean that this is the limit of an underlying `Choi sequence' $\{ \sigma^{a}_{\mathcal{E}} \}$. Correspondingly, the teleportation simulation over $\sigma_{\mathcal{E}}$ is meant to be an asymptotic operation, where the simulation is defined over the Choi sequence $\{ \sigma^{a}_{\mathcal{E}} \}$ after which the limit for infinite squeezing is taken~\cite{pirandola_fundamental_2017}. Note that Gaussian states, which all elements of the sequence are, are completely described by their CM and their first moments vector. 
For states in the Choi sequence, all elements of the first moments vector are 0. The problem of CPF can be reduced to state discrimination between the $m$ possible outputs of the adaptive protocol used (with each outcome corresponding to a different target channel position). By bounding the fidelity between the different output states, we can find both upper and lower bounds for the minimum error probability $p_{err}$ (optimized over all adaptive protocols) of state discrimination. The lower bound on the discrimination error between a sequence of $m$ states $\{\rho_i\}$, with probabilities $\{p_i\}$, is~\cite{montanaro_lower_2008} \begin{align} p_{\mathrm{err}}\geq \sum_{i>j}^m p_i p_j F^2(\rho_i,\rho_j),\label{eq:lower bound 0} \end{align} and the upper bound, based on the pretty good measurement (PGM) is~\cite{barnum_reversing_2002} \begin{align} p_{\mathrm{err}}\leq 2\sum_{i>j}^m \sqrt{p_i p_j} F(\rho_i,\rho_j),\label{eq:upper bound 0} \end{align} where $F$ is the Bures fidelity, defined as \begin{align} F(\rho_i,\rho_j)=\Tr \sqrt{\sqrt{\rho_i}\rho_j\sqrt{\rho_i}}. \end{align} Since we can use the same teleportation protocol for both the target and the background channels, the entire discrimination protocol can be reduced, via stretching~\cite{pirandola_fundamental_2017,pirandola_ultimate_2017,pirandola_fundamental_2019}, to a single processor applied to different resource states (with the resource state depending on the position of the target channel). This adaptive-to-block reduction is shown in Fig.~\ref{fig:stretching}. \begin{figure}[ptb] \centering \includegraphics[width=1\linewidth]{teleportation_stretching_diagram}\caption{The reduction of a general adaptive discrimination protocol to a single round of quantum operations on a resource state. In panel (a), we have the most general discrimination protocol using $M$ uses of the sequence of channels. $\rho_0$ is some initial quantum state. We then apply some sequence of quantum operations (denoted by QO) interspersed with uses of the sequence of channels (denoted by $C^i$, where the label $i$ depends on the channel position). At each channel use, we may send a one-mode state through each of the channels in the sequence (and these modes are generally correlated with auxiliary modes that do not pass through the channels). Each round of quantum operations is allowed to be adaptive. This means that (i) entanglement can be present between ancillary modes of different quantum operations and (ii) measurements can be done on some subset of the modes and used to optimize following quantum operations. These measurements can always be delayed to the end of the protocol, by using controlled operations, so as to make all the QOs trace preserving. The final output of the adaptive protocol is denoted $\rho_0^i$; there are $m$ possible outputs depending on the channel position. Channel discrimination is then the task of discriminating between these $m$ different possible outputs, by means of an optimal collective quantum measurement (which may include all the measurements delayed). In panel (b), we simulate the channel with teleportation, using some teleportation protocol (TP) and a resource state ($\sigma^i$). Note that $\sigma^i$ is the resource state for the entire sequence of channels and is the tensor product of the resource states for teleportation of the $m-1$ background channels and the target channel, with the order of the subsystems determined by the label $i$. 
Note that neither the teleportation protocol nor the quantum operations depend on the label $i$ and so the entire discrimination protocol can be represented as some single fixed quantum operation on $\rho_0$ and $M$ copies of the resource state, $\sigma^i$. This representation is shown in panel (c).} \label{fig:stretching} \end{figure} Since no trace preserving quantum operation can increase the distance between two quantum states (the fidelity of any two input states will be less than or equal to the fidelity of the resulting output states), the fidelity between the possible output states is lower bounded by the fidelity between the possible resource states. Let $\sigma^i_{M}$ be the resource state composed of $M(m-1)$ copies of the asymptotic Choi matrix of the background channel, $\sigma_{B}$, and $M$ copies of the asymptotic Choi matrix of the target channel, $\sigma_{T}$, arranged such that the $M$ copies of the asymptotic Choi matrix of the target channel is the $i$-th $2M$-mode subsystem. Note that each asymptotic Choi matrix consists of two modes. We can write \begin{align} \sigma^i_{M}=P_{1i}\left[\sigma_{T}^{\otimes M}\otimes\sigma_{B}^{\otimes M(m-1)}\right], \end{align} where the operator $P_{1i}$ swaps the first $2M$-mode subsystem with the $i$-th $2M$-mode subsystem. We can then lower bound the fidelity of any pair of output states of a discrimination protocol with $M$ channel uses using \begin{align} F(\rho^i_M,\rho^j_M)\geq F(\sigma^i_{M},\sigma^j_{M}). \end{align} Using the fact that each asymptotic Choi matrix in the resource is independent (i.e. using the tensor product structure of the resource states), we can write \begin{align} F(\sigma^i_{M},\sigma^j_{M})=F^{2M}(\sigma_{T},\sigma_{B}), \end{align} for all $i \neq j$. More precisely, since the asymptotic Choi matrices, $\sigma_T$ and $\sigma_B$, are defined by the infinite-squeezing limit of two sequences of output states, $\{ \sigma_T^{a} \}$ and $\{ \sigma_B^{a} \}$, the fidelity functional is computed over the elements of the sequences and then the limit is taken, i.e., $F(\sigma_{T},\sigma_{B}):=\lim_a F(\sigma_{T}^{a},\sigma_{B}^{a})$. Then, it is important to notice that the bound $F(\rho^i_M,\rho^j_M) \geq F^{2M}(\sigma_{T},\sigma_{B})$ holds for any generally adaptive protocol $\mathcal{P}$. Therefore, we may write \begin{equation} F_{i,j}:=\inf_{\mathcal{P}} F(\rho^i_M,\rho^j_M) \geq F^{2M}(\sigma_{T},\sigma_{B}).\label{onepart} \end{equation} At the same time, we note that this lower bound is achievable by a block protocol $\mathcal{P}_{\mathrm{block}}^{a}$ where $m$ copies of the tensor product state $\Phi^{a \otimes M}$ are prepared and each TMSV state $\Phi^{a}$ is used for the single-probing of $\mathcal{I} \otimes \mathcal{E}_{B/T}$, so that the quasi-Choi matrix $\sigma_{B/T}^{a}$ is generated at the output for measurement. It is easy to see that, in the limit of infinite squeezing $a \rightarrow \infty$, this protocol achieves the performance at the right hand side of Eq.~(\ref{onepart}), so that we may write \begin{equation} F_{i,j}=F^{2M}(\sigma_{T},\sigma_{B}),~~\mathrm{for~any}~i,j.\label{secondpart} \end{equation} Let us optimize the error probability over all possible (generally adaptive) protocols $\mathcal{P}$. We define this optimal error probability as \begin{align} p_{\mathrm{err}}^{\mathrm{opt}}=\inf_{\mathcal{P}} p_{\mathrm{err}}; \end{align} it is the smallest achievable error probability for any discrimination protocol. 
As a consequence of the reasoning above, and the inequalities in Eqs.~(\ref{eq:lower bound 0}) and (\ref{eq:upper bound 0}), we can write \begin{align} &p_{\mathrm{err}}^{\mathrm{opt}}\geq \sum_{i>j}^m p_i p_j F^{4M}(\sigma_{T},\sigma_{B}),\label{eq:lower bound 1}\\ &p_{\mathrm{err}}^{\mathrm{opt}}\leq 2\sum_{i>j}^m \sqrt{p_i p_j} F^{2M}(\sigma_{T},\sigma_{B}).\label{eq:upper bound 1} \end{align} Let us now assume that each channel position is equally likely, and so $p_i=\frac{1}{m}$ for every value of $i$. We can then carry out the sums in Eqs.~(\ref{eq:lower bound 1}) and (\ref{eq:upper bound 1}) and write \begin{align} &p_{\mathrm{err}}^{\mathrm{opt}}\geq \frac{m-1}{2m} F^{4M}(\sigma_{T},\sigma_{B}),\label{eq:lower bound 2}\\ &p_{\mathrm{err}}^{\mathrm{opt}}\leq (m-1)F^{2M}(\sigma_{T},\sigma_{B}).\label{eq:upper bound 2} \end{align} \subsection{Calculating the fidelity between Choi matrices} \label{sec:fidelity} We must now calculate the fidelity between the (asymptotic) Choi matrices of the target and the background channels. A phase-insensitive, Gaussian channel~\cite{weedbrook_gaussian_2012} can be parametrized by two parameters: its transmissivity, $\tau$, and its induced noise, $\nu$. It transforms the CM of an input two-mode state, $V_{\mathrm{in}}$, with the transformation \begin{align} V_{\mathrm{in}}\to\left(\mathbb{I}\oplus \sqrt{\tau}\mathbb{I} \right)V_{\mathrm{in}}\left(\mathbb{I}\oplus \sqrt{\tau}\mathbb{I} \right)^T+\left(0\oplus \nu\mathbb{I} \right), \end{align} where $\mathbb{I}$ is the $2\times 2$ identity matrix. There are three main classes of phase-insensitive, Gaussian channels that we must consider: thermal loss channels, thermal amplifier channels and additive noise channels. Loss and amplifier channels both have $\nu\geq \frac{|1-\tau|}{2}$ (where we have chosen the shot noise to be $\frac{1}{2}$), but loss channels have $0\leq \tau <1$, whilst amplifier channels have $1<\tau$. Additive noise channels have $\nu\geq 0$ and $\tau=1$. Passing the second mode of a TMSV state $\Phi^a$ with an average photon number per mode of $\bar{n}=a-\frac{1}{2}$ through a phase-insensitive, Gaussian channel results in the state with CM \begin{align} V_{\mathrm{out}}=\begin{pmatrix} a \mathbb{I} &\sqrt{\tau\left(a^2-\frac{1}{4}\right)}\mathbb{Z}\\ \sqrt{\tau\left(a^2-\frac{1}{4}\right)}\mathbb{Z} &(a\tau +\nu) \mathbb{I} \end{pmatrix},\label{eq:TMSVout} \end{align} where $\mathbb{Z}$ is the Z Pauli matrix. The Bures fidelity of a pair of two-mode Gaussian states $\rho_i$ and $\rho_j$, with zero first moments and CMs $V_i$ and $V_j$, is given by~\cite{marian_uhlmann_2012,banchi_quantum_2015} \begin{align} &F(\rho_i,\rho_j)=\frac{\sqrt{\chi}+\sqrt{\chi-1}}{\sqrt[4]{\det\left(V_i+V_j\right)}},\\ &\chi=2\sqrt{A}+2\sqrt{B}+\frac{1}{2},\\ &A=\frac{\det\left(\Omega V_i \Omega V_j -\frac{1}{4}\mathbb{I}\right)}{\det\left(V_i+V_j\right)},\\ &B=\frac{\det\left(V_i+\frac{i}{2}\Omega\right)\det\left(V_j+\frac{i}{2}\Omega\right)}{\det\left(V_i+V_j\right)},\\ &\Omega=\mathbb{I}\otimes\begin{pmatrix} 0 &1\\ -1 &0 \end{pmatrix}. \end{align} Using this expression, we can calculate the fidelity of a pair of output states of phase-insensitive, Gaussian channels (when the input state is a TMSV) with the same transmissivity.
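Since this fidelity is used repeatedly in what follows, it is worth noting that it can be evaluated numerically in a few lines. The following NumPy sketch (the function names are ours) implements the output CM of Eq.~(\ref{eq:TMSVout}) and the two-mode fidelity formula above:
\begin{verbatim}
import numpy as np

Z = np.diag([1.0, -1.0])
OMEGA = np.kron(np.eye(2), np.array([[0.0, 1.0], [-1.0, 0.0]]))

def cm_out(a, tau, nu):
    # CM of a TMSV state after one mode crosses the channel (tau, nu)
    c = np.sqrt(tau * (a**2 - 0.25))
    return np.block([[a * np.eye(2), c * Z],
                     [c * Z, (a * tau + nu) * np.eye(2)]])

def fidelity(V1, V2):
    # Bures fidelity of two zero-mean, two-mode Gaussian states
    den = np.linalg.det(V1 + V2)
    A = np.linalg.det(OMEGA @ V1 @ OMEGA @ V2 - np.eye(4) / 4) / den
    B = (np.linalg.det(V1 + 0.5j * OMEGA)
         * np.linalg.det(V2 + 0.5j * OMEGA)).real / den
    chi = 2 * np.sqrt(A) + 2 * np.sqrt(B) + 0.5
    return (np.sqrt(chi) + np.sqrt(chi - 1)) / den**0.25
\end{verbatim}
For instance, \texttt{fidelity(cm\_out(a, 0.99, 0.21), cm\_out(a, 0.99, 0.232))} reproduces the finite-$a$ analytic expressions derived in the remainder of this section, and approaches their $a\to\infty$ limits as $a$ grows.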
In the case of thermal loss and amplifier channels, we define $\epsilon_T=\frac{\nu_{T}}{|1-\tau|}$ and $\epsilon_B=\frac{\nu_{B}}{|1-\tau|}$, where $\nu_T$ is the induced noise of the target channel, $\nu_B$ is the induced noise of the background channels, and $\tau$ is the transmissivity of all of the channels in the sequence. In fact, $\epsilon_T$ and $\epsilon_B$ give us the mean photon number of the environment for each channel, via the equation \begin{align} \bar{n}_{T(B)}=\epsilon_{T(B)}-\frac{1}{2}. \end{align} We find that the fidelity of the outputs of two such thermal loss or amplifier channels is analytically given by \begin{align} F_{\mathrm{loss/amp}}(\tau,\epsilon_T,\epsilon_B,a)=\frac{\sqrt{2}\left(\sqrt{\alpha+\beta}+\sqrt{\alpha-\beta}\right)}{\beta},\label{eq:fid_thermal} \end{align} where we define \begin{align} \begin{split} \alpha=&\left(4\epsilon_T\epsilon_B+4a^2(4\epsilon_T\epsilon_B+1)\right.\\ &\left.+(4a^2-1)\sqrt{(4\epsilon_T^2-1)(4\epsilon_B^2-1)}\right)|1-\tau|^2\\ &+8a(\epsilon_T+\epsilon_B)\tau |1-\tau|+(1+\tau)^2, \end{split}\\ \beta=&4\left(\tau+2a(\epsilon_T+\epsilon_B)|1-\tau|\right). \end{align} Taking the limit of this expression as $a\to\infty$, in order to obtain the fidelity between the Choi matrices, we get \begin{align} F_{\mathrm{loss/amp}}^{\infty}(\epsilon_T,\epsilon_B)=\frac{\sqrt{4\epsilon_T\epsilon_B+1+\sqrt{(4\epsilon_T^2-1)(4\epsilon_B^2-1)}}}{\sqrt{2}(\epsilon_T+\epsilon_B)}.\label{eq:choi_fid_thermal} \end{align} Note that we no longer have any explicit dependence on $\tau$. Thus, our discrimination bounds for thermal loss or amplifier channels become \begin{align} p_{\mathrm{err}}^{\mathrm{opt}}\geq \frac{m-1}{2m} (F_{\mathrm{loss/amp}}^{\infty}(\epsilon_T,\epsilon_B))^{4M},\label{eq:lower bound thermal}\\ p_{\mathrm{err}}^{\mathrm{opt}}\leq (m-1)(F_{\mathrm{loss/amp}}^{\infty}(\epsilon_T,\epsilon_B))^{2M}.\label{eq:upper bound thermal} \end{align} The latter upper bound might become too large in some cases. Note that the error probability in randomly guessing the position of the target channel is equal to $(m-1)/m$. Combining this with the upper bound in Eq.~(\ref{eq:upper bound thermal}) leads to \begin{equation} p_{\mathrm{err}}^{\mathrm{opt}}\leq (m-1) \min\{m^{-1},(F_{\mathrm{loss/amp}}^{\infty}(\epsilon_T,\epsilon_B))^{2M}\}.\label{eqForCap} \end{equation} In order to investigate the behaviour of $F_{\mathrm{loss/amp}}^{\infty}$, we re-parametrize Eq.~(\ref{eq:choi_fid_thermal}) in terms of the mean of $\epsilon_T$ and $\epsilon_B$, i.e., \begin{equation} \epsilon_{\mathrm{av}}=\frac{\epsilon_T+\epsilon_B}{2},\label{meanEPS} \end{equation} and the absolute value of their difference, i.e., \begin{equation} \epsilon_{\mathrm{dif}}=|\epsilon_T-\epsilon_B|.\label{diffEPS} \end{equation} Differentiating with respect to $\epsilon_{\mathrm{dif}}$ yields a non-positive derivative, and differentiating with respect to $\epsilon_{\mathrm{av}}$ yields a non-negative derivative. This means that either increasing the difference in the average number of photons between the target and background channels (whilst keeping the mean fixed) or decreasing the mean of the $\epsilon$-values, whilst keeping the difference fixed, will decrease the minimum fidelity of the output states.
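For completeness, here is a short continuation of the numerical sketch above (again with function names of our own choosing) that evaluates the asymptotic fidelity of Eq.~(\ref{eq:choi_fid_thermal}) and the resulting bounds of Eqs.~(\ref{eq:lower bound thermal})--(\ref{eqForCap}):
\begin{verbatim}
def fid_loss_amp_inf(eT, eB):
    # asymptotic Choi fidelity for thermal loss/amplifier channels
    num = np.sqrt(4*eT*eB + 1
                  + np.sqrt((4*eT**2 - 1) * (4*eB**2 - 1)))
    return num / (np.sqrt(2) * (eT + eB))

def bounds_loss_amp(eT, eB, m, M):
    # fidelity-based lower/upper bounds on the optimal error probability
    F = fid_loss_amp_inf(eT, eB)
    lower = (m - 1) / (2 * m) * F**(4 * M)
    upper = (m - 1) * min(1 / m, F**(2 * M))
    return lower, upper
\end{verbatim}
We now consider the case of additive noise channels.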
We find that the fidelity of the outputs of two such channels becomes \begin{align} F_{\mathrm{add}}(\nu_T,\nu_B,a)=\frac{2a\sqrt{\nu_T\nu_B}+\sqrt{(2a\nu_T+1)(2a\nu_B+1)}}{(2a(\nu_T+\nu_B)+1)}.\label{eq:fid_additive} \end{align} Taking the limit of this expression as $a\to\infty$, we get \begin{align} F_{\mathrm{add}}^{\infty}(\nu_T,\nu_B)=\frac{2\sqrt{\nu_T\nu_B}}{\nu_T+\nu_B}.\label{eq:choi_fid_additive} \end{align} We can again substitute this expression into Eqs.~(\ref{eq:lower bound 2}) and~(\ref{eq:upper bound 2}). Our discrimination bounds for additive noise channels become \begin{align} p_{\mathrm{err}}^{\mathrm{opt}}\geq \frac{m-1}{2m} (F_{\mathrm{add}}^{\infty}(\nu_T,\nu_B))^{4M},\label{eq:lower bound additive}\\ p_{\mathrm{err}}^{\mathrm{opt}}\leq (m-1)(F_{\mathrm{add}}^{\infty}(\nu_T,\nu_B))^{2M}.\label{eq:upper bound additive} \end{align} We now investigate the behaviour of $F_{\mathrm{add}}^{\infty}$ by re-parametrizing Eq.~(\ref{eq:choi_fid_additive}) in terms of $\nu_{\mathrm{av}}$ and $\nu_{\mathrm{dif}}$, where $\nu_{\mathrm{av}}$ is the mean of $\nu_T$ and $\nu_B$ and $\nu_{\mathrm{dif}}$ is the absolute value of the difference between them. Note that $\nu_{\mathrm{dif}}\leq 2\nu_{\mathrm{av}}$. We can then rewrite Eq.~(\ref{eq:choi_fid_additive}) as \begin{equation} F_{\mathrm{add}}^{\infty}(r)=\sqrt{1-\frac{r^2}{4}},~~r=\frac{\nu_{\mathrm{dif}}}{\nu_{\mathrm{av}}}. \end{equation} Thus, we can see that the fidelity between the Choi matrices of two additive noise channels depends only on the ratio of $\nu_{\mathrm{dif}}$ to $\nu_{\mathrm{av}}$. Differentiating with respect to $r$, we see that the fidelity decays as $r$ increases. \subsection{Classical limits} \label{sec:classical limits} Let us define a classical protocol as a protocol that restricts the states sent through the sequence of channels to an arbitrary mixture of coherent states. Since the Gaussian channels we are considering are phase-insensitive and since both the target and the background channels have the same transmissivity, enacting a phase-shift or displacement on the input states sent through the channels cannot affect the fidelity of the output states (since these unitary operations commute with the channels). The joint concavity of the Bures fidelity and the linearity of the channels mean that the optimal classical input state (to minimize the fidelity between output states) is a single coherent state (not a mixture). As a result, the classical discrimination protocol that minimizes the lower bound on the error probability sends vacuum states through the channel at each channel use. This means that such protocols use only the passive signature of the channels. We can obtain expressions for the minimum fidelity between output states for classical protocols by taking our expressions for the fidelity between the output states for TMSV inputs, Eqs.~(\ref{eq:fid_thermal}) and (\ref{eq:fid_additive}), and setting $a=\frac{1}{2}$. This gives us the fidelity between the output states of the channels when the input state is a vacuum state. In the case of thermal loss and amplifier channels, the minimum classical fidelity between output states is \begin{align} F_{\mathrm{loss/amp}}^{\mathrm{class}}(\tau,\epsilon_T,\epsilon_B)=\frac{\sqrt{\gamma+\delta}+\sqrt{\gamma-\delta}}{\delta},\label{eq:fid_class_thermal} \end{align} where we define \begin{align} &\gamma=4\epsilon_T\epsilon_B |1-\tau|^2+2(\epsilon_T+\epsilon_B)\tau|1-\tau|+(1+\tau^2),\\ &\delta=2\left(\tau+(\epsilon_T+\epsilon_B)|1-\tau|\right).
\end{align} In the case of additive noise channels, the minimum classical fidelity between output states is \begin{align} F_{\mathrm{add}}^{\mathrm{class}}(\nu_T,\nu_B)=\frac{1}{\sqrt{(\nu_T+1)(\nu_B+1)}-\sqrt{\nu_T \nu_B}}.\label{eq:fid_class_additive} \end{align} We can now give upper and lower bounds on the error of classical discrimination protocols. We write \begin{align} p_{\mathrm{err}}^{\mathrm{class}}\geq \frac{m-1}{2m} (F^{\mathrm{class}})^{4M},\\ p_{\mathrm{err}}^{\mathrm{class}}\leq (m-1)(F^{\mathrm{class}})^{2M},\label{eq:classical_lower} \end{align} where the fidelity function is given in either Eq.~(\ref{eq:fid_class_thermal}) or Eq.~(\ref{eq:fid_class_additive}), depending on the class of channel. \subsection{Quantum advantage} \label{sec:quantum advantage} We say that there is a quantum advantage if we can show that there exists some quantum discrimination protocol that gives a lower probability of error than any classical protocol. In order to prove a quantum advantage for channel position finding, we need to show that the lower bound on the error of classical protocols is larger than the upper bound on the error of all protocols. In other words, we must show that \begin{align} \frac{m-1}{2m} (F^{\mathrm{class}})^{4M}\geq (m-1)(F^{\infty})^{2M}. \end{align} This is equivalent to showing \begin{align} 2M\ln\left(\frac{(F^{\mathrm{class}})^2}{F^{\infty}}\right)\geq \ln(2m).\label{eq:fid bound adv cond} \end{align} Noting that $\ln(2m)>0$, since $m\geq2$, we can see that the condition in Eq.~(\ref{eq:fid bound adv cond}) will always be met for sufficiently large $M$ (number of probes) as long as the condition \begin{align} (F^{\mathrm{class}})^2>F^{\infty}\label{eq:quant_adv_cond} \end{align} holds. Whether this condition is met depends only on the parameters of the target and background channels. Note that even if this condition is not met, it does not mean there is no quantum advantage; it could be the case that the bounds are not tight. In fact, in Section~\ref{sec: bounds protocols} we provide alternative bounds which can potentially show quantum advantage even in cases in which the condition in Eq.~(\ref{eq:quant_adv_cond}) is not met. Unlike $F_{\mathrm{loss/amp}}^{\infty}$, the fidelity $F_{\mathrm{loss/amp}}^{\mathrm{class}}$ depends on the transmissivity $\tau$. In fact, differentiating, we find that $\frac{dF}{d\tau}\geq 0$ for $0\leq\tau<1$ and that $\frac{dF}{d\tau}\leq 0$ for $\tau>1$. Further, as $\tau\to 0$, we have $F_{\mathrm{loss/amp}}^{\mathrm{class}}\to F_{\mathrm{loss/amp}}^{\infty}$. This can be intuitively understood, since the entire channel discrimination process, including the coupling of the signal mode with the environment, can be regarded as a (generalized) measurement on the environmental modes. Thus, no matter how much entanglement the interacting modes have, the possible output states that the final measurement distinguishes between cannot have a lower (pairwise) fidelity than the possible configurations of environmental modes that are being discriminated between. In other words, the infinite squeezing case is equivalent to a direct measurement on the environmental modes before they are mixed with the signal states, whilst, in any finite energy scenario, we send signal states to interact with the environmental modes and then measure the signal states. 
Since the $\tau=0$ case corresponds to the signal states being completely replaced by the environmental modes, the classical protocol, in this case, is also a direct measurement on the environmental modes. Consequently, in the case of thermal loss channels, for all values of $\epsilon_T$ and $\epsilon_B$, there is some threshold value of $\tau$ such that channels with $\tau$ below the threshold do not meet the condition in Eq.~(\ref{eq:quant_adv_cond}). Setting $\tau=\frac{1}{2}$, we find that $\frac{(F^{\mathrm{class}})^2}{F^{\infty}}\leq 1$, and hence the inequality in Eq.~(\ref{eq:quant_adv_cond}) does not hold for any channel with $\tau\leq \frac{1}{2}$. For further details, see Appendix~\ref{appendix:fidelity}. \begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{quantum_adv}\caption{Regions in which we can prove a quantum advantage for thermal loss channels, as a function of their noise difference $\epsilon_{\mathrm{dif}}$ and mean noise $\epsilon_{\mathrm{av}}$, for different values of the transmissivity $\tau$. Note that the region for a higher value of $\tau$ completely contains the region for any lower value of $\tau$. The minimum value of $\epsilon_{\mathrm{av}}$ for fixed $\epsilon_{\mathrm{dif}}$ is $\frac{\epsilon_{\mathrm{dif}}+1}{2}$, since neither $\epsilon_T$ nor $\epsilon_B$ can be less than $\frac{1}{2}$.} \label{fig:quantum_adv} \end{figure} Fig.~\ref{fig:quantum_adv} illustrates the region in which we meet the condition in Eq.~(\ref{eq:quant_adv_cond}) (and so can prove a quantum advantage for some number of probes), in the case of thermal loss channels, for a few choices of transmissivity, $\tau$. The plot is in terms of $\epsilon_{\mathrm{dif}}$ and $\epsilon_{\mathrm{av}}$ defined in Eqs.~(\ref{meanEPS})-(\ref{diffEPS}). We see that higher transmissivities result in a larger region in which we can prove a quantum advantage. Further, as $\epsilon_{\mathrm{dif}}$ increases, the region in which we can prove quantum advantage narrows (in terms of the allowed values of $\epsilon_{\mathrm{av}}$). The condition for the inequality in Eq.~(\ref{eq:quant_adv_cond}) to hold takes a simple form for additive noise channels. We again re-parametrize in terms of $\nu_{\mathrm{av}}$ and $\nu_{\mathrm{dif}}$. The condition can then be written as a bound on $\nu_{\mathrm{dif}}$ purely in terms of $\nu_{\mathrm{av}}$: for a sequence of additive noise channels, we will always have a quantum advantage for some number of probes as long as \begin{align} \nu_{\mathrm{dif}}>\frac{\sqrt{32\nu_{\mathrm{av}}^4-8\nu_{\mathrm{av}}^2-8\nu_{\mathrm{av}}-1-(4\nu_{\mathrm{av}}+1)\sqrt{8\nu_{\mathrm{av}}+1}}}{2\sqrt{2}\nu_{\mathrm{av}}}. \end{align} \subsection{Bounds from specific protocols} \label{sec: bounds protocols} We can consider specific discrimination protocols; these can provide benchmarks for both the classical (entanglement-free) and entangled cases. In the classical case, we have vacuum input. In this case, the return state is thermal, and therefore a photon counting measurement coupled with maximum-likelihood estimation (MLE) gives the Helstrom performance~\cite{helstrom_quantum_1976}.
In this protocol, we carry out photon counting on each of the return states, and a simple derivation shows that the MLE decision rule reduces to choosing the channel with the maximum/minimum photon count, i.e., we estimate the target channel to be \begin{align} \arg\max_s N_s, \mbox{if }\bar{n}_T> \bar{n}_B, \end{align} and \begin{align} \arg\min_s N_s, \mbox{if }\bar{n}_T<\bar{n}_B, \end{align} where $s$ is an index labelling the channels in the sequence and $N_s$ denotes the total number of photons counted from the return states of channel $s$ (cumulatively, over all $M$ channel uses). We can consider a similar protocol involving entanglement, in the cases of thermal loss and amplifier channels. In these cases, we can get thermal return states by sending TMSV states through the channels, carrying out anti-squeezing operations on the return states and then tracing over one of the two modes. For each probe sent through one of the channels, we start by carrying out two-mode squeezing on a pair of vacuum modes, with squeezing parameter \begin{align} r_0=\frac{1}{2}\ln\left(2a+\sqrt{4a^2-1}\right).\label{eq:sq_init} \end{align} This results in the TMSV state $\Phi^{a}$, which has an average photon number per mode of $\bar{n}=a-\frac{1}{2}$ and the CM given by Eq.~(\ref{eq:TMSV CM}). The first mode is kept as an idler, whilst the second mode is passed through the channel. Each individual channel output state will then have a CM of the form in Eq.~(\ref{eq:TMSVout}); we then carry out two-mode squeezing on the state, with squeezing parameter \begin{align} r_1=\frac{1}{2}\ln\left(\frac{|1-\sqrt{\tau}|}{1+\sqrt{\tau}}\right).\label{eq:sq_param} \end{align} For a thermal loss channel, we discard the idler mode; the resulting state has the CM \begin{align} V_{\mathrm{ret},\mathrm{loss}}^a&=\mathrm{Disc}_{1} \left[S(r_1)V_{\mathrm{out},\mathrm{loss}}^{a}S^T(r_1)\right]\\ &=\frac{\nu+2a\tau-\tau\sqrt{4a^2-1}}{|1-\tau|}\mathbb{I},\label{eq:V_ret} \end{align} where $S$ is the two-mode squeezing matrix, given by \begin{align} S(r)=\begin{pmatrix} \cosh(r)\mathbb{I} &\sinh(r)\mathbb{Z}\\ \sinh(r)\mathbb{Z} &\cosh(r)\mathbb{I} \end{pmatrix}, \end{align} and where $\mathrm{Disc}_1$ indicates that we discard the first (idler) mode. We can get a return state with the same form for an amplifier channel by carrying out the same process, but tracing over the other mode (the mode which passed through the channel). In other words, we have \begin{align} V_{\mathrm{ret},\mathrm{amp}}^a&=\mathrm{Disc}_{2} \left[S(r_1)V_{\mathrm{out},\mathrm{amp}}^{a}S^T(r_1)\right]\\ &=\frac{\nu+2a\tau-\tau\sqrt{4a^2-1}}{|1-\tau|}\mathbb{I}. \end{align} This protocol is illustrated in Fig.~\ref{fig:MLE_protocol}. \begin{figure}[ptb] \centering \includegraphics[width=1\linewidth]{MLE_protocol}\caption{The setup for a CPF protocol that provides a benchmark for the general quantum case. In panel (a), we have the protocol for the thermal loss case and in panel (b), we have the protocol for the thermal amplifier case. In both cases, we begin by carrying out two-mode squeezing on a vacuum state, with squeezing parameter $r_0$, as given in Eq.~(\ref{eq:sq_init}). This is denoted $\mathrm{S(r_0)}$. We then pass one of the modes through the channel, denoted $C$, and then carry out two-mode squeezing again, this time with squeezing parameter $r_1$. Finally, we carry out a photon counting measurement (denoted PC) on one of the modes and trace over the other mode.
This process is repeated $M$ times (where $M$ is the number of probes used) for every channel in the sequence. Note that in the thermal loss case, the measurement is carried out on the channel mode, whilst in the thermal amplifier case, the measurement is carried out on the idler mode.} \label{fig:MLE_protocol} \end{figure} We now note that the CM in Eq.~(\ref{eq:V_ret}) has finite energy, even in the limit of infinite squeezing ($a \rightarrow \infty$). Letting $V_{\mathrm{ret},T(B)}^{\infty}$ be the asymptotic return state from the target (background) channel (for either a thermal loss or a thermal amplifier channel), we find that \begin{align} V_{\mathrm{ret},T(B)}^{\infty}=\frac{\nu_{T(B)}}{|1-\tau|}\mathbb{I}=\epsilon_{T(B)}\mathbb{I}. \end{align} Hence, we can get thermal return states even in the case of infinite entanglement. Note that these are the same return states we would get in the classical case if the channels had a transmissivity of 0. Note too that we cannot enact this protocol in the additive noise case, since our expression in Eq.~(\ref{eq:sq_param}) for the squeezing parameter $r_1$ diverges as $\tau\to 1$. We can then carry out photon counting measurements on the return states and estimate the target channel using the MLE. We now calculate the success probability of the MLE. The probability that a thermal mode with average photon number $\bar{n}$ is measured to have $k$ photons is given by \begin{align} P_{\bar{n}}(k)=\frac{\bar{n}^k}{(\bar{n}+1)^{k+1}}. \end{align} We then calculate the probability that $M$ thermal modes, with the same average photon number of $\bar{n}$, are measured to have a total of $k$ photons, by noting that the total count is the sum of $M$ independent and identically distributed (iid) thermal variables. We find that this probability is given by \begin{align} P_{\bar{n},M}(k)={k+M-1\choose k}\left(\frac{\bar{n}}{1+\bar{n}}\right)^k \left(\frac{1}{1+\bar{n}}\right)^M, \end{align} where the binomial coefficient accounts for the different ways in which the photons can be distributed across the measured modes. From this, we can calculate the probability that the $M$ modes are measured to have fewer than $n_c$ photons in total: \begin{align} {\rm pr}_{\bar{n},M}({\rm count}<n_c)=\sum_{k=0}^{n_c-1} P_{\bar{n},M}(k). \end{align} Let us first consider the case in which $\bar{n}_T>\bar{n}_B$. In this case, the MLE gives the correct answer when all of the return states from the background channels are measured to have fewer photons than the target channel. We must also consider the possibility that the return states of one or more of the background channels are measured to have the same number of photons as the return states of the target channel (but not more). In this case, we choose randomly between the channels that gave the highest photon counts. This gives a total success probability (for the entangled case) of \begin{align} \begin{split} p^{\mathrm{MLE}}_{\mathrm{succ},\bar{n}_T>\bar{n}_B}=&\sum_{c=1}^{m} \frac{1}{c} \sum_{n_c=0}^\infty \left[{\rm pr}_{\bar{n}_{B/T},M}({\rm count}<n_c)\right]^{m-c}\\ &\times P_{\bar{n}_{T},M}(n_c){m-1\choose c-1}(P_{\bar{n}_{B},M}(n_c))^{c-1}. \end{split} \end{align} Here, the index $c$ is the number of channels with the same photon count (hence, $c=1$ is the case in which all of the background channels give a lower photon count than the target channel). The factor of $\frac{1}{c}$ comes from the random choice when multiple channels give the same photon count.
Note that in the case of $n_c=0$, the only non-zero contribution is in the case $c=m$, corresponding to a photon count of 0 for the target and all of the background channels. If this occurs, there is a $\frac{1}{m}$ chance of the correct channel being randomly guessed to be the target channel. In this case, we define \begin{align} {\rm pr}_{\bar{n}_{B/T},M}({\rm count}<0)^{0}=1. \end{align} Extension to the case in which $\bar{n}_T<\bar{n}_B$ can be done trivially, by writing \begin{align} {\rm pr}_{\bar{n},M}({\rm count}>n_c)=1-{\rm pr}_{\bar{n},M}({\rm count}<n_c+1). \end{align} Then we have a success probability of \begin{align} \begin{split} p^{\mathrm{MLE}}_{\mathrm{succ},\bar{n}_T<\bar{n}_B}=&\sum_{c=1}^{m} \frac{1}{c} \sum_{n_c=0}^\infty \left[{\rm pr}_{\bar{n}_{B/T},M}({\rm count}> n_c)\right]^{m-c}\\ &\times P_{\bar{n}_{T},M}(n_c){m-1\choose c-1}(P_{\bar{n}_{B},M}(n_c))^{c-1}. \end{split} \end{align} In both cases, the error probability is given by \begin{align} p^{\mathrm{MLE}}_{\mathrm{err}}=1-p^{\mathrm{MLE}}_{\mathrm{succ}}. \end{align} Note that for the classical MLE error probabilities, we simply substitute $\bar{n}_{T(B)}$ with the average photon numbers of the classical return states, i.e. $\bar{n}_{T(B)}|1-\tau|$. This quantity can be easily numerically calculated. Using this semi-analytic benchmark, we can show a quantum advantage with a lower value of $M$ than is required for the condition in Eq.~(\ref{eq:fid bound adv cond}) to be met. This is demonstrated in Fig.~\ref{fig:imaging}. It is also useful as it is based on a protocol that can be easily implemented. The scaling of the MLE error with the number of subsystems is of interest. We can upper bound the error in the case of $m$ subsystems in terms of the success probability for 2 subsystems, which we will call $p^{\mathrm{MLE}}_{\mathrm{succ},2}$. The error probability for $m$ subsystems then obeys the inequality \begin{align} p^{\mathrm{MLE}}_{\mathrm{err},m}\leq 1-(p^{\mathrm{MLE}}_{\mathrm{succ},2})^{m-1}=1-\left(1-p^{\mathrm{MLE}}_{\mathrm{err},2}\right)^{m-1},\label{eq:MLE scaling} \end{align} since the target channel having a higher photon count than one background channel cannot decrease the probability that it will have a higher photon count than a different background channel. In fact, this bound is an overestimate for any $m>2$, since the conditional probability that the target channel has a higher photon count than one background channel, given that it has a higher photon count than a different background channel, is more than $p^{\mathrm{MLE}}_{\mathrm{succ},2}$. This can be understood by considering the iid outcomes of 3 (6-sided) dice rolls denoted $a$, $b$ and $c$. The probability that $a>b$ is the same as the probability that $a>c$ and is equal to $\frac{5}{12}$, however the probability that $a>c$ given that $a>b$ is more than $\frac{5}{12}$, since the condition makes it less likely that $a$ is a small number and more likely that $a$ is a large number. Expanding the inequality in Eq.~(\ref{eq:MLE scaling}) to the first order in $p^{\mathrm{MLE}}_{\mathrm{err},2}$, we get \begin{align} p^{\mathrm{MLE}}_{\mathrm{err},m}\leq (m-1)p^{\mathrm{MLE}}_{\mathrm{err},2}.\label{eq:MLE scaling UB} \end{align} This inequality is strict for $m>2$. This means that the MLE error scales more slowly with $m$ than the upper bound in Eq.~(\ref{eq:upper bound 2}), which is based on the PGM. 
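These success probabilities are straightforward to evaluate numerically. Below is a minimal NumPy/SciPy sketch for the $\bar{n}_T>\bar{n}_B$ case (the function name and the truncation \texttt{kmax} of the infinite photon sum are our own choices); it uses the fact that the total count of $M$ iid thermal modes follows a negative binomial distribution, and the convention ${\rm pr}({\rm count}<0)^{0}=1$ is automatically respected since \texttt{0.0**0} evaluates to \texttt{1.0}:
\begin{verbatim}
import numpy as np
from math import comb
from scipy.stats import nbinom

def mle_success(nT, nB, m, M, kmax=5000):
    # success probability of the photon-counting + MLE protocol, nT > nB
    k = np.arange(kmax + 1)
    PT = nbinom.pmf(k, M, 1.0 / (1.0 + nT))   # P_{nT,M}(k)
    PB = nbinom.pmf(k, M, 1.0 / (1.0 + nB))   # P_{nB,M}(k)
    cdfB = np.concatenate(([0.0], np.cumsum(PB)[:-1]))  # pr(count < k)
    succ = 0.0
    for c in range(1, m + 1):   # c channels tied at the maximal count
        succ += (comb(m - 1, c - 1) / c) * np.sum(
            cdfB**(m - c) * PT * PB**(c - 1))
    return succ
\end{verbatim}
The MLE error probability is then one minus this quantity; the $\bar{n}_T<\bar{n}_B$ case follows analogously from the complementary cumulative distribution.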
However, for some sets of channel parameters, the upper bound in Eq.~(\ref{eq:MLE scaling UB}) can be close to the actual value of $p^{\mathrm{MLE}}_{\mathrm{err},m}$. It is also of note that, whilst the bounds based on the fidelity are symmetric under the exchange of $\nu_T$ and $\nu_B$, the MLE bound is not (for more than two subsystems). Thus, using this protocol in one of our applications, we may achieve a different error probability for finding a single cold pixel in a hot background than for finding a single hot pixel in a cold background. \subsection{Applications of the bounds} \label{sec:applications} Let us consider some physical applications of these bounds. One possible scenario in which one may need to discriminate between various channels with the same transmissivity is thermal imaging. The sequence of channels could represent a sequence of pixels that is being probed with microwave or infrared radiation, where we know that one pixel is hotter (or colder) than its surroundings and want to know its location. Alternatively, we could be imaging a surface with a microscope and want to find the frequency at which a source on the surface is emitting radiation. The different channels would then represent different frequencies. These tasks can both be modelled as a CPF task over a sequence of thermal loss channels with the same transmissivity. \begin{figure}[ptb] \centering \includegraphics[width=0.9\linewidth]{thermal_imaging}\caption{Error probability in decibels (dB), $10 \log_{10}(p_{\mathrm{err}})$, as a function of the number of the probes per pixel, for a thermal imaging task in which a sequence of $m=9$ pixels, each of area $4000~\mathrm{\mu m^2}$, is probed using microwaves (with wavelength 1~mm). The transmissivity of each pixel is 0.99 and the goal is finding the one pixel at temperature $247.56$~K ($-25.59$\textdegree{}C, $\epsilon_T=21$) in a background of pixels at temperature $272.76$~K ($-0.39$\textdegree{}C, $\epsilon_B=23.2$). Lower and upper bounds on the error probability are given for general quantum protocols (labelled ``quantum LB" and ``quantum UB") and a lower bound on the error is given for classical protocols (labelled ``classical LB"), for differing numbers of states sent through the channels (probes). Benchmarks based on the MLE are also shown for both the quantum and the classical cases (labelled ``quantum MLE" and ``classical MLE"). For the quantum upper bound, we use the expression in Eq.~(\ref{eqForCap}). For a large number of probes (in this case, greater than or equal to 1854), the upper bound on the error of quantum protocols is smaller than the lower bound on the error of classical protocols, proving we have a quantum advantage (in the darker shaded area). However, a much smaller number of probes (396) is required for the bound based on the MLE in the quantum case to beat the classical lower bound, and hence we are able to show a quantum advantage for any number of probes greater than 395 (in the lighter shaded area).} \label{fig:imaging} \end{figure} In Fig.~\ref{fig:imaging}, we consider an imaging task, in which a colder pixel must be located from a sequence of $9$ pixels, each of which has an area, $A$, of $4000~\mathrm{\mu m^2}$. We consider a case in which imaging is carried out in the microwave range (with a wavelength of $1$~mm), with high transmissivity, a background temperature of ${\sim}-0.39$\textdegree{}C and a target temperature of ${\sim}-25.59$\textdegree{}C. 
We assume that our detectors are very close to the pixels and that our imaging pulses have a time duration, $t$, of $100$~ns. We also assume that the pulses are transform-limited and so set the bandwidth of detection to $2.5$~MHz. This is in line with the fact that a transform-limited pulse has a time-bandwidth product (in terms of the variances) of $\frac{1}{4}$ \cite{siegman_lasers_1986}. We find the mean photon numbers by calculating the induced noise, which is independent of the transmissivity. Planck's law states that the spectral radiance of a black body, at a frequency $f$, is given by \begin{align} R(f,T)=\frac{2hf^3}{c^2(e^{\frac{hf}{kT}}-1)}, \end{align} where $c$ is the speed of light, $h$ is Planck's constant, $k$ is the Boltzmann constant, and $T$ is the temperature of the pixel. By dividing $R$ by $hf$, we obtain the number of photons emitted per unit time, per unit area of the pixel, per unit frequency and per unit solid angle. We must then integrate $\frac{R}{hf}$ over the bandwidth of the detector and multiply it by the duration of the imaging pulse, $t$, the solid angle over which the detector collects photons, $\Sigma$, and the area of the pixels, $A$, in order to obtain the induced noise, $\nu$. We therefore write \begin{align} \nu_{B/T}=A\Sigma t\int_{f_{\mathrm{min}}}^{f_{\mathrm{max}}} \frac{2f^2}{c^2(e^{\frac{hf}{kT_{B/T}}}-1)}df, \end{align} where $T_{B/T}$ is the temperature of the background/target pixel and $f_{\mathrm{min/max}}$ is the minimum/maximum frequency in our frequency range. We set $\Sigma=2\pi$ (i.e. we assume that the detector collects all light emitted in one hemisphere normal to the surface of the pixel). This is justified by our assumption that the detector is close to the pixels. If the detector were further away, we could adjust $\Sigma$ accordingly (and may have to reduce the transmissivity, $\tau$). Dividing $\nu_B$ and $\nu_T$ by $|1-\tau|$ gives the values of $\epsilon_B$ and $\epsilon_T$ respectively. Note that, for the bounds based on fidelity, swapping $\epsilon_T$ and $\epsilon_B$ does not affect the calculations, so these would be the same if the task were to find a target pixel at temperature $-0.39$\textdegree{}C in a background of pixels at $-25.59$\textdegree{}C. This is not the case for the benchmark based on the MLE. From Fig.~\ref{fig:imaging}, we see that we can prove a quantum advantage for a large number of channel uses (probes). We also see that the (quantum) MLE bound enables us to show a quantum advantage at a much lower value of $M$ than the fidelity-based quantum upper bound. Before considering the next example, it is also worth noting that the classical lower bound (blue dashed) in Fig.~\ref{fig:imaging} is likely not tight, since we see a gap between it and the classical MLE performance (green dashed). Therefore, quantum advantage is likely to hold for any number of probes, since the quantum MLE (green solid) beats the classical MLE (green dashed) throughout. A future study might be able to prove such a quantum advantage.
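To make this calibration concrete, the following SciPy sketch (the function name is ours; all quantities are in SI units, and we assume a detection band centred on the probe frequency) evaluates $\nu_{B/T}$:
\begin{verbatim}
import numpy as np
from scipy.constants import h, c, k
from scipy.integrate import quad

def induced_noise(T, f0, bw, area, t_pulse, solid_angle=2*np.pi):
    # nu from Planck's law, integrated over the detection bandwidth
    planck = lambda f: 2*f**2 / (c**2 * (np.exp(h*f/(k*T)) - 1))
    val, _ = quad(planck, f0 - bw/2, f0 + bw/2)
    return area * solid_angle * t_pulse * val

f0 = c / 1e-3                      # 1 mm wavelength
nu_B = induced_noise(272.76, f0, 2.5e6, 4000e-12, 100e-9)
nu_T = induced_noise(247.56, f0, 2.5e6, 4000e-12, 100e-9)
eps_B, eps_T = nu_B / 0.01, nu_T / 0.01   # divide by |1 - tau|
\end{verbatim}
With the parameters of Fig.~\ref{fig:imaging}, this reproduces $\epsilon_B\approx 23.2$ and $\epsilon_T\approx 21$. Another scenario in which one may wish to discriminate between thermal loss channels with different noises could arise in quantum communications. One may know that one of a sequence of communications lines has a higher excess noise than the others, perhaps due to the presence of an eavesdropper, and may wish to localise the eavesdropper by finding the channel with the higher excess noise.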
\begin{figure}[t] \centering \includegraphics[width=0.9\linewidth]{eavesdropper_localization}\caption{Error probability in decibels versus number of probes per communication line for the problem of eavesdropper localization. We consider a transmissivity of 0.1, corresponding to a loss of 10~dB. The background channels have an excess noise of 0.01, whilst the channel with the eavesdropper has an excess noise of 0.1. Lower and upper bounds on the error probability are given for general quantum protocols (labelled ``quantum LB" and ``quantum UB") and a lower bound on the error is given for classical protocols (labelled ``classical LB"). Benchmarks based on the MLE are shown for both the quantum and the classical cases (labelled ``quantum MLE" and ``classical MLE"). In this case, the quantum upper bound never goes below the classical lower bound, so we are not able to prove a quantum advantage.} \label{fig:eavesdropper} \end{figure} \begin{figure}[t!] \centering \includegraphics[width=0.9\linewidth]{position_finding}\caption{Error probability in decibels versus number of probes per channel for the problem of additive noise localization. We want to find the channel with the lower induced noise from a sequence of 100 additive-noise channels. The background channels have an induced noise of 0.03, whilst the target channel has an induced noise of 0.01. Lower and upper bounds on the error probability are given for general quantum protocols (labelled ``quantum LB" and ``quantum UB") and a lower bound on the error is given for classical protocols (labelled ``classical LB"). The benchmark based on the MLE is shown for the classical case (labelled ``classical MLE"). For a number of probes greater than or equal to 20, the upper bound on the error of quantum protocols is smaller than the lower bound on the error of classical protocols, proving we have a quantum advantage (in the shaded area).} \label{fig:position} \end{figure} This scenario is illustrated in Fig.~\ref{fig:eavesdropper}, where we consider transmission over communication lines with a loss of 10~dB. Excess noise is expressed in dimensionless shot noise units and is defined in terms of the transmissivity and the thermal number of the channel as $\epsilon = \tau^{-1} (1-\tau) \bar{n}$~\cite{pirandola_advances_2019}. We consider background excess noises of 0.01 and an excess noise for the eavesdropper of 0.1. In this case, we cannot prove a quantum advantage, although the quantum lower bound is lower than the classical lower bound. This is in accordance with the fact that we cannot meet the condition in Eq.~(\ref{eq:quant_adv_cond}) with any channel that has $\tau\leq\frac{1}{2}$. The quantum MLE benchmark is also lower than the classical MLE benchmark, but does not go below the classical lower bound. This is again likely to be caused by the classical lower bound not being tight. Another possibility is that we could have a multi-mode cable with multiple frequency channels and wish to find a channel with lower noise than the others. This is another case of discrimination between a sequence of thermal loss channels with different noises. If the transmissivity is high enough (for instance, for a short-range cable), we could potentially also model this scenario as a sequence of additive noise channels. Fig.~\ref{fig:position} illustrates this situation. We consider a sequence of 100 additive noise channels and want to find the channel with the lower induced noise.
The background channels have an induced noise of 0.03 and the target channel has an induced noise of 0.01. We can show a quantum advantage for a number of probes greater than or equal to 20. Note that, whilst we can provide a classical benchmark based on the MLE, we cannot provide a quantum MLE benchmark in the additive noise case. This is due to the fact that the squeezing parameter in Eq.~(\ref{eq:sq_param}) diverges as $\tau\to 1$, meaning that the protocol shown in Fig.~\ref{fig:MLE_protocol} cannot be enacted in the additive noise case. \section{Conclusion} In this work, we have considered the problem of channel-position finding with a passive signature, where the aim is to localize a target channel in a sequence of background channels with the same transmissivity/gain but a different induced noise. The problem can therefore be seen as a problem of environment localization. We have studied this model in the setting of bosonic systems, considering such a localization with phase-insensitive Gaussian channels, such as thermal-loss channels (with the same transmissivity but different thermal noise), noisy quantum amplifiers (with the same gain but different thermal noise), and additive noise channels (with different added noise). Using channel simulation and protocol stretching, we have determined upper and lower bounds for the optimal error probability for environment localization. These bounds hold for the most general, adaptive, multi-ary quantum discrimination protocols. By comparison with a classical benchmark, associated with the optimal performance achievable by coherent states, we have determined the mathematical conditions to have a quantum advantage. In particular, if these conditions on the noise parameters are satisfied, then it is guaranteed that quantum advantage is achieved after a certain number of probes/uses. Furthermore, we have designed an explicit protocol using TMSV states and a receiver based on photon counting and maximum-likelihood estimation that allows us to beat the classical benchmark, in some cases after a smaller number of probes than required by the general quantum bound. Finally, we have applied our study to examples connected with thermal imaging, and with eavesdropper and additive-noise localization in different communication lines or among a sequence of frequencies. In conditions of low loss, we showed quantum advantage in various cases. \smallskip \textbf{Acknowledgments.}~This work has been sponsored by the European Union via ``Quantum readout techniques and technologies'' (QUARTET, Grant agreement No 862644) and via ``Continuous Variable Quantum Communications'' (CiViQ, Grant agreement No 820466), and by the EPSRC via the Quantum Communications Hub (Grants No. EP/M013472/1 and No. EP/T001011/1). Q.Z. acknowledges support from the Office of Naval Research under Grant Number N00014-19-1-2189 and the University of Arizona.
\section*{Introduction}
The ideal proposal for the symmetry of the order parameter of an unconventional superconductor should be able to explain all of its specific experimental signatures. In the case of Sr$_2$RuO$_4$, this high standard has turned out to be most challenging. Even the candidate order parameter long considered most promising, the spin-triplet chiral $p$-wave state \cite{luke1998time, mackenzie2003superconductivity, xia2006high, maeno2011evaluation}, has recently been questioned by contradictory experiments indicating spin-singlet pairing \cite{pustogow2019constraints, ishida2020reduction}. This has prompted new proposals for the pairing symmetry, some of which have quickly gained prominence, such as the even-parity, spin-singlet, time-reversal-symmetry-breaking superposition of $d_{x^2 - y^2}$ and $g_{xy(x^2 - y^2)}$, the $(d + i g)$-wave state \cite{kivelson2020proposal, ghosh2020thermodynamic, willa2020symmetry}. In contrast to the chiral $p$-wave state, whose two constituents, the $p_x$- and $p_y$-components, are degenerate by symmetry, the $(d+ig)$-wave state has to rely on an accidental degeneracy, because $d_{x^2 - y^2}$ and $g_{xy(x^2 - y^2)}$ belong to different representations of the tetragonal point group. In our study, we scrutinize the $(d+ig)$-wave state specifically with regard to this accidental degeneracy in the presence of disorder. For this purpose, we formulate a single-band tight-binding model and apply the self-consistent $T$-matrix approximation in order to take the effect of impurity scattering on the superconducting phase into account. In this way, we examine the behavior of the two pairing channels, in particular the splitting of their transition temperatures. In the case of a double transition, we also analyze the resulting specific heat signatures.

\section*{Model of a $\bm{(d+ig)}$-wave superconductor}
\subsection*{Tight-binding model}
We consider a single-band tight-binding model on a two-dimensional square lattice, which includes nearest-neighbor (NN) and next-nearest-neighbor (NNN) hopping. In momentum space the Hamiltonian reads
\begin{align}
\mathcal{H} = \sum_{\bm{k}, s} \xi_{\bm{k}} c^{\dagger}_{\bm{k}, s} c_{\bm{k}, s} + V_{\text{pair}},
\end{align}
where $c^{\dagger}_{\bm{k}, s}$ ($c_{\bm{k}, s}$) denotes the creation (annihilation) operator of an electron with spin $s = \uparrow, \downarrow$ and momentum $\bm{k} = (k_x, k_y)$. The dispersion, which is chosen to qualitatively resemble the genuinely two-dimensional $\gamma$ band of Sr$_2$RuO$_4$, is given by
\begin{align}
\xi_{\bm{k}} = -2 t (\cos k_x + \cos k_y) - 4t' \cos k_x \cos k_y - \mu,
\end{align}
with $\mu$ the chemical potential and hopping matrix elements $t = 1$ (unit of energy) and $t' = 0.3$ (the lattice constant $a$ is set to unity). In Fig.~\ref{fig:FS} we show the Fermi surface (FS) for varying chemical potentials. The pairing potential $V_{\text{pair}}$ is restricted to the spin-singlet channel,
\begin{align}
V_{\text{pair}} = \sum_{\substack{\bm{k}, \bm{k'} \\ s_1, s_2}} V_{\bm{k}\bm{k'}} c^{\dagger}_{\bm{k}, s_1} c^{\dagger}_{-\bm{k}, -s_1} c_{-\bm{k'}, -s_2} c_{\bm{k'}, s_2},
\end{align}
where the orbital structure is given by $V_{\bm{k}\bm{k'}}$.
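For readers who wish to reproduce the band structure, the following minimal Python sketch (our own illustration, not part of the original calculation; it assumes only NumPy) evaluates the dispersion on a Brillouin-zone grid for the three chemical potentials shown in Fig.~\ref{fig:FS}:
\begin{lstlisting}
import numpy as np

# Tight-binding dispersion with NN hopping t and NNN hopping t'
# (lattice constant a = 1); parameter values taken from the text.
t, tp = 1.0, 0.3
k = np.linspace(-np.pi, np.pi, 401)
kx, ky = np.meshgrid(k, k)

def dispersion(kx, ky, mu):
    return (-2.0 * t * (np.cos(kx) + np.cos(ky))
            - 4.0 * tp * np.cos(kx) * np.cos(ky) - mu)

for mu in (0.25, 0.925, 1.175):  # chemical potentials used in Fig. 1
    xi = dispersion(kx, ky, mu)
    # Grid points with |xi_k| inside a small window trace out the FS.
    print(mu, np.mean(np.abs(xi) < 0.02))
\end{lstlisting}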
With our focus on the $(d+ig)$-wave \footnote{Another even-parity state is for instance the superposition of extended $s$-wave and $d$-wave, for which we simply use the basis function $\Phi_s(\bm{k}) = \cos k_x + \cos k_y$.}, we introduce
\begin{align}
V_{\bm{k}\bm{k'}} = \sum_{a = d, g} V_a \Phi_a (\bm{k}) \Phi_a (\bm{k}'),
\end{align}
where the even-parity basis functions are
\begin{align}
\Phi_d (\bm{k}) &= \cos k_x - \cos k_y, \\[2mm]
\Phi_g (\bm{k}) &= \sin k_x \sin k_y (\cos k_x - \cos k_y).
\end{align}
After the standard mean-field decoupling of the pairing potential, the minimization of the free energy naturally leads to the quasiparticle gap function
\begin{align}
\Delta_{\bm{k}} = &\Delta_{d} \left( \cos k_x - \cos k_y \right) \nonumber \\[2mm]
&\pm i \Delta_{g} \sin k_x \sin k_y \left( \cos k_x - \cos k_y \right),
\end{align}
which breaks time-reversal symmetry.
\begin{figure}[t!]
\centering
\includegraphics[width=0.90\columnwidth]{nodes.pdf}
\caption{ Fermi surfaces for $\mu = 0.25$ (black), $\mu = 0.925$ (gold) and $\mu = 1.175$ (red). The gap zeros of the $d$-wave are represented by the diagonal dashed lines (grey). The additional zeros of the $g$-wave are given by the horizontal and vertical dotted lines (light grey). The van Hove points with a diverging density of states are marked by the black dots. }
\label{fig:FS}
\end{figure}
The coefficients $\Delta_{d,g}$ are obtained by solving the self-consistency equation,
\begin{align}
\begin{pmatrix} \Delta_d \\ \Delta_g \end{pmatrix} = \sum_{\bm{k}} \mathcal{C}_{\bm{k}} \begin{pmatrix} V_d & 0 \\ 0 & V_g \sin^2 k_x \sin^2 k_y \end{pmatrix} \begin{pmatrix} \Delta_d \\ \Delta_g \end{pmatrix} .
\label{eqn:gapeq}
\end{align}
The factor $\mathcal{C}_{\bm{k}}$ takes the form
\begin{align}
\mathcal{C}_{\bm{k}} = -T \sum_n \frac{\left(\cos k_x - \cos k_y \right)^2}{\tilde{\omega}_n^2 + \xi_{\bm{k}}^2 + |\Delta_{\bm{k}}|^2},
\label{eqn:gapcpl}
\end{align}
where $T$ is the temperature and the renormalized Matsubara frequencies $\tilde{\omega}_n$ differ from the standard fermionic ones, $\omega_n = (2n+1)\pi k_B T$, if disorder is present, as defined in Eq.~\eqref{eqn:renMats}.

\subsection*{Disorder -- $T$-matrix approximation}
Disorder is introduced through non-magnetic impurities with a point-like potential, leading exclusively to $s$-wave scattering. As we would like to explore the whole range of scattering potential strengths, including the unitary limit where the potential exceeds the bandwidth, we employ a $T$-matrix approach, which includes multiple scattering events at the same impurity. The $T$-matrix is defined by
\begin{align}
T_{\bm{k} \bm{k}'}(i\omega_n) = U_{\bm{k} \bm{k}'} + \sum_{\bm{k}''} U_{\bm{k} \bm{k}''} G(\bm{k}'', i\omega_n) T_{\bm{k}'' \bm{k}'}(i\omega_n),
\end{align}
where $U_{\bm{k} \bm{k}'}$ is the impurity potential in $\bm{k}$ space and $G(\bm{k}, i\omega_n)$ the (normal) electron Green's function. Note that we have omitted off-diagonal terms involving the anomalous Green's function, since they vanish for unconventional states. For $s$-wave scattering, both $U_{\bm{k} \bm{k}'}$ and the $T$-matrix are scalars in momentum space,
\begin{align}
U_{\bm{k} \bm{k}'} = U, \quad T_{\bm{k} \bm{k}'}(i\omega_n) = T(i\omega_n).
\end{align}
We restrict ourselves to low impurity concentrations $c$, for which impurity interference effects can be neglected; this is justified because superconductivity is suppressed rather quickly by disorder once the mean free path becomes comparable to the zero-temperature coherence length.
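To make the structure of this approximation concrete, the following sketch (again our own simplified illustration; it is restricted to the normal state with a fixed bare propagator, whereas the actual calculation uses the self-consistently renormalized superconducting Green's function) evaluates the scalar $T$-matrix:
\begin{lstlisting}
import numpy as np

# Scalar T-matrix for point-like s-wave scatterers of strength U:
#   T(iw_n) = U / (1 - U g(iw_n)),  with the local propagator
#   g(iw_n) = (1/N) sum_k 1/(i w_n - xi_k).
# Here xi is an array of band energies xi_k sampled over the BZ.
def t_matrix(wn, xi, U):
    g = np.mean(1.0 / (1j * wn - xi))
    return U / (1.0 - U * g)

# In the unitary limit U -> infinity, T(iw_n) -> -1/g(iw_n), i.e.,
# the effective scattering strength saturates instead of diverging.
\end{lstlisting}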
Hence, the self-energy reads
\begin{align}
\Sigma (i\omega_n) = c T(i\omega_n),
\end{align}
which renormalizes the Matsubara frequencies,
\begin{align}
i \tilde{\omega}_n = i \omega_n - \Sigma (i\omega_n).
\label{eqn:renMats}
\end{align}
Using the renormalized frequencies $\tilde{\omega}_n$ in the self-consistent gap equation [Eqs.~(\ref{eqn:gapeq}, \ref{eqn:gapcpl})] enables us to examine the influence of disorder on the superposition of unconventional pairing states.
\begin{figure}[t!]
\centering
\includegraphics[width=0.95\columnwidth]{Tc_ren_ink.pdf}
\caption{ The ratio of the critical temperatures, $T_{c,d}/T_{c,g}$, as a function of the impurity concentration for different values of the chemical potential, $\mu$. We normalize the concentration values by $c_c$, which is the average of the critical concentrations, $(c_{c,d} + c_{c,g})/2$. The bare (renormalized) critical temperatures obtained from the linearized (full) self-consistent gap equations are given by the dots (squares). }
\label{fig:TcImp}
\end{figure}
\subsection*{Critical temperatures $\bm{T_{c,d}}$ and $\bm{T_{c,g}}$}
For two pairing states that belong to different representations, such as the $d$- and $g$-wave states, the respective {\it bare} critical temperatures, $T_{c,d}$ and $T_{c,g}$, are generally different. In App.~\ref{sec:app_s+id} we also briefly discuss the related case of the $(s+id)$-wave.\par
We now assume that the critical temperatures coincide in the clean system and enforce this in our model by fine-tuning the coupling strengths $V_{d,g}$ of the pairing interaction accordingly. Focusing on the behavior of the bare critical temperatures, $T_{c,d}$ and $T_{c,g}$, under the influence of disorder, we solve the linearized gap equation [Eq.~\eqref{eqn:gapeq}], which decouples for the two channels. The ratio $T_{c,d} / T_{c,g}$ displayed in Fig.~\ref{fig:TcImp} (dots) reveals two regimes as we vary the chemical potential. For $\mu = 0.25$ (smallest FS), the ratio $T_{c,d} / T_{c,g}$ decreases with growing impurity concentration $c$, while it increases for $\mu = 1.175$ (largest FS, close to the van Hove points). No change of the ratio is seen for $\mu = 0.925$. Thus, there is a fine-tuned FS where the ``degeneracy'' remains untouched. The difference in behavior is reflected in the coherence lengths of the two pairing states, which depend on the position of the FS. A simple estimate of the zero-temperature coherence length $\xi$ for a given gap function can be obtained from
\begin{align}
\xi^2 = \frac{\sum_{\bm{k}} |\nabla_{\bm{k}} \frac{\Delta_{\bm{k}}}{E_{\bm{k}}}|^2 }{\sum_{\bm{k}}|\frac{\Delta_{\bm{k}}}{E_{\bm{k}}}|^2} .
\end{align}
For a larger coherence length, $T_c$ is suppressed faster with increasing $c$. Consistently, we find $\xi_d/\xi_g \approx 1.09$ for $\mu = 0.25$ and $\xi_d/\xi_g \approx 0.96$ for $\mu = 1.175$. Intuitively, it is clear in the latter case that the $d$-wave state can profit from the enlarged density of states at the van Hove points (small Fermi velocity), while the $g$-wave state has nodes there. Hence, the $d$-wave state is more tightly bound. In general, however, on more generic Fermi surfaces, pairing states of higher angular momentum have shorter coherence lengths for a given critical temperature. The splitting of the bare critical temperatures implies the occurrence of two consecutive superconducting transitions: first into the superconducting phase, followed by a second transition breaking time-reversal symmetry.
The second transition, however, does not happen at the lower of the two bare $T_c$, but at a renormalized critical temperature, because the second order parameter has to nucleate in the presence of the first one. Thus, to determine the real onset of the second order parameter we have to solve the full self-consistency equation [Eq.~\eqref{eqn:gapeq}] for $\Delta_d$ and $\Delta_g$. The renormalization of the critical temperatures, indicated by squares in Fig.~\ref{fig:TcImp}, yields a larger splitting of the two transitions than the ratio $T_{c,d} / T_{c,g}$ would suggest. Due to the presence of the first order parameter, large parts of the states at the FS are consumed, leaving a strongly reduced density of low-energy states available for the second order parameter.

\subsection*{Specific heat for the double transition}
There are a few ways of observing superconducting double transitions. Traditionally, the specific heat has been a hallmark of such a feature in many unconventional superconductors. Thus, we would like to show here that the impurity-induced splitting of the transition could leave an observable signature in the specific heat. We use our Green's function formalism and linear response theory \cite{luttinger1960ground, keller1988free}, as shown in App.~\ref{sec:AppCV}.\par
We consider here the situation $T_{c,g} > T_{c,d}$, where the first transition leads to a $g$-wave phase and the second to the time-reversal-symmetry-breaking $(d+ig)$-wave phase. Fig.~\ref{fig:CvJump} depicts the temperature dependence of the specific heat divided by temperature, $C/T$. Clearly, a second anomaly is visible below the onset of superconductivity (see also the inset). In our calculation, the second jump is roughly 20\% of the first one, and both transitions are of second order. Furthermore, $C/T$ reaches a finite value in the zero-temperature limit due to the finite zero-energy density of states induced by the disorder.
\begin{figure}[t!]
\centering
\includegraphics[width=0.9\columnwidth]{Cv_jump.pdf}
\caption{ The specific heat divided by temperature, $C/T$, as a function of temperature $T$ for a finite impurity concentration, $c > 0$. The ratio of the critical temperatures is given by $T_{c,d}/T_{c,g} \approx 0.90$ and the chemical potential by $\mu = 0.10$. The inset (dashed lines) zooms in on the second jump of the specific heat, where the $d$-wave order parameter nucleates. }
\label{fig:CvJump}
\end{figure}

\section*{Conclusion}
Our work highlights how non-magnetic disorder influences the transition temperatures of accidentally or nearly degenerate unconventional pairing channels. Generally, the two pairing states would show a different suppression of their critical temperatures under disorder, which in turn would yield a superconducting double transition. Such a double transition would be visible in the specific heat, as shown in Fig.~\ref{fig:CvJump}. However, since time-reversal symmetry breaking would only occur at the second transition, $\mu$SR zero-field relaxation and polar Kerr effect measurements would be optimal tools to detect whether the appearance of intrinsic magnetic properties separates from the onset of superconductivity. Similarly, the renormalization of the ultrasound velocity of transverse modes would be a way to see the second transition. So far, no such features have been reported; they should therefore indeed be a target of future measurements. The scenario based on the $(d+ig)$-wave phase for Sr$_2$RuO$_4$ relies on fine-tuning in the clean limit.
Maintaining this degeneracy under disorder would mean imposing a second fine-tuning constraint.
\section*{Acknowledgements}
We would like to thank Mark H. Fischer and Roland Willa for many useful discussions. This work was financially supported by the Swiss National Science Foundation (SNSF) through Division II (No. 184739).
\section*{Acknowledgments}
This work was partially supported by the German Research Council (DFG) under grant no. DI 2097/1-2~(``REFIT'').

\bibliographystyle{ACM-Reference-Format}

\section{\system}
This section presents the cloud-based BFT system architecture \system. In particular, we focus on how the architecture achieves low latency by performing consensus only over short-distance links, how \system achieves modularity by relying on a novel message-channel abstraction, and how it can be dynamically reconfigured to adapt to workload changes.

\subsection{Architecture}
Targeting use cases in wide-area environments, \system's system architecture is distributed across multiple geographic sites. For this purpose, \system leverages the common organizational structure of state-of-the-art cloud infrastructures such as Amazon EC2~\cite{ec2-regions}, Microsoft Azure~\cite{azure-regions}, or Google Compute Engine~\cite{gce-regions} by grouping sites into \emph{regions}, as shown in Figure~\ref{fig:architecture}. The sites within a region are typically several tens of kilometers apart and represent separate fault domains, commonly referred to as \emph{availability zones}. In addition to constructing the data centers at distinct geographic locations, cloud providers also ensure that data centers in different availability zones are equipped with dedicated power supply systems and network links to minimize the probability of dependent failures. For the \system system architecture, availability zones play an important role as they allow us to place replicas in separate fault domains and still enable them to interact over short-distance links with comparatively low latency.

\headline{Replica Groups} Relying on this setting, \system is composed of multiple loosely coupled replica groups, each being distributed across different availability zones of a specific region. One of the replica groups in the system, the \emph{agreement group}, is responsible for establishing a global total order on incoming requests. The size of this group depends on the protocol it uses for consensus. Running PBFT~\cite{castro99practical}, for example, the agreement group consists of $3f_a+1$~replicas and is able to tolerate $f_a$~Byzantine faults. All other replica groups in the system, the \emph{execution groups}, host the application logic, process the ordered requests, and handle the communication with clients. Each of these groups comprises $2f_e+1$~replicas and tolerates at most $f_e$~Byzantine faults. The level of fault tolerance provided by the agreement group and the execution groups may be selected independently. Supporting multiple execution groups enables \system to scale throughput by adding/removing groups and to minimize latency by placing groups in the vicinity of clients.

\headline{Execution-Replica Registry} \system contains an execution-replica registry to provide clients with information on the locations and addresses of active replicas. The registry is a BFT service that is hosted and maintained by the agreement group. Its contents are updated by agreement replicas whenever the composition of the system changes~(see Section~\ref{sec:adaptability}).

\headline{Efficient BFT Replication} In contrast to existing ap\-proach\-es~(see Section~\ref{sec:background-approaches}), \system does not run a full-fledged and complex replication protocol over long-distance links.
Instead, all non-trivial tasks~(e.g.,~reaching consensus on requests) are carried out within a replica group using low-latency intra-region connections. Following this design principle, \system handles requests by forwarding them along a chain of stages represented by different replica groups. Specifically, clients submit their requests to their nearest execution group, which in turn forwards them to the agreement group for ordering. Once this step is complete, the agreement group instructs all execution groups to process the ordered requests. This ensures that execution-group states remain consistent without requiring the execution groups to reach consensus themselves. Having processed a request, the replicas of the execution group the client is connected to return the result. As each execution group comprises $2f_e+1$~replicas, clients are able to verify the correctness of a result solely based on the replies they receive from their local execution group.

With all communication-intensive steps being performed over intra-region links, inter-region links in \system are only responsible for forwarding the outputs of one stage to the replica group(s) constituting the next stage. In particular, this approach has the following benefits: (1)~It greatly simplifies the interaction of replicas over long-distance connections. (2)~It enables a modular design that allows different deployments to rely on different agreement protocols without the need to modify the implementation of execution replicas. (3)~As we show in Section~\ref{sec:channels}, it allows \system to use the same abstraction, a reliable message channel, for all inter-region links, thereby facilitating system implementation.

\begin{figure}
\includegraphics{figures/architecture.pdf}
\caption{\system system architecture}
\label{fig:architecture}
\end{figure}

\headline{Practical Considerations} As of this writing, all major public clouds offer several regions with at least three availability zones~(Amazon EC2:~20, Microsoft Azure:~10, Google Compute Engine:~24) and therefore support the world-wide deployment of \system execution groups that tolerate one faulty replica. In addition, Amazon~(Virginia, Oregon, Tokyo) and Google~(Iowa) also already operate regions with four or more availability zones, which consequently are candidates for hosting \system's agreement group. With public cloud infrastructures still being expanded, new regions and availability zones are added every year, increasing the deployment options for \system. Moreover, to further improve the resilience of \system, agreement and execution replicas may be distributed across different clouds, thereby reducing the dependence on a single provider~\cite{bessani13depsky,abu-libdeh10racs}. As there are several regions hosting data centers and availability zones of multiple providers~(e.g.,~Europe, North America, South America, India, Asia, and Australia), this approach also makes it possible to deploy larger agreement and execution groups that tolerate $f_a>1$ and $f_e>1$ replica failures, respectively.

Representing distinct fault and upgrade domains, availability zones are designed to enable the uninterrupted execution of services that are replicated within the same region. Despite the efforts undertaken by providers, in the past there have been rare incidents in which problems in one availability zone caused temporary availability issues in other zones belonging to the same region~\cite{aws11incident}.
In \system, if more than $f_a$~agreement replicas are unresponsive, the agreement group temporarily cannot order new requests until the replicas become available again. However, as we detail in Section~\ref{sec:protocol}, in such cases \system is still able to process weakly consistent read requests, as these operations are handled within a client's local execution group. On the other hand, if more than $f_e$~replicas of the same execution group become unavailable, affected clients can temporarily switch to a different execution group and continue to use the service.

\subsection{Inter-Regional Message Channels}
\label{sec:channels}
To support a modular design, we use an abstraction to handle all interaction between replica groups in \system: the \emph{inter-regional message channel~(\channel)}. Specifically, \channel{}s are responsible for forwarding messages from a group of sender replicas in one region to a group of receiver replicas in another region. Conceptually, \channel{}s can be viewed as an extension of BLinks~\cite{amir07customizable}; however, unlike BLinks, \channel{}s (1)~do not require messages to be totally ordered at the channel level and (2)~comprise built-in flow control. To forward information, an \channel can internally be divided into multiple subchannels providing first-in-first-out semantics. Each subchannel has a configurable maximum capacity~(i.e.,~an upper bound on the number of messages that can be concurrently in transmission) and relies on a window-based flow-control mechanism to prevent senders from overwhelming receivers. Below, we discuss the specifics of \channel{}s at a conceptual level; for possible implementations, please refer to Section~\ref{sec:implementations}.

\headline{Overview} Figure~\ref{fig:channel} presents an example \channel that comprises two subchannels and connects four senders to three receivers. Subchannels of the same \channel are independent of each other and can be regarded as distributed queues with limited capacity that distinguish messages based on unique position indices. Both senders and receivers run dedicated endpoints which together form the \channel and enable the replicas to access it. When a replica sends a message, it provides its local endpoint with the subchannel and position to use for the message~(\texttt{send()}). Similarly, to receive a message, a replica queries its local endpoint for the message corresponding to a specific subchannel and position~(\texttt{receive()}). In addition, \channel endpoints offer a method to shift the flow-control window of a subchannel~(\texttt{move\_window()}), as further discussed below.

\headline{Send Semantics} \channel{}s are not designed to exchange arbitrary messages between replicas but instead provide specific send semantics enabling \system to safely forward the decision of one replica group to another. In particular, tolerating at most $f_s$~Byzantine-faulty senders, the \channel only forwards a message after at least $f_s+1$~different senders have transmitted a message with identical content using the same subchannel and position. Consequently, in order for a message to pass the channel, at least one correct sender must have vouched for the validity of the message's content and requested its transmission. In contrast, messages solely submitted by the up to $f_s$~faulty senders have no possibility of getting through and being delivered to receivers.
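To illustrate this delivery rule, the following Python sketch shows one way a receiver-side subchannel could track submissions (a hedged illustration of our own; class and method names are assumptions, not taken from the paper's prototype):
\begin{lstlisting}
from collections import defaultdict

# A message for a position is delivered only once f_s + 1 distinct
# senders have submitted identical content for that position, i.e.,
# at least one correct sender vouches for it.
class SubchannelReceiver:
    def __init__(self, f_s):
        self.f_s = f_s
        # position -> content -> set of sender ids vouching for it
        self.votes = defaultdict(lambda: defaultdict(set))

    def on_send(self, sender_id, position, content):
        self.votes[position][content].add(sender_id)

    def receive(self, position):
        for content, senders in self.votes[position].items():
            if len(senders) >= self.f_s + 1:
                return content
        return None  # not yet deliverable
\end{lstlisting}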
\begin{figure}
\vspace{-3mm}
\begin{lstlisting}
// Endpoint interface (signatures reconstructed from the text)
void send(int subchannel, long position, Message message);
Message receive(int subchannel, long position);
void move_window(int subchannel, long position);
\end{lstlisting}
\vspace{1mm}
\includegraphics{figures/channel.pdf}
\caption{Conceptual view of an example \channel with two independent subchannels that both have a maximum capacity of ten messages~(M). Senders~($S_*$) and receivers~($R_*$) access the subchannels via their local endpoints; each endpoint manages its own subchannel-specific flow-control windows.}
\label{fig:channel}
\label{fig:interface}
\end{figure}

\headline{Authentication} \channel{}s protect all channel-internal communication with digital signatures to enable the recipient of a message to verify the integrity and the origin of the message. If an endpoint is unable to validate the authenticity of a received message, it immediately discards the message.

\headline{Flow Control} With the capacities of subchannels being limited, \channel endpoints apply a flow-control mechanism to coordinate senders and receivers. For this purpose, an endpoint manages a separate window for each subchannel, which restricts the messages a sender/receiver is able to transmit/obtain at a given time. If a subchannel's window at a sender endpoint is full, the sender cannot insert additional messages into this subchannel until the endpoint moves the window forward. In the normal case, this action is triggered by receivers calling \texttt{move\_window()} and requesting the start of the window to be shifted to a higher position. Whenever a sender endpoint learns that the window position has changed at one of the receiver endpoints, the sender endpoint sets its own window start to the $(f_r+1)$-th highest position requested by any receiver, where $f_r$~denotes the number of Byzantine-faulty receivers to tolerate. This ensures that correct sender endpoints only move their windows, and thus discard messages at lower positions, after receiving the information that at least one correct receiver has permitted such a step. Besides receiver-driven window shifts, our channels also allow senders to request an increase of the starting position of a subchannel's window. If senders opt to do so, it may become impossible for a receiver endpoint to provide the message at the position the endpoint's local replica requested. The same scenario can occur if a receiver endpoint is slow or falls behind~(e.g.,~due to a network problem) while \mbox{$f_r+1$}~other receivers have already requested the window to be moved forward. In such cases, the affected receiver endpoint aborts the \texttt{receive()} call with an exception and thereby enables its local replica to handle the situation. As discussed in Section~\ref{sec:checkpointing}, replicas react to such an exception by obtaining the missed information from other replicas.

\headline{Use in \system} \channel{}s are an essential building block of \system's modular architecture as they enable us to design a geo-replicated BFT system as a composition of loosely coupled replica groups that interact using the same channel abstraction. In particular, \system relies on two different \channel instances to perform all inter-group communication over long-distance links: the \textit{request channel} and the \textit{commit channel}. The request channel allows an execution group to forward newly received requests to the agreement group; that is, this channel is an \channel that connects $2f_e+1$~senders~(i.e.,~execution replicas) to $3f_a+1$~receivers~(i.e.,~agreement replicas).
To transmit the requests, the request channel comprises multiple subchannels, one for each client. In contrast, the commit channel only consists of a single subchannel and is used by the agreement group to inform an execution group about the totally ordered sequence of agreed requests. The commit channel is consequently responsible for forwarding the decisions of $3f_a+1$~senders to $2f_e+1$~receivers. In summary, the agreement group maintains a pair of \channel{}s~(i.e.,~one request channel and one commit channel) to each execution group.

\subsection{Request Handling}
\label{sec:protocol}
\system differentiates between requests that potentially modify application state~(``writes'') and those that do not~(``reads''). This distinction enables the system to handle requests of each category as efficiently as possible. While writes need to be applied to all execution groups to keep their states consistent, it is sufficient to process reads only at the execution group a client is connected to. Figure~\ref{fig:protocol} gives an overview of how requests flow through \system. Below, we provide details on the system's replication protocols for writes and reads. In this context, it is important to note that all messages exchanged between clients and replicas must be authenticated, for example using HMACs~\cite{tsudik92message}. For messages sent through \channel{}s, the authentication is handled by the channels. In the following, we describe \system's handling of write and read requests. The proof of correctness and liveness is deferred to the \ifextended{appendix of the paper}{extended version of the paper~\cite{eischer20resilient-extended}}.

\begin{figure}
\includegraphics{figures/protocol.pdf}
\caption{Overview of \system's replication protocol}
\label{fig:protocol}
\end{figure}

\headline{Writes} \system's protocol for writes is presented in Figure~\ref{def:pseudo-write}. To perform a write operation~$w$, a client~$c$ creates a corresponding message \msg{Write}{w, c, t_c} using a unique client-local counter value~$t_c$ and sends the message to all replicas of an execution group. In general, a client for this purpose may select any execution group in the system; however, in an effort to minimize latency, \system clients typically choose the group closest to their own site. When an execution replica receives the client's request, it first checks whether the message is correctly authenticated and whether the client has permission to access the system. If any of these checks fails, the replica discards the message. Otherwise, the replica of execution group~$e$ wraps the entire request~$r$ in a message \msg{Request}{r, e} and submits the message to the agreement group via its request channel. More precisely, unless the execution replica has already forwarded the request~(Lines~\ref{code:exec-cache-start}--\ref{code:exec-cache-end}), it moves the window of the client's subchannel to position~$t_c$ and inserts the write request at that position~(L.~\ref{code:exec-send-start}--\ref{code:exec-send-end}). Once at least $f_e+1$~members of the execution group~(i.e.,~at least one correct execution replica) have validated and forwarded the request, the request channel permits agreement replicas to retrieve the message~(L.~\ref{code:ag-receive}). This allows the agreement group to initiate the consensus process for the message~(L.~\ref{code:ag-order}), which is then performed entirely within the group's region.
Having learned that the request is committed and has been assigned the agreement-sequence number~$s$~(L.~\ref{code:ag-deliver}), an agreement replica creates a confirmation \msg{Execute}{r, s}. As write operations need to be processed by all execution groups, the agreement replica sends this message through all commit channels at position~$s$~(L.~\ref{code:ag-execute}). Once $f_a+1$~agreement replicas~(among them at least one correct replica) have sent an \textsc{Execute} message with the same content and sequence number, a commit channel enables its receivers to obtain the message~(L.~\ref{code:exec-receive}). Having done so, an execution replica processes the included request by applying the corresponding write to its local state~(L.~\ref{code:exec-exec-start}). Each replica of execution group~$e$ also returns a reply \msg{Result}{u_c, t_c} with the operation's result~$u_c$ to the client that submitted the request with counter value~$t_c$~(L.~\ref{code:exec-exec-end}). The client accepts a result after it has received $f_e+1$~replies with a matching result and counter value from different execution replicas. As we detail in Section~\ref{sec:checkpointing}, when processing writes, replicas in \system also create periodic checkpoints~(L.~\ref{code:exec-cp-gen-start}--\ref{code:exec-cp-end} and \ref{code:ag-cp-gen-start}--\ref{code:ag-cp-end}) to assist other replicas that might have fallen behind.

\begin{figure}
\vspace{2mm}\hrule\commentsize\normalfont{\begin{center}Execution Replica of Execution Group $e$\end{center}}\hrule\vspace*{2mm}\vspace{8pt}
\begin{lstlisting}
// Placeholder: pseudo code of the write-protocol handlers referenced
// in the text, comprising the forwarding of client requests via the
// request IRMC, the ordering and dissemination of Execute messages
// via the commit IRMCs, and periodic checkpoint creation.
\end{lstlisting}
\hrule
\caption{\system protocol for writes (pseudo code)}
\label{def:pseudo-write}
\end{figure}

\headline{Reads} For reads, \system offers two different operations, providing weakly consistent and strongly consistent results, respectively. To perform a weakly consistent read, a client sends a read request to all members of an execution group, which immediately respond with a result if the request is valid, as illustrated by the dashed lines in Figure~\ref{fig:protocol}. As for writes, a client verifies the result based on $f_e+1$~matching replies. Weakly consistent reads achieve low latency as they only involve communication between the client and its execution group.
As these reads are processed without further coordination with writes, they may, in the presence of concurrent writes to the same parts of the state, return stale values or fewer than $f_e+1$~matching results, similar to the optimized reads in existing BFT protocols~\cite{castro99practical,sousa15separating}. \system clients react to such stalled reads by retrying the operation or performing a strongly consistent read, which is guaranteed to produce a stable result. Strongly consistent reads in \system for the most part have the same control and data flow as writes, with one important exception. With reads not modifying the application state, it is sufficient to process them at the client's execution group. Consequently, after a read request has completed the consensus process, agreement replicas only forward it to the execution group that needs to handle the request. The \textsc{Execute}s to all other groups instead contain, for the same sequence number, only a placeholder with the client's request counter value, thereby minimizing network and execution overhead.

\subsection{Checkpointing}
\label{sec:checkpointing}
As discussed in Section~\ref{sec:channels}, an \channel{} may garbage-collect messages before they have been delivered to all correct receivers. In the normal case in which all receivers advance at similar speed, this property usually does not take effect, resulting in each receiver obtaining every message. To address exceptional cases in which a correct receiver misses messages~(e.g.,~due to a network problem), \system provides means to bring the affected receiver up to date via a checkpoint. The specific contents of a checkpoint vary depending on the receiver-replica group~(see below). Checkpoints are periodically created after a group has agreed on\,/\,processed the message for a sequence number~$s$ that satisfies $s \equiv 0 \pmod{k}$. The checkpoint interval~$k$ of a replica group is configurable and, to sustain liveness, must be smaller than the maximum capacity of the group's input \channel. The agreement-checkpoint interval~$k_a$ may be selected independently from the interval for execution checkpoints~$k_e$.

\headline{Agreement Checkpoints} Having completed the consensus process for a request for which a checkpoint is due, an agreement replica creates an agreement snapshot and includes (1)~a vector~$t$ that for each client contains the counter value~$t_c$ of the client's latest agreed request and (2)~the last \textsc{Execute} messages corresponding to the commit subchannel capacity~(L.~\ref{code:ag-cp-gen-start}--\ref{code:ag-cp-gen-end} in Figure~\ref{def:pseudo-write}). In the next step, the agreement replica computes a hash~$h$ over the snapshot and sends a message \msg{Checkpoint}{h, s} protected with a digital signature to all members of its group. Having obtained $f_a+1$~correctly signed and matching checkpoint messages for the same sequence number, a replica has proof that its snapshot is correct. At this point, the replica can move forward its separate window used to ensure the periodic creation of a new checkpoint~(L.~\ref{code:ag-win-sleep} and \ref{code:ag-win-move}) and also instruct the consensus protocol to garbage-collect preceding consensus instances~(L.~\ref{code:ag-cp-gc}).
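The following sketch illustrates this checkpoint logic (a hedged Python illustration of our own; message fields and the signature-verification helper are assumptions, not the paper's code):
\begin{lstlisting}
def checkpoint_due(s, k):
    # Checkpoints are created periodically whenever s = 0 (mod k).
    return s % k == 0

def is_stable(checkpoint_msgs, snapshot_hash, s, f_a):
    # A checkpoint becomes stable once f_a + 1 correctly signed
    # CHECKPOINT messages with a matching hash arrive for sequence s.
    vouchers = {m.sender for m in checkpoint_msgs
                if m.seq == s and m.hash == snapshot_hash
                and verify_signature(m)}  # hypothetical helper
    return len(vouchers) >= f_a + 1
\end{lstlisting}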
Agreement replicas require periodic checkpoints to continue ordering new requests, and thus there is at least one correct agreement replica that possesses both a corresponding valid checkpoint and proof of the checkpoint's correctness in the form of $f_a+1$~matching checkpoint messages. As a consequence, if a correct agreement replica falls behind and queries its group members for the latest checkpoint, the replica will eventually be able to acquire this checkpoint, verify it, and apply it in order to catch up by skipping consensus instances. In such a case, the checkpoint enables the replica to learn (1)~the request-subchannel positions at which to query the \channel for the next client requests and (2)~the \textsc{Execute}s of the skipped consensus instances~(L.~\ref{code:ag-cp-apply-start}--\ref{code:ag-cp-apply-end}).

\headline{Execution Checkpoints} Execution-group checkpointing follows the same basic workflow as in the agreement group. An execution snapshot comprises a copy of the application state and the latest reply to each client, similar to the checkpoints in Omada~\cite{eischer19scalable}. This information enables a trailing execution replica to consistently update its local state without needing to process all agreed requests. When an execution checkpoint for a sequence number~$s$ becomes stable at an execution replica, the replica moves the flow-control window of its incoming commit channel to $s+1$~(L.~\ref{code:exec-cp-start}--\ref{code:exec-cp-end}). This ensures that agreed requests are only discarded after at least one correct execution replica has collected a stable checkpoint. Note that there is no need for checkpoints to contain requests. A client moves its request subchannel's window forward by issuing a new request, thereby confirming that the old request can be garbage-collected from the \channel. This also allows execution replicas to skip forward to the current request~(L.~\ref{code:ag-receive-tooold}).

\subsection{Global Flow Control}
\label{sec:flow}
With the flow-control mechanism of an \channel only operating at the communication level between two replica groups, \system takes additional measures to coordinate the message flow at the point where the endpoints of multiple \channel{}s meet: the agreement group. Specifically, there are two types of messages~(i.e.,~new requests received through request channels and \textsc{Execute}s sent through commit channels) that have individual characteristics and are handled in different ways: (1)~With regard to incoming requests, agreement replicas represent the receiver side of request channels and therefore directly manage the positions of the channels' flow-control windows. As described in Section~\ref{sec:checkpointing}, to be able to quickly retrieve new requests, an agreement replica updates the counter value of each client's latest request each time an agreement checkpoint becomes stable. (2)~With regard to outgoing \textsc{Execute}s, in contrast, agreement replicas represent the sender side of commit channels and therefore depend on the respective execution group at the other end of each channel to move the flow-control window forward. To prevent a single execution group from delaying overall progress, agreement replicas in \system do not wait until they are able to submit a newly produced \textsc{Execute} to every outgoing commit channel.
Instead, having completed inserting an \textsc{Execute} for a sequence number~$s$ into $n_e-z$~commit channels, an agreement replica is allowed to continue; $n_e$~is the total number of execution groups in the system and $z$ a configurable value~($0 \leq z < n_e$). Once such a request is garbage-collected, a replica informs the execution groups behind trailing commit channels by updating the channels' window positions to sequence number~\mbox{$s+1$}. If an affected execution replica subsequently tries to receive \textsc{Execute}s for sequence numbers of $s$ or lower, the commit channel responds with an exception~(see~Section~\ref{sec:channels}). In reaction, the execution replica starts to seek a stable execution checkpoint, querying members of both its own group and others, in order to compensate for the missed messages.

\subsection{Adaptability}
\label{sec:adaptability}
\system's modular architecture makes it possible to dynamically change the number of execution groups in the system and thereby adjust to varying workloads. As the consensus protocol is limited to the agreement group, such a reconfiguration in \system, in contrast to traditional BFT systems, does not require complex mechanisms or subprotocols.

\headline{Adding an Execution Group} To add a new execution group~$e$ to the system, a privileged administrator client first starts the replicas of the group and then submits an \msg{AddGroup}{e, \mathcal{E}} message; $\mathcal{E}$ is a set containing the identity and address of each group member. As soon as the agreement process for this message is complete, agreement replicas establish an \channel{} pair~(i.e.,~a request channel and a commit channel) to the new execution group, update the execution-replica registry to reflect the changes, and start the reception of requests and the forwarding of \textsc{Execute}s. Trying to obtain an \textsc{Execute} for sequence number 0, the new replicas will be notified by their commit channels that they have fallen behind and consequently use the mechanism of Section~\ref{sec:flow} to fetch an execution checkpoint from another group.

\headline{Removing an Execution Group} To remove an existing execution group~$e$ from the system, the administrator client submits a \msg{RemoveGroup}{e} message that, once agreed on, causes the agreement replicas to update the execution-replica registry and close their \channel{}s to the affected group.

\subsection{Handling Faulty Clients and Replicas}
Besides enabling \system's modular architecture, \channel{}s also play a crucial role when it comes to limiting the impact faulty clients and replicas can have on the system. In this context, one \channel property is of particular importance: the fact that a channel only delivers a message after $f+1$~senders submitted it and the channel therefore has proof that at least one correct sender vouches for the message's validity~(see Section~\ref{sec:channels}). If, for example, a faulty client either sends conflicting requests to an execution group or the same request to fewer than \mbox{$f_e+1$}~execution replicas, the request channel of the affected execution group prevents the message's delivery to the agreement group. Note that in such a case the effects of the faulty client are strictly limited to the subchannel of this client, which will not deliver a request if fewer than $f_e+1$~execution replicas insert the same message. As execution replicas use a dedicated request subchannel for each client, the subchannels of correct clients remain unaffected.
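To illustrate how the per-client subchannels confine a client's impact, consider the following request-forwarding sketch on an execution replica (a hedged Python illustration of our own; the \texttt{move\_window()}/\texttt{send()} calls mirror the \channel interface, all other names are assumptions):
\begin{lstlisting}
def on_client_request(request_irmc, client_id, t_c, request,
                      last_forwarded):
    # Authentication and access-permission checks are assumed to
    # have succeeded before this point.
    if last_forwarded.get(client_id, 0) >= t_c:
        return  # already forwarded this or a newer request
    # Moving the window to t_c implicitly garbage-collects older
    # requests of this client; the channel only delivers the message
    # once f_e + 1 replicas have inserted identical content.
    request_irmc.move_window(client_id, t_c)
    request_irmc.send(client_id, t_c, request)
    last_forwarded[client_id] = t_c
\end{lstlisting}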
If faulty execution replicas collaborate with a faulty client, different agreement replicas may receive different values for this client's requests. For example, a faulty client might submit a different request~$R_1$, $R_2$, \dots, $R_{f_e+1}$ to each of the $f_e+1$~correct execution replicas of one group and provide all requests to the $f_e$~faulty execution replicas of that group. Depending on which of the request versions the faulty execution replicas transmit to which agreement replica, in such a situation it is possible that some agreement replicas obtain an $f_e+1$~quorum for request~$R_1$ while others receive $f_e+1$~matching messages for request~$R_2$ and so on. Again, the effects are limited to the faulty client's subchannel; requests of correct clients can proceed as usual. This scenario is not specific to \system, but can occur in a similar way in traditional BFT systems~\cite{castro99practical,yin03separating,veronese10ebawa,sousa15separating,eischer19scalable}, in which clients directly submit their possibly conflicting requests to the replicas performing the agreement. Consequently, all BFT protocols that tolerate faulty clients already comprise mechanisms to handle this scenario. This is usually combined with only executing client requests whose counter value is higher than the highest value processed so far for that client, which ensures that old or duplicate requests are skipped.

Besides tolerating faulty clients, agreement protocols in general also provide means that allow correct follower replicas to elect a new leader if the current leader is faulty and, for example, fails to start the consensus process for a new client request within a given timeout~\cite{castro99practical,yin03separating,veronese10ebawa,sousa15separating,eischer19scalable}. To be able to monitor the leader, follower replicas must obtain information about incoming requests. In \system, this is ensured by the fact that request channels only garbage-collect a request from a correct client if the latter has successfully obtained a valid reply. A request for which this is not the case will be uploaded to all correct members of the client's execution group and, through this group's request channel, eventually reach all correct follower agreement replicas, thereby enabling followers to hold the leader accountable. In addition, faulty agreement replicas cannot forward manipulated messages via the commit channel. As the consensus process ensures that all correct agreement replicas deliver the same total order of requests, eventually $f_a+1$~correct agreement replicas will send matching messages, enabling the execution groups to receive the correctly ordered requests. In contrast, the delivery of faulty requests sent by the faulty agreement replicas is prevented by the \channel.

\section{Background and Problem Statement}
\label{sec:background}
In this section, we present background on existing approaches and common requirements of BFT wide-area replication.

\subsection{System Model}
Our work focuses on stateful applications with strong reliability requirements whose clients are scattered across different geographic locations. To access the application, a client submits a request to the server side. We assume that both clients and servers can be subject to Byzantine faults. As a consequence, nodes~(i.e.,~clients and servers) do not trust each other and do not make irreversible decisions based on the input provided by another node alone.
For example, to tolerate up to $f$~faulty servers, a client only accepts a result after it has obtained at least $f+1$~matching replies from different servers. Besides service availability and correctness in the presence of failures, low latency is a primary concern in our target systems. Achieving this goal while keeping the states of servers consistent is inherently difficult in use cases in which clients are geographically dispersed. The problem is further complicated by the fact that we assume that the locations from which clients access the application may change over time, typically as a result of the global day/night cycle. To continuously provide low latency under such conditions, a system must offer some kind of reconfiguration mechanism enabling an adaptation to varying workloads. One possibility to achieve this, for example, is to dynamically include additional servers that are located closer to newly started clients.

\subsection{Existing Approaches}
\label{sec:background-approaches}
In the following, we elaborate on the problems associated with Byzantine fault tolerance in geo-distributed systems and discuss existing approaches to solve them.

\begin{figure}
\vspace{-3mm}
\subfloat[PBFT~\cite{castro99practical}\label{fig:pbft}] { \includegraphics{figures/pbft.pdf} }
\hfill
\subfloat[Steward~\cite{amir10steward}\label{fig:steward}] { \includegraphics{figures/steward.pdf} }
\caption{System architectures for BFT geo-replication connecting a client~(C) with leader~(L) and follower~(F) replicas.}
\end{figure}

\headline{BFT in Wide-Area Environments} The straightforward approach to offering resilience against arbitrary failures is to rely on a BFT replication protocol, for example PBFT~\cite{castro99practical}. As illustrated in Figure~\ref{fig:pbft}, PBFT requires at least $3f+1$~replicas to tolerate $f$~failures. To keep the application state consistent across replicas, PBFT ensures that replicas run an agreement protocol to decide in which order to process client requests. For this purpose, PBFT elects one of the replicas as leader~(marked $L$ in Figure~\ref{fig:pbft}) while all other replicas assume the roles of followers~($F$). Having received a new request, the leader is responsible for initiating the agreement process, which then involves multiple message exchanges between replicas. To deal with scenarios where a faulty leader does not behave according to specification, for example by ignoring a request, PBFT provides a mechanism that enables followers to depose the leader and appoint a new one. Once the agreement process is complete, all non-faulty replicas execute the request and send the result to the client, thereby enabling the client to validate the result by comparison.

Using BFT protocols such as PBFT to build resilient systems is effective but has several disadvantages in the context of geo-replication: (1)~With replicas being distributed across different geographic sites, the entire BFT~protocol needs to be executed over wide-area links, which often results in high response times. Note that this is not only true for the task of agreeing on requests during normal operation, but, for example, also for electing a new leader as part of fault handling. (2)~Due to the fact that all requests must flow through the leader, the geographic location of the leader, and in particular its position relative to the majority of followers, usually has a significant influence on latency~\mbox{\cite{sousa15separating,eischer18latency}}.
Consequently, a leader switch may decisively change a system's performance characteristics, requiring clients to deal with the associated latency volatility. (3)~As traditional BFT systems consist of only $3f+1$~replicas, it is inherently difficult for them to select suitable replica locations in cases where a large and varying number of clients are scattered across the globe. Ideally, replicas would be placed both in close distance to each other (to speed up agreement) as well as in close distance to clients (to minimize the transmission time of requests and results). For systems with just a few replicas but many clients, meeting this requirement is essentially impossible.

\headline{Weighted Voting} By assigning different weights to the votes replicas have within the consensus protocol~\cite{sousa15separating,berger19resilient}, it is feasible to introduce additional replicas while keeping response times low or even reducing them in a geo-replicated setting. Unfortunately, this comes at the cost of an increased number of messages exchanged between replicas, which can be prohibitively expensive in public-cloud settings as providers typically charge extra for wide-area traffic.

\headline{Leader Rotation} Different authors have proposed to improve performance by rotating the leader role among replicas, following the idea of enabling each client to submit requests to its nearest replica~\mbox{\cite{veronese09spin,mao09towards,veronese10ebawa}}. Results from an extensive experimental evaluation by Sousa~et~al.~\cite{sousa15separating}, however, showed that in practice this approach does not provide significant benefits compared with appointing a fixed leader at a well-connected site. Besides, leader rotation still requires the execution of a complex protocol over wide-area links.

\headline{Hierarchical System Architecture} To increase the scalability of BFT systems in wide-area settings, Amir et al. presented a hierarchical architecture as part of their Steward system~\cite{amir10steward}. As shown in Figure~\ref{fig:steward}, instead of hosting a single replica, each site in Steward comprises a cluster of replicas that run a site-local BFT agreement protocol. A key benefit of this approach is the fact that, although individual replicas still may be subject to Byzantine faults, an entire cluster can be assumed to only fail by crashing. This property at the local level enables Steward to rely on a crash-tolerant agreement protocol at the global level~(i.e.,~between sites), which compared with traditional BFT systems requires fewer phases and fewer message transmissions over wide-area links.

The efficiency enhancements made possible by its architecture enable Steward to improve performance; however, they come at the cost of an increased overall complexity that stems from the need to maintain replication protocols at two levels: within each site as well as between sites. Designing and implementing such protocols in isolation already is a non-trivial task; additionally guaranteeing a correct interplay between them is even more challenging. To ensure liveness, Steward, for example, requires timeouts at different levels to be carefully coordinated~\cite{amir10steward}. Amir et al. addressed these problems in a subsequent work~\cite{amir07customizable}, which in this paper we refer to as CFT-WAR. In contrast to Steward, in CFT-WAR each step of the wide-area protocol~(e.g.,~Paxos~\cite{lamport98part}) is handled by a full-fledged multi-phase consensus protocol at each site~(e.g.,~PBFT).
As a main advantage, this approach disentangles the protocols used for wide-area and site-internal replication. On the downside, it introduces additional overhead that in general prevents CFT-WAR from achieving response times as low as Steward's when providing the same degree of fault tolerance~\cite{amir07customizable}. Furthermore, due to performing agreement at two levels, CFT-WAR still needs to run multiple subprotocols for tasks such as leader election, one at each level. A set of additional subprotocols would be required to support the dynamic addition/removal of individual replicas or entire sites in a hierarchical system architecture, thereby further increasing complexity. To our knowledge, the ability to adjust to varying workload conditions was not a design goal of Steward and CFT-WAR, which is why the systems do not offer mechanisms for changing their composition at runtime.

\subsection{Problem Statement}
Our analysis in Section~\ref{sec:background-approaches} shows that applying existing approaches to provide BFT in a cloud-based geo-replicated environment is possible, for example with regard to safety, but cumbersome due to the associated high complexity and the lack of effective means to react to changing workloads. This observation led us to ask whether these problems can be circumvented by a BFT system architecture that is specifically tailored to the characteristics of today's cloud infrastructures. In particular, we aim for a resilient system architecture that has three properties: efficiency, modularity, and adaptability.

\headline{Efficiency} To minimize response times during both normal-case operation and fault handling, a system architecture in the ideal case does not require the execution of complex protocols over wide-area links. Instead, tasks involving multiple phases of message exchange between replicas, such as the agreement on requests, should be handled by replicas that are located in comparatively close distance to each other.

\headline{Modularity} Supporting a variety of cloud use cases with different requirements is difficult if the protocols responsible for the agreement and execution of requests are hard-wired into the BFT system architecture. To address this issue, we join other authors~\cite{amir07customizable} in aiming for an architecture that, for example, can be integrated with different consensus protocols depending on the specific demands of an application.

\headline{Adaptability} One major strength of public clouds is to quickly provide resources on demand and at various geographic locations all over the globe. A BFT system architecture should be able to leverage this feature for hosting replicas in the proximity of clients to reduce the latency with which clients access the replicated service. Specifically, if new clients are started at other sites, there should be a lightweight mechanism for dynamically adding new replicas. The same applies to means for removing replicas that are no longer of benefit because the clients in their vicinity have been shut down.

\section{\channel Implementations}
\label{sec:implementations}
In this section, we present two different variants to implement inter-regional message channels, focusing on simplicity~(\channela) and efficiency~(\channelb), respectively. Additional variants are possible, as discussed in Section~\ref{sec:related}.
\headline{\channel with Receiver-side Collection~(\channela)} The receiver endpoint of an \channel only delivers a message~$m$ for a specific subchannel~$sc$ and position~$p$ if at least $f_s+1$~senders previously instructed the channel to transmit a message with identical content for the same subchannel position~(see Section~\ref{sec:channels}). As illustrated in Figure~\ref{fig:implementations-a}, an \channela meets this requirement by having each sender endpoint~$S_x$ directly forward a \smsg{Send}{m, sc, p}{S_x, \mathcal{X}} message, thereby enabling each receiver endpoint to individually collect $f_s+1$~matching messages. To allow receivers to verify the origin and integrity of a \textsc{Send}, a sender signs messages with its private key~$\mathcal{X}$.

When a receiver requests a subchannel's flow-control window to be shifted, its receiver endpoint~$R_y$ submits a signed \smsg{Move}{sc, p}{R_y, \mathcal{Y}} message to all sender endpoints. For each receiver and subchannel, a sender endpoint stores the \textsc{Move} message with the highest position~$p$ and sets the subchannel's window start to the $f_r+1$-highest position requested by any receiver~(see Section~\ref{sec:channels}). To request a shift of a subchannel's flow-control window, sender endpoints also send \textsc{Move} messages, which the receivers process analogously.

\begin{figure}
\vspace{-2mm}
\subfloat[\channela\label{fig:implementations-a}] { \includegraphics[page=1]{figures/implementations.pdf} }
\hfill
\subfloat[\channelb\label{fig:implementations-b}] { \includegraphics[page=2]{figures/implementations.pdf} }
\caption{Overview of two possible \channel implementations.}
\label{fig:implementations}
\end{figure}

\headline{\channel with Sender-side Collection~(\channelb)} \channelb{}s minimize the number of messages transferred across wide-area links by applying the concept of \emph{collectors}~\cite{gueta19sbft}. That is, sender endpoints in \channelb{}s do not submit their \textsc{Send}s to the receiver side but, as indicated in Figure~\ref{fig:implementations-b}, instead exchange signed hashes of them within the sender group. Each sender endpoint serves as a collector, which means that the endpoint assembles a vector~$\vec{v}$ of $f_s+1$~correct signatures from different senders for the same \textsc{Send} message content~$sm$. Having obtained this vector, a collector~$S_x$ sends it in a signed \smsg{Certificate}{sm, \vec{v}}{S_x, \mathcal{X}} message to one or more receiver endpoints. On reception, a receiver verifies the validity of the \textsc{Certificate} by checking both the signature of the message and the $f_s+1$~signatures contained in the vector~$\vec{v}$. If all of these signatures are correct and match the \textsc{Send} message content~$sm$, the endpoint has proof that $sm$ is valid, as it was sent by at least one correct replica, and delivers the associated message to its receiver on request.

\channelb receiver endpoints individually select the sender endpoint serving as their current collector and announce these decisions attached to their \textsc{Move}s. As a protection against faulty collectors, all sender endpoints periodically transmit \smsg{Progress}{\vec{p}}{S_x, \mathcal{X}} messages directly to receiver endpoints in which they include a vector~$\vec{p}$ with the highest position of each subchannel for which they have a \textsc{Certificate}. If at least $f_s+1$~sender endpoints claim to have reached a certain position but a receiver's collector fails to provide a corresponding and valid \textsc{Certificate} within a configurable amount of time, the endpoint switches to a different collector.

\section{Conclusion}

The cloud-based \system system architecture models a BFT system as a collection of loosely coupled replica groups that can be flexibly distributed in geo-replicated environments. In contrast to existing approaches, \system does not require the execution of complex multi-phase protocols over wide-area links, but instead performs essential tasks such as consensus, leader election, and checkpointing across replicas residing in the same region. Our experiments show that this approach enables \system to achieve low and stable response times.

\section{Evaluation}

In this section, we experimentally evaluate \system in comparison to existing approaches for BFT wide-area replication.

\headline{Environment} To compare different techniques, we implemented a Java-based prototype that can be configured to reflect three different system architectures~(cf.~Section~\ref{sec:background-approaches}): (1)~\textbf{\bft} represents the traditional approach of distributing a single set of replicas across different geographic locations. It relies on PBFT~\cite{castro99practical} as its agreement protocol and uses HMAC-SHA-256 MACs to authenticate the messages exchanged between replicas. (2)~\textbf{\hft} employs a hierarchical system architecture running the two-level Steward protocol~\cite{amir10steward} to coordinate multiple sites that each host a dedicated cluster of replicas. Steward requires threshold cryptography, for which \hft uses the scheme proposed by Shoup~\cite{shoup00practical} based on 1024-bit RSA signatures. (3)~\textbf{\system} represents the system architecture proposed in this paper. In this evaluation, \system's agreement group runs PBFT for consensus and its \channel{}s protect their messages with 1024-bit RSA signatures.

\begin{figure}[b!]
\includegraphics{figures/eval-leader-pbft.pdf}%
\vspace{-1mm}\\
\includegraphics{figures/eval-leader-hft.pdf}%
\vspace{-.5mm}\\
\includegraphics{figures/eval-leader-spider-intra.pdf}%
\caption{50th~(\raisebox{-.3mm}{\protect\tikz \protect\node[draw, minimum width=7, minimum height=7] {};}) and 90th~(\raisebox{-.3mm}{\protect\tikz \protect\node[draw, minimum width=7, minimum height=7, postaction={pattern=crosshatch, pattern color=black}] {};}) percentiles of write latencies for different client and leader locations including Virginia~(V), Oregon~(O), Ireland~(I), and Tokyo~(T).}
\label{fig:eval-leader-all}
\end{figure}

To conduct our experiments in an actual wide-area environment, we start virtual machines~(t3.small, 2\,VCPUs, 2\,GB\,RAM, Ubuntu\,18.04.4\,LTS, OpenJDK\,11) in 4~Amazon EC2 regions across the globe~(Virginia, Oregon, Ireland, and Tokyo). In each of these regions, we deploy 50~clients that issue 100~writes/reads per second~(200~bytes each) to a key-value store provided by our systems under test; client messages carry 1024-bit RSA signatures. Given this client setting, our architectures demand the following replica placement for~$f=1$: For \bft, 1~replica is hosted in each of the 4~regions. \hft expects a cluster of 4~replicas in each region, which serves as the contact cluster for local clients. For \system, we deploy 1~execution group (3~replicas) per region, distributed across different availability zones. In addition, we start \system's 4~agreement replicas in separate Virginia availability zones.
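To make the receiver-side collection rule concrete, the following minimal sketch shows how a receiver endpoint of an \channela could track \textsc{Send} messages until $f_s+1$~matching copies from distinct senders have arrived. It is an illustration only, not code from our (Java-based) prototype; signature verification and flow control are abstracted away.

\begin{lstlisting}
# Python sketch of IRMC-RC receiver-side collection (illustrative only).
from collections import defaultdict

class RcReceiverEndpoint:
    def __init__(self, f_s: int):
        self.f_s = f_s
        # (subchannel, position) -> message content -> senders vouching for it
        self.votes = defaultdict(lambda: defaultdict(set))

    def on_send(self, sender: int, sc: str, p: int, m: bytes) -> None:
        # Assumes the SEND's signature was already verified against `sender`.
        self.votes[(sc, p)][m].add(sender)

    def deliverable(self, sc: str, p: int) -> bytes | None:
        # A message is safe to deliver once f_s+1 distinct senders vouch for
        # the same content: at least one of them must be a correct replica.
        for m, senders in self.votes[(sc, p)].items():
            if len(senders) >= self.f_s + 1:
                return m
        return None
\end{lstlisting}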
\headline{Writes} In our first experiment, we examine the latency of writes issued by clients at different sites. Based on the results presented in Figure~\ref{fig:eval-leader-all}, we make three important observations:

(1)~In all evaluated architectures, response times depend to a major degree on a client's geographic location. For \bft and \hft, clients in Virginia, for example, benefit from the fact that their local replica~(cluster) experiences comparably short round-trip times when communicating with its counterparts in Oregon and Ireland. In particular, this results in low latency when the Virginia replica~(cluster) acts as leader of the wide-area consensus protocol and is able to reach a quorum together with these two other sites. In \system, clients in Virginia also observe low write latency, but for a different reason. Here, the fact that the agreement group resides in the same region as the clients' local execution group allows clients in Virginia to achieve response times as low as 13~milliseconds.

(2)~For each client location, \system provides significantly lower latency than \bft~(up to 95\,\%) and \hft~(up~to~94\,\%). This is a direct consequence of the fact that, in contrast to the other two system architectures, \system does not execute a full-fledged replication protocol over wide-area links. Instead, a write request only has to wait for two wide-area hops: from a client's local execution group to the agreement group and back. The distribution of the ordered write request to other execution groups is handled by the agreement group and thus does not require execution groups to explicitly wait for each other. That is, when an execution replica in \system receives an \textsc{Execute} for a write from the agreement group, the replica can immediately process the operation and return a reply to the client.

(3)~The response times of \bft and \hft vary considerably depending on the position of the current leader of the wide-area consensus protocol. \hft clients in Ireland, for example, experience a 53\,\% higher latency when the leader is positioned in Tokyo compared to when the leader role is assigned to Virginia. In contrast, the specific location of the agreement-group leader in \system only has a negligible effect on overall response times because all agreement replicas reside in the same region, resulting in stable response times even across leader changes.

\begin{figure}
\vspace{-2mm}
\subfloat[Strongly consistent reads\label{fig:eval-readonly-strong}] { \includegraphics[clip, trim=0 1mm 0 0]{figures/eval-readonly-strong.pdf} }
\subfloat[Weakly consistent reads\label{fig:eval-readonly-weak}] { \includegraphics[clip, trim=0 1mm 0 0]{figures/eval-readonly-weak.pdf} }
\caption{50th~(\raisebox{-.3mm}{\protect\tikz \protect\node[draw, minimum width=7, minimum height=7] {};}) and 90th~(\raisebox{-.3mm}{\protect\tikz \protect\node[draw, minimum width=7, minimum height=7, postaction={pattern=crosshatch, pattern color=black}] {};}) percentiles of read latencies.}
\label{fig:eval-readonly}
\end{figure}
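These write-latency results admit a simple first-order model (our simplification, which ignores variance in local processing): since all agreement-internal communication stays within one region, a client~$c$'s write latency is dominated by the round trip between its region and the agreement group's region,
\[
L_{\mathit{write}}(c) \;\approx\; \mathit{RTT}\bigl(\mathit{reg}(c),\, \mathit{reg}_{\mathit{agree}}\bigr) \;+\; \delta_{\mathit{local}},
\]
where $\delta_{\mathit{local}}$ covers intra-region agreement and execution. This is consistent with the low response times observed for clients in Virginia, whose round-trip time to the agreement group is negligible.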
\headline{Reads} In our second experiment, we compare the evaluated architectures regarding the performance of their individual (fast-)paths for read operations with different consistency guarantees. As the results in Figure~\ref{fig:eval-readonly} show, response times of strongly consistent reads in \system follow a similar pattern to writes, as these reads take the same path through the system. For clients in Tokyo, this leads to slightly higher response times compared with \bft and \hft, which in this case benefit from directly querying replicas without intermediaries in between. For all other client locations, \system's approach, which only requires waiting for one wide-area round trip from a client's execution group to the agreement group and back, enables lower latency than provided by \bft and \hft. With regard to weakly consistent reads, both \hft and \system achieve response times of 2~milliseconds or less, as these operations can be entirely handled by replicas in a client's vicinity and therefore, unlike in \bft, do not require wide-area communication.

\headline{Modularity Impact} In our third experiment, we quantify the impact of our decision to design \system as a modular architecture that separates agreement from execution and consists of loosely coupled replica groups connected via \channel{}s. We create two variants of \system in which (1)~the agreement group also executes requests and is the only group in the system~(\systemze) and (2)~there is only one execution group, co-located with the agreement group in Virginia~(\systemoe). While \systemze allows us to study \system without \channel{}s and externalized execution, \systemoe lets us assess the influence of an \channel in the absence of wide-area delays. Our results show that when clients access \systemze and \systemoe from different sites, response times are dominated by the wide-area communication between clients and replicas. Thus, the modularization overhead is small and adds less than 14~milliseconds~(see Figure~\ref{fig:eval-leader-spider}).

\begin{figure}[b!]
\subfloat[Overall latency (200-byte writes)\label{fig:eval-leader-spider}] { \includegraphics{figures/eval-leader-spider.pdf} }
\subfloat[Throughput\label{fig:eval-channels-lat}] { \includegraphics{figures/eval-channels-lat.pdf} }\\
\subfloat[CPU usage\label{fig:eval-channels-cpu}] { \includegraphics{figures/eval-channels-cpu.pdf} }
\subfloat[Network usage\label{fig:eval-channels-net}] { \includegraphics{figures/eval-channels-net.pdf} }
\caption{Performance and resource usage of \channel{}s.}
\label{fig:eval-channels}
\end{figure}

\headline{\channel{} Implementations} In our fourth experiment, we evaluate the two \channel variants presented in Section~\ref{sec:implementations} by establishing a channel of each type between Virginia and Tokyo and submitting messages of different sizes. The comparison of results in Figures~\ref{fig:eval-channels-lat}--\ref{fig:eval-channels-net} confirms that the two implementations have individual characteristics. Without the need to verify signatures for \textsc{Certificate} messages, \channela sender endpoints require fewer CPU resources per message and therefore enable \channela{}s to achieve a higher maximum throughput. On the other hand, by forwarding only one wide-area message per receiver endpoint, \channelb{}s significantly reduce the amount of data transferred over long-distance links, thereby saving costs in public-cloud environments.

\headline{Adaptability} In our fifth experiment, we evaluate the write and read performance that new clients experience when they join the system at an additional location. For this purpose, we start with our usual setting and after 80~seconds launch 50~clients in the EC2 region Sao Paulo.
Once running, the new clients in \bft and \hft issue their requests to existing replicas, while in \system they contact an additional execution group that is also set up in Sao Paulo. Because it involves more client sites than replica sites for \bft and \hft, the setting in this experiment represents a typical use-case scenario for weighted-voting approaches~(see Section~\ref{sec:background-approaches}). We therefore repeat the experiment with a fourth system~(\bftwv) that extends \bft with weighted voting and comprises a replica at each of the five client locations. As required by weighted voting, two of the five replicas are assigned higher weights in the consensus protocol. Specifically, these are the replicas in Virginia and Oregon, because this weight distribution achieves the best performance in our evaluation scenario.

Figure~\ref{fig:eval-newloc-avgs} presents the results of this experiment, showing the average response times observed across all active client sites. To save space, we omit the results for strongly consistent reads as they show a similar picture to writes. For each system, we evaluate different leader locations, but for clarity Figure~\ref{fig:eval-newloc-avgs} only reports the results of the configuration achieving the lowest response times for each system.

\begin{figure}
\vspace{-2mm}
\subfloat[Writes\label{fig:eval-newloc-avg-write}] { \includegraphics{figures/eval-newloc-avg.pdf} }
\subfloat[Weakly consistent reads\label{fig:eval-newloc-avg-read}] { \includegraphics{figures/eval-newloc-avg-read.pdf} }
\caption{Impact of a new client site on overall latency.}
\label{fig:eval-newloc-avgs}
\end{figure}

Figure~\ref{fig:eval-newloc-avg-write} shows that the overall write latency increases for all evaluated architectures once the clients in Sao Paulo join the system. This is a consequence of the fact that, due to its geographic location, EC2's Sao Paulo region has comparably high transmission times to other cloud regions. Clients in Sao Paulo therefore observe response times between about 124~milliseconds (\system) and about 298~milliseconds (\bft), which alone causes the measurable jumps in the overall write-latency averages; the response times for clients in other regions remain unaffected. Interestingly, \bft and \bftwv achieve similar write performance throughout the experiment and thereby confirm that weighted voting does not automatically improve response times. It only does so when the additional replica is located at a site that is better connected than the existing ones and therefore enables the wide-area consensus protocol to form quorums faster. In the setting evaluated here, \bft's typical consensus quorum is based on the votes of the replicas in Virginia, Oregon, and Ireland and therefore already provides better performance than any combination that includes the replica in Sao Paulo.

As shown in Figure~\ref{fig:eval-newloc-avg-read}, of the evaluated architectures \system is the only one that allows the new clients in Sao Paulo to perform weakly consistent reads with low latency. While all other systems require the clients in Sao Paulo to read from at least one remote replica and consequently experience overall read-latency increases of up to 23~milliseconds, \system makes it possible to introduce an execution group in the new region to efficiently handle the reads of local clients.
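The quorum effect behind this observation can be reproduced with a small back-of-the-envelope computation: from the leader's perspective, agreement latency is bounded below by the round-trip time to the farthest replica in the fastest quorum. The following sketch illustrates this (our illustration; the RTT values are placeholders rather than measurements, and replica weights are ignored for simplicity):

\begin{lstlisting}
# Python sketch: why a poorly connected extra replica does not speed up
# consensus. The leader contributes one vote itself and needs
# quorum_size-1 votes from other replicas, so the fastest quorum consists
# of the quorum_size-1 closest replicas.
def fastest_quorum_rtt(rtt_from_leader: dict[str, float],
                       quorum_size: int) -> float:
    # (quorum_size-1)-th smallest RTT = max RTT within the fastest quorum
    return sorted(rtt_from_leader.values())[quorum_size - 2]

# Placeholder RTTs in milliseconds from a Virginia-based leader:
rtt = {"Oregon": 70.0, "Ireland": 75.0, "Tokyo": 145.0, "SaoPaulo": 115.0}
print(fastest_quorum_rtt(rtt, quorum_size=3))  # 75.0
\end{lstlisting}

Removing \texttt{"SaoPaulo"} from the dictionary yields the same result, matching the observation that the additional replica never appears in the fastest quorum.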
\begin{figure}
\includegraphics{figures/eval-leader-f2.pdf}%
\vspace{-0.5mm}
\caption{50th~(\raisebox{-.3mm}{\protect\tikz \protect\node[draw, minimum width=7, minimum height=7] {};}) and 90th~(\raisebox{-.3mm}{\protect\tikz \protect\node[draw, minimum width=7, minimum height=7, postaction={pattern=crosshatch, pattern color=black}] {};}) percentiles of write latencies for different client sites when tolerating $f=2$ faults.}
\label{fig:eval-leader-all-f2}
\end{figure}

\headline{Tolerating Two Faults} In our final experiment, we examine write latencies for settings that are configured to tolerate $f=2$ faults in each agreement and execution group. We place the additional replicas into nearby EC2 regions (Ohio, California, London, Seoul) to make use of further fault domains. The results in Figure~\ref{fig:eval-leader-all-f2} show that, due to increased communication latency within groups, both \hft and \system see a moderate increase in response times of up to 46~milliseconds compared with the $f=1$~setting, with \system still providing significantly lower latency than \bft and \hft.

\section{Introduction}

Byzantine fault-tolerant~(BFT) protocols enable a system to withstand arbitrary faults and consequently have been used to increase the resilience of a wide spectrum of \ifextended{}{\pagebreak} critical applications such as key-value stores~\mbox{\cite{padilha13augustus,padilha16callinicos,li16sarek,eischer19deterministic}}, SCADA systems~\cite{nogueira18challenges,babay18network,babay19deploying}, firewalls~\cite{bessani08crutial,garcia16sieveq}, coordination services~\cite{clement09upright,kapitza12cheapbft,behl15consensus,distler16resource,eischer19scalable}, and permissioned blockchains~\cite{sousa18byzantine,gueta19sbft}. To provide their high degree of fault tolerance, BFT protocols replicate the state of an application across a set of servers and rely on a leader-based consensus algorithm to keep these replicas consistent. This task requires several subprotocols~(e.g.,~for leader election, checkpointing, and state transfer) and multiple phases of message exchange between replicas~\cite{castro99practical}.

Unfortunately, this complexity makes it inherently difficult to achieve low latency in use cases in which the clients of an application are scattered across various geographic locations. For example, placing replicas in close proximity to each other may reduce the latency of strongly consistent requests whose execution must be coordinated by the consensus protocol between replicas. However, with replicas being located farther away from clients, this strategy also increases the response times of requests such as weakly consistent reads that do not need to be agreed on and only involve direct interaction between clients and replicas. In contrast, co-locating replicas with clients has the inverse effect of speeding up client--replica communication but adding a significant performance overhead to the agreement protocol.
Existing approaches for BFT wide-area replication aim at minimizing this overhead by (1)~applying weighted-voting schemes to reduce the quorum sizes needed to complete consensus~\cite{sousa15separating,berger19resilient}, (2)~rotating the leader role among replicas to shorten the path necessary to insert a request into the agreement protocol~\mbox{\cite{veronese09spin,mao09towards,veronese10ebawa}}, or (3)~relying on a two-level system design that deploys an entire BFT replica cluster at each client site in order to be able to use crash-tolerant replication between sites~\cite{amir10steward,amir07customizable}. In all these cases, BFT systems still need to run complex consensus-based replication protocols over wide-area links, which not only results in response-time overhead but also makes it difficult to dynamically introduce new replica sites, for example, to serve clients at new locations.

In this paper, we address these problems with \system, a cloud-based BFT system architecture for geo-replicated services that models a system as a collection of loosely coupled replica groups that are deployed in different regions. Separating agreement from execution~\cite{yin03separating}, one of the groups (``\emph{agreement group}'') establishes an order on all requests with strong consistency demands, while all other groups~(``\emph{execution groups}'') are responsible for communicating with clients and processing requests. In contrast to existing approaches, \system does not require complex wide-area protocols but instead handles tasks such as consensus, leader election, and checkpointing within a group and over short-distance links. To make this possible while still offering resilience against replica failures, \system leverages the design of today's cloud infrastructures~\cite{ec2-regions,azure-regions,gce-regions} and places the replicas of a group in different availability zones of the same region; availability zones are hosted by data centers at distinct sites and specifically engineered to represent different fault domains.

In particular, we make four contributions in this paper: (1)~We present the \system architecture and discuss how it achieves low latency for weakly consistent reads by placing execution groups close to clients, while at the same time minimizing agreement response times for strongly consistent reads and writes. (2)~We show how to design \system in a modular way so that execution groups do not depend on internals of the agreement group~(e.g.,~a specific consensus protocol). As an additional benefit, this modularity also makes it straightforward to add and remove execution groups at runtime. (3)~We introduce a wide-area BFT flow-control mechanism that exploits the special characteristics of \system to minimize complexity. Our approach is based on a simple message-channel abstraction that handles the inter-regional communication between two replica groups and prevents one group from overwhelming the other. (4)~We evaluate \system in comparison to the state of the art in BFT wide-area replication.

\section{Safety and Liveness Proof for \system}

In the following, we first provide a detailed description of the individual components of \system, along with the assumptions and definitions used for proving its correctness and liveness properties. Afterwards, we present the proof itself and conclude with pseudo code for both IRMC implementation variants (IRMC-RC and IRMC-SC).

\subsection{Fault Assumptions}

We assume that each execution group consists of $2f_e+1$~replicas and that there are up to $f_e$~faulty execution replicas per execution group. The agreement group has $3f_a+1$~replicas, of which up to $f_a$~agreement replicas may be faulty. All faults are assumed to be Byzantine. We assume a partially synchronous network with periods of synchrony that are long enough to allow the protocol to make progress~\cite{dwork88consensus}.
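For illustration, with $f_e = f_a = 1$ these assumptions translate into $2f_e+1 = 3$~replicas per execution group and $3f_a+1 = 4$~agreement replicas, which is exactly the deployment used in our evaluation; tolerating $f_e = f_a = 2$ raises the group sizes to 5 and 7~replicas, respectively.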
\subsection{Cryptographic Primitives and Assumptions}

The pseudo code uses the following cryptographic primitives:
\begin{itemize}
\item sign($m$): Digitally sign message~$m$ (e.g., using RSA).
\item valid\_sig\textsubscript{$\mathcal{E}$}($m$): Verify that the signature for message~$m$ is valid and that the signer is part of group~$\mathcal{E}$.
\item mac\textsubscript{$a,e$}($m$): Add a single MAC~(message authentication code) such that replica~$a$ authenticates message~$m$ towards replica~$e$~\cite{tsudik92message}. This primitive, for example, may be implemented using HMAC-SHA-256.
\item mac\textsubscript{$a,\mathcal{E}$}($m$): Add a MAC vector such that replica~$a$ authenticates message~$m$ to a replica group~$\mathcal{E}$~\cite{castro99practical}. It consists of a MAC for each replica in group~$\mathcal{E}$.
\item valid\_mac\textsubscript{$a,e$}($m$) and valid\_mac\textsubscript{$a,\mathcal{E}$}($m$) are used to verify these MACs.
\item unwrap\_mac($m$): Strip the added MAC from message~$m$ and return the original message.
\item h($m$): Calculate a cryptographically secure hash digest of message~$m$, for example using SHA-256.
\end{itemize}
\noindent{}We make the standard assumptions regarding cryptographic functions. We assume them to be secure, that is, a malicious replica can neither forge signatures\,/\,MACs of other replicas nor create a message $m' \neq m$ with hash~$h(m) = h(m')$.
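As a concrete, illustrative rendering of the MAC-based primitives, the following Python sketch implements them with HMAC-SHA-256; it is not taken from our prototype, and key distribution is assumed to have happened out of band:

\begin{lstlisting}
# Python sketch of the MAC and hash primitives (illustrative only).
import hashlib
import hmac

def h(m: bytes) -> bytes:
    # h(m): cryptographically secure hash digest, here SHA-256
    return hashlib.sha256(m).digest()

def mac(key: bytes, m: bytes) -> bytes:
    # mac_{a,e}(m): replica a authenticates m towards replica e using
    # the pairwise secret `key` shared by a and e (HMAC-SHA-256)
    return hmac.new(key, m, hashlib.sha256).digest()

def mac_vector(keys: dict[int, bytes], m: bytes) -> dict[int, bytes]:
    # mac_{a,E}(m): one MAC per replica in group E
    return {e: mac(k, m) for e, k in keys.items()}

def valid_mac(key: bytes, m: bytes, tag: bytes) -> bool:
    # valid_mac_{a,e}(m): constant-time comparison to avoid timing leaks
    return hmac.compare_digest(mac(key, m), tag)
\end{lstlisting}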
\subsection{Consistency Guarantees}

\system provides linearizability for write requests. Read requests with strong consistency are treated similarly, but only the designated execution group gets the full request, whereas all other groups just receive the client id and counter. Weakly consistent reads provide one-copy serializability. Section~\ref{sec:consistency-guarantees} contains the relevant proofs and definitions of the consistency guarantees.

\subsection{Definitions}

We first describe the properties provided by \system before describing the required assumptions for the agreement black-box and the checkpoint component.

\subsubsection{Properties of \system}

The definitions of E-Safety and E-Validity follow those used for Steward~\cite{amir10steward}. E-Safety~II and E-Liveness are adapted from PBFT~\cite{castro99practical}. E-Validity~II captures the usual at-most-once guarantee.

\begin{definition}[E-Safety] \label{def:e-safety} If two correct servers execute the i\textsuperscript{th} write, then these writes are identical. \end{definition}

\begin{definition}[E-Safety II] \label{def:e-safety2} The system provides linearizability regarding requests from correct clients. \end{definition}

\begin{definition}[E-Validity] \label{def:e-validity} Only a correctly authenticated write request from a client may be executed. \end{definition}

\begin{definition}[E-Validity II] \label{def:e-validity2} A write request may be executed at most once. \end{definition}

\begin{definition}[E-Liveness] \label{def:e-liveness} A correct client will eventually receive a reply to its request. \end{definition}

\subsubsection{Agreement Black-Box}

\begin{figure}
\vspace*{3mm}%
\begin{lstlisting}
interface AgreementBlackBox {
    // Request ordering of message m
    void order(m)

    // Must deliver requests in order without gaps
    // Blocking callback, that is, the agreement can only deliver the
    // next message after the previous deliver call has completed
    // Delays in deliver may cause timeouts in the agreement black-box
    // to expire
    callback deliver(s, m)

    // Forget everything before sequence number s
    // After this call no sequence number below s will be delivered
    void gc(s)
}
\end{lstlisting}
\caption{Agreement black-box interface (pseudo code)}
\label{def:ag-black-box}
\end{figure}

We assume the agreement to be a black-box with the interface shown in Figure~\ref{def:ag-black-box} and the following properties. The comments at the interface methods detail their expected behavior. We assume that the first delivered sequence number is 1.

\begin{definition}[A-Safety] \label{def:a-safety} If two correct agreement replicas deliver a message for sequence number~$s$, then these messages are identical. \end{definition}

\begin{definition}[A-Liveness] \label{def:a-liveness} If $2f+1$~correct replicas receive a message~$m$ for ordering, then eventually $f+1$~correct replicas will deliver message~$m$ and all preceding messages. \end{definition}

\begin{definition}[A-Validity] \label{def:a-validity} A correct agreement replica will only deliver correctly authenticated client requests. \end{definition}

\begin{definition}[A-Order] \label{def:a-order} A correct agreement replica will deliver a message for sequence number~$s$ only after all preceding sequence numbers were delivered or garbage collected. \end{definition}

\noindent{}These requirements are, for example, fulfilled by PBFT~\cite{castro99practical}.

\subsubsection{Checkpoint Component}

\begin{figure}
\vspace*{3mm}%
\begin{lstlisting}
interface CheckpointComponent {
    // Create and distribute own checkpoint message
    // By default only checkpoint components within a single group
    // communicate with each other (i.e., checkpoint messages are
    // only distributed within the group)
    void gen_cp(s, state)

    // Sequence numbers of delivered checkpoints must increase
    // monotonically
    // Older checkpoints must be skipped, if a newer checkpoint has
    // already been delivered
    callback stable_cp(s, state)

    // Actively fetch requested checkpoint
    void fetch_cp(s)
}
\end{lstlisting}
\caption{Checkpoint-component interface (pseudo code)}
\label{def:cp-interface}
\end{figure}

We assume that each replica has a checkpoint component with the interface from Figure~\ref{def:cp-interface} and the following properties. The comments at the interface methods detail their expected behavior.

\begin{definition}[Stable checkpoint] A checkpoint is called \emph{stable} once a correct replica collects a certificate consisting of $f+1$~valid and matching checkpoint messages. \end{definition}

\noindent{}Once a replica possesses a stable checkpoint, it will call \texttt{stable\_cp} with the checkpoint, unless it has already delivered a checkpoint with a higher sequence number.

\begin{definition}[CP-Safety] \label{def:cp-safety} A stable checkpoint was created by at least one correct replica. \end{definition}

\noindent{}As shown later on, all correct replicas in a group will create identical checkpoints for the same sequence number.
\begin{definition}[CP-Liveness] \label{def:cp-liveness} If one correct replica of a group delivers a checkpoint, then eventually all correct replicas of that group will deliver that checkpoint, unless a newer checkpoint was already delivered. \end{definition}

\begin{definition}[CP-Liveness II] \label{def:cp-liveness2} Once $f+1$~correct replicas create and distribute identical checkpoint messages, the checkpoint will eventually become stable, unless it is superseded by a newer one before. \end{definition}

\noindent{}An implementation should consider the following aspects:
\begin{itemize}
\item With an execution group size of $2f_e+1$, CP-Safety requires that each checkpoint message is authenticated using a signature.
\item In order to provide CP-Liveness, correct replicas must continuously inform\,/\,query each other about their latest stable checkpoint.
\item A checkpoint message $\langle\textsc{Checkpoint},h,s\rangle$ for sequence number~$s$ with $h=h(st)$ only contains a hash of the checkpoint state~$st$ to keep the network overhead low.
\item The full checkpoint state should only be transferred when necessary.
\end{itemize}

\subsubsection{Application}

We assume that the application is implemented as a deterministic state machine which can execute client requests and provide a reply to them. In addition, the application must be able to retrieve and apply a checkpoint. The latter functionalities are denoted as the assignment \texttt{app := app'} and as passing \texttt{app} to \texttt{cp.gen\_cp} in the pseudo code.

\begin{definition}[RSM] \label{def:rsm} Different application instances have an identical state for sequence number~$i$ when processing writes according to the same total order~\cite{schneider90implementing}. \end{definition}

\subsection{IRMC Properties}

\begin{figure}
\vspace*{3mm}%
\begin{lstlisting}
/* Sender endpoint */
interface IRMCSender {
    // If position p lies within the subchannel window, the message
    //   is handed to the channel for transmission
    // If p is beyond the window's upper limit, the call blocks until
    //   the window has moved far enough
    // If p is below the window's lower limit, the call returns
    //   without effect
    void send(sc, p, m)

    // Ask receiver endpoint to move the window forward
    // The receiver endpoint will internally call move_window with
    //   the f_s+1-highest position requested by the senders
    void move_window(sc, p)
}

/* Receiver endpoint */
interface IRMCReceiver {
    // Blocks until (1) a message for position p becomes available or
    //   (2) the window was moved past p, in which case <TooOld, p'>
    //   is returned
    // Position p must lie within or after the subchannel window
    message receive(sc, p)

    // Move the start of the subchannel window forward to position p
    void move_window(sc, p)
}
\end{lstlisting}
\caption{IRMC interfaces (pseudo code)}
\label{def:irmc-interface}
\end{figure}

The sender and receiver endpoint interfaces of the IRMC are shown in Figure~\ref{def:irmc-interface}. As before, the comments specify the expected behavior of the methods. All sender replicas are contained in the set~$R_s$ and all receiver replicas in~$R_r$. The capacity of an IRMC (subchannel) is denoted as $|IRMC|$ and is assumed to be $\geq 1$. It is identical for all subchannels of an IRMC. $IRMC_{sc}.win$ refers to the window of subchannel~$sc$, which is initialized to start at 1. $min(IRMC_{sc}.win)$ and $max(IRMC_{sc}.win)$ return the lower and upper limit (inclusive) of the window of subchannel~$sc$, respectively. $receive(sc, p) = m$ denotes that the receive call returned the message~$m$.
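As a worked example of these window semantics (our illustration): with a capacity of $|IRMC_{sc}| = 3$ and $min(IRMC_{sc}.win) = 5$, the window covers positions 5--7. A $send(sc, 8, m)$ call blocks until the window has moved forward, $send(sc, 4, m)$ returns without effect, and $receive(sc, 4)$ yields $\langle\textsc{TooOld}, 5\rangle$, telling the receiver to retry at position~5.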
\begin{definition}[IRMC-Correctness I] \label{def:irmc-correctness1} Receive only returns a message sent by a correct sender:\\ $receive(sc, p) = m \rightarrow$ a correct sender called $send(sc, p, m)~\wedge$ the receiver called $move\_window(sc, p')$ such that $p' \leq p < p' + |IRMC_{sc}|$. \end{definition}

\begin{definition}[IRMC-Correctness II] \label{def:irmc-correctness2} Moving a window requires a move request by at least one correct replica:\\ $receive(sc, p) = \langle\textsc{TooOld}, p'\rangle$ with $p' > p \rightarrow$ a correct sender called $move\_window(sc, \hat{p})$ with $\hat{p}\geq p'~\vee$ a correct receiver called $move\_window(sc, \hat{p})$ with $\hat{p}\geq p'$. \end{definition}

\begin{remark} \label{def:irmc-remark} Calls to \texttt{send} block if the requested position is after the upper limit of the current subchannel window. Calls to \texttt{receive} block if the position is in or after the subchannel window and the corresponding message was not yet received by the IRMC. \end{remark}

\begin{definition}[IRMC-Liveness I] \label{def:irmc-liveness1} An identical message sent (\texttt{send} method call has returned) by at least $f_s+1$~correct replicas will eventually cause some message to be received by all correct receivers unless it is skipped (see also IRMC-Correctness II):\\ If $f_s+1$~correct senders call $send(sc,p,m)$, then eventually $\forall$~correct $r \in R_r$ that call(ed) $receive(sc, p)$: $receive(sc, p) = *~\vee~receive(sc, p) = \langle\textsc{TooOld}, p'\rangle$ with $p' > p$. \end{definition}

\begin{remark} Due to IRMC-Correctness I, the received message can only be one that was sent by at least one correct sender. \end{remark}

\begin{definition}[IRMC-Liveness II] \label{def:irmc-liveness2} Send calls return once the position is below the subchannel window's upper bound:\\ If $f_r+1$~correct receivers $r \in R_r$ call $move\_window(sc, p_r)$, then eventually all $send(sc, p', m)$ calls will have returned on all correct sender replicas where $p' < \tilde{p} + |IRMC_{sc}|$ and $\tilde{p} = f_r+1$-largest $p_r$. \end{definition}

\begin{definition}[IRMC-Liveness III] \label{def:irmc-liveness3} Receiver endpoints will move the window at least as far as the $f_s+1$-highest \texttt{move\_window} request by a sender replica:\\ If $f_s+1$~correct senders call $move\_window(sc, p_s)$, then eventually all correct receiver endpoints will have (internally) called $move\_window(sc, p)$ with $p$ such that largest $p_s \geq p \geq f_s+1$-largest $p_s$. \end{definition}

\begin{remark} Note that if a receiver endpoint has already moved a subchannel window to a higher position than~$p$, then the call to \texttt{move\_window} has no effect.
\end{remark}

\subsection{\system Pseudo Code}

\lstset{ numbers=left, }

\begin{figure}
\vspace*{3mm}%
\begin{lstlisting}
t_c := 1, replies := {}, result := nil

write(w):(*@\label{code:client-write}@*)
  // Authenticate request
  r := sign(<WRITE, w, c, t_c>)(*@\label{code:client-sign}@*)
  // Repeat sending until reply was received
  repeat until result != nil:(*@\label{code:client-loop}@*)
    broadcast mac(r) to execution group
    sleep for resend timeout
  // Accept reply and prepare next request
  res := result; result := nil; replies := {}
  t_c := t_c + 1(*@\label{code:client-counter}@*)
  return res

on receive m = <REPLY, e, t, res> from execution replica e:
  // Only process correctly authenticated replies
  // for the current request
  if not valid_mac(m) or t != t_c: return(*@\label{code:client-precheck}@*)
  // Each replica may only send one vote
  replies[e] := res
  // Return reply after receiving f_e+1 matching replies
  if exists res' with |{e : replies[e] = res'}| >= f_e+1:(*@\label{code:client-check}@*)
    result := res'
\end{lstlisting}
\caption{Client $c$ (pseudo code)}
\label{def:pseudo-client}
\end{figure}

\begin{figure}
\vspace*{3mm}%
\begin{lstlisting}
app = application, cp = checkpoint component
$r_\mathcal{E}$ = request IRMC sender, $c_\mathcal{E}$ = commit IRMC receiver
t := {}, u := {}, s_n := 0

on receive m = <WRITE, w, c, t_c> with MAC from client c:(*@\label{code:exec-recv-write}@*)
  // Ignore invalid requests
  if not valid_mac(m): return(*@\label{code:exec-check-mac}@*)
  if not valid_sig(unwrap_mac(m)): return(*@\label{code:exec-check-sig}@*)
  // Check if a reply is available for the request
  if u[c] = (t_c, res):
    send mac(<REPLY, e, t_c, res>) to client c(*@\label{code:exec-resend-reply}@*)
    return
  // Silent return on retry with no result yet
  if t_c <= t[c]: return
  // Execution replicas must be able to forward a request once
  // This also applies for the latest client request if an execution
  // replica already has a reply
  t[c] := t_c(*@\label{code:exec-counter}@*)
  // Notify agreement of new request
  $r_\mathcal{E}$.send(c, t_c, unwrap_mac(m))(*@\label{code:exec-send}@*)
  $r_\mathcal{E}$.move_window(c, t_c)(*@\label{code:exec-move-win}@*)

main loop:(*@\label{code:exec-main}@*)
  while true:
    x := $c_\mathcal{E}$.receive(0, s_n + 1)(*@\label{code:exec-receive}@*)
    if x = <TooOld, s'>:
      // Executor missed some requests
      cp.fetch_cp(s' - 1)
    else:
      // x = <EXECUTE, r, s_n+1> with r = <WRITE, w, c, t_c>(*@\label{code:exec-exec-msg}@*)
      // Filter duplicates
      if u[c] = nil or t_c > counter(u[c]):(*@\label{code:exec-skip}@*)
        res := app.execute(w)(*@\label{code:exec-execute}@*)
        u[c] := (t_c, res)(*@\label{code:exec-store-rep}@*)
        send mac(<REPLY, e, t_c, res>) to client c(*@\label{code:exec-send-reply}@*)
      s_n := s_n + 1(*@\label{code:exec-inc-steps}@*)
      if s_n mod CP-INTERVAL = 0:
        cp.gen_cp(s_n, (u, app))(*@\label{code:exec-gen-cp}@*)

on cp.stable_cp(s, (u', app')):(*@\label{code:exec-stable-cp}@*)
  // Allow garbage collection of commit IRMC
  $c_\mathcal{E}$.move_window(0, s + 1)(*@\label{code:exec-cp-move}@*)
  if s > s_n:(*@\label{code:exec-cp-guard}@*)
    s_n := s; u := u'(*@\label{code:exec-cp-update}@*)
    app := app'
\end{lstlisting}
\caption{Execution replica $e$ (pseudo code)}
\label{def:pseudo-execution}
\end{figure}

\begin{figure}
\vspace*{3mm}%
\begin{lstlisting}
// Force agreement to periodically create a checkpoint
// every AG-WINDOW sequence numbers
ag = agreement black-box, cp = checkpoint component
for each execution group $\mathcal{E}$:
  $r_\mathcal{E}$ = request IRMC receiver, $c_\mathcal{E}$ = commit IRMC sender
s_n := 0, s_cp := 0, t := {}, t+ := {}, hist := <>

parallel for each client c:
  while true:
    x := $r_\mathcal{E}$.receive(c, t+[c])(*@\label{code:agree-client-receive}@*)
    if x = <TooOld, t'>:
      // Client already sent a newer request
      t+[c] := t'(*@\label{code:agree-too-old}@*)
    else:
      // x = r, the client request with counter t+[c]
      // Order request and wait for the next one
      ag.order(x)
      t+[c] := t+[c] + 1(*@\label{code:agree-next-req}@*)

// In-order without gaps between sequence numbers, blocks agreement,
// blocking can cause agreement timeouts to expire
on ag.deliver(s, r = <WRITE, w, c, t_c>):(*@\label{code:agree-deliver}@*)
  // Sleep if agreement must create a new checkpoint
  sleep until s <= s_cp + AG-WINDOW
  // Update state with new request
  s_n := s; t[c] := t_c(*@\label{code:agree-deliver-update}@*)
  t+[c] := max(t+[c], t_c + 1)(*@\label{code:agree-next-deliver}@*)
  // Old entries drop out of hist once it holds $|c_{\mathcal{E},0}|$ messages
  hist := hist + <EXECUTE, r, s>(*@\label{code:agree-hist}@*)
  parallel for each execution group $\mathcal{E}$:
    $c_\mathcal{E}$.send(0, s, <EXECUTE, r, s>)(*@\label{code:agree-deliver-send}@*)
  sleep until completed for $n_e - z$ execution groups
  // Not completed parallel calls continue in the background
  if s mod AG-WINDOW = 0:
    cp.gen_cp(s, (t, hist))(*@\label{code:agree-gen-cp}@*)

on cp.stable_cp(s, (t', hist')):(*@\label{code:agree-stable-cp}@*)
  s_cp := max(s_cp, s)
  // Move commit window forward
  parallel for each execution group $\mathcal{E}$:
    $c_\mathcal{E}$.move_window(0, s - |hist'| + 1)(*@\label{code:agree-cp-move}@*)
  ag.gc(s + 1)
  if s > s_n:(*@\label{code:agree-cp-guard}@*)
    s_n := s; t := t'; hist := hist'(*@\label{code:agree-cp-update}@*)
    parallel for each execution group $\mathcal{E}$:
      // Add missing requests from hist to commit IRMC
      for each <EXECUTE, r, s'> in hist':
        $c_\mathcal{E}$.send(0, s', <EXECUTE, r, s'>)(*@\label{code:agree-cp-send}@*)
    sleep until completed for $n_e - z$ execution groups
\end{lstlisting}
\caption{Agreement replica $a$ (pseudo code)}
\label{def:pseudo-agreement}
\end{figure}

The pseudo code for the client is shown in Figure~\ref{def:pseudo-client}, for the execution replica in Figure~\ref{def:pseudo-execution}, and for the agreement replica in Figure~\ref{def:pseudo-agreement}. We assume that each method is executed atomically, unless it calls a blocking method, at which point execution may switch to other methods. Variable definitions are written as \texttt{var := value}, whereas \texttt{=} is used for comparisons and for destructuring of values; for example, $x = \langle\textsc{Execute}, r, s'\rangle$ uses the value in~$x$ to define $r$ and $s'$ via pattern matching.
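To complement the pseudo code, the following runnable Python sketch mirrors the client's reply-collection step from Figure~\ref{def:pseudo-client} (illustrative only; the \texttt{Reply} type and function names are ours, and MAC verification is assumed to have happened beforehand):

\begin{lstlisting}
# Python sketch of the client-side reply quorum check (illustrative).
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class Reply:
    replica: int   # id of the execution replica that sent the reply
    counter: int   # client request counter t_c the reply refers to
    result: bytes  # application-level result

def stable_result(replies: list[Reply], t_c: int, f_e: int) -> bytes | None:
    """Return the result once f_e+1 replicas sent matching replies."""
    # Keep at most one reply per replica: each replica gets one vote.
    votes = {r.replica: r.result for r in replies if r.counter == t_c}
    for result, count in Counter(votes.values()).items():
        if count >= f_e + 1:  # at least one vote is from a correct replica
            return result
    return None
\end{lstlisting}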
\subsection{Proof}

The proof primarily considers write requests. We assume for now that there is only one execution group, that is, $n_e=1$ and $z=0$. Later on, we will relax this assumption. Strongly and weakly consistent read requests are considered afterwards. We write ``L.~\ref{def:pseudo-client}.\ref{code:client-write}'' to refer to Line~\ref{code:client-write} in Figure~\ref{def:pseudo-client}.

\subsubsection{\hspace{1mm}Agreement-Checkpoint~~Equivalence~~(CP-A-Equivalence)}

\begin{definition}[CP-A-Equivalence] \label{def:cp-a-equivalence} The state of an agreement replica ($s_n$, $t$, $hist$, and queued commit IRMC messages) that has reached sequence number~$s$ via processing \texttt{ag.deliver}($s, r$)~(L.~\ref{def:pseudo-agreement}.\ref{code:agree-deliver}) is equivalent to that of a replica that reaches sequence number~$s$ by applying a checkpoint for sequence number~$s$. \end{definition}

\begin{proof} We prove this by induction.\\
\indent{}\emph{Base case}: All correct agreement replicas initialize $s_n$, $t$, $hist$, and the commit IRMCs with identical values. There is no checkpoint for that sequence number, as no checkpoint was generated yet.\\
\indent{}\emph{Induction step}: All correct agreement replicas pass through the same states by processing ordered requests or jump forward to one of those states via a checkpoint. As updates to the considered state parts are only made in either \texttt{ag.deliver}~(L.~\ref{def:pseudo-agreement}.\ref{code:agree-deliver}) or \texttt{cp.stable\_cp}~(L.~\ref{def:pseudo-agreement}.\ref{code:agree-stable-cp}), it suffices to show that when either of them updates $s_n$ to a certain sequence number, the resulting replica states are equivalent. Note that the sequence number~$s_n$ increases monotonically, as \texttt{ag.deliver} is per A-Order~\ref{def:a-order} only called for increasing sequence numbers and \texttt{cp.stable\_cp} only increases the value of $s_n$~(L.~\ref{def:pseudo-agreement}.\ref{code:agree-cp-guard}).

Assume that from a common starting point, replicas reach sequence number~$s$ by processing \texttt{ag.deliver}($s, r$)~(L.~\ref{def:pseudo-agreement}.\ref{code:agree-deliver}): Per A-Safety~\ref{def:a-safety} and A-Order~\ref{def:a-order}, all correct agreement replicas receive the same sequence of requests via their \texttt{ag.deliver} callback, that is, $s_n$, $t$, and $hist$~(L.~\ref{def:pseudo-agreement}.\ref{code:agree-deliver-update}) evolve identically on those replicas. Therefore, a possible later call to \texttt{cp.gen\_cp}($s$, ($t, hist$))~(L.~\ref{def:pseudo-agreement}.\ref{code:agree-gen-cp}) for a sequence number~$s$ has identical parameters on all correct agreement replicas. As per CP-Safety~\ref{def:cp-safety} only checkpoints which were created by at least one correct replica can become stable, any call of \texttt{cp.stable\_cp}($s$, ($t'$, $hist'$))~(L.~\ref{def:pseudo-agreement}.\ref{code:agree-stable-cp}) can only deliver that checkpoint for sequence number~$s$.

Applying a checkpoint for the current or an older sequence number~$s \leq s_n$ does not change $s_n$, $t$, and $hist$~(L.~\ref{def:pseudo-agreement}.\ref{code:agree-cp-guard}). Applying a checkpoint for a newer sequence number~$s > s_n$ atomically sets $s_n$, $t$, and $hist$ to the state they had when the checkpoint was created~(L.~\ref{def:pseudo-agreement}.\ref{code:agree-cp-update}) and adds missing requests (i.e.,~those skipped by updating $s_n$) to the commit IRMCs. The call to \texttt{ag.gc}($s+1$), which happens atomically with the state update, ensures that \texttt{ag.deliver} will only be called for sequence numbers $\geq s+1$. Per A-Order~\ref{def:a-order}, the next \texttt{ag.deliver} call must be for $s_n+1 = s+1$.
When called for an old checkpoint ($s \leq s_n$), \texttt{$c_\mathcal{E}$.move\_window}~(L.~\ref{def:pseudo-agreement}.\ref{code:agree-cp-move}) has no effect, as a \texttt{$c_\mathcal{E}$.send} call for $s_n$ must already have been issued, such that the IRMC has queued messages at least up to sequence number~$s$. Therefore $max(c_{\mathcal{E},0}.win) \geq s \Leftrightarrow min(c_{\mathcal{E},0}.win) \geq s-|c_{\mathcal{E},0}|+1$, that is, the window start is at least at the position requested by the \texttt{$c_\mathcal{E}$.move\_window} call; see also the remark below. For a newer checkpoint, as $|hist'| = |c_{\mathcal{E},0}|$, this together with moving the window forward from the sender side (per IRMC-Liveness II~\ref{def:irmc-liveness2} and IRMC-Liveness III~\ref{def:irmc-liveness3}) is enough to completely replace the state of the IRMC, if necessary. Requests that were already contained in the IRMC must be identical, as the message sent for a specific sequence number~$s$ in \texttt{ag.deliver} or \texttt{cp.stable\_cp}~(L.~\ref{def:pseudo-agreement}.\ref{code:agree-deliver-send} and \ref{def:pseudo-agreement}.\ref{code:agree-cp-send}) must be identical per induction assumption. \end{proof}

\begin{remark} \texttt{$c_\mathcal{E}$.move\_window}~(L.~\ref{def:pseudo-agreement}.\ref{code:agree-cp-move}) is actually called with $s-|hist'|+1$, which has the same effect as $s-|c_{\mathcal{E},0}|+1$, such that we assume $|hist'| = |c_{\mathcal{E},0}|$ in the following to simplify the presentation of the proof. As the first delivered agreement sequence number is 1 and for every delivered request a new message is added to $hist$~(L.~\ref{def:pseudo-agreement}.\ref{code:agree-hist}), the size of $hist$ is $|hist| = min(s_n,|c_{\mathcal{E},0}|)$. Thus, when applying a checkpoint, $s - |hist'|+1 = s - min(s,|c_{\mathcal{E},0}|)+1 = max(1, s-|c_{\mathcal{E},0}|+1)$. As $min(c_{\mathcal{E},0}.win)$ is initialized with $1$ and \texttt{$c_\mathcal{E}$.move\_window} ignores calls which move the window backwards, $s-|c_{\mathcal{E},0}|+1$ is equivalent to $s - |hist'|+1$. \end{remark}

\subsubsection{Execution Safety (E-Safety)}

To prove property E-Safety~\ref{def:e-safety} we start with the following lemma:

\begin{lemma} \label{def:e-safety-commit} When two execution replicas~$e_1$ and $e_2$ receive messages $m$ and $m'$ at position~$p$ in the commit channel, then $m = m'$. \end{lemma}

\begin{proof} \label{proof:e-safety} We prove this by contradiction. Assume that $m\neq m'$. Per IRMC-Correctness I~\ref{def:irmc-correctness1}, \texttt{$c_\mathcal{E}$.receive}($0, p$)~(L.~\ref{def:pseudo-execution}.\ref{code:exec-receive}) only delivers a message~$m$ that was sent by a correct agreement replica; the same holds for $m'$. Therefore \texttt{$c_\mathcal{E}$.send}($0, p, m$) and \texttt{$c_\mathcal{E}$.send}($0, p, m'$) (either at L.~\ref{def:pseudo-agreement}.\ref{code:agree-deliver-send} or \ref{def:pseudo-agreement}.\ref{code:agree-cp-send}) must each have been called by a correct agreement replica. For the \texttt{$c_\mathcal{E}$.send} call in \texttt{ag.deliver}, the agreement black-box must have delivered messages~$m$ and $m'$ on two correct replicas, which contradicts A-Safety~\ref{def:a-safety}. And according to CP-A-Equivalence~\ref{def:cp-a-equivalence}, the \texttt{$c_\mathcal{E}$.send} when applying a checkpoint in \texttt{cp.stable\_cp} is equivalent to the previous send call in \texttt{ag.deliver}, which contradicts the assumption.
\end{proof}

\noindent{}With this we can prove E-Safety~\ref{def:e-safety}:

\begin{corollary} An execution replica only executes requests received from the commit channel (compare L.~\ref{def:pseudo-execution}.\ref{code:exec-receive} -- \ref{def:pseudo-execution}.\ref{code:exec-execute}), which according to Lemma~\ref{def:e-safety-commit} cannot receive different requests on different correct execution replicas. \end{corollary}

\subsubsection{Execution Checkpoint Equivalence (CP-E-Equivalence)}

\begin{definition}[CP-E-Equivalence] \label{def:cp-e-equivalence} The state of an execution replica~($s_n$, $app$, and $u$) that has reached sequence number~$s_n$ via processing the corresponding \textsc{Execute} message~(L.~\ref{def:pseudo-execution}.\ref{code:exec-exec-msg}) for $s_n$ is equivalent to that of a replica that arrives there via a checkpoint for sequence number~$s_n$. \end{definition}

\noindent{}The proof follows along the lines of CP-A-Equivalence~\ref{def:cp-a-equivalence}.

\begin{proof} We prove this by induction.\\
\indent{}\emph{Base case}: All correct execution replicas initialize $s_n$, $app$, and $u$ with identical values. There is no checkpoint for that sequence number, as no checkpoint was generated yet.\\
\indent{}\emph{Induction step}: All correct execution replicas pass through the same states or jump forward to one of those states via a checkpoint. As updates to the considered state parts are only made in either the main loop~(L.~\ref{def:pseudo-execution}.\ref{code:exec-main}) or \texttt{cp.stable\_cp}~(L.~\ref{def:pseudo-execution}.\ref{code:exec-stable-cp}), it suffices to show that when either of them updates $s_n$ to a certain sequence number, the resulting replica states are equivalent. Note that the sequence number~$s_n$ increases monotonically, as the main loop only increments it~(L.~\ref{def:pseudo-execution}.\ref{code:exec-inc-steps}) and \texttt{cp.stable\_cp} only increases the value of $s_n$~(L.~\ref{def:pseudo-execution}.\ref{code:exec-cp-guard}).

Assume that from a common starting point, replicas reach sequence number~$s_n$ by processing the corresponding \textsc{Execute} message~(L.~\ref{def:pseudo-execution}.\ref{code:exec-exec-msg}): As \texttt{$c_\mathcal{E}$.receive}($0, s_n+1$)~(L.~\ref{def:pseudo-execution}.\ref{code:exec-receive}) is called sequentially (without skipping) for each sequence number and per E-Safety~\ref{def:e-safety} all correct execution replicas process identical requests for each sequence number, the (atomic) modifications of $s_n$, $u[c]$, and $app$ in the main loop (L.~\ref{def:pseudo-execution}.\ref{code:exec-execute} and following) are identical across execution replicas. Either all correct execution replicas come to the identical decision to skip execution of request~$r$~(L.~\ref{def:pseudo-execution}.\ref{code:exec-skip}) based on $u[c]$, which per induction assumption is identical across replicas, or according to the RSM property~\ref{def:rsm} the execution replicas arrive at identical $u[c]$ and \texttt{app} for $s_n$ after processing~$r$.
Therefore, a call to \texttt{cp.gen\_cp}($s$, ($u$, \texttt{app}))~(L.~\ref{def:pseudo-execution}.\ref{code:exec-gen-cp}) for sequence number~$s$ has identical parameters on all correct execution replicas, and thus per CP-Safety~\ref{def:cp-safety} \texttt{cp.stable\_cp}($s$, ($u'$, \texttt{app'}))~(L.~\ref{def:pseudo-execution}.\ref{code:exec-stable-cp}) can only deliver that checkpoint. Applying a checkpoint for the current or an older sequence number~$s \leq s_n$ does not change $s_n$, $app$, and $u$~(L.~\ref{def:pseudo-execution}.\ref{code:exec-cp-guard}). Applying a checkpoint for a newer sequence number~$s > s_n$ atomically sets $s_n$, $app$, and $u$ to the state they had when the checkpoint was created~(L.~\ref{def:pseudo-execution}.\ref{code:exec-cp-update}). Later calls to \texttt{$c_\mathcal{E}$.receive}~(L.~\ref{def:pseudo-execution}.\ref{code:exec-receive}) will request the next sequence number after the checkpoint. \texttt{$c_\mathcal{E}$.move\_window}~(L.~\ref{def:pseudo-execution}.\ref{code:exec-cp-move}) will cause any \texttt{$c_\mathcal{E}$.receive} calls for an old sequence number to finish with a \textsc{TooOld} message and request a sequence number after the checkpoint on the next iteration. \end{proof}

\subsubsection{Execution Safety II (E-Safety II)}
\label{def:e-safety2-proof}

\begin{lemma} \label{def:client-result} When a client accepts a reply for its request, then that reply is correct and correct execution replicas provide the same reply. \end{lemma}

\begin{proof} A client waits for replies~(L.~\ref{def:pseudo-client}.\ref{code:client-loop}) from $f_e+1$~different replicas of its execution group with the same content~(L.~\ref{def:pseudo-client}.\ref{code:client-precheck} and \ref{def:pseudo-client}.\ref{code:client-check}), such that per fault assumption at least one of the replies is from a correct execution replica. As shown in CP-E-Equivalence~\ref{def:cp-e-equivalence}, all correct execution replicas that process a request arrive at the same state and result. That result is either sent directly to the client~(L.~\ref{def:pseudo-execution}.\ref{code:exec-send-reply}) or retrieved from $u[c]$ on a request retry~(L.~\ref{def:pseudo-execution}.\ref{code:exec-resend-reply}). \end{proof}

\noindent{}We can now prove E-Safety II~\ref{def:e-safety2}:

\begin{proof} In order to prove that \system provides linearizability, we have to show that requests issued at any point in time are always executed after all requests for which a client has accepted the reply, and that the execution follows the application's specification~\cite{herlihy90linearizability}. The latter part of the requirement was already shown in CP-E-Equivalence~\ref{def:cp-e-equivalence}, which uses the fact that requests are executed~(L.~\ref{def:pseudo-execution}.\ref{code:exec-execute}) in a total order. This also guarantees that at least one correct replica has processed the \textsc{Execute} message for each sequence number. An executed request must have been delivered by the agreement black-box (see the proof in Section~\ref{proof:e-safety} for E-Safety~\ref{def:e-safety}). Assume that the execution replicas have executed request~$r$, which was ordered at sequence number~$s$. Now let the execution replicas afterwards execute a request~$r'$ which was ordered at a sequence number~$s'$ with $s' < s$. However, as execution replicas only process requests in order, this contradicts the assumption that $r$ was already executed.
Thus, new requests are always ordered and executed at a sequence number higher than that of previously executed requests. Per Lemma~\ref{def:client-result}, a client cannot receive different replies from correct execution replicas. That is, as soon as a single correct execution replica sends a reply to the client, which by construction happens before that client has accepted the reply, later requests are always ordered at a higher sequence number. \end{proof}

\begin{remark} The request IRMCs do not matter for E-Safety~\ref{def:e-safety} and E-Safety II~\ref{def:e-safety2}, as the agreement black-box is safe independent of its input. \end{remark}

\begin{remark} It is not necessary to store client messages in an execution checkpoint, as a correct client keeps repeating incomplete requests, and as already executed requests are either part of a checkpoint or still available from the commit IRMC. \end{remark}

\begin{remark} A correct execution replica might not receive a request from a correct client even when the other execution replicas have already processed it. This is the reason why \texttt{cp.stable\_cp} at execution replicas must push the window of a client's subchannel forward. \end{remark}

\subsubsection{Execution Validity (E-Validity)}

E-Validity~\ref{def:e-validity} follows as a corollary:

\begin{corollary} Per Lemma~\ref{def:e-safety-commit}, an executed request must have been delivered by the agreement black-box, and per A-Validity~\ref{def:a-validity} only valid client requests are delivered, which per cryptographic assumptions must originate from that client. \end{corollary}

\subsubsection{\hspace{1mm}Execution Validity II (E-Validity II)}

Next, we prove E-Validity II~\ref{def:e-validity2}:

\begin{proof} This follows by construction of the main loop~(L.~\ref{def:pseudo-execution}.\ref{code:exec-main}): Requests that are neither the first request of a client nor have a higher counter value~$t_c$ than the last executed one are skipped~(L.~\ref{def:pseudo-execution}.\ref{code:exec-skip}). After executing a request, the latest counter for client~$c$ is stored~(L.~\ref{def:pseudo-execution}.\ref{code:exec-store-rep}). As a retransmission of an executed request cannot carry a higher counter value than the stored one, the request can be executed at most once. Per CP-E-Equivalence~\ref{def:cp-e-equivalence}, $u$ and $app$ are always restored together, such that if the application state contains the effects of executing the write request, this fact is also reflected in~$u$. Therefore, the request will not be executed more than once. \end{proof}

\subsubsection{Execution Liveness (E-Liveness)}

We now prove that a correct client will eventually receive a reply to its request(s). Without loss of generality, we consider all requests to originate from the same client. For this, we show that each of the processing steps a request passes through will eventually make progress. The lemmas implicitly assume that the client has either collected a stable reply (in which case the request processing is finished) or that it still waits for replies to its request and thus keeps resending it.

\begin{lemma} \label{def:e-liveness-e-req} When a correct client sends a new request~$r$, then an execution replica will pass it on to its request IRMC (unless it has already seen a newer request from that client). \end{lemma}

\begin{proof} Assume that an execution replica receives a, from its perspective, new request~(L.~\ref{def:pseudo-execution}.\ref{code:exec-recv-write}).
By definition, a request $r=\langle\textsc{Write},w,c,t_c\rangle$ sent by a correct client is correctly authenticated and signed~(L.~\ref{def:pseudo-client}.\ref{code:client-sign}) and therefore passes the MAC and signature checks~(L.~\ref{def:pseudo-execution}.\ref{code:exec-check-mac} and \ref{def:pseudo-execution}.\ref{code:exec-check-sig}). The counter value~$t_c$ satisfies $t_c>t_c'$, with $t_c'$ being the counter value of any older request, as a correct client always increments its counter value after accepting a reply~(L.~\ref{def:pseudo-client}.\ref{code:client-counter}). As $t[c]$ is only modified when the execution replica receives a valid request from the client~(L.~\ref{def:pseudo-execution}.\ref{code:exec-counter}), it must contain either some older value~$t_c'$ or the default of~$0$. (The client starts with $t_c=1$, whereas an execution replica has $t[c]=0$.) Therefore $t_c>t[c]$ and the execution replica calls \texttt{$r_\mathcal{E}$.send}($c$, $t_c$, unwrap\_mac($m$))~(L.~\ref{def:pseudo-execution}.\ref{code:exec-send}). If the request is not new to the execution replica, the lemma provides no assurances. \end{proof}

\begin{lemma} \label{def:e-liveness-e-send} The send call by the execution replicas for the client's request IRMC will not block indefinitely. \end{lemma}

\begin{proof} According to the definition of the \texttt{send} method, the send call only blocks if the request counter $t_c > max(r_{\mathcal{E},c}.win)$, that is, if it lies beyond the upper bound of the client's request subchannel window. To arrive at a contradiction, assume that the \texttt{$r_\mathcal{E}$.send} call~(L.~\ref{def:pseudo-execution}.\ref{code:exec-send}) blocks indefinitely. As a correct client sends its (new) request to all execution replicas, eventually $f_e+1$~correct execution replicas will per Lemma~\ref{def:e-liveness-e-req} have called \texttt{$r_\mathcal{E}$.send} and therefore also \texttt{$r_\mathcal{E}$.move\_window}($c$, $t_c$)~(L.~\ref{def:pseudo-execution}.\ref{code:exec-move-win}). Per IRMC-Liveness III~\ref{def:irmc-liveness3}, eventually all agreement replicas will call \texttt{$r_\mathcal{E}$.move\_window}($c$, $t_c$). With IRMC-Liveness II~\ref{def:irmc-liveness2} it follows that \texttt{$r_\mathcal{E}$.send} returns, which contradicts the assumption. \end{proof}

\begin{lemma} \label{def:e-liveness-a-receive} An agreement replica will eventually try to receive a new correct request~$r$ from a correct client (unless it has already seen a newer one or skipped it with a checkpoint). \end{lemma}

\begin{proof} Lemma~\ref{def:e-liveness-e-send} has already shown that all ($\geq f_e+1$) correct execution replicas will \texttt{$r_\mathcal{E}$.send} the new client request~$r$, which per IRMC-Liveness~I~\ref{def:irmc-liveness1} can be received by a corresponding call on the agreement replicas unless it is no longer part of the window of the subchannel. According to IRMC-Correctness~I~\ref{def:irmc-correctness1}, only request~$r$ can be received, as all correct execution replicas send this request. We therefore have to show that an agreement replica will call \texttt{$r_\mathcal{E}$.receive}($c$, $t^+[c]$)~(L.~\ref{def:pseudo-agreement}.\ref{code:agree-client-receive}) for the right request counter value~$t_c$.
Assume that $t^+[c] < t_c$: As shown above in the proof of Lemma~\ref{def:e-liveness-e-send}, all correct agreement replicas\xspace will eventually call \texttt{$r_\mathcal{E}$.move\_window}($c$, $t_c$), which according to the semantics of the $receive$ method will cause it to return $\langle\textsc{TooOld}, t_c\rangle$, which is used to update $t^+[c]$~(L.~\ref{def:pseudo-agreement}.\ref{code:agree-too-old}) and to request $t_c$ next. Assume that $t^+[c] > t_c$: We show that this case never applies. An agreement replica\xspace cannot have received a \textsc{TooOld} message with such a new counter value and stored it~(L.~\ref{def:pseudo-agreement}.\ref{code:agree-too-old}): Per IRMC-Correctness~II~\ref{def:irmc-correctness2} at least one execution replica\xspace must have called \texttt{$r_\mathcal{E}$.move\_window} accordingly, which requires that a correct execution replica\xspace has received a valid request with counter $t^+[c] > t_c$ from a correct client. This contradicts the assumption that the request is new. Incrementing $t^+[c]$ after having received a previous request~(L.~\ref{def:pseudo-agreement}.\ref{code:agree-next-req}) or processing it in \texttt{ag.deliver}~(L.~\ref{def:pseudo-agreement}.\ref{code:agree-next-deliver}) would require a previous request with counter value $t_c' \geq t_c$, which contradicts the assumption. (A faulty client could cause inconsistencies here, but this is no problem as the effects are strictly limited to that client's subchannel.) \end{proof} \begin{remark} These properties effectively make the \texttt{$r_\mathcal{E}$.receive} call self-synchronizing. \end{remark} \begin{lemma} \label{def:e-liveness-a-deliver} The agreement black-box\xspace will \texttt{ag.deliver}~(L.~\ref{def:pseudo-agreement}.\ref{code:agree-deliver}) a new request~$r$ for sequence number~$s$ within bounded time or apply a checkpoint for a later or equal sequence number. \end{lemma} \begin{proof} After $f_e+1$~execution replicas\xspace complete their call to \texttt{$r_\mathcal{E}$.send}($c$, $t_c$, $r$)~(L.~\ref{def:pseudo-execution}.\ref{code:exec-send}), an agreement replica\xspace can receive request~$r$ and start the agreement process. Assume that the request~$r$ is not delivered within bounded time and is also not skipped via a checkpoint. The request of a correct client will eventually arrive at all correct ($\geq f_e+1$) execution replicas\xspace. With Lemmas~\ref{def:e-liveness-e-req} and \ref{def:e-liveness-e-send} it follows that $f_e+1$ correct execution replicas\xspace call \texttt{$r_\mathcal{E}$.send}. With IRMC-Liveness~I~\ref{def:irmc-liveness1}, IRMC-Correctness~I~\ref{def:irmc-correctness1} and Lemma~\ref{def:e-liveness-a-receive} it follows that all correct agreement replicas\xspace will eventually receive the request~$r$, or a $\langle\textsc{TooOld},t_c'\rangle$ message if \texttt{$r_\mathcal{E}$.move\_window}~(L.~\ref{def:pseudo-execution}.\ref{code:exec-move-win}) is called by $f_e+1$~execution replicas\xspace with $t_c'>t_c$. As a correct client does not issue a request with counter $t_c' > t_c$ before $r$ was executed, all correct execution replicas\xspace will eventually call \texttt{$r_\mathcal{E}$.move\_window} with exactly $t_c$, but no higher value, such that receiving \textsc{TooOld} would violate IRMC-Correctness~II~\ref{def:irmc-correctness2}. (Executing $r$, as shown in the proof of Lemma~\ref{def:e-safety-commit}, would require that it was delivered beforehand by at least one correct agreement replica\xspace.)
Thus, per IRMC-Liveness III~\ref{def:irmc-liveness3} all correct agreement replicas\xspace will eventually internally call \texttt{move\_window}($c$, $t_c$) on the request IRMC, and $2f_a+1$ correct agreement replicas\xspace eventually receive request~$r$ as long as $r$ is not delivered. With A-Liveness~\ref{def:a-liveness} it follows that $f_a+1$ correct agreement replicas\xspace eventually deliver $r$, contradicting the assumption. Skipping the \texttt{ag.deliver} call via \texttt{ag.stable\_cp}~(L.~\ref{def:pseudo-agreement}.\ref{code:agree-stable-cp}) requires per CP-Safety~\ref{def:cp-safety} that at least one correct agreement replica\xspace created the checkpoint~(L.~\ref{def:pseudo-agreement}.\ref{code:agree-gen-cp}) and thus the agreement black-box\xspace must already have delivered $r$. \end{proof} \begin{lemma} \label{def:e-liveness-e-exec} A request~$r$ delivered at sequence number~$s$ that is passed to \texttt{$c_\mathcal{E}$.send} by $f_a+1$~correct agreement replicas\xspace will eventually either be executed on $f_e+1$~correct execution replicas\xspace, or on one correct execution replica\xspace once a stable checkpoint with sequence number $s_{CP} \geq s$ has been created. \end{lemma} \begin{proof} Assume that no stable checkpoint with sequence number $s_{CP}\geq s$ is applied at the execution replica\xspace~(L.~\ref{def:pseudo-execution}.\ref{code:exec-stable-cp}) before processing $r$: IRMC-Liveness~I~\ref{def:irmc-liveness1} states that $f_e+1$~correct execution replicas\xspace receive some request or a $\langle\textsc{TooOld}, s'\rangle$ message~(L.~\ref{def:pseudo-execution}.\ref{code:exec-receive}) with $s' > s$, as $f_a+1$ agreement replicas\xspace sent the request~(L.~\ref{def:pseudo-agreement}.\ref{code:agree-deliver-send}). According to IRMC-Correctness~I~\ref{def:irmc-correctness1} the request can only be request~$r$, as per A-Correctness~\ref{def:a-safety} all correct agreement replicas send request~$r$. The execution replicas\xspace cannot receive the \textsc{TooOld} message as this would violate IRMC-Correctness II~\ref{def:irmc-correctness2}: Execution replicas can only call \texttt{$c_\mathcal{E}$.move\_window}($0, s_{CP}+1$) (L.~\ref{def:pseudo-execution}.\ref{code:exec-cp-move}) with $s_{CP} < s$ per assumption and thus $s_{CP} + 1 \leq s$, which does not allow \textsc{TooOld} to be returned. As the agreement black-box\xspace delivers requests in sequence number order according to A-Order~\ref{def:a-order}, an execution replica\xspace will also be able to receive any other previous request between $s_{CP}$ and $s$ and therefore will eventually try to receive~$s$. Agreement replicas call \texttt{$c_\mathcal{E}$.move\_window}($0, \hat{s} - |c_{\mathcal{E},0}|+1$) (L.~\ref{def:pseudo-agreement}.\ref{code:agree-cp-move}). To create an agreement checkpoint at $\hat{s}$~(L.~\ref{def:pseudo-agreement}.\ref{code:agree-gen-cp}), the window of the commit subchannel must have included $\hat{s}$ (as \texttt{$c_\mathcal{E}$.send}~(L.~\ref{def:pseudo-agreement}.\ref{code:agree-deliver-send}) would have blocked otherwise), that is $\max(c_{\mathcal{E},0}.win) \geq \hat{s} \Leftrightarrow \min(c_{\mathcal{E},0}.win) + |c_{\mathcal{E},0}| - 1 \geq \hat{s} \Leftrightarrow \min(c_{\mathcal{E},0}.win) \geq \hat{s} - |c_{\mathcal{E},0}|+1$. Therefore, an agreement replica cannot advance the window of the commit IRMC unless an execution group triggered the window move before. However, as shown in the previous paragraph, the latter would contradict the assumption.
Therefore, $f_e+1$~correct execution replicas\xspace will eventually execute the request and possibly create a checkpoint. Assume that a stable checkpoint with sequence number $s_{CP}\geq s$ gets applied: Per CP-Safety~\ref{def:cp-safety} at least one correct execution replica\xspace must have created the checkpoint and thus have executed the request as per the previous part of the proof. Per CP-Liveness~\ref{def:cp-liveness} all other correct execution replicas\xspace will eventually receive and apply the checkpoint or have executed the request. \end{proof} \begin{lemma} \label{def:e-liveness-e-cp} A correct execution checkpoint at sequence number $s_{CP}$, for which $f_a+1$~agreement replicas\xspace have delivered the requests and called \texttt{$c_\mathcal{E}$.send}($0, s_{CP}$)~(L.~\ref{def:pseudo-agreement}.\ref{code:agree-deliver-send}), will eventually become stable~(L.~\ref{def:pseudo-execution}.\ref{code:exec-stable-cp}) unless it is superseded by a newer one. \end{lemma} \begin{proof} Assume that no such stable checkpoint exists and that it is not superseded by a newer one. Then per Lemma~\ref{def:e-liveness-e-exec} $f_e+1$~correct execution replicas\xspace will execute the request and thereby create their checkpoint messages~(L.~\ref{def:pseudo-execution}.\ref{code:exec-gen-cp}), which per CP-E-Equivalence~\ref{def:cp-e-equivalence} are identical and according to CP-Liveness II~\ref{def:cp-liveness2} will become stable. \end{proof} \begin{lemma} \label{def:e-liveness-e-win} If no progress occurs, then eventually the start of the subchannel window of the commit IRMC is $\min(c_{\mathcal{E},0}.win) = s_{CP} + 1$, with $s_{CP}$ being the latest stable execution checkpoint. \end{lemma} \begin{proof} Per CP-Liveness~\ref{def:cp-liveness} eventually all execution replicas\xspace will receive the latest stable execution checkpoint~(L.~\ref{def:pseudo-execution}.\ref{code:exec-stable-cp}) and call \texttt{$c_\mathcal{E}$.move\_window}($0, s_{CP}+1$)~(L.~\ref{def:pseudo-execution}.\ref{code:exec-cp-move}). No correct execution replica\xspace calls \texttt{$c_\mathcal{E}$.move\_window} for a higher sequence number, as $s_{CP}$ is the number of the latest checkpoint. Agreement replicas call \texttt{$c_\mathcal{E}$.move\_window}($0, \hat{s} - |c_{\mathcal{E},0}|+1$) (L.~\ref{def:pseudo-agreement}.\ref{code:agree-cp-move}). To create an agreement checkpoint at $\hat{s}$, the window of the commit subchannel must have included $\hat{s}$ (as \texttt{$c_\mathcal{E}$.send}~(L.~\ref{def:pseudo-agreement}.\ref{code:agree-deliver-send}) would have blocked otherwise, preventing the checkpoint generation), that is $\max(c_{\mathcal{E},0}.win) \geq \hat{s} \Leftrightarrow \min(c_{\mathcal{E},0}.win) + |c_{\mathcal{E},0}| - 1 \geq \hat{s} \Leftrightarrow \min(c_{\mathcal{E},0}.win) \geq \hat{s} - |c_{\mathcal{E},0}|+1$. Therefore an agreement replica cannot advance the window of the commit IRMC to a sequence number that is larger than that of the execution replicas\xspace' \texttt{$c_\mathcal{E}$.move\_window} calls. Thus all correct agreement replicas\xspace eventually arrive at $\min(c_{\mathcal{E},0}.win) = s_{CP} + 1$ with $s_{CP}$ being the latest stable execution checkpoint. \end{proof} \begin{lemma} \label{def:e-liveness-a-send} Agreement replicas will eventually complete \texttt{$c_\mathcal{E}$.send}($s, r$)~(L.~\ref{def:pseudo-agreement}.\ref{code:agree-deliver-send}). \end{lemma} \begin{proof} \texttt{ag.deliver} blocks when $win$ is full~(L.~\ref{def:pseudo-agreement}.\ref{code:agree-win}).
\texttt{AG-WIN} $\geq k_a$ and $win$ is always anchored directly after the sequence number of the last stable agreement checkpoint. Thus $win$ contains at least one sequence number for which a new agreement checkpoint will be created. Assume that \texttt{ag.deliver} blocks permanently on the window check. In that case, per assumption, there can be no stable agreement checkpoint with sequence number $s_{CP} \geq s$ and $s_{CP} \in win$, which would lead to progress. Therefore, as the client waits for $r$ to be executed, per Lemma~\ref{def:e-liveness-a-deliver} eventually $f_a+1$~agreement replicas\xspace also deliver all requests in $win$. That is, $f_a+1$~correct agreement replicas create a new agreement checkpoint, which will become stable and move $win$ forward. This contradicts the assumption. Assume that \texttt{$c_\mathcal{E}$.send}~(L.~\ref{def:pseudo-agreement}.\ref{code:agree-deliver-send}) blocks permanently, which requires that $s > \max(c_{\mathcal{E},0}.win)$. Per A-Order~\ref{def:a-order} and CP-A-Equivalence~\ref{def:cp-a-equivalence} it follows that all previous slots in the subchannel window are filled with requests. With Lemma~\ref{def:e-liveness-a-deliver} this applies to at least $f_a+1$~agreement replicas\xspace. As $|c_{\mathcal{E},0}|\geq k_e$, at least one position in the commit IRMC subchannel window is an execution checkpoint sequence number. Per Lemma~\ref{def:e-liveness-e-cp} this causes a new checkpoint to become stable, which according to Lemma~\ref{def:e-liveness-e-win} eventually moves the commit IRMC window forward and thus contradicts the assumption. \end{proof} \noindent{}Now we can prove that a correct client will eventually receive a reply to its request: \begin{proof} Assume that the client does not get a reply. Then per Lemmas~\ref{def:e-liveness-a-send} and \ref{def:e-liveness-e-exec} $f_e+1$~correct execution replicas\xspace will eventually have the reply in $u[c]$. As a correct client does not send a new request before having obtained a reply to the last one, $u[c]$ is not overwritten and must eventually contain the reply. Per CP-E-Equivalence~\ref{def:cp-e-equivalence} the reply is identical on all correct execution replicas\xspace. At the latest after the next request retry, the client will receive the (identical) reply from $f_e+1$ correct execution replicas\xspace, and therefore accept the reply~(L.~\ref{def:pseudo-client}.\ref{code:client-check}), which contradicts the assumption. \end{proof} \begin{remark} An agreement replica\xspace will receive a request $r$ either via the request IRMC or the agreement black-box\xspace, or skip the request via a checkpoint. \end{remark} \subsubsection{Multiple Execution Groups} We now generalize to $n_e \geq 1$ execution groups, of which $z < n_e$ may be skipped if they are slow. \begin{lemma} E-Liveness~\ref{def:e-liveness} also holds for multiple execution groups. \end{lemma} \begin{proof} Even though an agreement replica\xspace only waits for $n_e - z$ groups~(L.~\ref{def:pseudo-agreement}.\ref{code:agree-wait-send}) to complete \texttt{$c_\mathcal{E}$.send}, an execution group will only miss requests if the agreement replicas\xspace call \texttt{$c_\mathcal{E}$.move\_window}~(L.~\ref{def:pseudo-agreement}.\ref{code:agree-cp-move}) with a sequence number not yet received by a slow execution group. As shown in the proof of Lemma~\ref{def:e-liveness-e-win}, an agreement replica can only create a checkpoint that would push the window of the commit IRMC forward if the execution group already has created a newer or matching checkpoint.
Generalized to $n_e$ execution groups, the \texttt{$c_\mathcal{E}$.send}~(L.~\ref{def:pseudo-agreement}.\ref{code:agree-deliver-send}) calls for $n_e - z$ execution groups have to complete before an agreement checkpoint can be created~(L.~\ref{def:pseudo-agreement}.\ref{code:agree-gen-cp}). Therefore an execution group that has fallen behind can always retrieve an up-to-date checkpoint from one of the $n_e - z$ up-to-date execution groups. As agreement replicas unconditionally move the commit IRMC window forward~(L.~\ref{def:pseudo-agreement}.\ref{code:agree-cp-move}), this will lead to at least $f_a+1$ agreement replicas\xspace calling \texttt{$c_\mathcal{E}$.move\_window} (per Lemma~\ref{def:e-liveness-a-send} a corresponding checkpoint will eventually exist, and per CP-Liveness~\ref{def:cp-liveness} all correct agreement replicas\xspace will eventually receive it), which per IRMC-Liveness~I~\ref{def:irmc-liveness1} and IRMC-Liveness~III~\ref{def:irmc-liveness3} will eventually allow execution groups that fell behind to receive a \textsc{TooOld} message. \end{proof} \subsubsection{Consistency Guarantees} \label{sec:consistency-guarantees} We now revisit the consistency guarantees provided by \system. \paragraph{Write Requests} As previously shown in Section~\ref{def:e-safety2-proof}, \system provides linearizability for write requests. \paragraph{Read Requests with Strong Consistency} Read requests with strong consistency work like write requests with one exception: Only the designated execution group receives the full request, whereas the other groups only get the client id $c$ and counter $t_c$. This leads to the following observation: \begin{lemma} With read requests, the content of checkpoints can vary between groups in regard to the reply stored in $u[c]$. That is, CP-E-Equivalence~\ref{def:cp-e-equivalence} only applies within an individual group. \end{lemma} \begin{proof} Only the client's execution group will receive the read request and modify $u[c]$ accordingly after executing the request~(L.~\ref{def:pseudo-execution}.\ref{code:exec-store-rep}). All other execution groups store a placeholder in $u[c]$ that includes the request counter. Therefore, the reply parts of $u[c]$ can differ between groups. Note that this divergence is self-correcting in the sense that it will disappear after executing the next write request for that client. \end{proof} \begin{remark} This does not prevent the checkpoint from being transferred between groups, as each group can still generate a valid proof for its checkpoint. However, the global flow control could force a group to skip some requests, which might include group-specific read requests. In that case an execution replica\xspace has to tell the client to resubmit its request if necessary, based on the placeholder stored in $u[c]$. \end{remark} \paragraph{Read Requests with Weak Consistency} \begin{lemma} Weakly consistent read requests provide one-copy serializability (assuming each request, which can access various parts of the application state, represents a transaction). \end{lemma} \begin{proof} The reply to the client and state modifications must be equivalent to those from an acyclic ordering of transactions, where each transaction is processed atomically~\cite{bernstein87concurrency}. The application state is atomically modified with a totally ordered sequence of requests (see RSM~\ref{def:rsm}), which yields an acyclic order for all state-modifying requests and strongly-consistent read requests.
All weakly consistent read requests happen between these state modifications, which does not introduce cycles in the request ordering either. As a correct client only accepts a reply sent by at least one correct replica, it will receive a conforming reply. \end{proof}

\subsection{IRMC-RC}

\begin{figure} \vspace*{3mm}% \begin{lstlisting}
Sender replica
  send:
    sleep until ...
    if ...
    else:
      // ...
      send ...
  move_window:
    // The subchannel window start may only increase
    if ...
      send ...
  on receive <Move, ...>:
    if ...
      // Only accept new move messages
      if ...
        // Calculate actual window start
        w := ...
        garbage-collect messages with SeqNr ...

Receiver replica
  receive:
    sleep until ...
    sleep until either:
      - case ...: return ...
      - case ...: return ...
  move_window:
    // The subchannel window start may only increase
    if ...
      send ...
    garbage-collect messages with SeqNr ...
  on receive <Send, ...>:
    if ...
      if ...
  on receive <Move, ...>:
    if ...
      // Only accept new move messages
      if ...
        move_window ...
\end{lstlisting} \caption{IRMC-RC (pseudo code)} \label{def:pseudo-irmc-rc} \end{figure}

The IRMC-RC variant shown in Figure~\ref{def:pseudo-irmc-rc} is a simple implementation of an IRMC that provides the expected properties. Replicas can aggregate \textsc{Move} messages before sending them. In case a sender replica has multiple IRMCs and sends identical messages on the same subchannel and position, it can share a single signed \textsc{Send} message between IRMCs. Without loss of generality we assume the set of senders~$R_S$ and receivers~$R_R$ to be disjoint, that is $R_S \cap R_R = \varnothing$. We assume reliable point-to-point channels between replicas, that is, messages sent between individual replicas will eventually be delivered, unless they are garbage-collected, at which point a replica discards old messages even if they have not been successfully delivered yet. To keep the pseudo code short, we assume that messages without correct authentication are automatically dropped before they can be processed. All messages are also expected to contain an identifier to allow differentiation between different IRMCs if necessary.

\subsection{IRMC-SC}

\begin{figure*}[!htp] \vspace*{3mm}% \begin{minipage}[t]{1.0\columnwidth} \begin{lstlisting}
Sender replica + Variables
  send:
    sleep until ...
    if ...
    else:
      // ...
      // ...
      send ...
  on receive ...:
    if ...
      if ... limit ...
        // Check if replica has ...
        if ...
          send ...
  periodic:
    // Send position of latest certificate per subchannel with
    // no gaps at previous positions in the subchannel window
    for each subchannel
      send ...
  // move_window and receive: ...
  // Select sender for subchannel
  on receive ...:
    if ...
      // Send queued messages for subchannel
      ...
\end{lstlisting} \caption{IRMC-SC sender endpoint (pseudo code)} \label{def:pseudo-irmc-sc-sender} \end{minipage} \hfill% \begin{minipage}[t]{1.0\columnwidth} \begin{lstlisting}
Receiver replica + Variables
  receive:
    sleep until ...
    sleep until either:
      - case ...: return ...
      - case ...: return ...
  on receive ...:
    if ...
      // Certificate must contain ...
      if ...
  on receive ...:
    if ...
      // Merge progress vectors
      for each subchannel
        ...
      // Start timeout if some messages are still missing
      if ...
        start timer ...
  on timeout for ...:
    // Timeout expired and there are still missing certificates
    if ...
      select new sender
      send ...
      restart timer ...
  // move_window and receive: ...
\end{lstlisting} \caption{IRMC-SC receiver endpoint (pseudo code)} \label{def:pseudo-irmc-sc-receiver} \end{minipage} \end{figure*}

IRMC-SC, shown in Figures~\ref{def:pseudo-irmc-sc-sender} and \ref{def:pseudo-irmc-sc-receiver}, is a more complex but also more efficient implementation than IRMC-RC.
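Both variants implement the same subchannel discipline that the proofs above rely on: \texttt{send} blocks while the target position lies beyond the subchannel window, \texttt{move\_window} only ever advances the window start, and positions below the window yield \textsc{TooOld}. The following Python sketch is our own illustration of this discipline under simplifying assumptions (a single local endpoint; no replication, authentication, or message transfer; all names are illustrative):

\begin{lstlisting}
# Illustrative sketch of one IRMC subchannel window
# (simplified; not the actual implementation).
class Subchannel:
    def __init__(self, size):
        self.size = size       # window length |c|
        self.start = 1         # min(win); only ever increases
        self.messages = {}     # position -> message

    def max_win(self):
        return self.start + self.size - 1   # max(win)

    def move_window(self, pos):
        # The subchannel window start may only increase.
        if pos > self.start:
            self.start = pos
            # Garbage-collect messages below the window.
            self.messages = {p: m for p, m in self.messages.items()
                             if p >= self.start}

    def try_send(self, pos, msg):
        # send blocks while pos > max(win); modelled here as a
        # non-blocking attempt that reports whether it would block.
        if pos > self.max_win():
            return False       # caller waits for a window move
        self.messages[pos] = msg
        return True

    def receive(self, pos):
        # Positions below the window can no longer be received.
        if pos < self.start:
            return ("TOO_OLD", self.start)
        return self.messages.get(pos)   # None until available
\end{lstlisting}

In the actual protocols this state is maintained per replica, and a \textsc{TooOld} result carries the new window start, which is what allows lagging endpoints, such as the agreement replicas in Lemma~\ref{def:e-liveness-a-receive}, to resynchronize.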
For liveness, we assume that the \textsc{Move}~message is protected against replay attacks, for example by including a counter that allows already processed instances of the message to be filtered out. In case a sender replica has multiple IRMCs and sends identical messages on the same subchannel and position, it can share a single signed \textsc{Certificate} message between IRMCs. \vfill \pagebreak \vfill \section{Related Work} \label{sec:related} \headline{Adaptive BFT Replication} \system is not the first work to argue that it is crucial to enable BFT systems to dynamically adapt to changing conditions. Abstract~\cite{aublin15next} makes it possible to substitute the consensus protocol of a BFT system at runtime, for example, switching to a more robust algorithm once a replica failure has been suspected or detected. CheapBFT~\cite{kapitza12cheapbft} and ReBFT~\cite{distler16resource} follow a similar idea and comprise two different agreement protocols~(one for the normal case and one for fault handling) of which only one is active at a time. In contrast, the reconfiguration mechanism developed by Carvalho et al.~\cite{carvalho18dynamic} for BFT-SMaRt~\cite{bessani14state} temporarily runs two consensus algorithms in parallel to achieve a more efficient switch. As a result of \system's modularity, integrating support for the dynamic substitution of the agreement protocol is feasible, and the use of customized protocols designed for high performance~\cite{martin06fast,behl15consensus} or strong resilience~\cite{amir10prime,aublin13rbft} \linebreak would not require modifications to execution groups. Other works allow BFT systems to dynamically change specific protocol properties at runtime. Depending on the current workload, de S\'{a} et al.~\cite{desa13adaptive}, for example, vary the parameters deciding how many requests are batched together and ordered within a single consensus instance. Berger~et~al.~\cite{berger19resilient} rely on a weighted voting scheme~\cite{sousa15separating} and, by changing weights, adjust the individual impact a replica has on the outcome of the agreement process. While adapting the batch size can be a measure to improve the performance of \system's agreement group, the use of a weighted voting scheme in general is only effective if (1)~a system contains more than the minimum number of agreement replicas and (2)~agreement replicas are located in different geographic regions; neither of these points applies to \system. \headline{Communication Between Replica Groups} Amir et al. proposed BLinks~\cite{amir07customizable} as a means to send the totally ordered outputs of one replicated state machine to another replicated state machine that uses them as inputs. Unfortunately, the requirement of a channel-wide total order prevents \system from relying on BLinks, as execution replicas do not necessarily have to use the same order when submitting new requests to the agreement group via their request channels. \channel{}s, on the other hand, do not have this restriction and furthermore comprise a built-in flow-control mechanism that represents the basis of \system's global flow control. However, transmitting only a single message between one dedicated sender and one dedicated receiver, BLinks may be used as a template for an \channel{} implementation \linebreak that involves even fewer wide-area messages than \channelb.
\headline{Partitioned Agreement Groups} GeoBFT~\cite{gupta20resilientdb} makes use of replica groups located in different regions, which each run a full agreement protocol. In each protocol round every group orders a request, yielding a request certificate, which is shared with all other groups. Afterwards the requests are merged into a single total order and are executed. This requires all groups to distribute a certificate in every round, even if it just contains a placeholder request, and thus all groups must work at the same time to make progress. In \system this requirement only applies to the agreement group, whereas a limited number of slow execution groups can be skipped. Sharing a request ordering certificate in GeoBFT works by having the leader replica forward it to $f+1$~replicas of each group, which then further forward the certificate within their group. This request distribution scheme represents a middle ground between BLinks and \channelb{}s. Unlike \channel{}s it is coupled with the agreement protocol and has to remotely trigger a view-change to replace a leader replica that does not complete the request distribution in a timely manner. \headline{Efficient Client Communication} In most BFT systems, clients need to receive replies from different replicas in order to prove a result correct~\cite{castro99practical}, which in geo-replicated settings can significantly increase the number of messages exchanged over wide-area links. SBFT~\cite{gueta19sbft} addresses this problem by adding a protocol phase that aggregates request acknowledgements of multiple replicas into a single message to the client. In Troxy~\cite{li18troxy}, a client likewise only has to wait for a single reply, because the reply voter is hosted inside a trusted domain at the server side and forwards its decisions to the client through a secure channel. In \system, clients are typically located in the same region as an execution group, allowing for communication over short-distance links. For scenarios in which this is not the case, it would be possible to extend \system to use one of the approaches discussed above. \headline{Leader Selection in Geo-replicated Systems} Multiple authors have underlined the impact that the leader-replica location has on response times, independent of the fault model, and presented solutions to select the leader in a way that minimizes overall latency~\mbox{\cite{sousa15separating,liu17leader,eischer18latency}}. Other agreement-based systems do not need to determine a fixed leader as they continuously rotate the leader role among replicas~\cite{veronese09spin,mao09towards,veronese10ebawa,mao08mencius,milosevic13bounded}. As our experiments show, with agreement replicas residing in different availability zones of the same cloud region, the specific location of the consensus leader in \system only has a negligible effect on response times. Consequently, \system achieves low and stable latency without requiring means to dynamically select or rotate the leader. \headline{Crash-tolerant Wide-Area Replication} Several works have addressed the efficiency of geo-replication in systems that, unlike \system, solely tolerate crashes, not Byzantine failures. In Pileus~\cite{terry13consistency}, for example, writes are only handled by a subset of replicas that first order and execute them, and then bring all other replicas up to date by transferring state changes.
P-Store~\cite{schiper10pstore} improves efficiency in wide-area environments by performing partial replication, thereby freeing a site from the need to receive and process all updates. Clock-RSM~\cite{du14clock} establishes a total order on requests by exploiting the timestamps of physical clocks, without requiring a dedicated leader replica. EPaxos~\cite{moraru13there}, in contrast, does not rely on a total request order, but only orders those requests that interfere with each other due to accessing the same parts of the state.
\section{Introduction} \label{intro} PG\,1159 stars are hot, hydrogen-deficient post-AGB stars (Werner \& Herwig 2006). In the Hertzsprung-Russell diagram, they cover a region comprising the hottest central stars of planetary nebulae and white dwarfs ($T\mathrm{\hspace*{-0.4ex}_{eff}}$\, = 75\,000 -- 200\,000\,K, $\log\,g$\hspace*{0.5ex} = 5.5 -- 8). Their H deficiency is most probably the result of a late He-shell flash. Their envelopes are mainly composed of He, C, and O, with rather diverse abundance patterns (He = 0.30 -- 0.85, C = 0.13 -- 0.60, O = 0.02 -- 0.20, mass fractions). The prototype PG\,1159$-$035\ (= GW~Vir) was discovered in the Palomar Green survey (Wesemael {et\,al.}\ 1985). Subsequently it was found that the star is variable (McGraw {et\,al.}\ 1979), and it also became the prototype of the GW~Vir stars, which are non-radial multimode g-mode pulsators. Besides the Sun, PG\,1159$-$035\ is probably the star that is best studied with asteroseismic methods (Costa {et\,al.}\ 2008). One of the key questions related to these pulsators concerns the driving mechanism, because the instability strip occupied by them is not ``pure'' like the ZZ~Ceti strip, meaning that it also contains non-variable PG\,1159 stars. The primary pulsation driver is cyclic ionisation of C and O (Starrfield {et\,al.}\ 1984). The location of the instability strip is ``fuzzy'', because the red and blue edges of the strip depend on the He/C/O abundance ratio in the driving region; too high a He abundance poisons pulsations (Quirion {et\,al.}\ 2007). Another species that supports pulsation driving is iron; therefore, a subsolar iron abundance would narrow the instability strip. PG\,1159$-$035\ ($T\mathrm{\hspace*{-0.4ex}_{eff}}$\, = 140\,000\,K, $\log\,g$\hspace*{0.5ex} = 7), together with a near spectroscopic twin, the non-pulsator PG\,1520$+$525\ ($T\mathrm{\hspace*{-0.4ex}_{eff}}$\, = 150\,000\,K, $\log\,g$\hspace*{0.5ex} = 7.5), potentially defines the blue edge of the GW~Vir strip, provided their envelope chemical composition is similar. Considerable effort was put into spectroscopic analyses to derive the temperature, gravity, and composition of these twin stars. One remaining open question is the iron abundance. For some PG\,1159 stars, including the twins, there were claims of iron deficiency (Jahn {et\,al.}\ 2007). Spectroscopically, the iron abundance in PG\,1159 stars is difficult to assess. Hitherto, the main tools were ultraviolet \ion{Fe}{vii} lines, well known from observations of hot hydrogen-rich central stars of planetary nebulae. Because of the high effective temperatures, these lines are predicted to be rather weak or even undetectable in PG\,1159 stars. The previously mentioned Fe deficiency was based on the non-detection of \ion{Fe}{vii} lines. UV lines from higher ionisation stages of iron were unknown until recently, when \ion{Fe}{x} lines were detected in five of the very hottest ($T\mathrm{\hspace*{-0.4ex}_{eff}}$\, $\geq$ 150\,000\,K) PG\,1159 stars (Werner {et\,al.}\ 2010). A solar iron abundance was derived. In this paper, we announce the detection of \ion{Fe}{viii} lines in FUSE spectra of three medium-hot ($T\mathrm{\hspace*{-0.4ex}_{eff}}$\, = 140\,000 -- 150\,000\,K) PG\,1159 stars, including the prototype and its twin. They serve as a tool for determining the iron abundance, closing the gap between the coolest ($T\mathrm{\hspace*{-0.4ex}_{eff}}$\, $\la$ 140\,000\,K) PG\,1159 stars, where we should be able to detect \ion{Fe}{vii}, and the very hottest objects exhibiting \ion{Fe}{x}.
For the first time, we present an iron abundance determination of PG\,1159$-$035. We carefully re-assess archival FUSE and HST spectra of PG\,1159$-$035\ to look for weak, previously undetected \ion{Fe}{vii} lines. This search is extended to the cooler object PG\,1424$+$535\ ($T\mathrm{\hspace*{-0.4ex}_{eff}}$\, = 110\,000\,K, $\log\,g$\hspace*{0.5ex} = 7), where the non-detection of these lines would mean an Fe deficiency of one dex (Reiff {et\,al.}\ 2008). We also report on \ion{Fe}{viii} lines in other hot H-deficient and H-rich post-AGB stars. In the following section, we present the detection of \ion{Fe}{viii} lines in PG\,1159$-$035\ and other objects (Sect.\,\ref{observations}). Then we describe our model atmospheres (Sect.\,\ref{modeling}) and the spectroscopic iron abundance analysis of four PG\,1159 stars (Sect.\,\ref{analysis}), and we conclude in Sect.\,\ref{conclusions}. \begin{figure*}[ht] \centering \includegraphics[width=0.9\textwidth]{16992fig3.ps} \caption{Grotrian diagram of \ion{Fe}{viii}. For clarity, it is drawn from Opacity Project (OP) data that represent a small subset of the Kurucz dataset utilised in our computations. The OP level energies differ from Kurucz values. The transitions giving rise to the observed lines are indicated. }\label{fig_modelatom} \end{figure*} \begin{figure*}[ht] \centering \includegraphics[width=0.7\textwidth]{16992fig4.ps} \caption{Effects of model parameter variations on the \ion{Fe}{viii} $\lambda$\,1148.22\,\AA\ line profile. The line strength is maximum at the parameters of PG\,1159$-$035\ ($T\mathrm{\hspace*{-0.4ex}_{eff}}$\, = 140\,000\,K, $\log\,g$\hspace*{0.5ex} = 7); other abundances are He/C/O/Ne = 0.32/0.48/0.17/0.02. }\label{fig_variation} \end{figure*} \begin{figure*} \centering \includegraphics[width=0.8\textwidth]{16992fig5.ps} \caption{\ion{Fe}{vii} lines in PG\,1159$-$035. Overplotted is the final model with solar iron abundance (model parameters as in Fig.\,\ref{fig:fe8_pg1159}). FUSE data are used for $\lambda < 1200$\,\AA\ and HST data otherwise. }\label{fig:fe7_pg1159} \end{figure*} \begin{figure*}[ht] \centering \includegraphics[width=0.8\textwidth]{16992fig6.ps} \caption{\ion{Fe}{vii} lines in PG\,1424$+$535. Overplotted is a model with solar iron abundance (model parameters: $T\mathrm{\hspace*{-0.4ex}_{eff}}$\, = 110\,000\,K, $\log\,g$\hspace*{0.5ex} = 7, He/C/O/Ne = 0.49/0.43/0.06/0.02). }\label{fig_pgvier} \end{figure*} \section{Observations and line identifications} \label{observations} Recently, Landi \& Young (2010) have been able to identify four \ion{Fe}{viii} coronal emission lines in the $\lambda$~1000 -- 1200\,\AA\ region of the quiet Sun, in spectra obtained with the SOHO/SUMER instrument. This prompted us to look for the corresponding photospheric lines in FUSE spectra of PG\,1159 stars. All four \ion{Fe}{viii} lines are present in the prototype PG\,1159$-$035\ (Fig.\,\ref{fig:fe8_pg1159}). We find \ion{Fe}{viii} lines in two more PG\,1159 stars and in several other hot (pre-) white dwarfs as well. From these objects we display the region around \ion{Fe}{viii} $\lambda$\,1148.22\,\AA\ in Fig.\,\ref{fig:fe8_allstars}. The two other PG\,1159 stars have slightly higher temperatures than the prototype (PG\,1520$+$525: $T\mathrm{\hspace*{-0.4ex}_{eff}}$\, = 150\,000\,K, $\log\,g$\hspace*{0.5ex} = 7.5; PG\,1144$+$005: $T\mathrm{\hspace*{-0.4ex}_{eff}}$\, = 150\,000\,K, $\log\,g$\hspace*{0.5ex} = 6.5).
Other PG\,1159 stars for which FUSE spectra exist are obviously too hot or too cool to exhibit \ion{Fe}{viii} lines. In particular, these are the hottest ones exhibiting \ion{Fe}{x} lines mentioned in the introduction ($T\mathrm{\hspace*{-0.4ex}_{eff}}$\, $\geq$ 150\,000\,K), and the cooler object PG\,1424$+$535\ ($T\mathrm{\hspace*{-0.4ex}_{eff}}$\, = 110\,000\,K) that will be discussed below (Sect.\,\ref{sect42}). The central stars Abell~43 and NGC\,7094 are hybrid-PG\,1159 stars (i.e. exhibiting H-Balmer lines), and Abell~78 is a [WC]--PG\,1159 transition object. They all have low surface gravity, and the extraordinarily wide profiles indicate that the \ion{Fe}{viii} lines are strongly affected by a stellar wind. The low surface gravity also favours the appearance of \ion{Fe}{viii}, although $T\mathrm{\hspace*{-0.4ex}_{eff}}$\,\ of these stars is relatively low ($\approx$ 110\,000\,K). Two of the hottest known DO white dwarfs, PG0038+199 with $T\mathrm{\hspace*{-0.4ex}_{eff}}$\, = 115\,000\,K and PG1034+001 with 100\,000\,K (Dreizler \& Werner 1996), display \ion{Fe}{viii} lines. Comparison with preliminary model calculations indicates that an upward revision of the temperatures by $\approx$\,20\,000\,K is necessary in order to reproduce these lines. We detect \ion{Fe}{viii} neither in KPD\,0005+5106, the hottest DO (200\,000\,K; Wassermann {et\,al.}\ 2010), nor in cooler DOs like RE\,J0503$-$289\ (70\,000\,K, Dreizler \& Werner 1996). We also find \ion{Fe}{viii} lines in several hydrogen-rich central stars. Four very prominent examples are displayed in Fig.\,\ref{fig:fe8_allstars}: NGC\,7293 ($T\mathrm{\hspace*{-0.4ex}_{eff}}$\, = 120\,000\,K, $\log\,g$\hspace*{0.5ex} = 6.3), LSS\,1362 ($T\mathrm{\hspace*{-0.4ex}_{eff}}$\, = 114\,000\,K, $\log\,g$\hspace*{0.5ex} = 5.7), NGC\,1360 ($T\mathrm{\hspace*{-0.4ex}_{eff}}$\, = 97\,000\,K, $\log\,g$\hspace*{0.5ex} = 5.3), NGC\,6853 ($T\mathrm{\hspace*{-0.4ex}_{eff}}$\, = 126\,000\,K, $\log\,g$\hspace*{0.5ex} = 6.5); the parameters are from Hoffmann {et\,al.}\ (2005). \begin{table} \begin{center} \caption{Wavelengths and oscillator strengths $f_{ij}$ of \ion{Fe}{viii} lines. \label{tab:levels}} \begin{tabular}{llll} \hline \hline \noalign{\smallskip} Line & $\lambda_{\rm Kurucz}$/\AA & $\lambda_{\rm Landi}$/\AA & $f_{ij}$ \\ \noalign{\smallskip} \hline \noalign{\smallskip} $4{\rm s}\ ^2{\rm S}_{1/2}-3{\rm d}^2\ ^2{\rm P}^{\rm o}_{1/2}$ & 1006.087 & 1006.015 & 0.0169 \\ $4{\rm s}\ ^2{\rm S}_{1/2}-4{\rm p}\ \ ^2{\rm P}^{\rm o}_{3/2}$ & 1062.440 & 1062.463$^{\rm (1)}$& 0.401 \\ $4{\rm s}\ ^2{\rm S}_{1/2}-4{\rm p}\ \ ^2{\rm P}^{\rm o}_{1/2}$ & 1125.492 & 1125.546 & 0.216 \\ $4{\rm s}\ ^2{\rm S}_{1/2}-3{\rm d}^2\ ^2{\rm P}^{\rm o}_{3/2}$ & 1148.224 & 1148.223 & 0.0828 \\ \noalign{\smallskip} \hline \end{tabular} \\(1) mean value from two measurements \end{center} \end{table} \section{Model atmospheres and synthetic spectra} \label{modeling} For our analysis we use a grid of line-blanketed non-LTE model atmospheres, which is described in detail in Werner {et\,al.}\ (2004). In essence, the models include the main photospheric constituents, namely He, C, O, Ne, and occasionally H. NLTE line formation iterations for the iron population densities were computed on these model structures, i.e., keeping the temperature and density structure fixed. For details on the iron model atom used, see Wassermann {et\,al.}\ (2010). We employ new versions of iron datasets (Kurucz 2009)\footnote{http://kurucz.harvard.edu/atoms.html}.
They include many more levels and lines, in particular the four \ion{Fe}{viii} lines discussed in this paper. Properties of the newly detected \ion{Fe}{viii} lines are listed in Table~\ref{tab:levels}. They all arise from the same lower level. We specify the Kurucz wavelengths, as well as those measured by Landi \& Young (2010). The differences are all smaller than 0.1\,\AA. The largest deviation (0.072\,\AA) is shown by the 1006\,\AA\ line. The Kurucz wavelengths should be more accurate than the measured wavelengths since the energy levels involved were determined from more than one line. We also list the f-values from the Kurucz data. A simplified Grotrian diagram indicating the observed line transitions is shown in Fig.\,\ref{fig_modelatom}. We computed a small model grid in order to study the dependence of the \ion{Fe}{viii} lines on $T\mathrm{\hspace*{-0.4ex}_{eff}}$\,, $\log\,g$\hspace*{0.5ex}, and Fe abundance. The result for $\lambda$\,1148\,\AA\ is displayed in Fig.\,\ref{fig_variation}, and the other lines behave similarly. It turns out that effective temperature and gravity of PG\,1159$-$035\ are the most favourable for the detection of \ion{Fe}{viii}. It also explains why \ion{Fe}{viii} lines are not seen in objects that are much cooler or hotter. \begin{table} \begin{center} \caption{New \ion{Fe}{vii}, \ion{Fe}{viii}, and \ion{Ne}{vi} lines detected in PG\,1159$-$035.} \label{tab:new-lines} \begin{tabular}{llc} \hline \hline \noalign{\smallskip} Wavelength / \AA & Ion & Transition \\ \hline \noalign{\smallskip} 1006.09 & \ion{Fe}{viii}& $4{\rm s}\ ^2{\rm S}_{1/2}-3{\rm d}^2\ ^2{\rm P}^{\rm o}_{1/2}$ \\ 1062.44 & \ion{Fe}{viii}& $4{\rm s}\ ^2{\rm S}_{1/2}-4{\rm p}\ \ ^2{\rm P}^{\rm o}_{3/2}$\\ 1073.95 & \ion{Fe}{vii} & $4{\rm s}\ ^1{\rm D}-4{\rm p}\ ^1{\rm P}^{\rm o} $ \\ 1095.34 & \ion{Fe}{vii} & $4{\rm s}\ ^3{\rm D}_3-4{\rm p}\ ^3{\rm P}^{\rm o}_2 $ \\ 1117.58 & \ion{Fe}{vii} & $4{\rm s}\ ^1{\rm D}-4{\rm p}\ ^1{\rm F}^{\rm o} $ \\ 1125.49 & \ion{Fe}{viii}& $4{\rm s}\ ^2{\rm S}_{1/2}-4{\rm p}\ \ ^2{\rm P}^{\rm o}_{1/2}$ \\ 1141.43 & \ion{Fe}{vii} & $4{\rm s}\ ^3{\rm D}_3-4{\rm p}\ ^3{\rm F}^{\rm o}_4 $ \\ 1148.22 & \ion{Fe}{viii}& $4{\rm s}\ ^2{\rm S}_{1/2}-3{\rm d}^2\ ^2{\rm P}^{\rm o}_{3/2}$ \\ 1154.99 & \ion{Fe}{vii} & $4{\rm s}\ ^3{\rm D}_2-4{\rm p}\ ^3{\rm F}^{\rm o}_3 $ \\ 1163.88 & \ion{Fe}{vii} & $4{\rm s}\ ^3{\rm D}_2-4{\rm p}\ ^3{\rm D}^{\rm o}_3 $ \\ 1166.17 & \ion{Fe}{vii} & $4{\rm s}\ ^3{\rm D}_1-4{\rm p}\ ^3{\rm F}^{\rm o}_2 $ \\ 1180.82 & \ion{Fe}{vii} & $4{\rm s}\ ^3{\rm D}_3-4{\rm p}\ ^3{\rm D}^{\rm o}_3 $ \\ 1226.65 & \ion{Fe}{vii} & $4{\rm s}\ ^3{\rm D}_3-4{\rm p}\ ^3{\rm D}^{\rm o}_2 $ \\ 1239.69 & \ion{Fe}{vii} & $4{\rm s}\ ^3{\rm D}_1-4{\rm p}\ ^3{\rm D}^{\rm o}_1 $ \\ 1332.38 & \ion{Fe}{vii} & $4{\rm s}\ ^1{\rm D}-4{\rm p}\ ^1{\rm D}^{\rm o} $ \\ 1645.06 & \ion{Ne}{vi} & $3{\rm s}\ ^4{\rm P}^{\rm o}_{1/2}-3{\rm p}\ ^4{\rm P}_{3/2}$ \\ 1645.59 & \ion{Ne}{vi} & $3{\rm s}\ ^4{\rm P}^{\rm o}_{3/2}-3{\rm p}\ ^4{\rm P}_{5/2}$ \\ 1654.01 & \ion{Ne}{vi} & $3{\rm s}\ ^4{\rm P}^{\rm o}_{1/2}-3{\rm p}\ ^4{\rm P}_{1/2}$ \\ 1657.16 & \ion{Ne}{vi} & $3{\rm s}\ ^4{\rm P}^{\rm o}_{3/2}-3{\rm p}\ ^4{\rm P}_{3/2}$ \\ 1666.24 & \ion{Ne}{vi} & $3{\rm s}\ ^4{\rm P}^{\rm o}_{3/2}-3{\rm p}\ ^4{\rm P}_{1/2}$ \\ 1667.82 & \ion{Ne}{vi} & $3{\rm s}\ ^4{\rm P}^{\rm o}_{5/2}-3{\rm p}\ ^4{\rm P}_{5/2}$ \\ 1679.67 & \ion{Ne}{vi} & $3{\rm s}\ ^4{\rm P}^{\rm o}_{5/2}-3{\rm p}\ ^4{\rm P}_{3/2}$ \\ \noalign{\smallskip} \hline \end{tabular} \end{center} \vspace{-3mm} This table augments 
the UV line list of Jahn {et\,al.}\ (2007), their Table~2. \end{table} \section{Iron abundance analysis} \label{analysis} \subsection{PG\,1159$-$035} Figure~\ref{fig:fe8_pg1159} shows \ion{Fe}{viii} line profiles computed from a solar Fe abundance model for PG\,1159$-$035\ compared to the observation. The fit is satisfactory, and a comparison with the Fe variation shown in the right panel of Fig.\,\ref{fig_variation} clearly rules out a significant iron deficiency. In contrast, we had previously concluded an iron deficiency of $>$\,$0.7$~dex from the non-detection of \ion{Fe}{vii} lines (Jahn {et\,al.}\ 2007), so we need to address this question again here. A close inspection of the FUSE and HST spectra reveals a number of weak \ion{Fe}{vii} lines (Fig.\,\ref{fig:fe7_pg1159}), which are fitted by our solar Fe abundance model. The reason we rejected the \ion{Fe}{vii} detection in our previous work was the apparent absence of the two strong predicted lines at $\lambda\lambda$\,1154.99 and 1180.82\,\AA\ in the FUSE data. The cause of the non-detection remains unclear. In particular, we carefully re-addressed the wavelength calibration. We are confident that the wavelengths are accurate to within 0.02\,\AA. We also think that the oscillator strengths of these lines are correct because, together with other \ion{Fe}{vii} lines, they are rather prominent in spectra of H-rich central stars of planetary nebulae (e.g. Rauch {et\,al.}\ 2007). Either way, the simultaneous fit of the detected \ion{Fe}{vii} and \ion{Fe}{viii} lines independently confirms the validity of $T\mathrm{\hspace*{-0.4ex}_{eff}}$\,\ and $\log\,g$\hspace*{0.5ex}\ derived in earlier work. In Table~\ref{tab:new-lines} we list the iron lines newly detected in PG\,1159$-$035, together with lines from a \ion{Ne}{vi} multiplet (NIST\footnote{http://physics.nist.gov/pml/data/asd.cfm} wavelengths) that we have discovered in the HST/STIS spectrum during the present analysis. This table complements the UV line list presented by Jahn {et\,al.}\ (2007). In Table~\ref{tab:results} we summarise the photospheric parameters of PG\,1159$-$035\ from spectroscopic analyses and derived quantities. In comparison to Jahn {et\,al.}\ (2007), the table is improved by the Fe abundance determination and by a re-determination of mass, luminosity, and distance based on the more realistic evolutionary tracks of Miller Bertolami \& Althaus (2006). \begin{table} \begin{center} \caption{Parameters of PG\,1159$-$035. } \label{tab:results} \begin{tabular}{cccc} \hline \hline \noalign{\smallskip} Parameter & Result & Abundances & Ref.\\ & & (solar units) & \\ \noalign{\smallskip} \hline \noalign{\smallskip} $T\mathrm{\hspace*{-0.4ex}_{eff}}$\, / K & 140\,000 $\pm$ 5000 & & (1), (2)\\ $\log\,g$\hspace*{0.5ex} / cm s$^{-2}$ & 7.0 $\pm$ 0.5 & & (1), (2) \\ \noalign{\smallskip} H & $\le 0.02$ & $\le 0.027$ & (2) \\ He & 0.33 & 1.3 & (2) \\ C & 0.48 & 203 & (2) \\ N & 0.001 & 1.4 & (2) \\ O & 0.17 & 30 & (2) \\ F & $3.2 \cdot 10^{-6}$ & 6.3 & (2) \\ Ne & 0.02 & 16 & (2) \\ Si & $3.6 \cdot 10^{-4}$ & 0.54 & (2) \\ P & $\le 6.4 \cdot 10^{-6}$& $\le 1.1$ & (2) \\ S & $5.0 \cdot 10^{-6}$ & 0.016 & (2) \\ Fe & $1.3 \cdot 10^{-3}$ & 1.0 & (1) \\ \noalign{\smallskip} $M/M_\odot$& $0.536^{+0.068}_{-0.010}$& & (1) \\ \noalign{\smallskip} log $L/L_\odot$& $2.58^{+0.29}_{-0.29}$& & (1) \\ \noalign{\smallskip} $d$/kpc & $0.750^{+0.334}_{-0.585}$ & & (1) \\ \noalign{\smallskip} \hline \end{tabular} \end{center} \vspace{-3mm} Element abundances are given in mass fractions (2nd column) and relative to solar abundances (Asplund {et\,al.}\ 2009 values; 3rd column). References: (1) this work, (2) Jahn {et\,al.}\ (2007) and references therein, (3) Miller Bertolami \& Althaus (2006). \end{table} \subsection{PG\,1144$+$005, PG\,1520$+$525, PG\,1424$+$535}\label{sect42} The two other PG\,1159 stars in which we found \ion{Fe}{viii} lines are PG\,1144$+$005\ ($T\mathrm{\hspace*{-0.4ex}_{eff}}$\, = 150\,000\,K, $\log\,g$\hspace*{0.5ex} = 6.5) and PG\,1520$+$525\ ($T\mathrm{\hspace*{-0.4ex}_{eff}}$\, = 150\,000\,K, $\log\,g$\hspace*{0.5ex} = 7.5). The lines are fitted with profiles from models with solar iron abundance (Fig.\,\ref{fig:fe8_allstars}). We do not detect \ion{Fe}{vii} lines in these stars, because they are hotter than PG\,1159$-$035\ so that we expect weaker lines, and because the S/N of the available FUSE spectra is lower. The abundances of the models shown in Fig.\,\ref{fig:fe8_allstars} are He/C/O/Ne = 0.43/0.38/0.17/0.02 for PG\,1520$+$525\ and 0.38/0.58/0.02/0.02 for PG\,1144$+$005. The fourth PG\,1159 star considered in the present study, PG\,1424$+$535, is too cool to exhibit \ion{Fe}{viii} lines. The star is interesting because an iron deficiency of at least 1 dex was concluded from the claimed absence of \ion{Fe}{vii} (Reiff {et\,al.}\ 2008). As in the case of PG\,1159$-$035, we re-addressed this question and found that \ion{Fe}{vii} lines are present after all. In Fig.\,\ref{fig_pgvier} we show a selection of \ion{Fe}{vii} lines from the FUSE spectrum compared to a solar iron abundance model. The match is very good. \section{Summary and conclusions} \label{conclusions} Our analysis of \ion{Fe}{vii} and \ion{Fe}{viii} lines in the FUSE spectra of four PG\,1159 stars results in solar iron abundances. Recent work on five hotter PG\,1159 stars exhibiting \ion{Fe}{x} lines (Werner {et\,al.}\ 2010) arrived at the same result. This set of nine stars comprises four objects that were previously thought to be iron deficient: PG\,1159$-$035, PG\,1520$+$525, PG\,1424$+$535, K\,1-16 (Miksa {et\,al.}\ 2002, Jahn {et\,al.}\ 2007). The reason for this contrary result is twofold: an underestimation of $T\mathrm{\hspace*{-0.4ex}_{eff}}$\,\ in the case of K\,1-16 (Werner {et\,al.}\ 2010) and problems with the identification of the inherently weak lines from \ion{Fe}{vii} as described in the present work.
\ion{Fe}{vii} was the only relevant ionisation stage with accurately known line positions at the time when the earlier analyses were performed. There are still two objects left with seemingly strong iron deficiency. These are the hybrid-PG\,1159 star NGC~7094 and the [WC]--PG\,1159 transition object Abell~78 (Miksa {et\,al.}\ 2002). In both stars we have discovered strong \ion{Fe}{viii} lines (Fig.\,\ref{fig:fe8_allstars}). They are much broader than predicted from our static models. The reason is most probably that the lines form in the stellar wind of these low-gravity (i.e. high-luminosity) central stars. The same mechanism could hamper the detection of weak \ion{Fe}{vii} lines, on whose apparent absence the assertion of Fe deficiency was based. Because of the prominent \ion{Fe}{viii} lines, we may speculate that the iron abundance in these objects is about solar, too, but a detailed analysis with expanding model atmospheres is required. Our results ease the problem of explaining the previously claimed extreme iron deficiency with stellar evolution models, which do not predict such large Fe depletions by neutron captures in the intershell region of AGB stars. Two of the objects investigated in this study (PG\,1159$-$035\ and PG\,1520$+$525) have rather similar parameters, and they can be regarded as a fixed point for the blue edge of the GW~Vir instability strip, at least for a particular chemical envelope composition. Within error limits, they have the same atmospheric abundance pattern (in particular the Fe abundance) and, thus, differences in the pulsation driving behaviour should only result from differences in $T\mathrm{\hspace*{-0.4ex}_{eff}}$\,\ and $\log\,g$\hspace*{0.5ex}. Our consistent fit of the \ion{Fe}{vii}/\ion{Fe}{viii} ionisation balance corroborates the previously determined parameters for the pulsator PG\,1159$-$035: $T\mathrm{\hspace*{-0.4ex}_{eff}}$\, = 140\,000\,K, $\log\,g$\hspace*{0.5ex} = 7.0. The non-pulsator PG\,1520$+$525\ has $T\mathrm{\hspace*{-0.4ex}_{eff}}$\, = 150\,000\,K, $\log\,g$\hspace*{0.5ex} = 7.5. These parameters are confirmed by an analysis of its Chandra X-ray spectrum (Adamczak et al., in prep.). \begin{acknowledgements} T.R. is supported by the German Aerospace Centre (DLR) under grant 05\,OR\,0806. Some of the data presented in this paper were obtained from the Multimission Archive at the Space Telescope Science Institute (MAST). STScI is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555. Support for MAST for non-HST data is provided by the NASA Office of Space Science via grant NNX09AF08G and by other grants and contracts. \end{acknowledgements}
\section{Introduction} \label{sec:intro} A number of earth-, balloon-, and satellite-based experiments have observed anomalies in the spectra of cosmic ray electrons and positrons. Fermi-LAT~\cite{Abdo:2009zk} and H.E.S.S.~\cite{Aharonian:2009ah} have measured an excess in the flux of electrons and positrons up to, and beyond, $1$~TeV, respectively. PAMELA~\cite{Adriani:2008zr}, which is sensitive to electrons and positrons up to a few hundred GeV in energy, detects an upturn in the positron fraction beginning around 7 GeV, in disagreement with the expected decline from secondary production mechanisms. Recent measurements at Fermi-LAT support this result~\cite{newfermi}. In contrast, current experiments observe no excess in the proton or antiproton flux~\cite{Adriani:2008zq}. Although astrophysical explanations are possible~\cite{astrox}, these observations can be explained if the data includes a contribution from the decays of unstable dark matter particles that populate the galactic halo~\cite{analyses}. The dark matter candidate must be TeV-scale in mass, have a lifetime of order $10^{26}$~seconds, and decay preferentially to leptons. A number of scenarios have been proposed to explain the desired dark matter lifetime and decay properties~\cite{CEP,anomaly,sgddm,models,nonab}. To be more quantitative, consider a scalar dark matter candidate $\chi$ which (after the breaking of all relevant gauge symmetries) has an effective coupling $g_{eff}$ to some standard model fermion $f$ given by $g_{eff} \chi \bar f_L f_R + \mbox{h.c.}$ To obtain a lifetime of $10^{26}$~seconds, one finds $g_{eff} \sim 10^{-26}$ if $m_\chi \sim 3$~TeV. From the perspective of naturalness, the origin of such a small dimensionless number requires an explanation. One possibility is that physics near the dark matter mass scale is entirely responsible for the appearance of a small number, as is the case in models where a global symmetry that would otherwise stabilize the dark matter candidate is broken by instanton effects of a new non-Abelian gauge group $G_D$. A leptophilic model of fermionic dark matter along these lines was presented in Ref.~\cite{CEP}: the new gauge group is broken not far above the dark matter mass scale and the effective coupling is exponentially suppressed, $g_{eff} \propto \exp(-16\pi^2/g_D^2)$, where $g_D$ is the $G_D$ gauge coupling. (An example of a supersymmetric model with anomaly-induced dark matter decays can be found in Ref.~\cite{anomaly}.) On the other hand, a small effective coupling can arise if the breaking of the stabilizing symmetry is communicated to the dark matter via higher-dimension operators suppressed by some high scale $M$. Then it is possible that $g_{eff}$ is suppressed by $(m_\chi/M)^p$, for some power $p$; it is well known that for $m_\chi \sim {\cal O}(1)$~TeV and $p=2$, the correct lifetime can be obtained for $M \sim {\cal O}(10^{16})$~GeV, remarkably coincident with the grand unification (GUT) scale in models with TeV-scale supersymmetry (SUSY)~\cite{sgddm}. If the LHC fails to find SUSY in the coming years, however, then the association of $10^{16}$~GeV with a fundamental mass scale will no longer be strongly preferred. Exploring other alternatives is well motivated from this perspective and, in any event, may provide valuable insight into the range of possible decaying dark matter scenarios.
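To make the estimate $g_{eff} \sim 10^{-26}$ quoted above explicit: for a two-body decay into light fermions the width is of order $\Gamma \sim g_{eff}^2\, m_\chi/(16\pi)$, a rough estimate that ignores order-one factors. With $\tau = \Gamma^{-1} = 10^{26}$~s, $m_\chi = 3$~TeV, and $\hbar \simeq 6.6 \times 10^{-25}$~GeV\,s,
\[
\Gamma = \frac{\hbar}{\tau} \approx 7 \times 10^{-51}~\mbox{GeV}\,, \qquad
g_{eff} \approx \sqrt{\frac{16\pi\, \Gamma}{m_\chi}} \approx 1 \times 10^{-26}\,,
\]
in agreement with the number quoted above.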
The very naive estimate for $g_{eff}$ discussed above presumes that the result is determined by a TeV-scale dark matter mass $m_\chi$, a single high scale $M$, and no small dimensionless factors. Given these assumptions, the choice $M=M_*$, where $M_*=2 \times 10^{18}$~GeV is the reduced Planck mass, would not be viable: the dark matter decay rate is much too large for $p=1$ ({\em i.e.}, there would be no dark matter left at the present epoch) and is much too small for $p=2$ ({\em i.e.}, there would not be enough events to explain the cosmic ray $e^\pm$ excess). However, Planck-suppressed effects arise so generically that we should be careful not to discount them too quickly. What we show in the present paper is that Planck-suppressed operators can lead to the desired dark matter lifetime if they correct new physics at an intermediate scale. In the model that we present, this is the scale at which Yukawa couplings of the standard model charged leptons are generated by integrating out vector-like states. This sector will have the structure of a Froggatt-Nielsen model~\cite{fn}: an Abelian discrete symmetry will restrict the couplings of the standard model leptons and the vector-like states, but will be spontaneously broken by the vacuum expectation values (vevs) of a set of scalar fields $\{\phi\}$. Integrating out the heavy states will not only lead to the standard model charged lepton Yukawa couplings, but also to dark matter couplings that are naturally leptophilic and lead to dark matter decay. Aside from setting the overall scale of the charged lepton masses, the symmetry structure of our model will not restrict the detailed textures of the standard model Yukawa matrices. This feature is not automatic; symmetries introduced to guarantee dark matter leptophilia may also make it difficult to obtain the correct lepton mass matrices, at least without additional theoretical assumptions (for example, the addition of electroweak Higgs triplets, as in the model of Ref.~\cite{nonab}). Our framework is free of such complications and is compatible, in principle, with many possible extensions that might address the full flavor structure of the standard model. Our paper is organized as follows. In the next section, we present a model that illustrates our proposal. In Section~3, we compute the predicted $e^\pm$ flux, $\Phi(e^\pm)$, and the positron fraction $\Phi(e^+)/[\Phi(e^+)+\Phi(e^-)]$ for some points in the parameter space of our model and compare our results to the relevant cosmic ray data. It is worth noting that this analysis has applicability to any model that leads to similar dark matter decay operators. In Section~4, we comment on the relic density and dark matter direct detection in our example model. In Section~5, we summarize our conclusions. \section{A Model} We assume that the right-handed charged leptons of the standard model, $e_R$, and four sets of heavy vector-like charged leptons are constrained by the discrete symmetry \begin{equation} G=\mathbb{Z}_p \times \mathbb{Z}_q \, , \end{equation} with $p$ and $q$ to be determined shortly. We assume that the vector-like leptons have the same electroweak quantum numbers as $e_R$ \begin{equation} E^{(i)}_R \sim E^{(i)}_L \sim e_R, \,\,\,\,\, (i=1 \ldots 4) \, . \label{eq:eee} \end{equation} All the fields shown are assumed to be triplets in generation space, with their generation indices suppressed.
Under the discrete symmetry, the fields in Eq.~(\ref{eq:eee}) are taken to transform as \begin{equation} e_R \rightarrow \omega^{-4} \, e_R \, , \label{eq:charges1} \end{equation} \begin{equation} E_{L,R}^{(i)} \rightarrow \omega^{1-i} \, E_{L,R}^{(i)}, \,\,\,\,\, (i=1 \ldots 4)\, . \label{eq:charges2} \end{equation} We will take $\omega$ and $\eta$ to be elements of $\mathbb{Z}_p$ and $\mathbb{Z}_q$, respectively, with $\omega^p=1$ and $\eta^q=1$. In addition, we assume the presence of a heavy right-handed neutrino, $\nu_R$, that is a singlet under $G$. We note that the fields that are charged under $G$ do not transform under any of the non-Abelian standard model gauge group factors, so that $G$ satisfies the consistency conditions of a discrete gauge symmetry in the low-energy theory~\cite{banksdine}; such discrete symmetries are not violated by quantum gravitational effects\footnote{The consistency conditions require that anomalies involving the non-Abelian gauge groups that are linear in a continuous group that embeds $G$ must vanish, as is automatic above. Ref.~\cite{banksdine} indicates that no rigorous proof exists that the cancellation of the linear gravitational anomalies is a necessary condition for the consistency of the low-energy theory. Nonetheless, such a cancellation can be achieved here by including a singlet, left-handed fermion, $N_L$, that transforms in the same way as $e_R$ under $G$. For the choice $p=8$, adopted later in this section, $N_L$ can develop a Majorana mass somewhat below $M_*$ and decay rapidly to lighter states via Planck-suppressed operators. Including such a state does not affect the phenomenology of the model otherwise.}. The Yukawa couplings of the standard model charged leptons arise when the symmetry $G$ is spontaneously broken and the vector-like leptons are integrated out of the theory. Symmetry breaking is accomplished via the vacuum expectation values of two scalar fields $\phi_E$ and $\phi_D$, which transform as \begin{eqnarray} && \phi_E \rightarrow \omega \, \phi_E \, , \nonumber \\ && \phi_D \rightarrow \eta \, \phi_D \, . \end{eqnarray} The following renormalizable Lagrangian terms involving the charged lepton fields are allowed by the discrete symmetry: \begin{eqnarray} {\cal L}_E &=& \overline{L}_L H E_R^{(1)} + \sum_{i=1}^3 \overline{E}^{(i)}_L \phi_E E^{(i+1)}_R + \overline{E}^{(4)}_L \phi_E \,e_R \nonumber \\ &+& \sum_{i=1}^4 M^{(i)} \,\overline{E}^{(i)}_L E^{(i)}_R + \mbox{ h.c.} \label{eq:esector} \end{eqnarray} While it is not our goal to produce a theory of flavor, we note that the terms in Eq.~(\ref{eq:esector}) are of the type one expects in flavor models based on the Froggatt-Nielsen mechanism. Integrating out the $E$ fields (taking a common heavy mass $M^{(i)} \equiv M$ for simplicity) then leads to the higher-dimension operator \begin{equation} {\cal L} \supset \frac{1}{M^4} \overline{L}_L H \phi_E^4 e_R + \mbox{ h.c.} \, , \label{eq:chlep} \end{equation} which provides an origin for the charged lepton Yukawa couplings. Choosing $\langle \phi_E \rangle/M \sim 0.3$ gives the correct scale for the tau lepton Yukawa coupling; the smaller electron and muon Yukawa couplings may be accommodated by suitable choices of the undetermined couplings in Eq.~(\ref{eq:esector}). One might imagine that the remaining Yukawa hierarchies could be arranged by the imposition of additional symmetries, though we will not explore that possibility here. 
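As a consistency check of this choice, using the inputs $m_\tau \approx 1.78$~GeV and $v_{ew} = 246$~GeV (which are not quoted above), one has
\[
\left(\frac{\langle \phi_E \rangle}{M}\right)^4 = (0.3)^4 \approx 8 \times 10^{-3} \, , \qquad y_\tau = \frac{\sqrt{2}\, m_\tau}{v_{ew}} \approx 1.0 \times 10^{-2} \, ,
\]
so the suppression generated by Eq.~(\ref{eq:chlep}) indeed lands at the tau Yukawa scale, with the residual ${\cal O}(1)$ mismatch absorbed by the undetermined couplings.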
We now introduce our dark matter candidate $\chi$, a complex scalar field that transforms as \begin{equation} \chi \rightarrow \omega^4 \, \chi \,\,\,\,\, \mbox{ and } \,\,\,\,\, \chi \rightarrow \eta^{-2} \chi \, \label{eq:chicharges} \end{equation} under $\mathbb{Z}_p \times \mathbb{Z}_q$. We assume that all the nonvanishing powers of $\omega$ and $\eta$ shown in Eqs.~(\ref{eq:charges1}), (\ref{eq:charges2}) and (\ref{eq:chicharges}) are nontrivial, which requires that $p>4$ and $q>2$. Then, there are no renormalizable interactions involving a single $\chi$ field (or its conjugate) and two fermionic fields that could lead to dark matter decay. However, non-renormalizable, Planck-suppressed operators provide the desired effect. The lowest-order, Planck-suppressed correction to Eq.~(\ref{eq:esector}) that involves a single $\chi$ field is the unique dimension-six operator \begin{equation} \Delta {\cal L}_e = \frac{1}{M_*^2} \chi \, \overline{E}^{(1)}_L \phi_D^2 \, e_R + \mbox{ h.c.} \label{eq:d6opE} \end{equation} Including Eq.~(\ref{eq:d6opE}) and again integrating out the heavy, vector-like states, one obtains a new higher-dimension operator, \begin{equation} {\cal L}_{decay} = \frac{\phi_D^2}{M M_*^2}\, \chi \overline{L}_L H e_R + \mbox{ h.c.}, \label{eq:newop} \end{equation} which leads to dark matter decay. For $m_\chi \sim 3$~TeV (compatible qualitatively with fits to the PAMELA and Fermi-LAT data), a lifetime of $10^{26}$ seconds is obtained when \begin{equation} \frac{\langle \phi_D \rangle^2}{M_*^2}\frac{\langle H \rangle}{M} \sim 1 \times 10^{-26} \,. \end{equation} For our operator expansion to be sensible, we require $\langle \phi_D \rangle < M$; however, we also do not want a proliferation of wildly dissimilar physical scales, if this can be avoided. Interestingly, if we choose $M$ to be the geometric mean of $\langle H \rangle$ and $M_*$, we find \begin{equation} M = 2 \times 10^{10}\mbox{ GeV}, \,\,\,\,\, \langle \phi_E \rangle = 0.3\, M, \,\,\,\,\, \langle \phi_D \rangle =0.1\, M\,, \end{equation} which meets our aesthetic requirements. Standard model quark and neutral lepton masses are unaffected by the discrete symmetry of our model, by construction. Light neutrino masses arise via a conventional see-saw mechanism, and it is possible to obtain a right-handed neutrino mass scale $M_R \approx M$, so that all the heavy leptons appear at a comparable scale. Assuming that the largest neutrino squared mass is comparable to $\Delta m^2_{32}=2.43 \times 10^{-3}$~eV$^2$, as suggested by atmospheric neutrino oscillations~\cite{pdg}, this possibility is realized if the overall scale of the Yukawa coupling matrix that appears in the neutrino Dirac mass term is of the same order as the charm quark Yukawa coupling. \begin{figure} \centering \includegraphics[width=8cm,angle=0]{scales.eps} \caption{A possible choice for the mass scales in the theory. Symmetry breaking vevs appear within approximately an order of magnitude of the lower two scales.} \label{fig:scales} \end{figure} This scenario is depicted in Fig.~\ref{fig:scales}. In this case, the theory is characterized by three fundamental scales: the Planck scale, an intermediate scale (associated with charged lepton flavor and right-handed neutrino masses), and the TeV scale. Symmetry-breaking vevs appear within a factor of $\alt 10$ below the latter two. 
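These scale choices are straightforward to verify numerically. The sketch below (which assumes $\langle H \rangle = v_{ew} = 246$~GeV, the convention used in the next section) reproduces both the intermediate scale and the required $10^{-26}$ suppression:
\begin{verbatim}
# Cross-check of the scale choices quoted above (a sketch; <H> is taken
# to be v_ew = 246 GeV).
import math

M_star = 2.0e18          # reduced Planck mass, GeV
v_H = 246.0              # electroweak vev, GeV

M = math.sqrt(v_H * M_star)          # geometric mean of <H> and M_*
phi_D = 0.1 * M
g_eff = (phi_D / M_star)**2 * (v_H / M)
print("M     ~ %.1e GeV" % M)        # ~ 2e10 GeV
print("g_eff ~ %.1e" % g_eff)        # ~ 1e-26, the desired suppression
\end{verbatim}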
Of course, the right-handed neutrino scale need not be linked with the scale at which the charged lepton Yukawa couplings are generated; this is simply one of many viable possibilities that depend on choices of the free parameters of the model. Finally, we return to the discrete symmetry group $G=\mathbb{Z}_p \times \mathbb{Z}_q$. We have noted that the structure of the theory that we have described is obtained for $p>4$ and $q>2$, but this does not take into account an important additional constraint: there must be no Planck-suppressed operators involving couplings between the various scalar fields in the theory that can lead to other dark matter decay channels that are either (i) too fast or (ii) too hadronic. For example, the choice $p=5$ and $q=3$ allows the renormalizable $G$-invariant operator $\chi \phi_E \phi_D^\dagger$, which leads to mixing, for example, between the $\chi$ and $\phi_E$ fields; the latter couples to two standard model leptons via the operator in Eq.~(\ref{eq:chlep}), leading to a disastrously large decay rate. We find that all unwanted operators are sufficiently suppressed if we take $p=8$ and $q=4$, that is \begin{equation} G_I = \mathbb{Z}_8 \times \mathbb{Z}_4 \, . \end{equation} The lowest-order combination of scalar fields that is invariant under $G_I$, as well as the standard model gauge group, is \begin{equation} \frac{1}{M_*^3} \chi \, \phi_D^2 \, \phi_E^4 \, . \label{eq:thecombo} \end{equation} Suppression by {\em three} factors of the Planck scale is more than sufficient to render harmless any operators that are generated when the $\phi_E$ and $\phi_D$ fields are integrated out of the theory, or that may be constructed from products of Eq.~(\ref{eq:thecombo}) with any $G_I$-singlet, gauge-invariant combination of standard model fields. It is straightforward to confirm that the alternative choice \begin{equation} G_{II}=\mathbb{Z}_8 \times \mathbb{Z}_5 \, , \end{equation} is also viable by similar arguments. The difference between the symmetry groups $G_I$ and $G_{II}$ is that the former allows two types of dark matter mass terms: $\chi^2 + \mbox{h.c.}$ and $\chi^\dagger \chi$. This leads to a mass splitting between the two real scalar components of $\chi$, so that the lighter one is the dark matter candidate. The choice $G_{II}$ forbids the $\chi^2$ mass terms, so that the dark matter consists of particles and anti-particles associated with the original complex scalar field. We note that in this theory, the renormalizable interactions involving $\chi$ have an accidental U(1)$_\chi$ global symmetry which would lead to dark matter stability in the absence of the Planck-suppressed effects. The analysis that we present in the following sections is somewhat simplified by the choice of $G_{II}$, which we adopt henceforth. \section{Cosmic Ray Spectra} \label{sec:positron} In this section, we investigate the cosmic ray $e^\pm$ and proton/antiproton spectra of our model. Our treatment of cosmic ray propagation follows that of Ref.~\cite{Ibarra:2009dr}. We show that model parameters may be chosen to accommodate the positron excess and the rising electron-positron flux observed by the PAMELA and Fermi-LAT experiments, respectively. In Eq.~(\ref{eq:newop}), we identified the operator responsible for dark matter decays. 
More explicitly, this operator may be written \begin{equation} {\cal L}_{decay} = c_{ij} \frac{\langle \phi_D \rangle^2}{M M_*^2}\, \chi \overline{L}^i_L H e^j_R + \mbox{ h.c.}, \label{eq:opagain} \end{equation} where $i$ and $j$ are generation indices, and the $c_{ij}$ denote unknown order-one coefficients. Different choices for the couplings $c_{ij}$ will lead, in principle, to different cosmic ray spectra. To simplify the analysis, we focus on two possibilities: in the lepton mass eigenstate basis, the fermions appearing in the decay operators are either ({\em i}) muons exclusively, or ({\em ii}) taus exclusively. We will find that either of these choices is consistent with the data, even though we have not fully exploited the parametric freedom available in the $c_{ij}$. This is sufficient to demonstrate the viability of our model. The remaining factors in the operator coefficient are chosen to obtain the desired dark matter lifetime, as we discussed in the previous section. In unitary gauge, the operator (\ref{eq:opagain}) can be expanded as \begin{equation} {\cal L}_{decay} = \frac{1}{\sqrt{2}} g_{ij} (v_{ew}+h) \, \chi \overline{e}_L^i \, e_R^j + \mbox{ h.c.}, \end{equation} where $h$ is the standard model Higgs field, which we will assume has a mass of $117$~GeV, $v_{ew} = 246$~GeV, and $g_{ij} \equiv c_{ij} \langle \phi_D \rangle^2 / (M M_*^2)$. The term proportional to the Higgs vev leads to the two-body decay $\chi \rightarrow \ell^+ \ell^-$, for $\ell=\mu$ or $\tau$, while the remaining term contributes to $\chi \rightarrow \ell^+ \ell^- h$. We take both of these decay channels into account in our numerical analysis. The final state particles in these primary decays will subsequently decay. The electrons, positrons, protons and antiprotons that are produced must be added to the expected astrophysical backgrounds to predict the spectra at experiments like PAMELA and Fermi-LAT. Electrons and positrons that are produced in dark matter decays must propagate through the Milky Way before reaching the Earth. In order to determine the observed fluxes, one must model this propagation. The transport equation for electrons and positrons is given by \begin{equation} \label{eq:transport} 0 = \nabla \cdot \left[ K(E,\vec r) \nabla f_{e^\pm}\right] + \frac{\partial}{\partial E}\left [b(E,\vec r)f_{e^\pm}\right ] + Q_{e^\pm} (E,\vec r), \end{equation} where $f_{e^\pm} (E, \vec r, t)$ is the number density of electrons or positrons per unit energy, $K(E,\vec r)$ is the diffusion coefficient and $b(E,\vec r)$ is the energy loss rate. We assume the MED propagation model described in Ref.~\cite{Delahaye:2007fr}. The diffusion coefficient and the energy loss rate are assumed to be spatially constant throughout the diffusion zone and are given by \begin{equation} K(E, \vec r) = 0.0112 \epsilon^{0.70} \textrm{ kpc}^2/\textrm{Myr} \end{equation} and \begin{equation} b(E, \vec r) = 10^{-26} \epsilon^2 \textrm{ GeV/s} \, , \end{equation} where $\epsilon = E/1$~GeV. The last term in Eq.~(\ref{eq:transport}) is the source term, given by \begin{equation} \label{eq:source} Q(E,\vec r) = \frac{\rho(\vec r)}{M_\chi \tau_\chi} \frac{dN}{dE}, \end{equation} where $M_\chi$ is the dark matter mass and $\tau_\chi$ is the dark matter lifetime. 
In models like ours, where the dark matter can decay via more than one channel, the energy spectrum $dN/dE$ is given by \begin{equation} \frac{dN}{dE} = \sum_i \frac{\Gamma_i}{\Gamma} \left( \frac{dN}{dE}\right)_i, \end{equation} where $\Gamma_i / \Gamma$ is the branching fraction and $(dN/dE)_i$ is the electron-positron energy spectrum of the $i^{\rm th}$ decay channel. We use PYTHIA~\cite{Sjostrand:2007gs} to determine the $(dN/dE)_i$. For the dark matter density, $\rho(\vec r)$, we adopt the spherically symmetric Navarro-Frenk-White halo density profile~\cite{Navarro:1995iw} \begin{equation} \rho(r) = \frac{\rho_0}{(r/r_c)[1+(r/r_c)]^2} \, , \end{equation} with $\rho_0 \simeq 0.26 \textrm{ GeV/cm}^3$ and $r_c \simeq 20 \textrm{ kpc}$. The solutions to the transport equation are subject to the boundary condition $f_{e^\pm}=0$ at the edge of the diffusion zone, a cylinder of half-height $L = 4$ kpc and radius $R = 20$ kpc measured from the galactic center. The solution of the transport equation can be written \begin{equation} f_{e^\pm}(E) = \frac{1}{M_\chi \tau_\chi} \int_{0}^{M_\chi} dE' G_{e^\pm} (E,E') \frac{dN_{e^\pm} (E')}{dE'}, \end{equation} where $G_{e^\pm} (E,E')$ is a Green's function, whose explicit form can be found in Ref.~\cite{Ibarra:2008qg}. The interstellar flux then follows immediately from \begin{equation} \Phi^{DM}_{e^\pm} = \frac{c}{4\pi} f_{e^\pm}(E). \end{equation} We adopt a parameterization of the interstellar background fluxes given in Ref.~\cite{Ibarra:2009dr}: \begin{equation} \Phi_{e^-}^{bkg}(E) = \left( \frac{82.0\epsilon^{-0.28}}{1+0.224\epsilon^{2.93}} \right) \textrm{ GeV}^{-1}\textrm{m}^{-2}\textrm{s}^{-1}\textrm{sr}^{-1}, \end{equation} \begin{equation} \Phi_{e^+}^{bkg}(E) = \left( \frac{38.4\epsilon^{-4.78}}{1+0.0002\epsilon^{5.63}} +24.0\epsilon^{-3.41} \right) \textrm{ GeV}^{-1}\textrm{m}^{-2}\textrm{s}^{-1}\textrm{sr}^{-1}. \end{equation} Finally, the flux at the top of the earth's atmosphere, $\Phi_{e^\pm}^{TOA}$, is corrected for solar modulation effects~\cite{Ibarra:2009dr}, \begin{equation} \Phi_{e^\pm}^{TOA} (E_{TOA}) = \frac{E_{TOA}^2}{E^2_{IS}} \Phi_{e^\pm}^{IS} (E_{IS}) \, , \end{equation} where $E_{IS} = E_{TOA} + |e| \phi$, and $|e| \phi = 550$~MeV. $E_{IS}$ and $E_{TOA}$ are the energies of the positron/electron at the heliospheric boundary and at the top of the atmosphere, respectively. The total electron and positron flux is determined by \begin{equation} \Phi^{tot} (E) = \Phi^{DM}_{e^-}(E) + \Phi^{DM}_{e^+}(E) + k \Phi^{bkg}_{e^-}(E) + \Phi^{bkg}_{e^+}(E) , \end{equation} where $k$ is a free parameter that determines the normalization of the primary electron flux background. The positron fraction is given by \begin{equation} PF(E) = \frac{ \Phi^{DM}_{e^+}(E) + \Phi^{bkg}_{e^+}(E) }{\Phi^{tot} (E)}. \end{equation} The results of our analysis are presented in Figs.~\ref{fig:positronmu} and \ref{fig:positrontau}. In the case where the dark matter decays only to $\mu^+\mu^-$ and $\mu^+\mu^-h$, we find good agreement with the data for $\tau_\chi=1.8\times 10^{26}$~s and $M_\chi=2.5$~TeV. In this case, the branching fraction to the two-body decay mode is $90.2\%$. In the case where the decay is to $\tau^+\tau^-$ and $\tau^+\tau^-h$ only, our best results are obtained for $\tau_\chi=9.0 \times 10^{25}$~s and $M_\chi=5$~TeV, corresponding to a two-body branching fraction of 69.6\%. In all these results, the background electron flux parameter $k$ is set to $0.88$, following Ref.~\cite{Ibarra:2008qg}. 
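The background parameterizations and the solar modulation map above are simple enough to transcribe directly. The following sketch assembles the positron fraction from these ingredients; the dark matter fluxes are left as inputs (they require the Green's function integration, which is not reproduced here), and all function names are ours:
\begin{verbatim}
# Sketch of the interstellar backgrounds, force-field solar modulation,
# and positron fraction defined above; dark matter fluxes enter as inputs.
import numpy as np

def phi_e_minus_bkg(E):          # GeV^-1 m^-2 s^-1 sr^-1
    eps = E / 1.0                # epsilon = E / 1 GeV
    return 82.0 * eps**-0.28 / (1 + 0.224 * eps**2.93)

def phi_e_plus_bkg(E):
    eps = E / 1.0
    return 38.4 * eps**-4.78 / (1 + 0.0002 * eps**5.63) + 24.0 * eps**-3.41

def solar_modulate(phi_IS, E_TOA, phi_F=0.550):   # |e|phi = 550 MeV
    E_IS = E_TOA + phi_F
    return (E_TOA**2 / E_IS**2) * phi_IS(E_IS)

def positron_fraction(E, phi_dm_plus=0.0, phi_dm_minus=0.0, k=0.88):
    num = phi_dm_plus + phi_e_plus_bkg(E)
    den = num + phi_dm_minus + k * phi_e_minus_bkg(E)
    return num / den

E = np.array([10.0, 50.0, 100.0])    # GeV
print(positron_fraction(E))          # background-only positron fraction
\end{verbatim}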
\begin{figure} \centering \includegraphics[width=8cm]{positronmuexcess.eps} \includegraphics[width=8cm]{positronmuflux.eps} \caption{\textit{Left panel}: The positron fraction for dark matter decaying into $\mu^+\mu^-$ and $\mu^+\mu^-h$. The dark matter mass is $2.5$~TeV and the lifetime $1.8 \times 10^{26}$~s; the branching fraction to the two-body decay mode is $90.2\%$. The dashed line represents the background and the solid line represents the background plus dark matter signal. Data from the following experiments are shown: PAMELA~\cite{Adriani:2008zr} (solid dots), HEAT~\cite{Barwick:1997ig} ($\circ$), AMS-01~\cite{Aguilar:2007yf} ($\bigtriangledown$), and CAPRICE~\cite{Boezio} ($\bigtriangleup$). \textit{Right panel}: The corresponding graph for the total electron and positron flux. Data from the following experiments are shown: Fermi-LAT~\cite{Ackermann:2010ij} (solid dots), HESS~\cite{Aharonian:2008aa} ($\bigtriangledown$), PPB-BETS~\cite{Torii:2008xu} ($\diamond$), HEAT~\cite{DuVernois:2001bb} ($\bigtriangleup$).} \label{fig:positronmu} \end{figure} \begin{figure} \centering \includegraphics[width=8cm]{positrontauexcess.eps} \includegraphics[width=8cm]{positrontauflux.eps} \caption{\textit{Left panel}: The positron fraction for dark matter decaying into $\tau^+\tau^-$ and $\tau^+\tau^-h$. The dark matter mass is $5.0$~TeV and the lifetime $9.0 \times 10^{25}$~s; the branching fraction to the two-body decay mode is 69.6\%. \textit{Right panel}: The corresponding graph for the total electron and positron flux.} \label{fig:positrontau} \end{figure} Since the dark matter decays in our model include the production of standard model Higgs bosons in the final state, it is worthwhile to check that subsequent Higgs decays do not lead to an excess of cosmic ray antiprotons, in conflict with the experimental data. This will not be the case at our two benchmark parameter choices since the branching fraction to the three-body decay mode is suppressed compared to the two-body mode. The procedure for computing the cosmic ray antiproton flux is similar to that of the cosmic ray electrons and positrons. The transport equation for antiproton propagation within the Milky Way is given by \begin{equation} 0 = \nabla \cdot \left[ K(T,\vec r)\nabla f_{\bar p} - \vec V_c(\vec r) f_{\bar p} \right] + Q_{\bar p} (T,\vec r) \end{equation} where $T$ is the antiproton kinetic energy, $\vec V_c(\vec r)$ is the convection velocity, and the source term $Q_{\bar p}$ has the same form as Eq.~(\ref{eq:source}). As in the case of $e^\pm$ propagation, the antiproton number density can be expressed in terms of a Green's function \begin{equation} f_{\bar p} (T) = \frac{1}{M_\chi \tau_\chi} \int_0^{T_{max}} dT' G_{\bar p} (T, T') \frac{dN_{\bar p} (T')}{dT'}, \end{equation} where $G_{\bar p}(T,T')$ can be found in Ref.~\cite{Ibarra:2008qg}. The relation between the antiproton number density and the interstellar flux of antiprotons is given by \begin{equation} \Phi_{\bar p}^{DM} (T) = \frac{v}{4\pi} f_{\bar p} (T) \, , \end{equation} where $v$ is the antiproton velocity. 
We also take into account the solar modulation effect on the antiproton flux at the top of the atmosphere, $\Phi_{\bar p}^{TOA}$, which is given by \begin{equation} \Phi_{\bar p}^{TOA} (T_{TOA}) = \left( \frac{2m_pT_{TOA} + T_{TOA}^2}{2m_pT_{IS} + T_{IS}^2} \right) \Phi_{\bar p}^{IS} (T_{IS}), \end{equation} where $T_{IS}$ and $T_{TOA}$ are the antiproton kinetic energies at the heliospheric boundary and at the top of the atmosphere, respectively, with $T_{IS}= T_{TOA} + |e| \phi$. For the proton and antiproton flux, we adopt the background given in Ref.~\cite{Ptuskin:2005ax}. Again assuming the MED propagation model of Ref.~\cite{Delahaye:2007fr}, we compute the antiproton flux and the antiproton to proton ratio for dark matter decays to $\mu^+\mu^-$ and $\mu^+\mu^- h$, shown in Fig.~\ref{fig:protonmu}, and for decays to $\tau^+\tau^-$ and $\tau^+\tau^- h$, shown in Fig.~\ref{fig:protontau}. We see that in both cases, the antiproton excess above the predicted background curves is small and consistent with the data shown from a variety of experiments. \begin{figure} \centering \includegraphics[width=8cm]{protonmuflux.eps} \includegraphics[width=8cm]{protonmuexcess.eps} \caption{\textit{Left panel}: The antiproton flux for dark matter decaying into $\mu^+\mu^-$ and $\mu^+\mu^-h$. The dark matter mass is $2.5$~TeV and the lifetime $1.8 \times 10^{26}$~s; the branching fraction to the two-body decay mode is $90.2\%$. The dashed line represents the background and the solid line represents the background plus dark matter signal. Data from the following experiments are shown: PAMELA \cite{Adriani:2010rc} (solid dots), WiZard/CAPRICE \cite{Boezio:1997ec} ($\diamond$), and BESS \cite{Orito:1999re} ($\bigtriangleup$). \textit{Right panel}: The corresponding graph for the antiproton to proton ratio. Data from the following experiments are shown: PAMELA \cite{Adriani:2010rc} (solid dots), IMAX \cite{Mitchell:1996bi} ($\star$), CAPRICE \cite{Boezio:1997ec} ($\diamond$) and BESS \cite{Orito:1999re} ($\bigtriangleup$).} \label{fig:protonmu} \end{figure} \begin{figure} \centering \includegraphics[width=8cm]{protontauflux.eps} \includegraphics[width=8cm]{protontauexcess.eps} \caption{\textit{Left panel}: The antiproton flux for dark matter decaying into $\tau^+\tau^-$ and $\tau^+\tau^-h$. The dark matter mass is $5.0$~TeV and the lifetime $9.0 \times 10^{25}$~s; the branching fraction to the two-body decay mode is 69.6\%. \textit{Right panel}: The corresponding graph for the antiproton to proton ratio.} \label{fig:protontau} \end{figure} \section{Relic Density and Direct Detection} In this section, we show that the model we have presented can provide the correct dark matter relic density while remaining consistent with the direct detection bounds. The part of the Lagrangian that is relevant for computing the relic density, as well as the dark matter-nucleon elastic scattering cross section, is the coupling between $\chi$ and the standard model Higgs \begin{equation} \mathcal L \supset \lambda \chi^\dagger\chi H^\dagger H. \end{equation} In unitary gauge, this can be expanded \begin{equation} \mathcal L \supset \frac{\lambda}{2} \left( \chi^\dagger \chi\, h^2 + 2\, v_{ew}\,\chi^\dagger\chi\, h \right). 
\label{eq:unints} \end{equation} \begin{figure} \centering \includegraphics[width=4cm]{fourpoint.eps} \includegraphics[width=5.7cm]{shiggs.eps} \includegraphics[width=4cm]{thiggs.eps} \includegraphics[width=4cm]{uhiggs.eps} \includegraphics[width=5.5cm]{sfermion.eps} \includegraphics[width=6.2cm]{sW.eps} \caption{Dark matter annihilation diagrams.} \label{fig:annihilation} \end{figure} As a consequence of Eq.~(\ref{eq:unints}), $\chi$ and $\overline{\chi}$ pairs may annihilate into a variety of standard model particles. The leading diagrams are shown in Fig.~\ref{fig:annihilation}. The cross section for annihilations into fermions is given by \begin{equation} \label{eq:interaction} \sigma_{\chi \bar\chi \rightarrow f\bar f} = \frac{N_c}{8\pi}\frac{\lambda^2m_f^2}{s\,(s-m_h^2)^2}\sqrt{\frac{\left(s- 4m_f^2\right)^3}{s-4m_\chi^2}}, \end{equation} where $N_c$ is the number of fermion colors ($N_c=1$ for leptons and $N_c=3$ for quarks) and $m_f$ is the fermion mass. The cross sections for annihilations into $W$ and $Z$ bosons are given by \begin{equation} \sigma_{\chi \bar\chi \rightarrow ZZ} = \frac{\lambda^2}{8\pi}\frac{m_Z^4}{s\, (s-m_h^2)^2}\left(3-\frac{s}{m_Z^2}+\frac{s^2}{4m_Z^4}\right)\sqrt{\frac{s-4m_Z^2}{s-4m_\chi^2}}, \end{equation} \begin{equation} \sigma_{\chi \bar \chi \rightarrow W^+W^-} = \frac{\lambda^2}{4\pi}\frac{m_W^4}{s\, (s-m_h^2)^2}\left(3-\frac{s}{m_W^2}+\frac{s^2}{4m_W^4}\right)\sqrt{\frac{s-4m_W^2}{s-4m_\chi^2}}, \end{equation} where $m_W$ ($m_Z$) is the mass of the $W$ ($Z$) boson. In the case where the dark matter annihilates into a pair of standard model Higgs bosons, we can safely ignore the $t$- and $u$-channel diagrams since the typical momenta are much smaller than $m_\chi$ at temperatures near freeze-out. Hence, the cross section is given by \begin{equation} \sigma_{\chi \bar\chi \rightarrow hh} = \frac{\lambda^2}{32\pi \, s} \sqrt{ \frac{s-4 m_h^2}{s-4m_\chi^2}} \left( 1 + \frac{6 m_h^2}{s-m_h^2} + \frac{9m_h^4}{(s-m_h^2)^2} \right). \end{equation} The evolution of the dark matter number density, $n_\chi$, is governed by the Boltzmann equation \begin{equation} \frac{dn_{\chi}}{dt}+3H(t) n_{\chi} = -\langle \sigma v \rangle [n_{\chi}^2-(n_{\chi}^{EQ})^2], \end{equation} where $H(t)$ is the Hubble parameter as a function of time and $n_\chi^{EQ}$ is the equilibrium number density. The thermally-averaged annihilation cross section, $\langle \sigma v \rangle$, can be calculated by evaluating the integral \cite{Gondolo:1990dk} \begin{equation} \langle \sigma v \rangle =\frac{1}{8m_\chi^4TK_2^2(m_{\chi}/T)}\int_{4m_{\chi}^2}^\infty (\sigma_{tot} )\, (s-4m_{\chi}^2)\sqrt{s} \, K_1(\sqrt{s}/T) \, ds \,\,\, , \end{equation} where $\sigma_{tot}$ is the total annihilation cross section and the $K_i$ are modified Bessel functions of order $i$. We find the freeze-out temperature, $T_f$, using the freeze-out condition~\cite{KolbTurner} \begin{equation} \frac{\Gamma}{H(t_F)} \equiv \frac{n_{\chi}^{EQ} \langle \sigma v \rangle}{H(t_F)} \approx 1 \,\, , \end{equation} where the equilibrium number density as a function of temperature is given by \begin{equation} n_{\chi}^{EQ} = \left(\frac{m_\chi T}{2 \pi}\right)^{3/2} e^{-m_\chi/T} \, . \end{equation} The Hubble parameter may be re-expressed as a function of temperature $T$ \begin{equation} H=1.66 \,g_*^{1/2} \,T^2/m_{Pl} \, , \end{equation} where $g_*$ is the number of relativistic degrees of freedom and $m_{Pl}=1.22\times 10^{19}$~GeV is the Planck mass. It is customary to normalize the temperature to the dark matter mass, $x = m_\chi/T$. 
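For orientation, the thermal average above is easy to evaluate with standard numerical tools. The sketch below implements the integral for the fermionic channel of Eq.~(\ref{eq:interaction}) alone, with illustrative parameter values; the full analysis sums all channels, and the mass-suppressed $b\bar b$ example used here is subdominant to the $W^+W^-$, $ZZ$ and $hh$ channels:
\begin{verbatim}
# Sketch of the thermally-averaged annihilation cross section in the
# Gondolo-Gelmini form quoted above, for the f fbar channel only.
import numpy as np
from scipy.special import kn
from scipy.integrate import quad

m_chi, m_h, lam = 2500.0, 117.0, 0.9      # GeV, GeV, dimensionless

def sigma_ff(s, m_f=4.2, N_c=3):          # e.g. b quarks
    return (N_c / (8 * np.pi)) * lam**2 * m_f**2 / (s * (s - m_h**2)**2) \
        * np.sqrt((s - 4 * m_f**2)**3 / (s - 4 * m_chi**2))

def sigma_v_avg(T):
    def integrand(s):
        return sigma_ff(s) * (s - 4 * m_chi**2) * np.sqrt(s) \
            * kn(1, np.sqrt(s) / T)
    s0 = 4 * m_chi**2 * (1 + 1e-9)        # just above threshold
    num, _ = quad(integrand, s0, (2 * m_chi + 50 * T)**2)
    return num / (8 * m_chi**4 * T * kn(2, m_chi / T)**2)

T_f = m_chi / 28.0                        # freeze-out, x_f ~ 28
print("<sigma v>_bb ~ %.2e GeV^-2" % sigma_v_avg(T_f))
\end{verbatim}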
For the points in parameter space discussed below, we find that freeze-out occurs at $x_f \approx 28$. The present dark matter density can be calculated using the relation \begin{equation} \frac{1}{Y_0} = \frac{1}{Y_f} + \sqrt{\frac{\pi}{45}}m_{Pl} \: m_{\chi}\int_{x_f}^{x_0} \frac{g_*^{1/2}}{x^2} \langle \sigma v \rangle \, dx \; , \end{equation} where $Y$ is the ratio of number to entropy density and the subscript $0$ denotes the present time. The ratio of the dark matter relic density to the critical density $\rho_c$ is given by $\Omega_D = 2\, Y_0s_0m_{\chi}/\rho_c$, where $s_0$ is the present entropy density, or equivalently \begin{equation} \Omega_D h^2 \approx 5.6 \times 10^8\mbox{ GeV}^{-1} \, Y_0 \, m_\chi \,\,\, . \end{equation} Note that the factor of $2$ included in the expression for $\Omega_D$ takes into account the contribution from $\chi$ particles and $\bar\chi$ antiparticles. In the case $m_\chi=2.5$~TeV, we find numerically that the dark matter-Higgs coupling must be $\lambda = 0.9$ in order that $\Omega_D h^2 = 0.1$. For $m_\chi=5$~TeV, we find $\lambda=1.8$. These order-one couplings are perturbative. One should keep in mind that the physics responsible for dark matter annihilations is not directly linked to the mechanism that we have proposed to account for dark matter decay; other contributions to the total annihilation cross section can easily be arranged. For example, if the Higgs sector includes mixing with a gauge singlet scalar $S$ such that there is a scalar mass eigenstate near $2 m_\chi$, then the annihilation through the $s$-channel exchange of this state can lead to a resonantly enhanced annihilation channel, as in the model of Ref.~\cite{CEP}. In this case, the correct relic density could be obtained for smaller $\lambda$ than the values quoted above. Finally, we confirm that the model does not conflict with bounds from searches for dark matter-induced nuclear recoils. In this case, the most relevant contribution comes from the interaction between the dark matter and quarks mediated by a $t$-channel Higgs exchange. The effective Lagrangian is given by \begin{equation} \mathcal L = - \frac{\lambda \: m_q}{m_h^2} \chi^\dagger \chi \bar q q. \end{equation} Following Refs.~\cite{McDonald:1993ex,Ellis:2000ds}, we can write an effective interaction between the nucleons and dark matter, \begin{equation} \mathcal L = -(f_p \chi^\dagger \chi \overline{p} \, p + f_n \chi^\dagger \chi \overline{n} \, n) \,, \end{equation} where $f_N = m_N \mathcal A_N \lambda / m_h^2$, for $N=p$ or $n$. The coefficient $\mathcal A_N$ can be evaluated using the results of Ref.~\cite{Ellis:2000ds}; numerically, one finds $f_p \approx f_n$, with $\mathcal A_N \approx 0.35$. Given the effective dark matter-nucleon interaction, we find that the spin-independent cross section is given by \begin{equation} \sigma_{SI} = \frac{\lambda^2 \mathcal A_N^2}{4 \pi}\frac{m_N^4}{m_h^4(m_\chi+m_N)^2}. \end{equation} For both of the cases discussed earlier, $(m_\chi=2.5\mbox{ TeV}, \, \lambda=0.9)$ and $(m_\chi=5\mbox{ TeV}, \, \lambda=1.8)$, we find $\sigma_{SI} \sim \mathcal O(10^{-45}) \textrm{ cm}^2$. This is two orders of magnitude smaller than the strongest bounds, from CDMS~\cite{Ahmed:2009zw}, which range from $\sim 2 \times 10^{-43}$~cm$^2$ at $m_\chi=1$~TeV to $2 \times 10^{-42}$~cm$^2$ at $m_\chi=10$~TeV. \section{Conclusions} Models of decaying dark matter require a plausible origin for the higher-dimension operators that lead to dark matter decays. 
The data from cosmic ray experiments like PAMELA and Fermi-LAT require that these operators involve lepton fields preferentially. We have shown how the desired higher-dimension operators may originate from Planck-suppressed couplings between a TeV-scale scalar dark matter particle $\chi$ and vector-like states at a mass scale $M$ that is intermediate between the weak and Planck scales. The vector-like sector has the structure of a Froggatt-Nielsen model: charged lepton Yukawa couplings arise only after these states are integrated out and a discrete gauged Abelian flavor symmetry is broken. Couplings between $\chi$ and the standard model gauge-invariant combination $\bar L_L H e_R$ are then also generated, with coefficients of order $\langle \phi \rangle^2/(M_*^2\,M)$, where $\langle \phi \rangle$ is the scale at which the flavor symmetry is broken. Taking $M$ and $\langle \phi \rangle$ near the geometric mean of the reduced Planck scale and the weak scale, $O(10^{10})$~GeV, leads to the desired dark matter lifetime. Neutrino masses can be generated via a conventional see-saw mechanism with the mass scale of right-handed neutrinos also near $M$. We pointed out that the symmetry structure of our model leads to an overall suppression factor multiplying the charged lepton Yukawa matrix, but does not constrain the standard model Yukawa textures otherwise. Hence, our framework is potentially compatible with a wide range of possible solutions to the more general problem of quark and lepton flavor in the standard model. We presented the PYTHIA simulations necessary to confirm that our model can account for the anomalies observed in the cosmic ray experiments discussed earlier. The leading contribution to the primary cosmic ray electron and positron flux in our model comes from two-body decays, in which the Higgs field is set equal to its vev in the operator described above; the subleading three-body decays, $\chi \rightarrow \ell^+ \ell^- h^0$, also contribute. We have checked that these decay channels do not lead to an observable excess in the spectrum of cosmic ray antiprotons, which is important since the measured cosmic ray antiproton flux is in agreement with astrophysical background predictions. Our model demonstrates that the desired lifetime and decay channels of a TeV-scale scalar dark matter candidate can be the consequence of renormalizable physics at an intermediate lepton flavor scale and gravitational physics at $M_*$. This presents an alternative scenario to the one in which dark matter decay is a consequence of physics at a unification scale located somewhere between $M$ and $M_*$. \begin{acknowledgments} We thank Josh Erlich and Marc Sher for useful comments. This work was supported by the NSF under Grant PHY-0757481. In addition, C.D.C. gratefully acknowledges support from a William \& Mary Plumeri Fellowship. \end{acknowledgments}
\section{Introduction}% In a companion paper, \cite{BCDPS}, referred to as I, we have started the study of gravitational Chern-Simons terms in higher dimensions. The motivation comes from the observation that, while the 2+1 dimensional case can count on a considerable number of analyses \cite{DJT1,DJT2,Solodukhin:2005ah,Perez:2010hk,Kraus:2005zm,Park:2006gt,Miskovic:2009kr}, little is known about higher dimensional gravitational Chern-Simons theories (we will specify shortly what we mean by this terminology). In I we started searching for systematic answers to the questions raised by the presence of these terms in higher dimensions. One of the problems they raise is how their addition modifies black hole solutions and associated charges. Another important problem is how to compute the black hole entropy in their presence. In this paper, relying in particular on the results of I, we address these issues for spherically symmetric configurations. Let us briefly review the above-mentioned problems by introducing a few definitions and basic properties. We wish to investigate the properties of gravitational actions extended with Chern-Simons terms, \begin{eqnarray} \label{lagrgen} \mathbf{L} = \mathbf{L}_{\mathrm{cov}} + \mathbf{L}_{\mathrm{CS}} \end{eqnarray} By $\mathbf{L}_{\mathrm{cov}}$ we denote some generic manifestly diffeomorphism covariant gravitational Lagrangian $D$-form in $D$ dimensions, while $\mathbf{L}_{\mathrm{CS}}$ contains Chern-Simons terms, which are, on the contrary, not manifestly covariant. A general gravitational Chern-Simons (CS) term, in $D = 2n - 1$ dimensions, has the form \begin{eqnarray}\label{LCS} \mathbf{\Upsilon}_n(\mathbf{\Gamma}) = n \int_0^1 dt \ P_n (\mathbf{\Gamma}, \mathbf{R}_t, \dots, \mathbf{R}_t) \end{eqnarray} where $\mathbf{R}_t= t\, d\mathbf{\Gamma} + t^2\, \mathbf{\Gamma}^2$, $\mathbf{\Gamma}$ is the Levi--Civita connection and $P_n$ denotes an invariant symmetric polynomial of the appropriate Lie algebra, which, for purely gravitational CS terms, is the Lie algebra of the $SO(1,D-1)$ group (in this case $P_n$ are symmetrized traces). In general the polynomial $P_n$ may be irreducible or reducible. It is important to recall that for $n=2k-1$, that is, in $D = 4k - 3$ dimensions, the {\it irreducible} invariant symmetric polynomials of the Lie algebra of $SO(1,D-1)$ identically vanish, $P_{2k-1}= 0$. So, purely gravitational irreducible CS terms can appear only in $D = 4k-1$ dimensions. In this paper we will consider both reducible and irreducible $P_n$'s.\footnote{Our notation is mainly as in \cite{Bertl}.} We will also consider action terms where a gravitational CS term is multiplied by an invariant polynomial made of one or more gauge field strengths, so as to fill up a $D$-form. The simplest of such reducible action terms have the form \begin{eqnarray}\label{mixCS} \mathbf{L}_{1,\mathrm{mix}} = \mathbf{\Upsilon}_m(\mathbf{\Gamma}) P_k(\mathbf{F}) \quad \textrm{or} \quad \mathbf{L}_{2,\mathrm{mix}} = \mathbf{\Upsilon}_k(\mathbf{A}) P_m(\mathbf{R}) \end{eqnarray} where $\mathbf{F}$ represents the generic curvature of a gauge connection $\mathbf{A}$ and $P_k$ is an invariant polynomial of order $k$, such that $D= 2m+2k-1$. $\mathbf{A}$ can be either a non-Abelian or an Abelian gauge connection, or even an RR field (in the latter case the relation between form orders and spacetime dimension may be different from the one just mentioned). We will refer to terms like (\ref{mixCS}) as {\it mixed Lagrangian} terms. 
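To fix ideas, the lowest case of (\ref{LCS}) is $n=2$, relevant in $D=3$: taking $P_2 = \rm tr$ and carrying out the $t$-integral gives the familiar Chern-Simons three-form,
\[
\mathbf{\Upsilon}_2(\mathbf{\Gamma}) = 2 \int_0^1 dt \ \rm tr\!\left[ \mathbf{\Gamma} \left( t\, d\mathbf{\Gamma} + t^2\, \mathbf{\Gamma}^2 \right) \right] = \rm tr\!\left( \mathbf{\Gamma}\, d\mathbf{\Gamma} + \frac{2}{3}\, \mathbf{\Gamma}^3 \right) .
\]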
To summarize and state our problem with precision, in this paper we will consider a broad class of CS terms \begin{eqnarray} \label{LCSgen} \mathbf{L}_{\mathrm{CS}} = \sum_i \mathbf{L}_{\mathrm{gCS}}^{(i)} + \sum_j \mathbf{L}_{\mathrm{aCS}}^{(j)} \end{eqnarray} where \begin{eqnarray} \mathbf{L}_{\mathrm{gCS}}^{(i)} &=& \mathbf{\Upsilon}_{n_{i1}}(\mathbf{\Gamma}) \, P_{n_{i2}}(\mathbf{R}) \, P_{n_{i3}}(\mathbf{R}) \cdots \, P_{r_{i1}}(\mathbf{F}_1) \, P_{r_{i2}}(\mathbf{F}_2) \cdots \label{mixgCS} \\ \mathbf{L}_{\mathrm{aCS}}^{(j)} &=& \mathbf{\Upsilon}_{m_j}(\mathbf{A}_{j}) \, P_{n_{j1}}(\mathbf{R}) \, P_{n_{j2}}(\mathbf{R}) \cdots \, P_{r_{j1}}(\mathbf{F}_1) \, P_{r_{j2}}(\mathbf{F}_2) \cdots \label{mixaCS} \end{eqnarray} and $\mathbf{\Gamma}$ is the symmetric Levi-Civita connection. It is assumed in (\ref{mixaCS}) that there is at least one $P_n(\mathbf{R})$ term present. Also, it is understood in (\ref{mixgCS}) and (\ref{mixaCS}) that the indices $n$ are even integers; otherwise, as explained before, the terms vanish identically (up to total derivatives). For brevity, we shall sometimes refer to the terms in (\ref{LCSgen}), which are of gravitational and mixed gauge-gravitational type, as gravitational CS Lagrangian terms. For completeness, we note that purely gauge Chern-Simons Lagrangian terms, which are generally of the form \begin{eqnarray} \label{gaugeCS} \mathbf{L}_{\mathrm{CS}}^{(j)} = \mathbf{\Upsilon}_{m_j}(\mathbf{A}_{j}) \, P_{r_{j1}}(\mathbf{F}_1) \, P_{r_{j2}}(\mathbf{F}_2) \cdots \ , \end{eqnarray} due to manifest diff-covariance are, if present, contained in the $\mathbf{L}_{\mathrm{cov}}$ part of the Lagrangian (\ref{lagrgen}). The CS terms (\ref{mixaCS}) too are manifestly diff-covariant, but due to their common origin with (\ref{mixgCS}) it is useful to treat them jointly. In I we have analyzed the consequences of adding gravitational or mixed CS terms in gravity theory, as far as black hole entropy is concerned. Extending the covariant phase space formalism of \cite{IyerWald}, according to \cite{Tach}, we have computed in detail the CS induced modifications to the Wald entropy formula. We have analyzed the covariance of the ensuing entropy formula and concluded that, although it looks superficially non-covariant, it can be cast in a covariant form. In the present paper we continue the analysis of the consequences of adding CS terms (\ref{LCSgen})--(\ref{mixaCS}) to gravitational actions, by considering the specific case of spherically symmetric metrics. We will show that solutions with a spherically symmetric metric are generally not modified in $D>3$ dimensions, the only possible exceptions occurring when the following two conditions are simultaneously met: (i) a gravitational CS Lagrangian term, which is a wedge-product of the irreducible gravitational $n=2$ CS term and a purely gauge factor, is present, (ii) this gauge factor (which is by definition gauge invariant) is not spherically symmetric for the solution in question. If both conditions are fulfilled then this ``exceptional'' gravitational CS Lagrangian term may modify such a solution of $\mathbf{L}_{\mathrm{cov}}$. Moreover, relying on the results of I, we will show that the black hole entropy is not modified either, with the same exception understood. The paper is organized as follows. In Sec.\ \ref{sec:eom} we write down the contributions to the equations of motion of the CS Lagrangian terms present in (\ref{LCSgen}). 
In Sec.\ \ref{sec:sssol} we analyze these contributions in the case of a spherically symmetric metric, where the central results are stated in Theorems 1 and 2. In Sec.\ \ref{sec:entropy} we analyze contributions to black hole entropy, the main result being stated in Theorem 3. In Sec.\ \ref{sec:concl} we summarize our findings. In the Appendices, some technical aspects of the calculations are presented in more detail. \section{Equations of motion} \label{sec:eom} Adding gravitational CS terms to the Lagrangian brings about additional terms in the equations of motion. It was shown in \cite{Solodukhin:2005ns} that the equation for the metric tensor $g_{\alpha\beta}$ acquires an additional term $C^{\alpha\beta}$ of the form \begin{equation} \label{CSeom} C^{\alpha\beta} = \nabla_{\!\rho} \, S^{(\alpha\beta)\rho} \end{equation} where the tensor $S^{(\alpha\beta)\rho}$, whose exact form depends on the explicit form of the CS Lagrangian terms, is antisymmetric in the last two indices. Moreover, the tensor $C^{\alpha\beta}$ is traceless and covariantly conserved \begin{eqnarray} \tensor{C}{^\alpha_\alpha} = 0 \ , \quad \nabla_{\!\alpha} \, C^{\alpha\beta} = \nabla_{\!\alpha} \nabla_{\!\rho} \, S^{(\alpha\beta)\rho} = 0 \end{eqnarray} and $S$ satisfies the following properties \begin{eqnarray} S^{\alpha\beta\rho} = S^{\alpha[\beta\rho]} \ , \quad \tensor{S}{^\alpha^\beta_\alpha} = 0 \ , \quad \nabla_{\!\alpha} \, S^{\alpha\beta\rho} = 0 \end{eqnarray} These follow from the symmetries of the Riemann tensor, the Bianchi identities and the form of $S$. It was also shown in \cite{Solodukhin:2005ns} that the tensor $C^{\alpha\beta}$ can be viewed as a generalization of the Cotton tensor. Mixed CS Lagrangian terms also contribute to the equations of motion for the gauge fields participating in these terms. What follows is an overview of the contributions to the equations of motion from the different types of gravitational CS terms. We consider a mixed Lagrangian with one gravitational CS term, $N-1$ gravitational terms and $S$ gauge terms, \begin{eqnarray} \mathbf{L} = \mathbf{\Upsilon}_{n_1}(\mathbf{\Gamma}) \, P_{n_2}(\mathbf{R}) \cdots P_{n_{N}}(\mathbf{R}) \,\, P_{r_1}(\mathbf{F}_1) \cdots P_{r_{S}}(\mathbf{F}_S) \end{eqnarray} where each gauge field strength $F_i$ is a $p_i$-form. Using partial integration and discarding boundary terms, the variation of this Lagrangian can be put in the following form \begin{eqnarray} \label{genEOMs} \delta \mathbf{L} &=& \sum_{i=1}^N P_{n_1}(\mathbf{R}) \cdots \delta\mathbf{\Upsilon}_{n_i}(\mathbf{\Gamma}) \cdots P_{n_{N}}(\mathbf{R}) \, P_{r_1}(\mathbf{F}_1) \cdots P_{r_S}(\mathbf{F}_S) + \\ &+& \sum_{j=1}^{S} (-1)^{1+\gamma_j} \, P_{n_1}(\mathbf{R}) \cdots P_{n_N}(\mathbf{R}) \, P_{r_1}(\mathbf{F}_1) \cdots \delta\mathbf{\Upsilon}_{r_j}(\mathbf{A}_j) \cdots P_{r_{S}}(\mathbf{F}_{S})\nonumber \end{eqnarray} Here $P_{n_1}(\mathbf{R}) \cdots \delta\mathbf{\Upsilon}_{n_i}(\mathbf{\Gamma}) \cdots P_{n_{N}}(\mathbf{R})$ denotes the wedge product $P_{n_1}(\mathbf{R}) \cdots P_{n_{N}}(\mathbf{R})$ in which $P_{n_{i}}(\mathbf{R})$ is replaced by $\delta\mathbf{\Upsilon}_{n_i}(\mathbf{\Gamma})$. The order of the form $P_{r_1}(\mathbf{F}_1) \cdots \, P_{r_{j-1}}(\mathbf{F}_{j-1})$ is denoted by $\gamma_j$ and is equal to $\gamma_j = \sum_{l=1}^{j-1} p_l r_l$. The first line of \refb{genEOMs} determines the equation of motion \refb{CSeom} for the metric and the second line the equations of motion for the gauge fields. Now we write the equation of motion for the metric in more detail. 
To write it in compact form we first introduce the $2(n-1)$-form $\left(\mathbf{K}_{(n)}\right)^{\alpha\beta} \equiv \left(\mathbf{R}^{n-1}\right)^{\alpha\beta}$, whose components are \begin{eqnarray} K_{(n)}^{\alpha\beta}{}_{\mu_1 \cdots \mu_{2n-2}} = \frac{(2n-2)!}{2^{n-1}} \, \tensor{R}{^\alpha_{\sigma_1}_[_{\mu_1}_{\mu_2}} \, \tensor{R}{^{\sigma_1}_{| \sigma_2 |}_{\mu_3}_{\mu_4}} \cdots \tensor{R}{^{\sigma_{n-2}}^{\beta}_{\mu_{2n-3}}_{\mu_{2n-2}]}} \end{eqnarray} Using the symmetries of the Riemann tensor, and the fact that $n$ is even, it can be easily shown that this tensor is antisymmetric in its upper indices. It is also convenient to introduce the $(2n-1)$-form $\bar{\mathbf{K}}{}^{\alpha\beta\gamma}_{(n)}$ whose components are \begin{eqnarray} \bar{K}_{(n)}^{\alpha\beta\gamma}{}_{{\mu_1} \cdots {\mu_{2n-1}}} = (2n-1)K_{(n)}^{\alpha\beta}{}_{[\mu_1 \cdots {\mu_{2n-2}} } \delta^\gamma_{\mu_{2n-1}]} \end{eqnarray} The generic variation of $\mathbf{\Upsilon}_n(\mathbf{\Gamma})$ (see I) can now be written as \begin{eqnarray}\label{deltaLCS2} \delta \mathbf{\Upsilon}_n(\mathbf{\Gamma}) = n \, P_n (\delta \mathbf{\Gamma}, \mathbf{R}^{n-1}) + d(\ldots) = n \, \bar{\mathbf{K}}_{(n)}^{\alpha}{}_\beta{}^\gamma \delta \Gamma^\beta{}_{\alpha \gamma} + d(\ldots) \end{eqnarray} The part of the variation which is exact does not affect the equation of motion, so we will dispense with writing it down explicitly% \footnote{The interested reader can find these boundary terms, which contribute to the symplectic potential, written down explicitly in \cite{BCDPS}.}. Now, using \begin{eqnarray} \delta \Gamma^\lambda_{\alpha \mu} = \frac{1}{2} \, g^{\lambda \beta} \left( \nabla_{\!\alpha} \, \delta g_{\beta \mu} + \nabla_{\!\mu} \, \delta g_{\beta \alpha} - \nabla_{\!\beta} \, \delta g_{\alpha \mu} \right) \end{eqnarray} together with the antisymmetry of $\bar{\mathbf{K}}_{(n)}^{\alpha\beta\gamma}$ in $\alpha$ and $\beta$, we obtain (up to boundary terms) \begin{eqnarray} \delta \mathbf{\Upsilon}_n(\mathbf{\Gamma}) = n \, \bar{\mathbf{K}}_{(n)}^{\rho\alpha\beta} \, \nabla_{\!\rho} \delta g_{\alpha\beta} \end{eqnarray} Comparing the first line of \refb{genEOMs} (after the partial integration) to $\nabla_\rho S^{\alpha\beta\rho} \delta g_{\alpha \beta} {\mathbf{\epsilon}}$, we obtain the tensor $S$ \begin{eqnarray} \label{genS} S^{\alpha\beta\rho} = (-)^{s+1}\sum_{i=1}^N *\left( n_i \, P_{n_1}(\mathbf{R}) \cdots \bar{\mathbf{K}}_{(n_i)}^{\rho\alpha\beta} \cdots P_{n_{N}}(\mathbf{R}) \, P_{r_1}(\mathbf{F}_1) \cdots P_{r_S}(\mathbf{F}_S) \right) \end{eqnarray} where $*$ denotes the Hodge dual\footnote{$(*A)^{a_{p+1}\ldots a_D} = \frac{1}{p!} A_{a_1\ldots a_p} \epsilon^{a_1 \ldots a_D}$}. Here $s$ denotes the number of minuses in the metric signature. Note that the form in the brackets on the right hand side is a $D$-form. 
In the case $N=1$ and $S=0$, i.e.\ irreducible pure gravity CS, the last expression reduces to: \begin{eqnarray} S^{\alpha\beta\rho} = (-)^{s+1}* \left( n \bar{\mathbf{K}}_{(n)}^{\rho\alpha\beta} \right) = -\frac{(-)^s n}{(2n-2)!} \, \epsilon^{\mu_1 \ldots \mu_{2n-2}\beta} K_{(n)}^{\rho\alpha}{}_{\mu_1 \ldots {\mu_{2n-2}} } \end{eqnarray} The contribution to the equation of motion is then \begin{eqnarray} \label{Cirred} C^{\alpha\beta} = \frac{(-1)^s n}{(2n-2)!} \ \epsilon^{\mu_1 \cdots \mu_{2n-2} (\alpha} \, \nabla_{\!\rho} K_{(n)}^{\beta)\rho}{}_{\mu_1 \ldots {\mu_{2n-2}}} \end{eqnarray} As a simple example, let us apply (\ref{Cirred}) to the pure gravity irreducible CS Lagrangian term in $D=7$ dimensions. Its explicit contribution to the action is \begin{eqnarray} S_7 &=& \int \mathbf{\Upsilon}_4 = \int \textrm{str}( \mathbf{R}^3 \mathbf{\Gamma} - \frac{3}{5} \mathbf{R}^2 \mathbf{\Gamma}^3 + \frac{1}{5} \mathbf{R} \mathbf{\Gamma}^5 - \frac{1}{35}\mathbf{\Gamma}^7) \label{s7} \\ &=& \int \rm tr( \mathbf{R}^3 \mathbf{\Gamma} -\frac{2}{5} \mathbf{R}^2 \mathbf{\Gamma}^3 -\frac{1}{5} \mathbf{R} \mathbf{\Gamma}^2 \mathbf{R} \mathbf{\Gamma} +\frac{1}{5} \mathbf{R} \mathbf{\Gamma}^5 -\frac{1}{35}\mathbf{\Gamma}^7) \nonumber \end{eqnarray} and to the equations of motion is \begin{eqnarray} \label{C7} C^{\alpha\beta} = \frac{(-1)^s }{2} \ \epsilon^{\mu_1 \cdots \mu_{6} (\alpha} \, \nabla_{\!\rho} \, \left( \tensor{R}{^{\beta)}_{\sigma_1}_{\mu_1}_{\mu_2}} \, \tensor{R}{^{\sigma_1}_{ \sigma_2 }_{\mu_3}_{\mu_4}} \tensor{R}{^{\sigma_{2}}^{\rho}_{\mu_{5}}_{\mu_{6}}} \right) \end{eqnarray} \section{Spherically symmetric solutions} \label{sec:sssol} The most general spherically symmetric $D$-dimensional metric can be written (see Appendix B of \cite{HawkingEllis}) in the following form, \begin{eqnarray}\label{sphsym} ds^2 = -f(t,r) \, dt^2 + \frac{dr^2}{g(t,r)} + h(t,r) \, d\Omega^2_{D-2} \end{eqnarray} We are using here coordinates \begin{eqnarray} \label{gensphcoord} x^0 = t \ , \quad x^1 = r \ , \quad x^i = \theta^i \quad (i = 2, 3, \dots , D-1) \end{eqnarray} where $\theta^i$ are angular coordinates on spheres defined by $t,r=\textrm{constant}$. Angular coordinates are such that $$0 \le \theta^i < \pi \quad \textrm{for} \quad i = 2, \dots, D-2 \quad \textrm{and} \quad 0 \le \theta^{D-1} < 2\pi$$ (this last coordinate $\theta^{D-1}$ is more frequently denoted by $\phi$). In the rest of the paper we will use indices $i$, $j$ and $k$ for angular coordinates ($i,j,k = 2, \dots, D-1$). Introducing the auxiliary function \begin{eqnarray} \Pi(k) = \left\{ \begin{array}{ccl} 1 & , & k = 2 \\ & & \\ \prod_{m=2}^{k-1} \, \sin^{2} \theta^m & , & k \ge 3 \end{array} \right. \end{eqnarray} we can write the metric components in a simple way as \begin{eqnarray} g_{00} = - f(t,r) \ , \quad g_{11} = \frac{1}{g(t,r)} \ , \quad g_{ii} = h(t,r) \, \Pi(i) \ , \quad i = 2, \dots , D-1 \end{eqnarray} In the argument that follows, the crucial piece of information is the identification of the nonvanishing components of the Riemann tensor for this type of metric.\\ \noindent \textbf{Lemma 1.} The only non-vanishing components of the Riemann tensor for a spherically symmetric metric in coordinates \refb{gensphcoord} are, up to symmetries of their indices, of the form $R_{\mu\nu\mu\nu}$ and $R_{0i1i}$. 
Since the metric is diagonal, this remains valid even if some of the indices are raised.\\ \noindent From this property it immediately follows that \begin{eqnarray} \label{rrr0} \mathbf{R}^3 = 0 \quad\quad \quad \rm tr (\mathbf{R}^2) = 0 \end{eqnarray} The proof of Lemma 1 is by direct calculation of the Riemann tensor components, and is presented in Appendix B. Using Lemma 1 we can analyze the product of Riemann tensors inside the tensor $C^{\mu\nu}$. A consequence is the following theorem:\\ \noindent \textbf{Theorem 1.} \ The contribution of the gravitational and mixed gauge-gravitational Chern-Simons Lagrangian terms (\ref{LCSgen})--(\ref{mixaCS}) to the equations of motion vanishes identically for any configuration with a spherically symmetric metric in $D > 3$ dimensions, unless the gravitational contribution consists solely of the $\mathbf{\Upsilon}_2(\mathbf{\Gamma})$ factor wedge-multiplied by a spherically asymmetric gauge factor (we refer to such terms in the Lagrangian as \emph{exceptional} terms).\\ \emph{Proof}: First notice that in all cases, except those in which $\mathbf{\Upsilon}_2(\mathbf{\Gamma})$ is present as the only gravitational factor, all contributions to the equations of motion of the CS terms contain as a factor either $\mathbf{R}^3$ or $\rm tr (\mathbf{R}^2)$, which, by Lemma 1, vanish for spherically symmetric metrics (\ref{sphsym}). In $D>3$ this leaves only the following gravitational CS Lagrangian terms as potentially nontrivial \begin{equation} \label{gcs2mix} \mathbf{\Upsilon}_2(\mathbf{\Gamma}) \, P_{r_1}(\mathbf{F}_1) \cdots P_{r_S}(\mathbf{F}_S) \ , \end{equation} where $F_i$ are some $p_i$-form gauge field strengths. The total contribution of such terms to the Lagrangian is obviously of the form \begin{eqnarray} \label{gcs2tot} \mathbf{\Upsilon}_2(\mathbf{\Gamma}) \, G(\mathbf{F}) \;\;, \end{eqnarray} where $G(\mathbf{F})$ is a gauge invariant $(D-3)$-form. From (\ref{genEOMs}) it follows that (\ref{gcs2tot}) contributes only to the equation of motion for the metric $g_{ab}$ (due to the $\textrm{str}(\mathbf{R}^2)=0$ factor appearing in the equations for gauge fields). The tensor $S^{\alpha\beta\rho}$ is \begin{eqnarray} \label{Sspher} S^{\alpha\beta\rho} &=& (-)^{s+1} *\left( 2 \bar{\mathbf{K}}_{(2)}^{\rho\alpha\beta} G(\mathbf{F}) \right) \nonumber\\ &=& (-)^{s+1} \, \frac{2\cdot 3}{D!} \, \epsilon^{\mu_1\cdots\mu_{D-1}\beta} R^{\rho\alpha}{}_{\mu_1\mu_2} \, G(\mathbf{F})_{\mu_{3}\cdots\mu_{D-1}}, \end{eqnarray} which follows from \refb{genS}. Following the statement in Theorem 1, we now restrict ourselves to configurations of gauge fields for which $G(\mathbf{F})$ is spherically symmetric. Spherically symmetric forms correspond to one, or to a linear combination of two, of the following cases: \begin{itemize} \item[(a)] $a(t,r) \, dt + b(t,r) \, dr \quad$ (1-form) \item[(b)] $a(t,r) \, dt \wedge dr \quad$ (2-form) \item[(c)] $a'(t,r) \, \tilde{\epsilon}_{D-2} \quad$ ($(D-2)$-form) \item[(d)] $a'(t,r) \, dt \wedge \tilde{\epsilon}_{D-2} + b'(t,r) \, dr \wedge \tilde{\epsilon}_{D-2} \quad$ ($(D-1)$-form) \item[(e)] $a'(t,r) \, dt \wedge dr \wedge \tilde{\epsilon}_{D-2} \quad$ ($D$-form) \end{itemize} where $\tilde{\epsilon}_{D-2}$ is the volume form of the $S^{D-2}$ sphere. Since $G(\mathbf{F})$ is a $(D-3)$-form, the possibilities (c)-(e) are trivially excluded. Thus we are left with the first two, mutually exclusive, possibilities (a) and (b). 
For (a), which exists only in $D=4$ dimensions, and where $G$ is a derivative of some scalar field, it was shown in the literature \cite{Grumiller:2007rv} that the spherical symmetry of the metric forces $C^{\alpha\beta}=0$, i.e., there is no contribution to the equations of motion.\footnote{Results from \cite{Cantcheff:2008qn} suggest that this may not be valid in the Einstein-Cartan first-order approach to gravity. However, in this paper we stick to the standard general relativistic formulation of gravity with torsionless connection.} For (b), which exists only in $D=5$ dimensions, by using $G(\mathbf{F}) \propto dt \wedge dr$ and the expressions for the components of the Riemann tensor (\ref{Rspher}), we obtain that the only nonvanishing components of the $S^{\alpha\beta\rho}$ tensor have the form $S^{ijk} \propto \epsilon^{ijk01}$. We see that $S^{\alpha\beta\rho}$ is antisymmetric in the first two indices, so $S^{(\alpha\beta)\rho} = 0$, which by (\ref{CSeom}) gives $C^{\alpha\beta}=0$. Thus we have shown that terms of the type (\ref{gcs2mix}) do not contribute to the equations of motion either, provided that, in addition to the metric tensor, the total gauge factor $G(\mathbf{F})$ is spherically symmetric. This completes the proof of Theorem 1.\\ Theorem 1 extends the recent result from \cite{LuPang} and states that the addition of any combination of gravitational and mixed gauge-gravitational Chern-Simons terms (\ref{LCSgen})--(\ref{mixaCS}) to any Lagrangian leaves all spherically symmetric solutions unchanged in $D>3$ dimensions.\footnote{By spherically symmetric configurations we mean configurations in which the metric and gauge-invariant tensors, such as $G(\mathbf{F})$, are spherical symmetry invariants.} It also shows that this is still valid in much broader circumstances. Even if we require only the metric to be spherically symmetric, and allow other fields to be spherically asymmetric, Theorem 1 states that solutions can be affected only if two conditions are met: (i) CS terms of the type (\ref{gcs2tot}) are added to the Lagrangian, (ii) the total gauge part (of all such terms collected together) $G(\mathbf{F})$ evaluated on the solution is \emph{not} a spherically symmetric $(D-3)$-form. We postpone further discussion of these ``exceptional'' cases to Sec.~\ref{sec:concl}. For completeness, we mention that purely gauge CS Lagrangian terms (\ref{gaugeCS}) cannot be included in the general statement of Theorem 1. It has been shown on explicit examples in the literature that such CS Lagrangian terms can affect spherically symmetric solutions \cite{Brihaye:2009cc,Brihaye:2010wp,Brihaye:2011nr}. However, we remind the reader that purely gauge CS terms are manifestly diff-covariant and so do not fall in the class of CS terms of interest in this paper.\\ Although the $D=3$ case is not meant to be explicitly addressed in this paper, let us summarize what can be said about it on the basis of the existing literature. In $D=3$ dimensions Theorem 1 is in general not valid. Here one has the $n=2$ irreducible gravitational Chern-Simons term with contribution to the equations of motion proportional to the Cotton-York tensor, \begin{eqnarray}\label{Cotton} C^{\mu\nu} = \epsilon^{\mu\alpha\beta} \, \nabla_{\!\alpha} \left( \tensor{R}{^\nu_\beta} - \frac{1}{4} \, \tensor{\delta}{^\nu_\beta} R \right) \end{eqnarray} which does not vanish for a general spherically symmetric metric (\ref{sphsym}). 
For example, it was shown in \cite{Garcia} that already for a static spherically symmetric metric \begin{eqnarray} ds^2 = -f(r) dt^2 + \frac{dr^2}{f(r)} + r^2 d\phi^2 \end{eqnarray} one obtains \begin{eqnarray} C^{t\phi} = \frac{f'''}{4\sqrt{f}}, \end{eqnarray} which is generally non-vanishing. However, it is known that the Cotton tensor vanishes for several important classes of 3D metrics, so in these cases the 3D gravitational CS Lagrangian term is effectively irrelevant as far as the equations of motion are concerned. For example, the Cotton tensor vanishes for any 3D Einstein metric\footnote{$g_{ab}$ is an \emph{Einstein metric} if the Ricci tensor is proportional to the metric itself, $R_{ab} = k g_{ab}$ for some \emph{constant} $k$. It is true in any dimension $D \ge 3$ that $g_{ab}$ is an Einstein metric if and only if it is a solution to the vacuum Einstein field equations with cosmological constant $\Lambda = (D-2)k/2$.}. In particular, the Cotton tensor vanishes for the famous BTZ black hole solution \cite{Kaloper:1993kj}. The generalized Cotton tensor is still conformally invariant \cite{Solodukhin:2005ns}, but Einstein metrics in $D > 3$ are not necessarily maximally symmetric.\\ Birkhoff's theorem in the presence of the gravitational CS term in $D=3$ was examined by Cavagli\`a \cite{Cavaglia}, and in a particular class of $D=4$ theories with a mixed-type gravitational CS Lagrangian term in \cite{Yunes:2007ss}. Using our Theorem 1 we generalize these results to a wide range of higher dimensional cases:\\ \noindent \textbf{Theorem 2.} (\textsf{Chern-Simons-Birkhoff}) \ Assuming that we have a theory in a $D>3$ dimensional spacetime (described by the Lagrangian $\mathbf{L}_G$) in which all spherically symmetric solutions are necessarily static, this property remains valid even if we include additional gravitational (pure or mixed) Chern-Simons Lagrangian terms $\mathbf{L}_{CS}$ (\ref{LCSgen})--(\ref{mixaCS}).\\ For example, Birkhoff's theorem is known to be valid for Lovelock-type gravitational Lagrangians in any dimension (see \cite{Zegers,Deser:2005gr}). Some other examples can be found in \cite{Oliva:2011xu}.\\ \section{Entropy of static spherically symmetric black holes} \label{sec:entropy} In the previous section we have shown that, generically, gravitational CS Lagrangian terms do not affect spherically symmetric solutions in $D>3$. Here we will show that they do not affect the entropy of static spherically symmetric black holes either. Applying staticity to (\ref{sphsym}), one obtains that any static spherically symmetric metric can be written in the form \begin{eqnarray}\label{staticm} ds^2 = -f(r) dt^2 + \frac{dr^2}{g(r)} + h(r) d\Omega_{D-2}^2 \end{eqnarray} which can be used for static spherically symmetric black holes outside the horizon. In the theories we consider, the Lagrangian (\ref{lagrgen}) can be written in the following forms \begin{equation} \label{Lgen} \mathbf{L} = \mathbf{L}_{\mathrm{cov}} + \mathbf{L}_{\mathrm{CS}} = \mathbf{L}_{\mathrm{cov}} + \mathbf{L}_{\mathrm{aCS}} + \mathbf{L}_{\mathrm{gCS}} = \mathbf{L}_{\mathrm{diff-cov}} + \mathbf{L}_{\mathrm{gCS}} \end{equation} where the part $\mathbf{L}_{\mathrm{diff-cov}}$ contains all manifestly diff-covariant terms (which includes also the $\mathbf{L}_{\mathrm{aCS}}$ part), while the second piece contains all non-manifestly diff-covariant terms. In the class of theories we consider in this paper, $\mathbf{L}_{\mathrm{gCS}}$ is made of terms as in (\ref{mixgCS}). 
In such theories the entropy assigned to the black hole solutions can correspondingly be split in several ways \begin{equation} \label{entgen} S_{\mathrm{bh}} = S_{\mathrm{cov}} + S_{\mathrm{CS}} = S_{\mathrm{cov}} + S_{\mathrm{aCS}} + S_{\mathrm{gCS}} = S_{\mathrm{diff-cov}} + S_{\mathrm{gCS}} \end{equation} The piece $S_{\mathrm{diff-cov}}$ can be obtained from $\mathbf{L}_{\mathrm{diff-cov}}$, and is given by the general Wald formula \cite{Wald1,JKM,IyerWald} \begin{eqnarray} \label{Swald} S_{\mathrm{diff-cov}} = S_W = 2\pi \int_{\mathcal{B}} \epsilon^a{}_b \left( \frac{\delta \mathbf{L}_{\mathrm{diff-cov}}}{\delta \tensor{R}{^a_b_\mu_\nu}} \right)_{\mu\nu\rho_1\cdots\rho_{D-2}} \end{eqnarray} The second piece in (\ref{entgen}), $S_{\mathrm{gCS}}$, can be obtained from $\mathbf{L}_{\mathrm{CS}}$ by the generalization of Wald's procedure to non-manifestly diff-covariant Lagrangians, as described in \cite{Tach}. In \cite{BCDPS} it was shown that for the gravitational CS Lagrangian terms (\ref{mixgCS}) there is a general expression also for this part of the entropy \begin{eqnarray} \label{genwaldent} S_{\mathrm{gCS}} = 2\pi \int_{\mathcal{B}} \tensor{\epsilon}{^a_b}\left( \frac{\delta \mathbf{L}_{\mathrm{gCS}}}{\delta \tensor{\mathbf{R}}{_t^a_b_\mu_\nu}} \right)_{\mu\nu\rho_1\cdots\rho_{D-2}} \end{eqnarray} which is similar in form to the Wald formula (\ref{Swald}), the only difference being that one takes the variation with respect to $\mathbf{R}_t$ instead of $\mathbf{R}$.\footnote{Here it is understood that the variation with respect to $\mathbf{R}_t$ acts inside the $t$ integral present in the Lagrangian. The Lagrangians used here are of the form $\mathbf{L}_{\mathrm{gCS}}^{(i)}$ from (\ref{LCSgen}), which means that there will always be exactly one $\mathbf{\Upsilon}$ factor, and consequently exactly one $t$ integral.} In both terms (\ref{Swald}) and (\ref{genwaldent}) one inserts the black hole solution to the equations of motion obtained from the complete Lagrangian (\ref{Lgen}), and integrates over the $(D-2)$-dimensional horizon cross-section (bifurcation surface) with binormal $\epsilon_{ab}$, normalized by $\epsilon_{ab}\epsilon^{ab} = -2$.\\ \noindent \textbf{Theorem 3.} \ Gravitational CS Lagrangian terms {(\ref{LCSgen})-(\ref{mixaCS})} do not affect the entropy of static spherically symmetric black holes\footnote{Static spherically symmetric black holes are defined here as being characterized by a static and spherically symmetric metric tensor (\ref{staticm}), with no conditions on the other fields.} in $D > 3$ dimensions, apart from the exceptional cases mentioned in Theorem 1 (which have spherically asymmetric gauge field configurations).\\ \emph{Proof}: First we note that Theorem 1 guarantees (up to the exceptional cases mentioned below) that gravitational CS Lagrangian terms {(\ref{LCSgen})-(\ref{mixaCS})} do not change static spherically symmetric black hole solutions (obtained from $\mathbf{L}=\mathbf{L}_{\mathrm{cov}}$), which means that they do not change the $S_{\mathrm{cov}}$ part of the black hole entropy. The only possible exceptions are the cases excluded in Theorem 1: CS Lagrangian terms having the $n=2$ gravitational CS term as the sole gravitational contribution, in the case of configurations in which the gauge part of such a CS Lagrangian term is not spherically symmetric. When such terms change a solution, they obviously may change $S_{\mathrm{cov}}$.\footnote{However, as discussed in Sec.
\ref{sec:concl}, in such exceptional cases the spherical symmetry of the metric will be broken.} Let us turn now to the $S_{\mathrm{CS}} = S_{\mathrm{gCS}} + S_{\mathrm{aCS}}$ contribution. Inspection of the relevant Lagrangian terms (\ref{mixgCS}) and (\ref{mixaCS}), together with the corresponding entropy formulae (\ref{genwaldent}) and (\ref{Swald}), shows that the properties (\ref{rrr0}) imply that all terms give a vanishing contribution to the entropy, except those which have just one gravitational factor with $n=2$. These terms have one of the two following forms \begin{eqnarray} \mathbf{\Upsilon}_2(\mathbf{\Gamma}) \, \textrm{str}(\mathbf{F}_1^{r_1}) \cdots \, \textrm{str}(\mathbf{F}_S^{r_S}) \label{Th3gCS} \\ \mathbf{\Upsilon}_m(\mathbf{A}) \, \textrm{str}(\mathbf{F}_1^{r_1}) \cdots \, \textrm{str}(\mathbf{F}_S^{r_S}) \, \textrm{str}(\mathbf{R}^2) \label{Th3aCS} \end{eqnarray} The Lagrangian term (\ref{Th3gCS}) produces a contribution to the entropy formula proportional to \cite{BCDPS} \begin{eqnarray} \label{Th3gCSent} \int_{\mathcal{B}} \mathbf{\Gamma}_N \, \textrm{str}(\mathbf{F}_1^{r_1}) \cdots \textrm{str}(\mathbf{F}_S^{r_S}) \end{eqnarray} where $(\mathbf{\Gamma}_N)_\mu = \frac{1}{2} \, \epsilon^\alpha{}_\beta \Gamma^\beta{}_{\alpha\mu}$, while the Lagrangian term (\ref{Th3aCS}) produces a contribution to the entropy formula proportional to \begin{eqnarray} \label{Th3aCSent} \int_{\mathcal{B}} \mathbf{\Upsilon}_m(\mathbf{A}) \, \textrm{str}(\mathbf{F}_1^{r_1}) \cdots \textrm{str}(\mathbf{F}_S^{r_S}) \, \mathbf{R}_N \end{eqnarray} where $(\mathbf{R}_N)_{\mu\nu} = \frac{1}{2} \, \tensor{\epsilon}{^\alpha_\beta} \tensor{R}{^\beta_\alpha_\mu_\nu}$. From the fact that in the metric (\ref{staticm}) the components of the binormal $\tensor{\epsilon}{^\alpha_\beta}$ lie in the $(t,r)$-plane, and from the explicit form of the connection and the Riemann tensor given in (\ref{Gamstat}) and (\ref{Rstat}), it follows that \begin{eqnarray} (\mathbf{\Gamma}_N)_i = 0 \; , \qquad (\mathbf{R}_N)_{ij} = 0 \end{eqnarray} This entails that the contributions (\ref{Th3gCSent}) and (\ref{Th3aCSent}) to the black hole entropy also vanish. This completes the proof of Theorem 3.\footnote{Note that purely gauge CS terms (\ref{gaugeCS}), which we included into $\mathbf{L}_{\mathrm{diff-cov}}$, do not produce additional terms in $S_{\mathrm{diff-cov}}$. However, as they contribute to the equations of motion, they may change the entropy by affecting the black hole solution.}\\ As a simple example, we apply our results to the special case of CS modified gravity in $D=4$ with the Lagrangian density \begin{eqnarray} \label{4DCSL} \mathbf{L} = R \mbox{\boldmath $\epsilon$} + \mathbf{L}_{\vartheta} + \lambda \mathbf{\Upsilon}_2(\mathbf{\Gamma}) \wedge d\vartheta \end{eqnarray} where $\vartheta(x)$ is a scalar field and $\lambda$ is a coupling constant (which does not appear in $\mathbf{L}_{\vartheta}$). As the CS Lagrangian term in these theories can be written in the manifestly diff-covariant form $\mathbf{L}_{\mathrm{CS}} = \vartheta \, \mathbf{R} \wedge \mathbf{R}$, they do not belong to the type of theories characterized by Lagrangians which cannot be written in manifestly diff-covariant form, which are our primary interest. However, our results extend also to these theories and it is interesting to compare them with those existing in the literature on these specific $D=4$ theories.
The theory (\ref{4DCSL}) is rather well-studied\footnote{Mostly in the cases $\mathcal{L}_{\vartheta}=0$ (in which case $\vartheta(x)$ is a non-dynamical field) and $\mathcal{L}_{\vartheta} = (\partial \vartheta)^2$. See \cite{Alexander:2009tp} for a review.} so we use it to compare our results with the existing literature. Let $g_{0\mu\nu}$ and $\vartheta_0(r,t)$ be an arbitrary spherically symmetric solution of the theory with $\lambda=0$. Then Theorem 1 says that it will be a solution for all values of $\lambda$, in agreement with the known results \cite{Grumiller:2007rv}. Theorem 2 says that, if $\mathbf{L}_{\vartheta}$ is such that Birkhoff's theorem holds in the theory with $\lambda=0$, then it holds for all $\lambda$'s. This extends \cite{Yunes:2007ss}, where such a result was shown in the case when $\mathbf{L}_{\vartheta}$ is such that the $\lambda=0$ theory possesses the Schwarzschild black hole as a solution. Finally, Theorem 3 says that if this solution describes a black hole, then its entropy does not depend on $\lambda$. In \cite{Grumiller:2008ie} it was shown, using Euclidean methods, that the thermodynamics of such spherically symmetric black holes will not depend on $\lambda$ \emph{modulo} a possible contribution of some boundary term $\Delta\mathcal{F}$ in the on-shell action, which the authors were unable to calculate. Our results show that this unknown boundary term does not influence the entropy of spherically symmetric black holes in the theories (\ref{4DCSL}). \section{Conclusion \label{sec:concl}} In this paper we have analyzed the consequences of adding a broad class of Chern-Simons terms to a gravitational action in the case of spherical symmetry. We have considered both gravitational and mixed gauge-gravity CS terms and focused on the case of a general spherically symmetric metric. We have found that in $D>3$ dimensions (the case of the gravitational CS term in 3D must be considered separately and has largely already been studied in the literature) the contribution of such terms to the equations of motion vanishes identically, except in the case when the following two conditions are met: (i) a mixed gauge-gravitational CS Lagrangian term, which is a wedge-product of the irreducible gravitational $n=2$ CS term and a purely gauge factor, is present, (ii) this gauge factor (which is by definition gauge invariant) is \emph{not} spherically symmetric for the configuration in question. A consequence is that the gravitational and mixed gauge-gravitational CS Lagrangian terms, apart from the previously mentioned exceptions, do not affect Birkhoff's theorem or any solution with a spherically symmetric metric. We have then considered the problem of computing the entropy for spherically symmetric black holes in the presence of such CS terms. To this end we have used a general formula obtained in a previous paper \cite{BCDPS} by means of the covariant phase space formalism, adapted to the presence of CS terms. This formula is similar to the one obtained by Wald for covariant gravity Lagrangians. Applied to spherically symmetric black holes, it tells us that the contribution of the CS terms to the entropy vanishes. Let us briefly analyze the ``exceptional cases'' in more detail. By Theorem 1, they may appear only for configurations in which the total gauge part of the exceptional CS Lagrangian terms is not spherically symmetric.
It is important to note that this form is by definition gauge invariant, which means that such exceptional configurations must contain some gauge fields which are not spherically symmetric. Let us now assume that one such configuration is a solution to the equations of motion obtained from some Lagrangian $\mathbf{L}_{\mathrm{cov}}$. Then the spherical symmetry of the metric requires that the energy-momentum tensor (obtained from $\mathbf{L}_{\mathrm{cov}}$) be spherically symmetric for such a solution. One can now see that ``exceptional'' solutions are somewhat exotic, possessing a spherical asymmetry in the gauge fields which disappears in the energy-momentum tensor, but survives in the gauge invariant factor present in the ``exceptional'' CS Lagrangian term we want to add to $\mathbf{L}_{\mathrm{cov}}$. We are not aware of any explicit examples of such behavior, but we are not aware either of a proof that such cases are not possible in complicated theories with several gauge fields. So, let us now assume that there are such ``exceptional'' solutions, and see what we should expect when we add to the Lagrangian some gravitational and mixed gauge-gravitational CS part $\mathbf{L}_{\mathrm{CS}}$ which includes ``exceptional'' CS terms. This will produce a contribution to the equations of motion for the metric which, for ``exceptional'' terms, is proportional to (in symbolic notation) $\nabla (\mathbf{R} G(\mathbf{F}))$. As $G(\mathbf{F})$ is not spherically symmetric for the unperturbed solution, we see that in general we should expect a spherically asymmetric perturbation of the metric equation. So, it appears that in ``exceptional cases'' one should generally expect that the addition of CS terms completely breaks the spherical symmetry, even for the metric tensor. Of course in this paper the hypothesis of spherical symmetry of the metric plays a crucial role, and it is of utmost interest to understand in a systematic way when and how relaxing this hypothesis will change the null results (in the case of the irreducible gravitational CS term in $D=7$ some specific examples are given in \cite{LuPang}). This is in fact what we intend to investigate next.\\ \vspace{0.5cm} {\bf Acknowledgements}\\% \noindent One of us (L.B.) would like to thank the Theoretical Physics Department, Univ.\ of Zagreb, for hospitality and financial support during his visits there. I.S.\ would like to acknowledge the financial support of the CEI Fellowship Programme CERES. Also, M.C., P.D.P., S.P.\ and I.S.\ would like to thank SISSA for hospitality and financial support during visits there and would also like to acknowledge support by the Croatian Ministry of Science, Education and Sport under the contract no.~119-0982930-1016.
\vspace{0.5cm} \section*{Appendix} \appendix \section{Connection and curvature components} For the general spherically symmetric metric \begin{eqnarray} ds^2 = -f(t,r) \, dt^2 + \frac{dr^2}{g(t,r)} + h(t,r) \, d\Omega^2_{D-2} \end{eqnarray} the nonvanishing components of the Christoffel symbols and the Riemann tensor in the coordinates \refb{gensphcoord} are listed below\footnote{$\dot{f}$ denotes the derivative with respect to the coordinate $t$, and $f'$ the derivative with respect to the coordinate $r$.}, \begin{eqnarray} \Gamma^0_{00} = \frac{\dot{f}}{2f} \ , \quad \Gamma^0_{11} = -\frac{\dot{g}}{2fg^2} \ , \quad \Gamma^0_{ii} = \frac{\dot{h}}{2f} \, \Pi(i) \ , \quad \Gamma^{0}_{01} = \frac{f'}{2f} \ , \quad \nonumber\\ \Gamma^1_{00} = \frac{g f'}{2} \ , \quad \Gamma^1_{11} = - \frac{g'}{2g} \ , \quad \Gamma^1_{10} = -\frac{\dot{g}}{2g} \ , \quad \Gamma^1_{ii} = -\frac{gh'}{2} \, \Pi(i) \ , \quad \\ \Gamma^i_{0i} = \frac{\dot{h}}{2h} \ , \quad \Gamma^i_{1i} = \frac{h'}{2h} \ , \quad \Gamma^i_{ij} = \textrm{ctg} \, \theta^j \quad (\textrm{for} \ i > j) \ , \quad \Gamma^{i}_{jj} = -\textrm{ctg} \, \theta^i \prod_{k=i}^{j-1} \sin^{2}{\theta^k} \quad (\textrm{for} \ j > i) \nonumber \end{eqnarray} \begin{eqnarray} R_{0101} = \frac{1}{4} \left( 2f'' - \frac{(f')^2}{f} + \frac{f' g'}{g} - \frac{\dot{f}\dot{g}}{fg^2} - \frac{3(\dot{g})^2}{g^3} + \frac{2\ddot{g}}{g^2} \right) \ , \quad \nonumber\\ R_{0i0i} = \frac{1}{4} \left( gf'h' + \frac{\dot{f}\dot{h}}{f} + \frac{(\dot{h})^2}{h} - 2\ddot{h} \right) \Pi(i) \ , \quad \nonumber\\ R_{1i1i} = \frac{1}{4} \left( -\frac{g'h'}{g} - \frac{\dot{g}\dot{h}}{fg^2} + \frac{(h')^2}{h} - 2h'' \right) \Pi(i) \ , \quad \label{Rspher}\\ R_{0i1i} = \frac{1}{4} \left( -\frac{\dot{g}h'}{g} + \left( \frac{f'}{f} + \frac{h'}{h} \right) \dot{h} - 2\dot{h}' \right) \Pi(i) \ , \quad \nonumber\\ R_{ijij} = \left( h - \frac{g(h')^2}{4} + \frac{(\dot{h})^2}{4f} \right) \Pi(i) \Pi(j) \nonumber \end{eqnarray} In the static case the metric can be written in the form \begin{eqnarray}\label{staticm2} ds^2 = -f(r) dt^2 + \frac{dr^2}{g(r)} + h(r) d\Omega_{D-2}^2 \end{eqnarray} and the above components reduce to the following nonvanishing ones \begin{eqnarray} \Gamma^{0}_{01} = \frac{f'}{2f} \ , \quad \Gamma^{1}_{00} = \frac{g f'}{2} \ , \quad \Gamma^{1}_{11} = - \frac{g'}{2g} \ , \quad \Gamma^{1}_{ii} = -\frac{gh'}{2} \, \Pi(i) \ , \quad \Gamma^{i}_{1i} = \frac{h'}{2h} \ , \quad\nonumber\\ \Gamma^{i}_{ij} = \textrm{ctg}{\theta^{j}} \quad (\textrm{for} \ i > j) \ , \quad \Gamma^{i}_{jj} = -\textrm{ctg} \theta^{i} \prod_{k=i}^{j-1} \sin^{2}{\theta^k} \quad (\textrm{for} \ j > i) \label{Gamstat} \end{eqnarray} \begin{eqnarray} R_{0101} = \frac{f''}{2} - \frac{(f')^2}{4f} + \frac{f' g'}{4g} &,& \quad R_{0i0i} = \frac{h' f' g}{4} \, \Pi(i) \nonumber\\ R_{1i1i} = \frac{1}{4} \left( -\frac{g'h'}{g} + \frac{(h')^2}{h} - 2h'' \right) \, \Pi(i) &,& \quad R_{ijij} = \left( h - \frac{g(h')^2}{4} \right) \Pi(i) \Pi(j) \label{Rstat} \end{eqnarray} Due to the fact that (\ref{staticm2}) is diagonal, the Riemann tensor components in this case can be written by means of the generalized Kronecker delta symbol, \begin{eqnarray}\label{Rsd} \tensor{R}{^\alpha^\beta_\mu_\nu} = s(\alpha, \beta) \, \delta^{\alpha\beta}_{\mu\nu} \end{eqnarray} where $s$ is a symmetric function, $s(x,y) = s(y,x)$, defined implicitly by the components listed above. \section{Important property of the spherical Riemann tensor} Here we shall prove that $\mathbf{R}^3 = 0$ for the general spherically symmetric (not necessarily static) metric.
The analysis is done case by case. Note that a set equality indicates that the elements are equal up to permutation (e.g.\ $\{a,b\} = \{0,1\}$ implies that either $a = 0$ and $b = 1$ or $a = 1$ and $b = 0$). $$\left(\mathbf{R}^3\right){}^{\alpha}{}_{\beta\,{\mu_1}\ldots{\mu_6}} = \frac{6!}{2^3} \, \tensor{R}{^\alpha_{\sigma_1}_{[\mu_1}_{\mu_2}} \tensor{R}{^{\sigma_1}_{|\sigma_2|}_{\mu_3}_{\mu_4}} \tensor{R}{^{\sigma_2}_{|\beta|}_{\mu_5}_{\mu_6]}}$$ For $D<6$ this vanishes trivially. For $D \geq 6$, using the components of the Riemann tensor from the previous Appendix, we see that the following components of $\mathbf{R}^3$ are potentially nonvanishing:\\ \noindent 1) $\alpha = 0$\\ 1a) $\sigma_1 = 1$ and $\sigma_2 = 0$ implies that $\{\mu_1,\mu_2\} = \{0,1\} = \{\mu_3,\mu_4\}$;\\ 1b) $\sigma_1 = 1$ and $\sigma_2 = i$ implies that $i \in \{\mu_1,\mu_2\}$ and $i \in \{\mu_3,\mu_4\}$;\\ 1c) $\sigma_1 = i$ implies that $i \in \{\mu_1,\mu_2\}$ and $i \in \{\mu_3,\mu_4\}$;\\ \noindent 2) $\alpha = 1$ (completely analogous to the first case)\\ \noindent 3) $\alpha = i$\\ 3a) $\sigma_1 = 0$ and $\sigma_2 = 1$ implies $\{\mu_1,\mu_2\} \in \{ \{0,i\} , \{1,i\} \}$ and $\{\mu_3,\mu_4\} = \{0,1\}$;\\ 3b) $\sigma_1 = 1$ and $\sigma_2 = 0$ implies $\{\mu_1,\mu_2\} \in \{ \{0,i\} , \{1,i\} \}$ and $\{\mu_3,\mu_4\} = \{0,1\}$;\\ 3c) $\sigma_1 \in \{0,1\}$ and $\sigma_2 = j$ implies $\{\mu_1,\mu_2\} \in \{ \{0,i\} , \{1,i\} \}$, $j \in \{\mu_3,\mu_4\}$ and $j \in \{\mu_5,\mu_6\}$;\\ 3d) $\sigma_1 = j$ implies $j \in \{\mu_1,\mu_2\}$ and $j \in \{\mu_3,\mu_4\}$;\\ \noindent In all these cases $\mathbf{R}^3$ vanishes identically due to the antisymmetrization of the indices $\{\mu_1, \dots, \mu_6\}$.\\
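The combinatorial argument above can also be checked numerically. The following sketch (an illustration of ours, not part of the original derivation; all function names are hypothetical) fills a random tensor carrying the Riemann pair symmetries and only the spherical component pattern of (\ref{Rspher}), and verifies that the contraction of $\mathbf{R}^3$ with the Levi-Civita symbol, which captures its totally antisymmetric part in $D=6$, vanishes, whereas a generic tensor with the same symmetries gives a nonvanishing result.

\begin{verbatim}
# Numerical sanity check of the claim R^3 = 0 (sketch; names are ours).
import numpy as np
from itertools import permutations

D = 6                                  # lowest dimension where R^3 is nontrivial
rng = np.random.default_rng(0)

def sign(p):                           # parity of a permutation via cycle sort
    p, s = list(p), 1
    for i in range(len(p)):
        while p[i] != i:
            j = p[i]
            p[i], p[j] = p[j], p[i]
            s = -s
    return s

eps = np.zeros((D,) * D)               # Levi-Civita symbol in D = 6
for perm in permutations(range(D)):
    eps[perm] = sign(perm)

def put(R, a, b, c, d, v):             # impose the Riemann pair symmetries
    for (w, x), s1 in (((a, b), 1), ((b, a), -1)):
        for (y, z), s2 in (((c, d), 1), ((d, c), -1)):
            R[w, x, y, z] = R[y, z, w, x] = s1 * s2 * v

def riemann(spherical):
    R = np.zeros((D, D, D, D))
    if spherical:                      # only the component pattern of (Rspher)
        put(R, 0, 1, 0, 1, rng.normal())
        for i in range(2, D):
            put(R, 0, i, 0, i, rng.normal())
            put(R, 1, i, 1, i, rng.normal())
            put(R, 0, i, 1, i, rng.normal())
            for j in range(i + 1, D):
                put(R, i, j, i, j, rng.normal())
    else:                              # generic tensor with pair symmetries
        for a in range(D):
            for b in range(a + 1, D):
                for c in range(D):
                    for d in range(c + 1, D):
                        put(R, a, b, c, d, rng.normal())
    # diagonal (inverse) metric; the signature is irrelevant for this check
    ginv = np.diag(1.0 / rng.uniform(0.5, 2.0, D))
    return np.einsum('ae,ebmn->abmn', ginv, R)       # R^a_{b mu nu}

def R3(Rud):                           # eps-contraction of R^a_s R^s_t R^t_b
    M = np.einsum('asmn,stpq,tbuv->abmnpquv', Rud, Rud, Rud)
    return np.einsum('abmnpquv,mnpquv->ab', M, eps)

print(np.abs(R3(riemann(True))).max())    # 0.0 for the spherical pattern
print(np.abs(R3(riemann(False))).max())   # nonzero for a generic tensor
\end{verbatim}

Since every term allowed by the cases 1)-3) requires a repeated index among $\{\mu_1,\dots,\mu_6\}$, each summand of the $\epsilon$-contraction vanishes individually, and the sketch indeed prints an exact zero for the spherical pattern.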
\section{Motivation} Statistical physics studies of cooperative phenomena on random graphs have attracted a lot of new attention and undergone impressive new development since it has become clear that many real-life interconnected structures are well approximated by random scale-free networks \cite{barab1,strogatz,newmanrev,dorogov,heiko}. One can say that a paradigm shift is occurring from studies of models for critical phenomena on (Bravais) lattices to studies in which such models are defined on random networks. To some extent this paradigm shift resembles that from Euclidean geometry to fractal geometry, in the modeling of various natural phenomena, but scale-free networks are specific in that they are characterized by the presence of a small but important set of {\em hubs}. The hubs are highly connected nodes which typically have a large influence on the operation and coherence of the structure. When we are concerned with the distribution of electricity, or the production and sale of material goods by commercial centers to consumers, or the delivery of the daily mail ..., we can envisage distribution centers as nodes and consumers as links on a network. The consumers have certain demands and the distribution centers certain deliverables. The distribution centers can be active or inactive depending on internal conditions and on external criteria such as the demands of consumers that are linked to them and the status of other nearby centers. The occurrence of hubs in such networks is rather common (main providers, supermarkets, ...). We propose a model that can capture three types of common distribution policies in realistic trade environments by implementing an appropriate Ising spin Hamiltonian on a random network, mainly of scale-free type, but we also consider networks with a scale. We present the three policies in a logical order, from obviously cooperative to rather ambivalent, and our main goal in this paper will be to demonstrate how a policy in the latter category can cause a breakdown of the distribution network. The first policy is one that is {\em guided by demand}. The distribution center aims at satisfying the consumer demand as closely as possible and will strive to adjust its activity to that of neighbouring centers so that the difference between demand and delivery is minimal on the links that lead to those centers. Note that every consumer (link) can be served by two centers (the adjacent nodes) in this model. This is not unusual. Many consumers rely on an alternative option in case their normal provider is not available. A distribution center $i$ is active or {\em up} when $\sigma_i=1$ and inactive or {\em down} when $\sigma_i=-1$. The {\em demand-guided policy} is characteristic of a market in which {\em compensation or complementarity is more important than competition}. Many physical, chemical and biological systems are equipped with similar {\em negative feedback} mechanisms, rendering operation stable under small enough perturbations. The second distribution policy is concerned with a {\em competitive market} guided by product quality and quantity. In this policy production or distribution centers strive to maximize their activity in order to get {\em their} deliverables consumed rather than those of their rivals. The centers aim at manipulating consumers by {\em creating new demand}, preferably in situations where the existing demand is low. 
In this {\em product-guided policy} the centers tend to become active especially when their neighbours are already satisfying the consumer demand. This policy can therefore be mimicked by maximizing the difference between supply and demand, which is opposite to what we postulated in the demand-guided policy described before. Thirdly, we wish to capture a {\em policy of solidarity} rather than competition, in which providers display a voluntary or forced shutdown in situations where {\em too high a demand cannot reasonably be met}. As long as all centers are active there are no problems, but as soon as some become inactive, the burden of satisfying the high consumer demand can become too heavy to carry. The centers can be unwilling or plainly incapable of rectifying the drop in supply, and increasingly so as time progresses and more neighbouring centers fail. This is a pronounced {\em positive feedback} mechanism which can lead to blackouts in power stations or certain kinds of strikes (when workers are unable or unwilling to do more than their peers). A similar conduct may be observed, for example, among partners in projects. Here we assume that a link is a project shared between adjacent nodes, which can model persons or companies. Companies tend to close or, equivalently, persons tend to stop doing their job, when their neighbours no longer invest in a joint project (i.e., when neighbouring sites are down). Like in the (second) competitive policy we discussed, in this (third) solidarity policy the difference between supply and demand is again maximized, but this time by the supply dropping to zero. In order to integrate these three policies in an Ising model Hamiltonian description we define, besides the spin states associated with the distribution centers, the supply and demand variables on the links. For the purpose of this Motivation, we focus on the simplest version of the model, but still sufficiently equipped to bring out the essential physics. In later Sections a more refined approach will be outlined. For the time being we assume that each distribution center, when active, delivers a fixed total amount (set to unity) to all of the links attached, divided equally over all links. For the link between nodes $i$ and $j$ with respective degrees $q_i$ and $q_j$, the delivery thus equals \begin{equation} \mathcal{L}_{ij}({\sigma_i,\sigma_j}) =\frac{1}{q_i}\left(\frac{\sigma_i+1}{2}\right)+\frac{1}{q_j} \left(\frac{\sigma_j+1}{2}\right) \end{equation} Treating, provisionally, the demand as a uniform constant $\mathcal{D}$ throughout the network, we arrive according to the policies described above at an energy per link which can be written as \begin{equation} E_{ij}({\sigma_i,\sigma_j}) =-J [ \mathcal{L}_{ij}({\sigma_i,\sigma_j})- \mathcal{D} ]^2, \end{equation} where the nearest-neighbour coupling $J$ is {\em negative} for a {\em demand-guided} policy and {\em positive} for the policies driven by {\em competition or solidarity}. The Hamiltonian of the Ising distribution network is then the sum of these energies over all links. We obtain the {\em spin Hamiltonian} of a given realization of a network with $N$ nodes, \begin{equation} \mathcal{H}(\mathbf{\{\sigma\}})=\sum_{< ij>}E_{ij}({\sigma_i, \sigma_j})=-\frac{J}{2}\sum_{< ij>} \frac{\sigma_i \sigma_j}{ q_i q_j} - \sum_{i=1}^{N}H_i \sigma_{i} + const. 
\label{Hami} \end{equation} where $<ij>$ denotes nearest-neighbour pairs and the {\em quenched random field} $H_i$ acting on the spin at node $i$ is given by \begin{equation} H_{i}=\frac{J}{2q_i} \left ( 1 + \sum_{j=1}^{q_i}\frac{1}{q_j}\right ) - J\mathcal{D} \end{equation} The spin-independent constant (last term in \eqref{Hami}) is irrelevant for our purposes and will henceforth be omitted. The interpretation of the quenched random field is fairly transparent. For $J<0$ (demand-guided policy) a distribution center experiences a bias to become active when the total demand of the attached links, $q_i \mathcal{D}$, exceeds the total {\em average delivery} to these links, $(1+\sum_j q_j^{-1})/2$, where the average is taken over the spin values, i.e., over active and inactive states. For $J>0$ the spin bias is opposite. A distribution center tends to become inactive when the demand is high. The nearest-neighbour interaction, $J/(2q_iq_j)$, is also a quenched random variable, the sign of which is equal to that of $J$. Note that this degree-dependent interaction is reminiscent of that in the Special Attention Network model \cite{indekeu1}, in the sense that {\em high degree is compensated by weak interaction}. In the distribution model this means that the mutual influence between neighbouring centers is proportional to the product of their deliveries to the link that connects them, or to some power of this product. We conclude that we are dealing with a random-field Ising model on a network, antiferromagnetic for $J<0$ and ferromagnetic for $J>0$. The ground state, which minimizes the total energy, is in general non-trivial. In addition, ``thermal'' noise is present, due to occasional maintenance shutdowns of centers or fluctuations of their activity caused by other internal factors. We therefore consider the model at a finite temperature $T$ to allow for these more or less random perturbations. The ratio $k_BT/J$ is a measure of their importance. Preliminary results of Monte Carlo simulations of the model at hand were reported by Giuraniuc \cite{thesisGiuraniuc}. \section{General formulation of the distribution model} \subsection{Construction of a general Hamiltonian} Consider a static scale-free network with $N$ nodes. The normalized distribution of the degrees, the number of links attached to each node, $P(q)$, with $\sum_q P(q) = 1$, is assumed to follow a decreasing power law $P(q) \propto q^{-\gamma}$ for large $q$. The topological exponent $\gamma$ usually lies in the regime $2<\gamma<3$ for real-life networks \cite{newmanrev}. Each node $i$ represents a distribution center which can be either active ($\sigma_i=+1$) or inactive ($\sigma_i=-1$). A link between nodes $i$ and $j$, denoted by $< ij>$, is a \textit{consumer} or a region across which the products are distributed. Contrary to the simplified model proposed in the Motivation, the nodes can produce unequal amounts of goods. We assume that the total delivery depends on the degree of the supplier. Therefore, the delivery $\mathcal{L}_{ij}(\sigma_i,\sigma_j)$ to a link $<ij>$ is defined as \begin{equation} \mathcal{L}_{ij}(\sigma_i,\sigma_j) =\frac{1}{q_i^\mu}\left(\frac{\sigma_i+1}{2}\right)+\frac{1}{q_j^\mu} \left(\frac{\sigma_j+1}{2}\right), \end{equation} where $q_i$ and $q_j$ are the degrees of the nodes linked by $<ij>$. The exponent $\mu$ controls the total production of a supplier. When node $i$ is active, i.e., $\sigma_i=+1$, supplier $i$ furnishes $1/q_i^{\mu}$ products to each of its $q_i$ consumers.
The total delivery of the active node is thus $q_i^{1-\mu}$. The case $\mu = 1$ corresponds to the special case in the Motivation in which all active nodes provide the same total supply. Consumers attached to highly-connected nodes then receive a smaller amount. In general, the hubs deliver fewer goods to a single consumer for $\mu >0$. For $\mu = 0$, every consumer would be receiving the same amount of products in an active state, which also implies that hubs have to deliver more goods in total. Finally, $\mu <0$ corresponds to the situation in which the highly-connected suppliers can deliver more goods to each consumer. Following the reasoning introduced in the Motivation, every consumer has a demand $\mathcal{D}$. We assume that the demand of link $<ij>$ may depend on $q_i$ and $q_j$. The energy per link is now given by \begin{equation} E_{ij}(\sigma_i,\sigma_j)=-J\left [\mathcal{L}_{ij}({\sigma_i,\sigma_j})-\mathcal{D}_{ij} \right ]^2. \end{equation} The total Hamiltonian becomes \begin{eqnarray} \label{hamiltoniaan} \mathcal{H}({\{\sigma\}})&=&\sum_{< ij>}E_{ij}(\sigma_i,\sigma_j)=-\sum_{i=1}^{N} (H_i+I_i(\{\sigma\}))\sigma_{i}\\ \text{with}&\quad\quad& H_{i}= \sum_{j=1}^{q_i}\frac{J}{2q_i^{\mu}}\left(\frac{1}{q_i^\mu}+\frac{1} {q_j^\mu}-2 \mathcal{D}_{ij}\right)\quad\quad \text{and}\quad\quad I_{i}(\{\sigma\})= \frac{J}{2q_i^\mu} \sum_{j=1}^{q_i}\frac{\sigma_j}{q_j^\mu},\nonumber \end{eqnarray} up to constant (spin-independent) terms, which are neglected. The sums over $j$ run over the $q_i$ neighbours of node $i$. This short reasoning leads to two fields applied to each node $i$. The \textit{quenched random field} $H_i$ is inherent in the network and the \textit{interaction field} $I_i$ originates from the interactions with spins linked to $\sigma_i$. In this paper we focus mainly on the third policy introduced in the Motivation. We describe avalanches in distribution networks which occur due to the {\em solidarity among suppliers}. The interaction constant $J$ is thus positive. Note that avalanches in anti-ferromagnetic spin systems with a uniform external field on complex networks have been studied in Ref.~\cite{malarz}. Avalanches in distribution models were also studied using sand piles on networks \cite{goh} and in various models based on critical loads \cite{sachtjen,watts,motter,crucitti}. Recently, models based on percolation in interdependent networks have also been used \cite{buldyrev}. \subsection{The demand function} Finally, we propose a suitable demand function. The demand of a link $<ij>$ between nodes $i$ and $j$ is a combination of a link-dependent and a homogeneous part: \begin{align}\label{demand} \mathcal{D}_{ij}=\frac{a}{2}\left(\frac{1}{q_i^\mu} +\frac{1}{q_j^\mu}\right) +\frac{b}{2}\frac{\langle q^{1-\mu}\rangle}{\langle q\rangle}, \end{align} where $a(>0)$ and $b$ are real constants. The notation $\langle\cdot \rangle$ denotes an average over the degree distribution $P(q)$. The first term, the \textit{supply-adjusted demand}, is a consequence of the form of the fields $H_i$ in \eqref{hamiltoniaan}. The parameter $a$ controls which fraction of the normal capacity is demanded by the consumers on the link. Each link $<ij>$ also features an intrinsic uniform demand, independent of the deliveries of the nodes $i$ and $j$. The second term, regulated by the constant $b$, provides such a \textit{global demand}.
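To make these definitions concrete, the following schematic sketch (all function names are ours, not part of the model specification) evaluates the Hamiltonian as a sum of link energies with the demand function \eqref{demand}, as well as the local field acting on a single node; the ensemble averages $\langle q\rangle$ and $\langle q^{1-\mu}\rangle$ are approximated here by empirical averages over the degrees of the given graph.

\begin{verbatim}
# Sketch: link energies and local fields of the distribution model
# (helper names are ours; <.>-averages are taken over this one graph).
import networkx as nx
import numpy as np

def moments_ratio(G, mu):
    """Empirical <q^{1-mu}>/<q> over the degrees of G."""
    degs = np.array([d for _, d in G.degree()], dtype=float)
    return (degs ** (1.0 - mu)).mean() / degs.mean()

def total_energy(G, sigma, J=1.0, mu=0.2, a=1.0, b=1.0):
    """H({sigma}) = sum over links <ij> of -J * (L_ij - D_ij)^2."""
    q, c, E = dict(G.degree()), moments_ratio(G, mu), 0.0
    for i, j in G.edges():
        L = (sigma[i] + 1) / (2 * q[i] ** mu) + (sigma[j] + 1) / (2 * q[j] ** mu)
        Dij = 0.5 * a * (q[i] ** -mu + q[j] ** -mu) + 0.5 * b * c
        E -= J * (L - Dij) ** 2
    return E

def local_field(G, sigma, i, J=1.0, mu=0.2, a=1.0, b=1.0):
    """H_i + I_i; the cost of flipping spin i is 2*(H_i + I_i)*sigma[i]."""
    q, c = dict(G.degree()), moments_ratio(G, mu)
    return sum(J / (2 * q[i] ** mu) *
               ((1 - a) * (q[i] ** -mu + q[j] ** -mu) - b * c
                + sigma[j] * q[j] ** -mu)
               for j in G.neighbors(i))
\end{verbatim}

The summand in local_field is exactly the summand of the total field \eqref{totaalveld} derived below, and the single-spin-flip energy cost quoted in the comment is the quantity used in the stability analysis of the next Section.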
The onset of a homogeneous consumer demand ($b\neq 0$) results in a total field on the network which has the same order of magnitude as the interaction field in an active state, averaged over an ensemble of different networks. Note that in certain systems individual consumers can deliver goods {\em to} the market. For instance, individual households can contribute to the power grid with the output of their own solar cells. Negative values of $b$ are therefore not excluded. The total field acting on node $i$ is now given by \begin{align}\label{totaalveld} H_{i}+I_{i}(\{\sigma\})=\sum_{j=1}^{q_i}\frac{J}{2q_i^{\mu}}\left[(1-a)\left(\frac{1}{q_i^\mu}+\frac{1}{q_j^\mu}\right) -b\frac{\langle q^{1-\mu}\rangle }{\langle q\rangle} +\frac{\sigma_j}{q_j^\mu}\right]. \end{align} Note that only the last term depends on the interaction with the spins on the neighbouring sites. It will play a prominent role in the cascading effect. For general values of $\mu$, both the interaction and the quenched random field applied to node $i$ will depend on the connectivity of node $i$ and on the connectivity of its neighbours. However, in some cases simplifications can be made. If $\mu = 0$, the parameters $a$ and $b$ can be replaced by a single parameter $a_0 = a+ b/2$. The total field is then simplified to \begin{equation} H_i + I_i(\{\sigma\}) = Jq_i \left(1-a_0\right) + \frac{J}{2}\sum_{j=1}^{q_i}\sigma_j;\hspace{0.5cm} \textrm{for} \quad\mu = 0. \label{richtingkust} \end{equation} Both the interaction field and the quenched random field acting on a node are then independent of the degrees of the neighbouring sites. For arbitrary $\mu$ the quenched random field is independent of the connectivity of the neighbouring nodes for $a= 1$. Note finally that for $a=1$ and $b=0$ the demand just matches the average supply (the average being taken over active and inactive providers) and the quenched random field vanishes, i.e., $H_i =0$, $\forall i$. An Ising model with connectivity-dependent coupling constants in the absence of an external field was studied by Giuraniuc {\em et al.}~\cite{giuraniuc1}. It was shown to be equivalent to a model with homogeneous couplings and with a modified topological exponent $\gamma' = (\gamma - \mu)/(1-\mu)$. The quenched random and interaction fields featured in \eqref{totaalveld} depend on the local structure of each network. Handy analytic approximations can be obtained by performing an ensemble average over different network realizations with the same degree distribution. In a first step, we average over the degree of the neighbours to obtain the average field applied on node $i$. Let $P^{(i)}_n(q_j)$ denote the probability that a neighbour of node $i$ has connectivity $q_j$. We assume that the network is uncorrelated. The distribution $P^{(i)}_n(q_j)$ is then independent of node $i$ and related to the degree distribution $P(q_j)$ by the relation $P_n(q_j) = q_jP(q_j)/\langle q \rangle$. Therefore we define the average over the degrees of the neighbouring sites of node $i$ as follows: \begin{align}\label{averagen} \ll\cdot\gg_i\:=\prod_{j=1}^{q_i}\sum_{q_j}\frac{q_jP(q_j)}{\langle q\rangle}. \end{align} This type of averaging is a ``topological'' mean-field approximation. Simultaneously, we may in some situations wish to apply a ``thermal'' mean-field approach to the thermally fluctuating spin variables and replace the actual spins of the neighbours by a mean spin.
If we furthermore assume that the mean spin is homogeneous throughout the network, the average quenched random and interaction fields acting on node $i$ in the mean-field approach are \begin{eqns}{velden} \ll H_i\gg_i&=&Jq_i^{1-2\mu}\frac{1-a}{2}+Jq_i^{1-\mu}\frac{\langle q^{1-\mu}\rangle} {2\langle q\rangle}\left(1-a-b\right),\\ \ll I_i(\{\sigma\})\gg_i&=&Jq_i^{1-\mu}\frac{\langle q^{1-\mu}\rangle}{2\langle q\rangle} \sigma_{av}, \end{eqns} with $\sigma_{av}$ the average spin of a node. Deviations from these averages are important for a network in which the degree distribution $P(q)$ possesses diverging moments. For instance, in a scale-free network with topological exponent $\gamma\leq 3$, the second moment diverges. Note that the average quenched random and interaction fields on a node diverge for $\mu \leq 2-\gamma$. Therefore we will henceforth limit our attention to the case $\mu > 2 - \gamma$. Since we only consider scale-free networks with a finite mean degree, i.e., with $\gamma > 2$, this limitation is only important for $ \mu < 0$. In a second step, we average over the degrees of the nodes using the degree distribution $P(q)$. The average Hamiltonian $\langle\mathcal{H}(\sigma_{av})\rangle$ is then given by \begin{equation} \langle\mathcal{H}(\sigma_{av})\rangle = \sum_i \sum_{q_i} P(q_i) \sigma_{av}\ll H_i+ I_i(\sigma_{av})\gg_i. \end{equation} Inserting both members of \eqref{velden} we obtain \begin{equation}\label{meanham} \langle\mathcal{H}(\sigma_{av})\rangle=-N J\sigma_{av} \left(\averageq{1-2\mu}\frac{1-a}{2} + \frac{\averageq{1-\mu}^2}{2\langle q \rangle}\left(1-a-b + \sigma_{av}\right) \right). \end{equation} \eqref{meanham} provides a mean-field expression for the Hamiltonian averaged over different realizations of the same network. \section{Requirements and favourable circumstances for a network collapse} For the study of a collapsing network, driven by competition or solidarity ($J>0$), we exploit the {\em ferromagnetic} phase which appears in the Ising model at sufficiently low temperatures and for sufficiently weak random-field fluctuations. Recall that the ground state ($T=0$) in the absence of fields consists of two degenerate configurations: the \textit{active} network in which $\forall$ $i$, $\sigma_i=+1$, and the \textit{inactive} network in which $\forall$ $i$, $\sigma_i=-1$. By applying a uniform bulk field, the degeneracy is lifted and a unique equilibrium state emerges. Nevertheless, the network can reside for a long time in the oppositely magnetized ``metastable'' state if the fields and (thermal) spin fluctuations are too small to trigger the collapse to the equilibrium state. Metastability is defined {\em dynamically} in our context as partial stability with respect to certain perturbations, in contrast with absolute stability (with respect to any perturbation). In our model, in the presence of random fields of microscopic origin, which are inhomogeneous and regulated by the parameters $a$, $b$ and $\mu$, the ground state may be non-trivial and will coincide with the all-up or all-down states only under certain conditions on the random fields. Therefore, considering metastable states, we need to distinguish between states that decay to a ferromagnetic state and states that evolve to a ``glassy'' state. We will be interested mostly in the ferromagnetic state because in a glassy state the network is still more or less active.
Thus, in our study of distribution networks driven by solidarity, we will mainly focus on the decay of the all-up state (active network) to an essentially all-down state (inactive network). When the global consumer demand (modeled by $b$) is increased, the metastable states we study by means of a suitable spin-flip dynamics mimic the behavior of certain realistic systems. For example, from 1988 to 1998, US electricity demands increased by nearly 30\% while the network capacity grew only by 15\%~\cite{gellings}. Apparently as a result of this widening gap between demand and supply, the system became metastable, a fact that became apparent only when a large part of the power grid broke down. In the following, we start from an active network and determine under which conditions a collapse to the inactive ground state is likely to occur. After an active period, avalanches to the inactive state can only take place if the system initially resides in a metastable active state and then decays to the energetically more favourable inactive state. In the following these requirements are converted into conditions in terms of the constants of the distribution model. There are indeed two basic restrictions to be met in order to see avalanche effects. The first one concerns the metastability of the active state. The active state should remain intact for a sufficiently long time. Focussing on single-spin-flip dynamics, this requirement corresponds to the impossibility that at zero temperature a spin spontaneously flips from $+1$ to $-1$ while its surroundings remain active ($+1$). In such a hypothetical process, only the local energy associated with node $i$ changes, by an amount $\Delta E^{sf}_i$ given by \begin{equation} \Delta E^{sf}_i = 2( H_i + I_i (\sigma_{av} = +1)) . \end{equation} At zero temperature, single-spin flips are excluded provided $\Delta E^{sf}_i>0$. The definition of the fields, \eqref{totaalveld}, then leads to the requirement \begin{equation} b < \frac{\langle q \rangle}{\averageq{1-\mu}} \left ( \frac{1-a}{q_i^{\mu}} + \frac{2-a}{q_i} \sum_{j=1}^{q_i}\frac{1}{q_j^\mu} \right ), \;\;\forall i \label{2belgen} \end{equation} where the sum is over the $q_i$ neighbours of node $i$. The condition depends on the specific local structure around each node in the network. In order to obtain a simple and useful analytical approximation to this, we can average over the degree of the nearest neighbours. The averaging procedure defined in \eqref{averagen} entails the replacement \begin{equation}\label{replacement} \sum_{j=1}^{q_i}\frac{1}{q_j^\mu} \rightarrow \frac{\averageq{1-\mu}}{\langle q \rangle}q_i. \end{equation} Note that the substitution defined in \eqref{replacement} is only exact for $\mu = 0$. For $\mu \neq 0$ it is a topological mean-field approximation. Within this approximation the active state is metastable (i.e., stable against single-spin flips at $T=0$) if for all possible degrees $q_i$, \begin{equation} b < \frac{\langle q \rangle}{\averageq{1-\mu}}\frac{(1-a)}{q_i^{\mu}} + 2-a, \; \forall i \label{mayo}. \end{equation} This condition is equivalent to \begin{equation} \ll H_i + I_i (\sigma_{av} = +1)\gg_i \; >0, \; \forall i, \end{equation} which is the metastability criterion for the mean fields defined in \eqref{velden}. The mean-field approximation can therefore be formulated directly in terms of the mean fields. \eqref{mayo} clearly sets an upper limit to the global demand.
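As an illustration, the exact criterion \eqref{2belgen} is straightforward to evaluate on a given network instance. The sketch below (function and parameter names are ours) returns the largest global demand $b$ compatible with metastability of the active state for given $a$ and $\mu$, with the degree averages again taken empirically over the graph at hand.

\begin{verbatim}
# Sketch: largest b for which the active state survives single-spin
# flips at T=0, i.e. the exact bound of Eq. (2belgen) (names are ours).
import numpy as np

def b_metastability_limit(G, a, mu):
    q = dict(G.degree())
    degs = np.array(list(q.values()), dtype=float)
    pref = degs.mean() / (degs ** (1.0 - mu)).mean()   # <q>/<q^{1-mu}>
    return min(pref * ((1 - a) / q[i] ** mu
                       + (2 - a) / q[i] * sum(q[j] ** -mu for j in G[i]))
               for i in G)
\end{verbatim}

Sweeping $a$ over $[0,2]$ and averaging the resulting bound over a handful of network realizations should reproduce the dotted metastability line of Fig.~\ref{fasediagramvsmu}.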
If the global demand is too large, a metastable active state is not possible and there will be an ``immediate'' decay to the inactive ground state. Depending on the signs of $1-a$ and $\mu$, two situations can be distinguished. For $\mu>0$ and $a<1$, it suffices that the largest possible degree, $K$, satisfies \eqref{mayo}. On the other hand, for $\mu>0$ and $a>1$, the smallest possible degree, $m$, is the important one. For $\mu < 0$, the converse is true. If more than one spin is allowed to flip at the same time, the criterion \eqref{mayo} must be extended. The energy difference associated with a multiple-spin-flip process can be smaller in magnitude than the energy difference pertaining to a single-spin-flip process. For instance, for a process in which two nearest neighbours $i$ and $j$ flip, the total energy difference of the double-spin-flip process $\Delta E^{df}_{ij}$ would be \begin{equation} \Delta E^{df}_{ij} = \Delta E^{sf}_i + \Delta E^{sf}_j - 2J(q_iq_j)^{-\mu}, \end{equation} which (for $J>0$) is not only always lower than the sum of the energy differences of the two single-spin-flip processes separately ($ \Delta E^{sf}_i + \Delta E^{sf}_j$), but can even be lower than the energy change involved in a single-spin-flip process. This can be appreciated by considering for example the special case $a= b =1$, for which the average value $ \ll \Delta E^{sf}_i \gg_i$ vanishes. The energy difference may in general be reduced further if more than two neighbouring spins can flip simultaneously. In real distribution networks multiple-spin flips can model multiple suppliers failing at the same time, when, for instance, raw materials are exhausted. However, we will not consider these instances but rather allow for random technical failures besides deliberate decisions arising from interactions, making it unlikely that multiple suppliers break down exactly simultaneously. Therefore, for simplicity, single-spin-flip dynamics is assumed throughout the remainder of our paper. As a second restriction, we require the inactive state to have a lower total energy at $T=0$ than any other state. This condition ensures that the breakdown occurs towards the inactive state rather than to another, glassy state, which minimizes the energy in regions of the parameter space where the local random fields dominate the ferromagnetic interaction. The reason for this limitation is that we wish to study principally the blackout phenomenon in which, after some time, a state with {\em very low activity} is reached. A sufficient condition for the inactive state (all spins down) to be the ground state (at $T=0$) is that all local quenched random fields have the same sign, in this case negative. The exact requirement is $H_i < 0, \; \forall i$, or \begin{equation} b > (1-a) \frac{\langle q \rangle}{\averageq{1-\mu}} \left ( \frac{1}{q_i^{\mu}} + \frac{1}{q_i} \sum_{j=1}^{q_i}\frac{1}{q_j^\mu} \right ), \;\;\forall i \label{2fransen} \end{equation} In the mean-field approximation adopted in the previous paragraph this becomes \begin{equation} b > \frac{\langle q \rangle}{\averageq{1-\mu}}\frac{(1-a)}{q_i^{\mu}} + 1-a, \; \;\forall i \label{wontonton}. \end{equation} \eqref{2fransen} and its simplified version \eqref{wontonton} provide a threshold for the demand sufficient to observe decays to the inactive state. When the demand is smaller than this threshold, the absolute stability of the inactive state is not guaranteed. Then the system may remain stable in the active configuration (all spins up) or evolve to a glassy state.
Once more, different regimes can be identified according to the values of $a$ and $\mu$. For example, for positive $\mu$, if $a<1$ it suffices that the smallest possible degree satisfies \eqref{wontonton}. However, if $a>1$, the relevant quantity is the largest degree. These statements are to be interchanged if $\mu$ is negative. The two conditions \eqref{2belgen} and \eqref{2fransen}, or their mean-field approximations \eqref{mayo} and \eqref{wontonton}, determine the region in the $(a,b)$ phase diagram where collapses of an active network to an inactive one should be observable. A graphical representation of the phase diagram for distribution models with $\mu =0$ and $\mu =0.2$ can be found in Fig.~\ref{fasediagramvsmu}. The shaded region in the figure indicates the ranges of $a$ and $b$ within which the two requirements are satisfied for a scale-free network with topological exponent $\gamma = 3$, minimal degree $m = 2$ and 1000 nodes. The uppermost line (dotted) is the numerically exact upper bound for the stability of the active state against single-spin flips at $T=0$, the so-called metastability limit, given by \eqref{2belgen}. This exact condition was determined by simulation of the model on scale-free networks. The networks were generated in two steps using the uncorrelated configuration model. In a first step, $N$ nodes are created, each with their own degree, chosen randomly according to the distribution $P(q)$, which is a decreasing power law. The minimal degree is $m=2$, while the maximal degree is set to $K = \sqrt{N}$. For $\gamma > 3$ this is not a restriction, since the maximal degree satisfies $K \propto N^{1/(\gamma - 1)}$ \cite{cohen}. On the other hand, for $\gamma < 3$ this restriction avoids correlations in the network \cite{catanzaro}. In a second step, links are laid randomly between all the nodes, with the constraint that at the end of the linking procedure every node should have the degree it was given in the first step. Self-linking and multiple linking are avoided. The results obtained from different network realizations differ slightly. In practice, averaging over 10 networks is sufficient to obtain accurate reproducible results. In this manner we determine the values of $a$ and $b$ for which \eqref{2belgen} and \eqref{2fransen} are satisfied for all nodes. The latter condition leads to the second uppermost line (solid) in the figures (a straight line for $\mu=0$; a line consisting of two straight segments, with a break at $a=1$, for $\mu \neq 0$). Above this line the inactive state is, with certainty, the ground state (at $T=0$). The region in between the two lines discussed so far is susceptible to avalanches of spin flips, i.e., blackouts of the network, and is shown shaded (in dark grey). The model possesses an overall symmetry, which is reflected in the phase diagrams in Fig.~1. Inspecting \eqref{hamiltoniaan} and \eqref{totaalveld}, we conclude that the full Hamiltonian is invariant under the transformation: $a\rightarrow a' = 2-a$; $b \rightarrow b' = - b$; $\sigma \rightarrow \sigma' = - \sigma$ (flipping all the spins). This symmetry implies that the lines in the phase diagram pertaining to the active state (all spins up) can easily be drawn by applying the above transformation of $a$ and $b$ to the lines associated with properties of the inactive state (all spins down).
In particular, the lowermost line (dashed) marks the metastability limit of the inactive state, and the second lowermost line (dash-dotted; broken for $\mu \neq 0$) indicates the limit of absolute stability of the active state. Below this line the active state is, with certainty, the ground state (at $T=0$). A remarkable feature of the phase diagram now emerges. While for $\mu = 0$ the two limits of absolute stability coincide, so that there are only (two) ferromagnetic ground states, a new zone appears for $\mu \neq 0$. In this zone, which we term ``no-man's land'', the ground state is not with certainty ferromagnetic. It may be a glassy state, characterized by local random fields of either sign (up or down) that try to orient the spins along their direction, at low $T$, as they compete with the ferromagnetic spin-spin coupling. The no-man's land is shaded in light grey. Note that the width of this no-man's land vanishes for $b = 0$ and also for $a = 1$, for which the random fields are trivially all of the same sign, determined by the value of the remaining free parameter $a$ or $b$, respectively. In general, the phase diagram depends on the network topology. However, the case $\mu = 0$ provides an exception. As mentioned before, the single parameter $a_0 = a + b/2$ suffices for describing the supply and demand functions if $\mu$ vanishes. The conditions \eqref{mayo} and \eqref{wontonton}, which are {\em exact} for $\mu = 0$, then correspond to the simple conditions $1 < a_0 < 3/2$. Note that these can also be obtained directly starting from \eqref{richtingkust}. These conditions imply that there are no distinctions between different networks at zero temperature if $\mu$ vanishes. Indeed, for $\mu = 0$, the sign of the quenched random field is independent of the network structure, as can be seen in \eqref{richtingkust}. Therefore, the topology of the network does not affect the state of the system at $T=0$. This prediction is verified by comparing simulations on scale-free networks with simulations on random Erd\H{o}s-R\'{e}nyi graphs. In an Erd\H{o}s-R\'{e}nyi network all pairs of nodes have equal probability of being linked, which implies a Poissonian degree distribution. We verified that the condition \eqref{2belgen} leads to one and the same straight line in the $(a,b)$-plane on an Erd\H{o}s-R\'{e}nyi network and on a scale-free network with $\gamma = 3$. The same is true for \eqref{2fransen}. However, if $\mu \neq 0$, the network topology exerts a crucial influence on the phase diagram. We verified that, for example, both with respect to the mean-field conditions and the exact ones, the phase diagram is different for networks with different values of $\gamma$ \cite{thesisSVL}. In the following we go into more detail as regards the interesting comparison of the mean-field approximation and the (numerically) exact results for the boundaries in the phase diagram. As we already noted, the topological mean-field approximation is (only) exact for $\mu = 0$. The simulations confirm this. To estimate the quantitative difference between, for example, \eqref{2belgen} and \eqref{mayo} we calculate the variance, over all network realizations, of a quantity associated with the sum in the last term of \eqref{2belgen}.
We obtain \begin{equation} {\rm Var}\left ( \frac{1}{q_i} \sum_{j=1}^{q_i} q_j^{-\mu} \right ) = \frac{1}{q_i} \left ( \frac{\averageq{1-2\mu}}{\langle q \rangle} - \frac{\averageq{1-\mu}^2}{\langle q \rangle^2}\right ) \label{varian} \end{equation} Consequently, the ratio of the standard deviation to the mean is given by \begin{equation} {\rm SD}\left ( \frac{1}{q_i} \sum_{j=1}^{q_i} q_j^{-\mu}\right )/ \ll \frac{1}{q_i} \sum_{j=1}^{q_i} q_j^{-\mu} \gg_i \; = \frac{1}{\sqrt{q_i}} \sqrt{ \frac{\averageq{1-2\mu}\langle q \rangle}{\averageq{1-\mu}^2}-1} \label{standdev} \end{equation} This result allows us to estimate the importance of the {\em random field fluctuations}. Clearly, the hubs are least affected by the topological disorder since the SD scales with their degree $q_i$ as $1/\sqrt{q_i}$. The amplitude of this power law depends on $\mu$ in a manner which is easy to interpret. For example, for $\mu = -1, 0.5$ or $1$ the variance is proportional to $ \langle q \rangle \averageq{3} - \averageq{2}^2$, $ \langle q \rangle - \averageq{1/2}^2$ or $\averageq{-1} - \langle q \rangle^{-1}$, respectively. We now proceed to assess quantitatively the difference between the exact {\em metastability limit} \eqref{2belgen} and the approximate one \eqref{mayo} for $\mu = 0.2$. Clearly, as Fig. 2(a) shows, the mean-field approximation leads to a continuous piece-wise linear curve which is broken at $a =1$, since for $a < 1 (>1)$ the right-hand-side of \eqref{mayo} is minimized by the maximal (minimal) degree present in the network. Interestingly, the exact (dotted) curve for $\mu=0.2$, shown in Fig. 2(a) and also in Fig. 1(b), is a straight line, without any singularity, since the second term in the right-hand-side of \eqref{2belgen} is typically minimized by a node $i$ of degree $q_i=2$ connecting two hubs ($q_j \gg 1$). This minimal value lies some 3 to 4 standard deviations below the mean. Consequently, the entire right-hand-side of \eqref{2belgen} is minimized by one and the same node $i$ with a low value of $q_i$ for all $a \in [0,2]$. For $\mu$ larger than some threshold this is no longer the case and the numerically exact metastability limit is no longer a straight line but a gently bent concave curve. Still, its shape appears smooth and is therefore qualitatively different from the broken curve found in the mean-field approximation. This is conspicuous in Fig. 2(b) where both are shown for $\mu = 1$. We repeat our analysis for the {\em absolute stability limit} of the active state. This is the curve below which the active state is with certainty the ground state (at $T=0$). It is derived numerically exactly by applying the transformation $a \rightarrow a' = 2-a$; $b \rightarrow b' = - b$ to \eqref{2fransen} and in the mean-field approximation by applying the same symmetry to \eqref{wontonton}, which in both cases simply reverses the inequalities concerned. In contrast with the metastability limit, the slope of the absolute stability limit displays a discontinuity at $a =1$ for both the exact and the mean-field versions, as can be seen from the fact that both \eqref{2fransen} and \eqref{wontonton} contain the prefactor $1-a$. For $a < 1$ the limit is defined through a hub (typically the node with the highest degree) and for $a > 1$ through a node with a low degree. For the former case, the mean-field approximation is accurate as expected since for hubs random field fluctuations are small, being proportional to $1/\sqrt{q_i}$. 
For the latter case, random field fluctuations are more important and, indeed, a clear difference emerges between the mean-field upper bound and the exact one. In Fig.~\ref{fasediagramvsmu2}(a) we compare the exact and the mean-field curves for $\mu = 0.2$, both for the metastability limit of the active state and for the absolute stability limit of the same state. The region in which avalanches can occur depends sensitively on the exponent $\mu$. Comparing Fig.~\ref{fasediagramvsmu}(a), Fig.~\ref{fasediagramvsmu}(b) and Fig.~\ref{fasediagramvsmu2}(b), we see that the size of the region in which blackouts are possible shrinks as $\mu$ increases. The same trend is observed when $\mu$ is decreased from zero \cite{thesisSVL}. We conclude that the region in parameter space in which our two criteria for avalanches are fulfilled shrinks as $|\mu|$ increases. This region is also sensitive to the topological exponent $\gamma$. For instance, for $\mu = 1$, when $\gamma$ is decreased from 5 to 2 there typically appear more nodes with higher degrees. Consequently, the mean degree $\langle q \rangle$ increases. This raises the absolute stability limit of the inactive network for $a < 1$ and lowers the metastability limit of the active state for $a > 1$, while leaving the other segments of the phase boundaries unchanged (as can be seen qualitatively by inspecting the equations for the mean-field (meta-)stability criteria). As a consequence, the region susceptible to avalanches is squeezed more tightly around the center ($a = 1$, $b = 0$) of the phase diagram. At this point we would like to mention other studies in which the resistance to cascades was examined as a function of geometrical disorder. In models based on critical loads and percolation in interdependent networks it was found that a heterogeneous network is less resistant to high loads than a homogeneous one \cite{motter,crucitti,buldyrev}. In our model degree heterogeneity (low $\gamma$) appears to strengthen the network at low $a$ but to weaken it at high $a$. The size of the network also influences the region available for collapses, mainly through the maximal degree, which depends on $N$. The region in which avalanches are observed decreases if more suppliers are present in the network, as can be seen from Eqs.~(\ref{mayo}) and (\ref{wontonton}). However, a numerical evaluation of these criteria indicates that the effect of the number of suppliers is rather small compared to the effects of the values of $\mu$ and $\gamma$ \cite{thesisSVL}. We conclude that the ranges of the demand parameters $a$ and $b$ for which collapses occur are influenced most strongly by the network topology and the degree dependence of the delivery in the distribution system. \section{Collapse properties} \subsection{Distribution model at finite temperature} In real-life distribution systems, individual suppliers can fail to deliver their goods, for instance due to a defect or malfunction in the production process. After repair the delivery resumes. Therefore, we introduce a generic ``temperature'' $T$ in the distribution model to quantify the rate of random spin fluctuations, from the active to the inactive state and back. At sufficiently high temperatures, the mean spin magnetization $M = \sum_i \sigma_i /N$, which represents the mean network activity, tends to zero as a function of $T$ according to a Curie-Weiss-type law, i.e., $M \propto 1/T$ for $T \rightarrow \infty$ (cf. a paramagnet in a small external field).
We will, however, focus mainly on lower temperatures, still in the ferromagnetic regime, for which a network failure can occur. We perform simulations on scale-free graphs which are constructed using the uncorrelated configuration model introduced in the previous Section. All networks contain 1000 nodes, the minimal degree of which is $m=2$ and the maximal one $K = \sqrt{1000} \approx 32$. Spins are updated using single-spin-flip dynamics with the Metropolis updating rule. The simulations start from a metastable active state. Collapses can be found for various values of the parameters $\mu$, $a$, $b$ of the distribution model and for different values of the topological exponent $\gamma$. Some examples are shown in Fig.~\ref{collapses}, in which the time evolution of the magnetization is plotted. A single time step corresponds to 1000 single-spin flips. Different types of collapses are observed in different ranges of model parameters, as we will illustrate in the following. A first point of attention concerns the nodes which initiate the collapse. Apart from the mean magnetization of the network, Fig.~\ref{collapses} also shows the mean magnetization of the highly connected nodes (with at least 8 links) and that of the poorly connected nodes (with only two neighbours). In the first and second collapses, Fig.~\ref{avalanche1} and Fig.~\ref{avalanche4}, the poorly connected nodes exhibit the largest fluctuations before the collapse. The nodes with a low degree thus initiate the breakdown. However, the actual collapse only takes place when the hubs also start to flip. As long as the hubs remain active, their large influence in the network prevents a blackout. The opposite behavior is found in the avalanches shown in Fig.~\ref{avalanche3} and Fig.~\ref{avalanche2}, in which the hubs initiate the collapse. Again, the mean magnetization remains positive until the multitude of less-connected nodes also starts to collapse. Thus in all four cases only a certain subset of nodes initiates the network collapse, but the network undergoes a full breakdown as a consequence of a collective effect, in which all nodes are involved. Our model thus displays the cooperative character of distribution networks as described in the Motivation. A simple mean-field argument suggests a condition for determining whether the hubs or the poorly connected nodes initiate the collapse. The collapse is initiated by the nodes which have the largest fluctuations in the active state. We introduce an {\em activation temperature} for a node with degree $q$, $T_{act}(q)$, by equating its thermal fluctuation energy to the average energy needed to flip the spin of a node of degree $q$ from $+1$ to $-1$. When $T > T_{act}(q)$ the spin of the node will undergo significant thermal fluctuations. Within the mean-field approximation, $T_{act}(q)$ is given by \begin{equation}\label{crit_temp} k_B T_{act}(q_i)=2\ll H_i+I_i(\{\sigma\})\gg_i =Jq_i^{1-2\mu}(1-a)+Jq_i^{1-\mu}\frac{\langle q^{1-\mu}\rangle}{\langle q\rangle}\left(1- a-b+\sigma_{av}\right). \end{equation} We focus on the degree-dependent activation temperature when the network is in an (almost) fully active state, thus with $\sigma_{av} \lesssim 1$. Then, the nodes with the smallest $T_{act}(q)$ display the largest fluctuations in the active state and can thus initiate the collapse of the network when $T$ is slowly raised from zero.
Note that whether the activation temperature increases or decreases as a function of the node degree depends on the signs of $1-a$, $\mu - 1/2$ and $2-a-b$ (assuming $\sigma_{av}=1$). Let us test these ideas against some of the simulations of network breakdowns shown in Fig.~\ref{collapses}. For example, using the model parameters associated with Fig.~\ref{avalanche1}, we obtain, with $\sigma_{av} \lesssim 1$, $k_BT_{act}(q)\approx J(0.30 \,q^{0.6}+0.31\, q^{0.8})$, which implies that the activation temperature increases with the node degree. Therefore, the poorly connected nodes should initiate the collapse, which is confirmed by the simulations. Taking the parameters associated with Fig.~\ref{avalanche2}, we find $k_BT_{act}(q)\approx J(0.28 + 0.15\, q^{-1})$, so that the highly connected nodes should show the largest fluctuations in the active state. The simulations confirm that the hubs indeed initiate the network collapse. Information about the type of nodes that initiates a collapse is of great interest in real-life networks. If hubs tend to be the most fragile and thus most fluctuating suppliers, it is best to invest more effort in protecting them rather than the least connected nodes. Of course the converse is true if the poorly connected nodes are more prone to initiate the collapse. The criterion of \eqref{crit_temp} could therefore be used to strengthen networks purposefully against the consequences of accidental malfunction of distribution centers. \subsection{Effective strength of thermal fluctuations} As can be seen in Fig.~\ref{collapses}, the magnitude of the thermal fluctuations depends not only on the value of $T$ but also on that of the other parameters of the distribution model. Even if the demand is fixed (constant $a$ and $b$), thermal fluctuations may still be reduced or amplified depending on the network structure, through the topological exponent $\gamma$, and depending on the delivery in the model, through the delivery exponent $\mu$. The starting point of our further analysis is the mean Hamiltonian, Eq.~(\ref{meanham}). The expression for this Hamiltonian, together with the expressions for the amplitude of the fluctuations of the non-local term in the quenched random fields, Eqs.~(\ref{varian}) and (\ref{standdev}), suggests that, subject to conditions to be specified, the dependence on the network topology and on $\mu$ might be captured by the single parameter $\averageq{1-\mu}^2/\langle q \rangle$. This prompts us to test the conjecture that the mean Hamiltonian may possess the following scaling property, \begin{equation} \langle\mathcal{H}(\sigma_{av})\rangle \approx -NJ\sigma_{av}\frac{\averageq{1-\mu}^2}{2\langle q \rangle}f(a,b,\sigma_{av}),\label{australie} \end{equation} where the function $f(a,b,\sigma_{av})$ is independent of $\mu$ and $\gamma$. This Ansatz is most likely to be valid in at least one of the two following circumstances. Firstly, if $a$ is sufficiently close to one ($a \approx 1$), the first term in \eqref{meanham} is negligible compared to the second one and \eqref{australie} holds with $f(a,b,\sigma_{av}) \approx -b+\sigma_{av}$. The second situation occurs for large networks and small enough $\mu$, i.e., $|\mu| \ll \gamma -2$, which leads to $\averageq{1-2\mu}\approx\averageq{1-\mu}^2/ \langle q \rangle$, as can readily be shown analytically.
Indeed, in the thermodynamic limit, $N, K \rightarrow \infty$, and converting sums over the degree distribution to integrals \cite{opm}, one finds \begin{equation} \frac{\averageq{1-2\mu}\langle q \rangle}{ \averageq{1-\mu}^2} \approx 1 + \frac{\mu^2}{(\gamma-2)^2}\frac{1}{1+\frac{2\mu}{\gamma -2}}. \end{equation} Note that, in view of \eqref{standdev}, the condition $|\mu| \ll \gamma -2$ thus ensures that the random-field fluctuations are small. Under these circumstances \eqref{meanham} indeed takes the scaling form of \eqref{australie} with $f(a,b,\sigma_{av}) = 2 - 2a-b+\sigma_{av}$. Although, strictly speaking, this last result is only valid for the range of $\mu$ specified above, numerical inspection shows that it is a good approximation even for values of $\mu$ up to about unity, provided the network is large enough for the $K$-dependence of the averages to be negligible. Numerical analysis shows that the approximation also remains useful for finite networks. In the remainder of this section we assume $\mu \geq 0$. According to Boltzmann statistics and using the Ansatz of \eqref{australie}, the probability of observing a network with mean spin $\sigma_{av}$ satisfies \begin{equation} \mathcal{P}(\sigma_{av}) \propto \exp\left(\frac{NJ\averageq{1-\mu}^2}{ k_B T\langle q \rangle}\frac{\sigma_{av}f(a,b,\sigma_{av})}{2}\right).\label{boltzmann} \end{equation} Apart from the constants related to the demand function, all model parameters ($\mu$, $\gamma$ and $T$) are present only in the first factor in the exponential function. We therefore absorb the dependence on the delivery exponent, the topology and the temperature into a single parameter $\Theta$, defined as \begin{equation}\label{theta} \Theta = \frac{\langle q \rangle}{\averageq{1- \mu}^2}T. \end{equation} For systems with fixed $a$ and $b$, the finite-temperature behavior is thus controlled by $\Theta$, which acts as an {\em effective temperature}. In the next subsections, the effects of both the delivery exponent $\mu$ and the topological exponent $\gamma$ on finite-temperature collapses are investigated separately. \subsubsection{Effect of the delivery exponent $\mu$} We now focus on a distribution model with fixed demand constants $a$ and $b$ on a scale-free network with fixed topological constant $\gamma$ and investigate the effect of different values of the delivery exponent $\mu \in [0,1]$. Since $\averageq{1-\mu}$ decreases with increasing $\mu$, the effective temperature $\Theta$ of the network defined in \eqref{theta} increases as a function of $\mu$. Thermal spin fluctuations are thus larger in a distribution model with larger $\mu$ than in a model with smaller $\mu$ at the same temperature $T$. As $\mu$ increases, the network thus becomes more vulnerable, since the collapse temperature decreases. The considered distribution network could therefore be effectively strengthened by a decrease of $\mu$, i.e., by rendering the amounts of goods delivered to each customer more homogeneously distributed. In addition to this ``mean-field'' effect there is an effect of the fluctuations of the quenched random nearest-neighbour couplings, or {\em random-bond fluctuations}. These are of topological origin and induced by the (quenched) degree fluctuations of the nodes. To understand this we recall that previous studies have shown that the (equilibrium) critical temperature $T_c$ of the model in zero external field depends rather sensitively on the values of $\gamma$ and $\mu$.
In particular, for $\mu = 0$, the critical temperature is finite as long as the second moment of the degree distribution is finite, but diverges as a function of the network size $N$ when $\averageq{2}$ diverges \cite{Aleksiuk,dorogov}, which is the case for $\gamma \leq 3$. For $\mu \neq 0$, whether or not $T_c$ is finite (for an infinite network) is determined by whether or not the effective exponent $\gamma' = (\gamma - \mu)/(1 - \mu)$ exceeds the value 3 \cite{giuraniuc1}. Therefore, it is important to also check the value of $\gamma'$ in our distribution systems. In particular, when $\mu$ is increased (from 0 towards 1), the exponent $\gamma'$ increases and the hubs become less numerous and less pronounced. Consequently, $T_c$ decreases and this also renders the thermal spin fluctuations more important. This effect will be most relevant for networks with $\gamma \leq 3$ and $\gamma' > 3$. The effects we described are confirmed by simulations in which the mean magnetization is studied versus temperature. In each such simulation we initialize our network in the metastable active state and we update the spins using single-spin flips for 4000 time steps. At sufficiently low temperatures, the metastable state remains stable during the entire simulation and the final magnetization (after 4000 time steps) remains close to one. Repeating this procedure for a sequence of fixed temperatures, one observes, {\em upon increase of the temperature}, a transition to the regime in which the active state is no longer metastable on the time scale of the simulations. This (non-equilibrium) transition, which takes place at a {\em breakdown temperature} $T_b$, is marked by a magnetization jump to a negative value. An inactive state, with $\sigma_{av} \gtrsim -1$, is then reached before the end of the simulations. At still higher temperatures, the final magnetization becomes smaller in absolute value and displays a Curie-Weiss behavior reminiscent of the paramagnetic state. Note that, in the absence of symmetry-breaking fields, the final magnetization would approach zero rather sharply at the equilibrium critical temperature $T_c$. We also verified this in our simulations. Obviously, $T_b < T_c$. Simulation results for different values of $\mu$ are shown in Fig.~\ref{effectmu1}, for $a=1$, and in Fig.~\ref{effectmu3} for $a\neq 1$. The magnetization curves tend to be stretched and shifted as $\mu$ decreases, reflecting the fact that networks remain stable up to a higher temperature for smaller values of $\mu$. Interestingly, for $a=1$, if we plot the magnetization as a function of the effective temperature $\Theta$, a good {\em data collapse} occurs, as is conspicuous in Fig.~\ref{effectmu2}. Not only do the different curves for $T > T_b$ in Fig.~\ref{effectmu1} fall onto a single curve in Fig.~\ref{effectmu2}, but the different values of $T_b$ also lead to practically one and the same value of $\Theta_b$. We argue that the high quality of the data collapse has two reasons. Firstly, for $a=1$ the scaling Ansatz \eqref{australie} is properly valid (at mean-field level) and secondly, the equilibrium critical temperatures for the different networks (in zero field) are not very different. (An exception could, in principle, have occurred in the borderline case $\gamma=3$ and $\mu=0$, for which $T_c$ diverges. But this divergence is very slow, in the manner $T_c \propto \log N$, for large $N$.)
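The temperature sweep just described is easy to prototype. The following minimal sketch (in Python) uses the {\em mean-field} flip energy of Eq.~(\ref{crit_temp}) as a stand-in for the full quenched Hamiltonian of Eq.~(\ref{meanham}); the continuum degree sampling, the shortened run length and the parameter values are illustrative assumptions rather than the exact setup of our simulations.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)

def sample_degrees(N=1000, m=2, K=32, gamma=3.0):
    # Continuum inverse-CDF sampling of P(q) ~ q^(-gamma) on [m, K];
    # a stand-in for the uncorrelated configuration model.
    u = rng.random(N)
    return (m**(1-gamma) + u*(K**(1-gamma) - m**(1-gamma)))**(1/(1-gamma))

def final_magnetization(T, q, a=1.0, b=0.3, mu=0.5, J=1.0, steps=400):
    # Start fully active, then Metropolis single-spin flips; one time
    # step = N flip attempts (k_B = 1). The paper uses 4000 time steps.
    N = len(q)
    c = np.mean(q**(1-mu)) / np.mean(q)      # <q^(1-mu)>/<q>
    s = np.ones(N)
    total = s.sum()
    for _ in range(steps * N):
        i = rng.integers(N)
        # mean-field cost of flipping sigma_i, cf. Eq. (crit_temp)
        dE = s[i] * (J*q[i]**(1-2*mu)*(1-a)
                     + J*q[i]**(1-mu)*c*(1 - a - b + total/N))
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            total -= 2*s[i]
            s[i] = -s[i]
    return total / N

q = sample_degrees()
for T in (0.2, 0.4, 0.6, 0.8):           # crude sweep; T_b is where the
    print(T, final_magnetization(T, q))  # final magnetization jumps negative
\end{verbatim}
For $a = 1$ the first term of the flip energy drops out, which is precisely the regime in which the data collapse in terms of $\Theta$ is cleanest.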
Notice how the quality of the data collapse degrades if $a \neq 1$, as can be seen by comparing Figs.~\ref{effectmu2} and~\ref{effectmu4}. In addition, there is now also a significant spread on the values of $\Theta_b$. These effects could have been expected, since the scaling Ansatz \eqref{australie} is not well satisfied for $a \neq 1$, if $\mu$ is arbitrary. We conclude that, at least in the range $0 < \mu < 1$, the effects of varying $\mu$ and varying $T$ are not independent. A change in $\mu$ can be compensated or ``absorbed'' by a change in $T$. This is most clearly so for $a \approx 1$ and for $\gamma$ sufficiently large (i.e., in practice for $\gamma \geq 3$). \subsubsection{Effect of the topological exponent $\gamma$} In a second application of the effective temperature of \eqref{theta}, we determine the effect of the network topology on the network activity or spin magnetization. We focus on the distribution model on scale-free networks with $m = 2$, 1000 nodes, constant $a$, $b$ and $\mu$ and we vary the topological exponent $\gamma$. Converting sums over the distribution function into integrals \cite{opm}, \eqref{theta} leads to \begin{equation} \Theta = \frac{\left(\gamma-2+\mu\right)^2}{(\gamma -1)(\gamma-2)}\frac{\left(K^{2-\gamma}-m^{2-\gamma}\right)\left(K^{1-\gamma}-m^{1-\gamma}\right)}{\left(K^{2-\gamma-\mu} - m^{2-\gamma-\mu}\right)^2}\:T . \end{equation} As in the previous subsection, we distinguish between effects at mean-field level and effects of the quenched random-bond fluctuations on the characteristic temperatures $T_b$ (breakdown) and $T_c$ (bulk criticality). At mean-field level, the behavior of $\Theta$ as a function of $\gamma$ is complex and depends both on the parameter $\mu$ and on the size of the network through the maximal degree $K$. However, when $\mu = 0$ or $\mu \geq 1$, some simplifications apply. For $\mu = 0$, $\Theta \propto 1/\langle q \rangle$ so that $\Theta$ increases with increasing $\gamma$, regardless of $m$ and $K$. For vanishing $\mu$, the network thus becomes more vulnerable to collapses if $\gamma$ increases, i.e., when there are fewer hubs in the network. In such a regime, hubs produce more goods than poorly connected nodes. The network thus appears more resilient against collapses when there are more hubs and when they are more productive. The opposite behavior occurs for $\mu \geq 1$. For constant temperature $T$, $\Theta$ decreases with increasing $\gamma$ and thus the network becomes more vulnerable to collapses when $\gamma$ is decreased. The system is thus less prone to failure when there are fewer hubs and more nodes with a small degree. For large $\mu$, nodes with small degrees provide the larger quantities of goods. Between the two regimes, i.e., for $0<\mu<1$, there is a complex transition region in which the behavior of $\Theta$ as a function of $\gamma$ depends more subtly on the value of $\mu$ and the size of the network. Numerical simulations indicate that the effect of topology on the thermal fluctuations is rather small in this regime. The effect of quenched random-bond disorder has already been discussed in the previous subsection. It suffices to recall that the value of the effective topological exponent $\gamma'$ determines whether we are dealing with a strongly expanded temperature scale ($T_c \rightarrow \infty$) or a normal one (with finite $T_c$). We illustrate the above qualitative features with simulation results in Fig.~\ref{fig5}.
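Alongside the simulations, the two limiting regimes can be read off directly from the closed-form expression for $\Theta$ above; a minimal numerical sketch, using the values $m = 2$ and $K = 32$ quoted in this Section:
\begin{verbatim}
def theta_over_T(gamma, mu, m=2.0, K=32.0):
    # Theta/T from the continuum-degree formula of this subsection
    pref = (gamma - 2 + mu)**2 / ((gamma - 1) * (gamma - 2))
    num = (K**(2-gamma) - m**(2-gamma)) * (K**(1-gamma) - m**(1-gamma))
    return pref * num / (K**(2-gamma-mu) - m**(2-gamma-mu))**2

for mu in (0.0, 1.0):
    print(mu, [round(theta_over_T(g, mu), 3) for g in (2.2, 3.0, 5.0)])
# mu = 0: Theta/T equals 1/<q> and grows with gamma (fewer hubs, more
# vulnerable); mu = 1: Theta/T equals <q> and shows the opposite trend.
\end{verbatim}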
In Fig.~\ref{5a}, the simulations use a distribution model with $\mu = 0$, while in Fig.~\ref{5c} $\mu = 1$ is taken. In both cases $a=1$ is assumed so that the mean-field Hamiltonian has the simple scaling form of \eqref{australie}. For $\mu = 0$ network collapses occur at lower temperatures for networks with larger $\gamma$, which implies that networks with more hubs and more highly connected hubs are more robust against thermal fluctuations. The opposite behavior is observed for $\mu = 1$ (see Fig.~\ref{5c}). The simulations thus confirm our expectations based on the dependence of $\Theta$ on $\mu$ and $\gamma$. In Figs.~\ref{5b} and \ref{5d}, the magnetization of the networks for the systems of Figs.~\ref{5a} and \ref{5c} is plotted as a function of the parameter $\Theta$. In Fig.~\ref{5b} the data collapse is far from perfect. This is at first sight surprising because the conditions $a=1$ and $\mu = 0$ seem ideal prerequisites for the validity of the Ansatz of \eqref{australie}. However, the networks examined are qualitatively different in the sense that for $\gamma \leq 3$ the degree fluctuations are important ($\averageq{2}$ diverges for $N\rightarrow\infty$), while for $\gamma > 3$ they are not. If we take into account the finite network size, we obtain the following estimates for the equilibrium critical temperatures in zero external field, using the analytic results of previous works \cite{giuraniuc1,dorogov}. For $\gamma = $ 5, 3, and 2.2 we find $k_BT_c/J \approx $ 0.60, 1.95 and 3.89, respectively, which spans a broad range. These values appear consistent with the behaviour of $M(T)$ (in non-zero external field) shown in Fig.~\ref{5a}. The simple ``mean-field'' scaling underlying the definition of the effective temperature $\Theta$ is not quite sufficient to suppress the rather large effect of the degree fluctuations for low $\gamma$. This is why the magnetization curves and the breakdown temperatures show only a rather poor data collapse in Fig.~\ref{5b}. In contrast, a much better ``universality'' is clearly emerging in Fig.~\ref{5d}, which is for systems with $\mu =1$. For these systems $\gamma' = \infty$ so that the networks all behave effectively as Poissonian networks, with a finite $T_c$ which scales in a simple manner with $\langle q \rangle$. It is conspicuous in Fig.~\ref{5d} that both the $M(T)$ curves and the effective breakdown temperatures coincide well for all three cases. \section{Conclusions} In this Paper, we introduced a model to describe cooperative behavior in various real-life distribution systems. We specialized to networks in which a policy of competition (attempting to create new demand) or solidarity (reluctance to meet too high a demand) among suppliers takes effect so that there is a tendency towards maximizing the gap between delivery and demand. The resulting positive feedback mechanism can cause an initially active network to function during a certain time and then collapse to an inactive state. To implement this, our model utilizes an Ising-spin system with quenched random fields. The couplings are ferromagnetic in order to describe the positive feedback policies: spins align preferentially with their neighbours (up: active state; down: inactive state). Each node, of degree $q$, models a supplier which has a certain degree-dependent delivery controlled by an exponent $\mu$, and each link models (a number of) consumers with a certain demand, controlled by the amplitudes $a$ (supplier-adjusted demand) and $b$ (global demand).
The degree distribution $P(q)$ is (mostly) of power-law type, pertaining to a scale-free network with topological exponent $\gamma$. The system is studied analytically, in part by using mean-field approximations, and numerically using Monte-Carlo simulations with the Metropolis updating rule. Avalanches between an active and an inactive state are typically only observed if the inactive state has a lower total energy than the active state, and if a metastable active state is present initially. These conditions lead to restrictions on the demand parameters $a$ and $b$. If the demand is too large, the active state will immediately decay to the ground state and no metastable active state exists. If the demand is too small, the active state or some glassy state with lower activity is the ground state and no breakdowns can be observed. The region in which collapses can occur also depends on the topological exponent $\gamma$ of the network and on the delivery exponent $\mu$. If $|\mu|$ is increased, the region in which avalanches can occur shrinks gradually. Random malfunction in the suppliers is modeled by a temperature parameter $T$. At finite temperatures, thermal fluctuations can cause network collapses, which are prevented at $T=0$ by the metastability of the active state. During a collapse the roles of the hubs and of the poorly connected nodes have been monitored separately. It has been possible to identify which type of nodes is responsible for initiating the breakdown as a function of the distribution and network parameters. The lowest temperature $T_b$ at which a network blackout takes place depends strongly on the different parameters in the model. Increasing this ``breakdown temperature'' is of great interest in the protection of the distribution system. The most stable situation appears to be that in which there are many large suppliers in the network. For large values of $\mu$, this requires many poorly connected nodes, while for $\mu=0$ many hubs are needed. Rendering the amount of goods every consumer receives more homogeneous throughout the network also reduces the impact of thermal fluctuations in the system. Such a procedure can, for instance, be realized by decreasing $|\mu|$. Not all model parameters are independent. We have identified a scaled or effective temperature variable $\Theta$ which incorporates the $\gamma$ and $\mu$ dependencies in such a way that a good data collapse can occur when the network activity for various cases is measured as a function of $\Theta$ instead of $T$. While this scaling is very useful at the level of a mean-field approximation, we also observed that fluctuations in the node degree, which are large for networks with $\gamma \leq 3$, are responsible for deviations from simple scaling if also $\gamma' = (\gamma - \mu)/(1-\mu) \leq 3$. The similarity between the model and real-life distribution systems can be improved in various ways. In the distribution model all the consumers depend on two suppliers. Using a bipartite network, i.e., a network with two kinds of nodes with links running only between unlike nodes, we can extend the model to incorporate an unrestricted number of suppliers for each consumer, reflecting the consumer's freedom of choice. Another extension is concerned with partly active suppliers. In the current model, a node is active or inactive. Using continuous spins, or discrete Potts spins, we could also model suppliers with tunable activity.
Extensions of our model could also describe systems that evolve in time, for instance by implementing the model on growing networks or on networks in which rewiring is possible. Time-dependent demand parameters $a$ and $b$ could be used to model the evolving economic characteristics of the consumers, etc. We conclude that the distribution model introduced in this Paper offers a possible starting point for studying collapses in certain real-life distribution systems, from the viewpoint of the phenomenology of dynamical critical phenomena with a non-conserved order parameter. Note that in contrast with (most) other models of distribution or transportation networks \cite{Simonsen} there is no conservation law or continuity equation in our network. The amounts of goods flowing along the edges of the network are stochastic variables controlled by fluctuating spin states. In this sense our distribution network based on spin variables provides a complementary approach. \vspace{1cm} {\em Acknowledgements} It is a pleasure and an honour to dedicate this paper to Professor David Sherrington on the occasion of his 70th birthday. We thank economist Marc Lambrecht of K.U.Leuven for his interest in and his comments on this model. H.H. and B.V.S. thank the FWO-Vlaanderen for support.
{ "file_path": "/home/ubuntu/dolma-v1_7/algebraic-stack-train-0000.json.gz" }
\section{Introduction} Energy Balance Models (EBMs) rest on the concept that in the earth's equilibrium state the energy received from the sun's radiation is balanced by the energy re-emitted back to space at the earth's temperature. Budyko's EBM aims to model the latitudinal distribution of the surface annual mean temperature, by taking into account the feedback from ice albedo (the icecap's reflectivity factor). As a dynamical system, then, Budyko's model takes place in an infinite dimensional state space. While the discussion of climate feedbacks easily enters a realm of great complexity, the concept of ice albedo feedback and its effect on the planet is not difficult to explain. Ice albedo is the degree to which ice reflects light or energy. Since ice cover in general has a whiter color than the blue ocean, it reflects more light or energy back into space. This creates a positive feedback, properly called the \textit{ice albedo feedback}, because more ice means less energy absorbed, which induces a cooler climate, favoring conditions for more ice formation, and so on. On the other hand, shrinkage of ice mass induces more ocean surface, which absorbs more energy, creates a warmer climate and, in turn, promotes more melting. But despite this easy-to-explain concept, the conclusions on the stability of icecaps vary greatly, even among the simplest models. While previous results were useful in understanding steady states of the energy balance models, they did not include mechanisms for ice-water boundary, or iceline, movement, making it difficult to obtain rigorous results for icecap stability. In this paper, to go along with Budyko's model, we introduce an equation that describes the movement of the ice line. We call this coupled system the Dynamic Iceline Budyko's Model (DIBM); it consists of two equations: the first is a version of Budyko's model, similar to that written by Ka Kit Tung \cite{tung07}, describing the evolution of the temperature profile, and the second is the novel equation that models the iceline dynamics. \\ Our planet currently has a small ice cover surrounding both poles, and in light of the ice albedo feedback concept, many have pondered the stability of these polar ice caps. The idea of small ice cap instability, sometimes abbreviated SICI, perhaps started in 1924 with CEP Brooks \cite{brooks}, \cite{notz09}, when he hypothesized that on this planet, because of a mechanism later coined the ice albedo feedback, only two stable climates are possible: an ice free earth and a large ice capped earth, with the cryosphere extending from the poles past the $78^\circ$ latitude. In the late sixties, the Russian climatologist Mihail Budyko proposed an ice albedo feedback model based on the energy balance principle, where transport is represented by a simple relaxation process in which the temperature of each latitude dissipates to the global average temperature. He concluded that both the ice free and the polar, or small ice cap, climates are unstable (see p. 618 \cite{bud69}). Around the same time, William Sellers from the University of Arizona proposed a minimal complexity climate model which includes planetary albedo, but uses separate atmospheric and oceanic transport processes \cite{sellers69}. In the next decade, Gerald North explored in depth the diffusive transport version of the energy balance model with the ice albedo feedback, and in his works, upon varying some of the parameters, the small ice cap instability disappears \cite{n75}, \cite{n79}.
\\ While the small ice cap instability discussions in the mid twentieth century were fuelled by the possibility that the planet was going into its glaciation period, the more recent discussion on the state of the cryosphere is motivated by the rapid decrease of the Arctic sea ice extent induced by global warming, and places more emphasis on the reversibility of an ice free state in a warmer climate. Recent results that attempt to answer this question have also been characterized by the use of computer simulations, such as the one developed in 2008 by Merryfield, Holland and Monahan \cite{hmm08}, a simple climate model based on the Community Climate System Model version 3 (CCSM 3), a global climate model developed by NCAR. The simple model is nonlinear and admits abrupt sea ice transitions resembling those in the CCSM 3 simulations. In early 2009, Eisenman and Wettlaufer \cite{ew09} examined an energy balance model with seasonal features and a nonlinear forcing given by the sea ice thermodynamics. Their results suggest that there exists a critical threshold in the climate warming, beyond which a sudden loss of the remaining seasonal sea ice is possible. Another recent paper, published in 2009 by Notz \cite{notz09}, summarized the discussions on the existence of ``cryospheric tipping points''. \\ We will show that the Dynamic Iceline Budyko's Model admits an unstable large ice cap, a stable ice covered planet, a stable small ice cap, and an unstable ice free planet, indicating polar ice cap loss reversibility. Furthermore, we show that the infinite dimensional system has a 1-D center stable manifold; hence, the dynamics are essentially one dimensional. The existence of this one dimensional attractor will also be illustrated in the computer simulations, but in the end, it is the mathematical analysis of the dynamics that gives us confidence in the numerical results.\\ Unlike the more complex computer simulation models, our result should not be seen as predictive, but instead as a tool to understand qualitatively the mechanisms involved in climate processes. The advantage of our formulation is that its tractability allows us to isolate the effects of a single climate feedback, i.e., the ice albedo feedback. For example, we see from this result that despite being a positive feedback, the ice albedo feedback is not enough to cause a tipping point phenomenon as seen in recent studies, see \cite{ew09} and references therein. The comparison of these two results indicates the fruitfulness of exchanges between computer simulation models and theoretical analysis, and hence between climate scientists and mathematicians.\\ We will organize this paper as follows: in Section 2, we introduce Budyko's energy balance model along with a new feature to account for the iceline dynamics, and we discuss the equilibria of the system. Then we show some animations that result from the numerics to illustrate the dynamics. Section 3 discusses the analysis of the Dynamic Iceline Budyko's Model and its map, with detailed proofs using the graph transform method in Section 4. Section 5 provides concluding remarks and a sketch of some future directions. \section{Dynamic Iceline Budyko's Model} The equation governing the temperature profile evolution by itself does not induce the movement of the iceline, as illustrated by the computer simulations in the following section.
To capture the feedback from the ice albedo, we propose a two-time-scale system of integro-difference equations governing the temperature distribution and the iceline dynamics. With this proposed system, we ask the following questions: \\ \begin{itemize} \item What is the appropriate function space for the temperature profiles? \item What are the dynamics of the proposed system? \item What are the parametric conditions to have an equilibrium state at the current ice line? \item What does the model suggest if the current condition is perturbed so that the planet is ice free?\\ \end{itemize} The main results of this study are:\\ \begin{itemize} \item The identification of the function space in which the proposed system is well defined. \item The existence of an invariant manifold for the Euler approximation of the proposed system. \item A parametric condition to have an equilibrium at the present climate. \item The instability of the ice free earth.\\ \end{itemize} We first introduce the details and some background of the model; then we explore the equilibria, the invariant manifold and, finally, the instability of the ice free planet. \\ \subsection{Details of Budyko's Model} While Budyko introduced his original model as a concept, following his steps many have formulated and parametrized the energy balance and ice albedo feedback concept. KK Tung has summarized the formulation as a differential equation with a nice exposition in his book Tung (2007): \begin{equation} R \frac {\partial}{\partial t} T(y) =Q \cdot s(y) \cdot [1 - \alpha(y)]-[A+BT(y)]-C \cdot[T(y)-\int_0^1{T(y)dy}] \end{equation} The function $T(y)$ is the annual average surface temperature at the zone $y$, and its graph over the domain of $y$ is called \textit{\textbf{the temperature profile}}. Other previous authors, e.g. \cite{n75}, \cite{dg81}, also treated the model as a \textit{differential equation}, and hence presupposed the continuous dependence of the annual average temperature profile $T(y)(t)$ on the time variable $t$. We argue that the evolution of the yearly averaged temperature profile $T$ should be governed by a \textit{difference equation}. \\ Let $\Delta_t [f](k)$ denote the difference quotient of $f$ at time node $k$, that is $$\Delta_t[f](k)=\frac{f(k+t)-f(k)}{t}.$$\\ We consider the following Budyko difference equation at each time node $k$: \begin{equation} R\Delta_t [T(y)](k) = Qs(y)(1-a(\eta)(y))-(B+C)T(y)(k)-A+C\overline{T}(k) \end{equation}\\ Here, the constant $R$ measures the heat capacity of the surface, and we assume that $R=1$. This assumption does not change the qualitative behavior of the system. The independent variable $y$ is $\sin(\theta)$, where $\theta$ represents a latitude in the northern hemisphere; therefore $y$ lies in the unit interval. This model assumes that the annual distribution of the surface temperature is symmetric about the equator. \\ We will now examine in detail the terms on the right hand side of the EBM equation above. The constant $Q$ is the solar constant, representing the amount of energy received from the sun at the top of the atmosphere. The function $s(y)$ is the latitudinal distribution of that energy, which can be computed from the earth's orbital elements. Many authors, such as KK Tung \cite{tung07} and North \cite{n75}, used the Legendre polynomial approximation $s(y)=1-0.482\frac{3y^2-1}{2}$ for this distribution. We will use the same approximation as well.
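As a quick consistency check, this approximation of $s(y)$ integrates to one over the hemisphere, so that it redistributes, but does not change, the total solar input $Q$; a minimal sketch:
\begin{verbatim}
# Antiderivative of s(y) = 1 - 0.482*(3y^2 - 1)/2 is y - 0.482*(y^3 - y)/2
S = lambda y: y - 0.482 * (y**3 - y) / 2
print(S(1.0) - S(0.0))   # exactly 1.0
\end{verbatim}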
\\ The term $1-\alpha(y)$ represents the fraction of the radiative energy absorbed by the earth at location $y$. To establish the ice albedo feedback effect, it is assumed that ice is formed when the temperature at a certain location stays below a critical temperature $T_c$, taken to be $-10^\circ$C. Also, it is assumed that the surface is either water (ice free) or ice covered and that there is only one ice line, $\eta$. Since ice reflects more sunlight than water does, an area covered with ice has a higher albedo than one covered with water. In this paper we use the approach that the albedo function $a(y)$ at location $y$ depends on $y$ relative to the location of the iceline, and not on the temperature $T(y)$. Let $\eta \in [0,1]$ denote the location of the ice line. The albedo function we use in this paper is smooth and is iceline dependent:\\ \begin{definition} {\textbf{Iceline-Dependent Smooth Albedo}}\\ Given that the ice line is at $\eta$, the albedo at $y$ is $$a(\eta)(y)=0.47+0.15 \cdot \tanh[M \cdot (y -\eta)]$$ \end{definition} Here $M$ is the parameter representing the steepness of the albedo near the iceline and is a fixed quantity, presumably dictated by the planet.\\ As in Tung \cite{tung07}, balancing the absorbed radiative energy contained in the first term are the re-emission term, $A+BT(y)$, and the transport term, $C \cdot (T(y)-\overline{T})$, where $\overline{T}$ is the global average $\int_{[0,1]} T(y)dy$. The constants $A = 202$ watts per square meter and $B=1.9$ watts per square meter per $^\circ$C are derived from fitting a linear function through satellite data of Outgoing Long-wave Radiation (OLR) at the top of the atmosphere \cite{graves93}. The constant $C$ in the transport term is taken to be $1.6B=3.04$ and is chosen so that one equilibrium fits the current climate, with the ice line near the pole. \\ \subsection{Dynamic Iceline} So far we have an equation that describes the evolution of the temperature profile over the northern hemisphere, and that takes into account the ice albedo feedback. We will add to this an equation that describes the evolution of the ice line $\eta$. Previous literature, such as Budyko \cite{bud69}, Tung \cite{tung07}, and North \cite{n75}, has used the idea that ice is formed when the temperature is below a critical threshold, $T_c$. We adapt this idea to the ice line evolution by prescribing a poleward movement of the ice line when the temperature at the ice line $T(\eta)$ is above a critical temperature $T_c$, and an equatorward movement otherwise. Also, we assume that the movement of the ice line happens at a much slower rate compared to the evolution of the temperature profile. Therefore, for $\epsilon \ll 1$ and with $T_c=-10$ as used by previous authors, the equation for the ice line evolution at time node $k$ can be written as: \begin{equation} \Delta_t [\eta](k)= \frac{\eta(k+t)-\eta(k)}{t} = \epsilon \cdot [T(\eta)(k)-T_c] \end{equation} Some recent computations suggest that, to be physically relevant, the value of $\epsilon$ must be very small, possibly on the order of $10^{-12}$ \cite{mcg}. We will see in the simulations that, even using a much larger $\epsilon \cong 0.025$, the attracting 1-D manifold still appears.
\\ The object of our analysis, the Dynamic Iceline Budyko's Model, is therefore the following infinite dimensional, two-time-scale system of integro-difference equations:\\ \begin{equation} \label{budref} \begin{cases} \Delta_t [T(y)](k) = F([T(y),\eta])(k)\\ \Delta_t [\eta](k)= G([T(y),\eta])(k) \end{cases} \end{equation} with $F$ and $G$ as the following: \begin{align*} F([T(y), \eta]) &= Qs(y)(1-\alpha(\eta)(y))-(B+C)T(y)-A+C \int_0^1 T(y)dy\\ G([T(y),\eta])&= \epsilon (T(\eta)-T_c) \end{align*} \subsection{Equilibria and Animations} When the ice line $\eta$ is fixed, the \textit{\textbf{temperature profile equilibrium, with ice line at $\eta$}} is: \[ T^*(\eta)(y) = \frac{Q \cdot s(y) \cdot(1-a(\eta)(y))-A+C\int_0^1{T^*(\eta)(y)dy}}{B+C} \] We call the set of the Lipschitz continuous functions $\mathbb{T}^*:=\{ T^*(\eta): \eta \in [0,1] \}$ \textit{\textbf{the local equilibrium}} set. This set is the steady state of the first equation in (\ref{budref}). Notice that, without a mechanism specifying the movement of the iceline, the temperature profile will dissipate to the local temperature profile $T^*(\eta)(y)$ at the initial ice line $\eta$. The following figures simulate the evolution of the temperature profile $T_0(y)=14-54y^2$ following only the first equation of (\ref{budref}), with the ice lines fixed at \textbf{$\eta_0=0.1, 0.5, 1$}. \\ Note: on the electronic copy, the following figures are the initial temperature profile and the initial ice line, and the local equilibria at the respective ice lines. Click anywhere on the left figures to start the animations following Budyko's equations. Similar animations can also be found on the web through: {\bf \url{http://math.arizona.edu/~ewidiasih/index.html/ewidiasih/Research.html} } \begin{figure}[htb] \centering \subfloat{\label{strticov}} \href{run:static-icecovered.avi}{\includegraphics[width=1.5in]{starticecovered.jpg}} \subfloat{\label{strtmid}} \href{run:static-middle.avi}{\includegraphics[width=1.5in]{startmiddle.jpg}} \subfloat{\label{strticf}} \href{run:static-icefree.avi}{\includegraphics[width=1.5in]{starticefree.jpg}} \subfloat{\label{endicov }\includegraphics[width=1.5in]{staticend-icecovered.jpg}} \subfloat{\label{endmid }\includegraphics[width=1.5in]{staticend-middle.jpg}} \subfloat{\label{endicf }\includegraphics[width=1.5in]{staticend-icefree.jpg}} \caption{The horizontal axes are the domain of the temperature profile $T$. The solid blue curve is the temperature profile, and the big red 'X' represents the location of the ice line. The figures on the left are the initial temperature profile, $T(y)=14-54y^2$, and ice lines at $\eta=0.1, 0.5$ and $1$, and the figures on the right are the steady state temperature profiles $T^*(\eta)(y)$ at the respective ice lines. Clicking on the top figures will start the animation. } \end{figure} \newpage The next animations track the dynamics using the proposed system with parameter $\epsilon=0.025$ and with the same starting temperature profile and ice lines as the previous series of figures. It should be noted here that we choose this value of $\epsilon$ not based on physical considerations, but rather to create a reasonable animation run. \\ Notice that the initial temperature profile first evolves to a temperature profile similar to the local equilibrium temperature profile, one with a large slope at some ice line $\eta$ close to the initial ice line, and eventually the two move together toward an equilibrium. This suggests the existence of an invariant manifold in the phase space $(T,\eta)$.
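This behavior can be reproduced, at least qualitatively, by a direct Euler iteration of (\ref{budref}) on a grid in $y$. The following is a minimal sketch, with the parameter values of Section 2, the animation value $\epsilon = 0.025$, and an assumed albedo steepness $M = 25$ (the steepness is not specified here; cf. the assumption $M > 10$ in Section 4):
\begin{verbatim}
import numpy as np

Q, A, B, C = 343.0, 202.0, 1.9, 3.04
Tc, M, eps, t = -10.0, 25.0, 0.025, 0.1   # t below the bound 1/(B+C) of Section 4

y = np.linspace(0.0, 1.0, 401)
s = 1 - 0.482 * (3 * y**2 - 1) / 2        # insolation distribution

T = 14 - 54 * y**2                        # initial temperature profile
eta = 0.5                                 # initial ice line

for _ in range(20000):
    albedo = 0.47 + 0.15 * np.tanh(M * (y - eta))
    Tbar = T.mean()                       # uniform-grid estimate of the global mean
    T = T + t * (Q * s * (1 - albedo) - (B + C) * T - A + C * Tbar)
    eta = eta + t * eps * (np.interp(eta, y, T) - Tc)
    eta = min(max(eta, 0.0), 1.0)         # crude stand-in for the embedding of Section 3

print(eta)                                # settles near the stable equilibrium ~ 0.962
\end{verbatim}
Starting instead from $\eta_0 = 0.1$, the same iteration slides to the ice covered state $\eta = 0$, matching the first animation.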
We can directly compute the global equilibria for the proposed system by setting the ice line temperature of the local equilibria to the critical temperature; they are: \[ (T^*(\eta_1)(y), \eta_1) \text{ where } \eta_1 \cong 0.225 \text{, and } (T^*(\eta_2)(y), \eta_2) \text{ where } \eta_2 \cong 0.962 \] \begin{figure}[h] \centering \subfloat{\label{strticecov}} \href{run:ebm-covered.avi}{\includegraphics[width=1.5in]{starticecovered.jpg}} \subfloat{\label{strtic}} \href{run:ebm-middle.avi}{\includegraphics[width=1.5in]{startmiddle.jpg}} \subfloat{\label{strticfree}} \href{run:ebm-icefree.avi}{\includegraphics[width=1.5in]{starticefree.jpg}} \subfloat{\label{endicecov}\includegraphics[width=1.5in]{ebmend-icecov.jpg}} \subfloat{\label{endic }\includegraphics[width=1.5in]{ebmend-icemiddle.jpg}} \subfloat{\label{endicfree }\includegraphics[width=1.5in]{ebmend-icefree.jpg}} \caption{The figures on the left are the initial temperature profile, $T(y)=14-54y^2$, with ice lines at $\eta_0=$ 0.1, 0.5 and 1, and the figures on the right are the steady state temperature profiles $T^*(\eta)(y)$, with the ice line at the equator ($\eta=0$) or at $\eta \cong 0.962$. As before, clicking on the top figures will start the animations. } \label{icf} \end{figure} \newpage \section{Analysis of Dynamic Iceline Budyko's Model } Our goal is to explain the dynamics of the temperature profile $T(y)$, coupled with the ice line $\eta$, in the simulations above. The main result is the existence of a one dimensional center stable manifold. The idea of an attracting invariant manifold in a fast-slow system has been explored and developed by many, including J. Carr \cite{carr}, S. Wiggins \cite{wiggins}, Fenichel \cite{fenichel}, and Vanderbauwhede \cite{van}, to name a few. First, we argue that it is reasonable to assume that the albedo function has bounded local variation, that is, for each fixed ice line $\eta$, the albedo function $\alpha(\eta)$ has a bounded Lipschitz constant. Let $M$ be this bound, i.e., for any real values $x$ and $z$, $$|\alpha(\eta)(x)-\alpha(\eta)(z)|<M|x-z|.$$ When the ice line is at the equator, we have an ice covered planet; when it is at the pole, our planet is ice free. The forcings in Budyko's model, as explained in Section 2, are fixed to the current climate, which allows for a small ice cap as an equilibrium. As mentioned in the introduction, an important question in light of the current global climate issue is: \\ \begin{flushleft} {``What does the model suggest if this equilibrium is perturbed so that the planet suddenly becomes ice free?''}\\ \end{flushleft} To be able to describe the dynamics of the ice line $\eta$ at the end points, we will embed the ice line interval and the domain of the temperature profile in the real line. We will also embed the vector field in a way that preserves the dynamics in the unit interval, including its end points.
This embedding allows the dynamical analysis of the polar and the equatorial ice line and is a mathematical convenience for showing the existence of an inertial manifold.\\ We thus consider the following system similar to (\ref{budref}): \begin{equation} \label{budxt} \left( \begin{array}{cc} &\Delta_t [ T(y)]\\ &\Delta_t [\eta] \end{array} \right) := H([T(y), \eta]) = \left( \begin{array}{c} F([T(y), \eta])\\ G([T(y),\eta]) \end{array} \right) \end{equation} where we define:\\ $F([T(y),\eta])$:= \begin{equation*} \begin{cases} & Q\cdot s(0) \cdot [1-a(\eta)(0)] - [A+BT(y)] + C [\int_0^1{T(y)dy}-T(y)] \text{, when } y <0\\ & Q\cdot s(y) \cdot [1-a(\eta)(y)] - [A+BT(y)] +C [\int_0^1{T(y)dy}-T(y) ]\text{, when } 0 \leq y \leq 1\\ & Q\cdot s(1) \cdot [1-a(\eta)(1)] - [A+BT (y)] +C [\int_0^1{T(y)dy} -T(y)] \text{, when } 1<y\\ \end{cases} \end{equation*} and \\ $G([T(y),\eta]) := \epsilon [T(\eta)-T_c]$\\ Notice here that for a small $\epsilon$, the vector field $H$ separates the dynamics into two time scales: the fast dynamics of the temperature profile and the slow dynamics of the ice line evolution. \\ Also, observe that in the extended version, any zone $y$ outside of the unit interval has the same solar forcing as the closest endpoint. The equilibria of the extended version are similar to those of the original version. The local equilibrium for the temperature profile with ice line at $\eta$ is the following Lipschitz continuous function:\\ \begin{equation*} T^*(\eta)(y)= \begin{cases} &\frac{Q \cdot s(0) \cdot(1-a(\eta)(0))-A+C\int_0^1{T^*(\eta)(y)dy}}{B+C} \text { , when } y<0\\ &\frac{Q \cdot s(y) \cdot(1-a(\eta)(y))-A+C\int_0^1{T^*(\eta)(y)dy}}{B+C} \text{ , when } 0 \leq y \leq 1\\ &\frac{Q \cdot s(1) \cdot(1-a(\eta)(1))-A+C\int_0^1{T^*(\eta)(y)dy}}{B+C} \text{ , when } 1<y\\ \end{cases} \end{equation*} \subsection{Results} The interval of the ice line is $\mathbb{R}$, and we take the function space for the temperature profiles $T(y)$ to be the Banach space $$\mathcal{B}=\{ T: \mathbb{R} \rightarrow \mathbb{R}: T \text{ is bounded and continuous, with bounded Lipschitz constant} \}$$ with the norm $||T||_\mathcal{B} = ||T||_\infty $\\ \subsubsection{Inertial Manifold} We now consider the following shift operator of the Dynamic Iceline Budyko's Model (\ref{budxt}) in $\mathcal{B}$: \[ m_{t}([T, \eta](k)) := \begin{cases} T(k+t) = T(k) + t \cdot F([T, \eta](k))\\ \eta (k+t) = \eta(k)+ t \cdot G([T, \eta](k)) \end{cases} \] We obtained the following results:\\ \begin{theorem} \label{invariant} For an $\epsilon$ small, there exists an attracting invariant manifold for the shift operator of Budyko's equation (\ref{budxt}), that is, \\ \begin{description} \item{1}. There exists a Lipschitz continuous map\\ $$\Phi^*:\mathbb{R} \rightarrow \mathcal{B}$$ \item{2}. There exists a closed set $\mathcal{D} \subset \mathcal{B}$, such that for any $(T_0, \eta_0) \in \mathcal{D} \times \mathbb{R}$, the distance $dist[m_t([T_0, \eta_0])(k), (\Phi^*(\eta), \eta)]$ decreases exponentially as $k$ increases. \\ \end{description} \end{theorem} Let $\Phi^*$ denote this invariant manifold. As a corollary to Theorem (\ref{invariant}), for $\eta \in [0,1]$ we can compute the distance of the invariant manifold $\Phi^*(\eta)$ to the local equilibrium manifold $T^*$ and, more importantly, we can describe the asymptotic dynamics of the ice line. The following graph is that of the ice line temperature of the local equilibrium in the extended version, $T^*(\eta)(\eta)$, over the interval $[0, 1]$.
\\ Many authors use the critical temperature $T_c=-10$. The dynamics of the ice line is determined by the sign of the difference between the temperature at the ice line and the critical temperature, $T(\eta)-T_c$, which is a one dimensional function of the ice line $\eta$. To determine the dynamics of the ice line at the steady state temperature profile, we graph $T^*(\eta)(\eta)-T_c$ over the interval $[0,1]$. The function $T^*(\eta)(\eta)-T_c$ has two roots in the unit interval; as in the previous section, let the root closest to the equator ($y=0$) be $\eta_1$ and the other root be $\eta_2$. \\ We conclude from Theorem \ref{invariant} that the dynamics reduces to one dimension. This fact is illustrated in Figure (\ref{1ddyn}). Here, we observe that when the temperature profile reaches a steady state, any ice line $\eta \in (\eta_1, \eta_2)$ moves to the right because the ice line temperature $T^*(\eta)(\eta)$ exceeds the critical temperature. On the other hand, when $\eta <\eta_1$ or $\eta>\eta_2$, the ice line temperature $T^*(\eta)(\eta)$ falls below the critical temperature, and therefore the ice line $\eta$ moves to the left. In particular, we note that when the ice line is at the pole, that is, when $\eta =1$, the ice line advances toward the equator. \\ \begin{corollary} In the unit interval, the invariant manifold $\Phi^*$ is within $O(\epsilon)$ of the local equilibrium set $T^*$. Therefore, for a small $\epsilon$, the ice free planet is unstable.\\ \end{corollary} As an example, for a small $\epsilon$ the equilibrium iceline temperature $\Phi^*(\eta)(\eta)$ is within $1^\circ$C of $T^*(\eta)(\eta)$, and the ice free earth is unstable. \begin{figure}[h] \centering \subfloat {\label{win1}}\href{run:ebm-middle.mp4}{\includegraphics[width=2in]{Tstar-Phistar.jpg}} \subfloat {\label{1ddyn}} \includegraphics[width=2in]{etat.pdf} \caption {The figure on the left illustrates $\Phi^*(\eta)(\eta)$ within $1^\circ$C of $T^*(\eta)(\eta)$. The figure on the right is the reduced 1-D dynamics. } \end{figure} In light of the current global climate discussion, this result suggests an optimistic view. Suppose that the planet's temperature somehow rises so quickly that the ice line moves to the pole and the small ice cap disappears. If the climate parameters, i.e., the parameters $A$, $B$, $C$, are kept the same, then the temperature profile will dissipate toward that on the invariant manifold, so that the ice line temperature falls below the critical temperature, and the ice line will start moving toward the stable equilibrium, the small ice cap. \\ We obtain the following diagram, which is a bifurcation diagram of the solar radiation parameter $Q$ against the ice line $\eta$ in the equilibrium state. The dashed blue line denotes the unstable regime, the solid line the stable regime, and the vertical green line the current solar radiation used, $Q=343$. We see that the ice free state, i.e., the ice line at the pole ($\eta=1$), belongs to the unstable regime when the solar radiation $Q$ is near the current value, 343 watts per square meter. \\ \begin{figure} \centering \includegraphics[scale=0.35]{Q-eta-bifurcation.jpeg}\\ \caption{Bifurcation diagram for $Q$, the solar input} \end{figure} \section{Technical Details} This section presents the proof of the existence of the 1-D center stable manifold. Readers with little interest in the mathematical treatment may wish to skip this section and proceed to the discussion of some future directions and the conclusion in the next section.
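For concreteness, the two roots of $h(\eta) = T^*(\eta)(\eta) - T_c$ can also be located numerically from the closed form for $T^*$; a minimal sketch, again with the assumed steepness $M = 25$ and using $a(\eta)(\eta) = 0.47$, so that $1 - a(\eta)(\eta) = 0.53$:
\begin{verbatim}
import numpy as np

Q, A, B, C, Tc, M = 343.0, 202.0, 1.9, 3.04, -10.0, 25.0
y = np.linspace(0.0, 1.0, 2001)
s = 1 - 0.482 * (3 * y**2 - 1) / 2

def h(eta):
    # g(eta) = integral of Q s(y)(1 - a(eta)(y)); mean of T* is (g - A)/B
    g = np.mean(Q * s * (1 - (0.47 + 0.15 * np.tanh(M * (y - eta)))))
    s_eta = 1 - 0.482 * (3 * eta**2 - 1) / 2
    return (Q * s_eta * 0.53 - A + (C / B) * (g - A)) / (B + C) - Tc

def bisect(f, lo, hi, n=60):
    for _ in range(n):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(lo) * f(mid) > 0 else (lo, mid)
    return 0.5 * (lo + hi)

print(bisect(h, 0.0, 0.5), bisect(h, 0.5, 1.0))   # near 0.225 and 0.962
\end{verbatim}
The signs of $h$ on the three subintervals reproduce the arrows of the reduced dynamics: equatorward below $\eta_1$ and above $\eta_2$, and poleward in between.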
\\ We consider the extended version of the Dynamic Iceline Budyko's EBM (\ref{budxt}). \subsection{A Function Space for Temperature Profiles} We take the space for the temperature profiles to be the Banach space:\\ $$ \mathcal{B} := \{T:\mathbb{R} \rightarrow \mathbb{R}: T \text{ is bounded, continuous with bounded Lipschitz constant} \} $$ with the sup norm $||T||_{\infty}$ \subsection{Equilibria} \begin{lemma}{Two Equilibria}\\ The system of difference equations (\ref{budxt}) has two equilibria.\\ \end{lemma} \begin{proof} First, we fix $\eta$, set $F(T,\eta)$ to zero, and solve for $T$, in which $T=T(\eta)(y)$, a function depending on $\eta$ and $y$. Let $T^*(\eta)(y)$ denote this solution.\\ \begin{equation*} T^*(\eta)(y)= \begin{cases} &\frac{Q \cdot s(0) \cdot(1-a(\eta)(0))-A+C\int_0^1{T^*(\eta)(y)dy}}{B+C} \text { , when } y<0\\ &\frac{Q \cdot s(y) \cdot(1-a(\eta)(y))-A+C\int_0^1{T^*(\eta)(y)dy}}{B+C} \text{ , when } 0 \leq y \leq 1\\ &\frac{Q \cdot s(1) \cdot(1-a(\eta)(1))-A+C\int_0^1{T^*(\eta)(y)dy}}{B+C} \text{ , when } 1<y\\ \end{cases} \end{equation*} Let $g(\eta) :=\int_0^1{Q \cdot s(y) (1-a(\eta)(y))dy}$; then $\int_0^1 T^*(\eta)(y)dy=\frac{g(\eta)-A}{B}$, and substituting this into $T^*(\eta)$ we get an expression for $T^*(\eta)$ which only depends on $\eta$ and $y$. \\ \begin{equation*} T^*(\eta)(y)= \begin{cases} &\frac{Q\cdot s(0) \cdot(1-a(\eta)(0))-A+\frac{C}{B}(g(\eta)-A)} {B+C} \text { , when } y<0\\ &\frac{Q \cdot s(y) \cdot(1-a(\eta)(y))-A+\frac{C}{B}(g(\eta)-A)}{B+C} \text{ , when } 0 \leq y \leq 1\\ &\frac{Q \cdot s(1) \cdot(1-a(\eta)(1))-A+\frac{C}{B}(g(\eta)-A)}{B+C} \text{ , when } 1<y\\ \end{cases} \end{equation*} Then we use the second equation to find the ice line equilibria, that is, we set $T^*(\eta)(\eta)-T_c$ to zero and solve for $\eta$. Let $h(\eta)$ denote the function $T^*(\eta)(\eta)-T_c$. We can eliminate the cases $\eta<0$ and $\eta>1$, since the temperature profiles for these ice lines do not intersect the critical temperature $T_c=-10$.\\ As mentioned in the introduction, we also assume that the parameter of the albedo function's maximum slope, i.e., the parameter $M$, is large, e.g. $M>10$. We need the following computational lemma:\\ \begin{lemma} For $M>10$, $T^*(\eta)(\eta)-T_c$ has two roots on the unit interval. \\ \end{lemma} Let $\eta_1$ and $\eta_2$ be these roots, and suppose that $\eta_1 < \eta_2$. The equilibria for the above difference equations are therefore $(T^*_1, \eta_1)$ and $(T^*_2, \eta_2)$, with \\ \begin{align*} T_1^*(y)&:=T^*(\eta_1)(y)=\frac{Q\cdot(1-a(\eta_1)(y))\cdot s(y)+1.6g(\eta_1)-2.6A}{B+C}\\ T_2^*(y)&:=T^*(\eta_2)(y)=\frac{Q\cdot(1-a(\eta_2)(y))\cdot s(y)+1.6g(\eta_2)-2.6A}{B+C} \end{align*} \end{proof} \subsection{Inertial Manifold} By using Hadamard's graph transform method we will show that in an appropriate function space, for $\epsilon$ small, the Dynamic Iceline Budyko's Model has an inertial manifold. The main theorem that we will prove in this section is the following:\\ \begin{theorem}{Existence of Local Inertial Manifold}\label{invariant-epsilon}\\ For $\epsilon$ small, there exists a locally attracting invariant manifold for the shift operator of Budyko's Energy Balance Model.\\ \end{theorem} \subsection{
The space of graphs} For each ice line $\eta \in \mathbb{R}$, we consider $(\eta,\Phi(\eta))$, where we take the space of the graphs $\Phi$ to be the space $\mathcal{G}$ of the bounded continuous functions $BC^0(\mathbb{R}, \mathcal{B})$, with norm $ \| \Phi \|_{\mathcal{G}}=\sup_{\eta \in \mathbb{R}} \| \Phi(\eta) \|_{\mathcal{B}} $. We will need the following lemma: \begin{lemma} \\ The set $\mathcal{G}$ with the norm $||\cdot||_\mathcal{G}$ is a Banach space.\\ \end{lemma} \begin{lemma} \\ For each fixed $L$, the set $$\mathcal{G}_L = \lbrace \Phi \in \mathcal{G}: \forall \zeta \text{,} \eta \in \mathbb{R} \text{, }||\Phi(\zeta)-\Phi(\eta)||_\mathcal{B} \leq L |\zeta - \eta | \rbrace$$ is a closed set in the Banach space $\mathcal{G}$. \\ \end{lemma} \subsection{The albedo function $a(\eta)(y)$ as a graph} Given the location of the ice line $\eta$, the albedo function of the planet is the function $a(\eta)(y)= 0.47 + 0.15 \cdot \tanh(M\cdot(y-\eta))$. The parameter $M$ sets the steepest slope of the function, which occurs at the ice line $\eta$. Furthermore, the albedo function $a$ as a function of $\eta$ is $C^1$, with the upper bound for the first derivative being $0.15M$; therefore, $\sup_{\eta, \zeta \in \mathbb{R}} \sup_{y \in \mathbb{R}} \frac{ |a(\eta)(y) - a(\zeta)(y)|}{|\eta - \zeta|} < 0.15M$.\\ Also, for any $\eta$ and $y$, $0.32 \leq a(\eta)(y) \leq 0.62$. Therefore, if $ L \ge \max \{ 0.62, 0.15 M \}$, then the albedo function $a$ is a member of $\mathcal{G}_L$. We will use this bound as the Lipschitz bound of the space of graphs. To be precise, we define:\\ \begin{definition} {Lipschitz bound, $L$}\\ The Lipschitz bound for the space of graphs is a number $L$ such that $$L\ge \sup_{\eta, \zeta \in \mathbb{R}} \frac{ \|a(\eta) - a(\zeta)\|_{\mathcal{B}}}{|\eta - \zeta|} $$ In particular, $L \ge \max \{0.62, 0.15 M \}$ . \\ \end{definition} \begin{definition} {Temperature profile bound, $r$}\\ The bound for the space of temperatures is the number $r$ such that $$r \ge \sup_{y \in [0,1]}|Q\cdot s(y)|$$\\ \end{definition} We now consider the action of the Dynamic Iceline Budyko's Model over the set $\mathcal{G}_L \cap B(0,r) $ in the space of graphs, where $B(0,r) = \{ \Phi \in \mathcal{G}: || \Phi|| \le r\}$. \subsection{The Shift Operator of Budyko's Model as a Graph Transform} We will define a transformation $m$ that extends the vector field $(F, G)$ so that its action is defined for all ice boundaries in the real line and for all temperature profiles in $\mathcal{B}$. But first, we need a lemma:\\ \begin{lemma} \label{preimage} If $\epsilon < \frac{1}{L+r}$, then for any fixed $t<1$, for each $\eta \in \mathbb{R}$ and each $\Phi \in \mathcal{G}_L$ such that $|| \Phi ||_\mathcal{G}<r$, there exists a $\xi \in \mathbb{R}$ such that $$\eta = \xi + \epsilon \cdot t \cdot (\Phi(\xi)(\xi) - T_c)$$ \end{lemma} \begin{proof} We will show that given $\Phi \in \mathcal{G}_L$, there exists a unique $k=k_{\Phi} \in BC^0(\mathbb{R})$ such that for any given $\eta$, $\eta=(\eta + k(\eta)) + \epsilon t (\Phi(\eta+k(\eta))(\eta+k(\eta))-T_c)$.\\ We define a transformation $T$ on $BC^0(\mathbb{R})$ by $$(Tk)(\eta) = \epsilon t [T_c- \Phi(\eta+k(\eta))(\eta+k(\eta))]$$ Clearly, $||Tk|| < \infty$. We will now show that $T$ is a contraction mapping on the Banach space $BC^0(\mathbb{R})$.
Indeed, for any two bounded continuous functions $k_1 \text{ and } k_2$ with sup norm less than $r$, we have that:\\ $||(Tk_1) (\eta) - (Tk_2)(\eta)||$ \begin{align} &=\epsilon t |\Phi(\eta+k_1(\eta))(\eta+k_1(\eta))-\Phi(\eta+k_2(\eta))(\eta+k_2(\eta))|\\ &\leq \epsilon t [ |\Phi(\eta+k_1(\eta))(\eta+k_1(\eta))-\Phi(\eta+k_2(\eta))|(\eta+k_1(\eta))\\ &\quad + |\Phi(\eta+k_2(\eta))(\eta+k_1(\eta))- \Phi(\eta+k_2(\eta))(\eta+k_2(\eta))|]\\ &\leq \epsilon t [|| \Phi(\eta+k_1(\eta)) - \Phi(\eta+k_2(\eta))||_{\mathcal{B}}+||\Phi(\eta+k_2(\eta))||_{\mathcal{B}}\cdot |k_1(\eta)-k_2(\eta)| ]\\ &\leq \epsilon t (L+r)|k_1(\eta)-k_2(\eta)|<\rho |k_1(\eta)-k_2(\eta)|. \end{align} with the number $\rho = \epsilon t (L+r)<1$. \\ Therefore, the transformation $T$ is a contraction on a Banach space, and by the Banach Fixed Point Theorem, there exists a unique fixed point. Let $k$ be the fixed point, that is, $$k(\eta) = k_{\Phi}(\eta)=\epsilon (T_c-\Phi(\eta+k(\eta))).$$ Therefore, $$\eta+k(\eta)=\eta+\epsilon t (T_c- \Phi(\eta+k(\eta))(\eta+k(\eta))).$$ Letting $\xi=\eta+k(\eta)$ finishes the proof. \end{proof}\\ \begin{definition} {Graph Transform using Budyko's EBM}\\ Given a graph $\Phi \in \mathcal{G}_L$, we define Budyko Graph Transform $m$ as: \[ m(\Phi) = m(\Phi)(\eta) = \Phi (\xi) + t \cdot F(\Phi, \xi) \] with $\xi$ as in Lemma (\ref{preimage}). \end{definition}\\ To prove the existence of the inertial manifold for Dynamic Iceline Budyko's Model, we first show that $m$ is a contraction on $\mathcal{G}_L \cap B(0,r)$.\\ \begin{lemma}\label{invmfld-B} As denoted above, let $L$ be the Lipschitz bounds, $r$ be the temperature profiles bounds, and $B$ be the the constant from the re-emission term of DIBM. \\ There exists an $\epsilon = \epsilon (L, r, B)>0$ small, such that for any fixed time step $t$, $0< t < \frac{1}{B+C}$, the map $m$ is a contraction on $\mathcal{G}_L \cap \overline{B(0,r)}$.\\ \end{lemma} \begin{proof} \\ We will show that for any fixed time step $t<\frac{1}{B+C}$, there exists a real number $\rho=\rho(t) < 1$ such that given $\Phi$ and $\Gamma$ in $\mathcal{G}_L \cap \overline{B(0,r)}$, then $||m(\Phi)-m(\Gamma)|| < \rho ||\Phi - \Gamma ||$.\\ By Lemma (\ref{preimage}), there exists $\xi \text{ and } \zeta$ such that $\eta = \xi + \epsilon \cdot t (\Phi(\xi)(\xi) - T_c)$, and $\eta=\zeta + \epsilon \cdot t (\Gamma(\zeta)(\zeta)-T_c)$. First we compare the ice boundaries $|\xi-\zeta|$ to their temperatures $|\Phi(\xi)(\xi)-\Gamma(\zeta)(\zeta)|$. 
\begin{align*} |\xi - \zeta | &\leq \epsilon \cdot t|\Phi(\xi)(\xi)-\Gamma(\zeta)(\zeta)| \\ &\leq \epsilon \cdot t \left[ | \Phi(\xi)(\xi)-\Gamma(\xi)(\xi)| + |\Gamma(\xi)(\xi)-\Gamma(\xi)(\zeta)| + |\Gamma(\xi)(\zeta)-\Gamma(\zeta)(\zeta) \right| ] \\ &\leq \epsilon \cdot t ||\Phi - \Gamma || \qquad \text{ by the definition}\\ &\quad \quad +r \epsilon t |\xi-\zeta | \qquad \text{ since for each } \xi \text{, } \Gamma(\xi) \in \mathcal{B} \text{ and } ||\Gamma || < r\\ & \quad \quad+ L \epsilon t |\xi - \zeta | ) \qquad \text{ since } \Gamma \in \mathcal{G}_L\\ & \le \epsilon \cdot t ||\Phi - \Gamma || +\epsilon (L+r) |\xi - \zeta | \end{align*} Solving for $|\xi-\zeta|$ we get the inequality: \begin{equation} \label{eta-phi-estimate} |\xi-\zeta| \leq \frac{t \cdot \epsilon}{1-( L+r) \epsilon } ||\Phi-\Gamma|| \end{equation} Let \begin{equation} \label{epsilon} \epsilon \le \frac{B}{ 2( Lr+L+r)} \end{equation} and define:\\ \begin{equation} \label{delta1} \delta_1 := \frac{L \cdot \epsilon}{1-(L+r)\epsilon} \text{ and } \end{equation} \begin{equation} \label{delta2} \delta_2 := \frac{L \cdot r \cdot \epsilon}{1-(L+r)\epsilon} \end{equation} Then \begin{equation} \label{delta12B} \delta_1 < \delta_2 < \frac{B}{2} .\end{equation} We now estimate the graph transform map $m$:\\ $|m(\Phi)(\eta) (y)- m(\Gamma)(\eta)(y)|$ \begin{align*} &=|\Phi (\xi)(y) + t \cdot [Q\cdot s(y) \cdot (1-\alpha(\xi)(y)) -(B+C) \Phi (\xi) (y) + C \overline{\Phi(\xi)} - A] \\ & - \Gamma(\xi)(y) -t \cdot [Q\cdot s(y) \cdot (1-\alpha(\xi)(y)) -(B+C) \Gamma (\xi) (y) + C \overline{\Gamma(\xi)} - A] \\ & + \Gamma(\xi)(y) +t \cdot [Q\cdot s(y) \cdot (1-\alpha(\xi)(y)) -(B+C) \Gamma (\xi) (y) + C \overline{\Gamma(\xi)} - A] \\ & - \Gamma(\zeta)(y) -t \cdot [Q\cdot s(y) \cdot (1-\alpha(\zeta)(y)) -(B+C) \Gamma (\zeta) (y) + C \overline{\Gamma(\zeta)} - A] |\\ & \quad \\ \end{align*} Since $t<\frac{1}{B+C}$, and therefore $1-t(B+C)>0$, then the estimate above continues as \begin{align*} & \leq [1-t(B+C)]|\Phi(\xi)(y) - \Gamma(\xi)(y)| + t \cdot C |\overline{\Phi(\xi)} - \overline{\Gamma(\xi)}|\\ & +[1-t(B+C)] |\Gamma(\xi)(y)-\Gamma(\zeta)(y)| + t \cdot C |\overline{\Gamma(\xi)} - \overline{\Gamma(\zeta)}|\\ & +|Q \cdot s(y) \cdot (\alpha(\xi)(y)-\alpha(\zeta)(y))|\\ &\quad \\ \end{align*} Using inequality (\ref{eta-phi-estimate}) we estimate the third through the fifth terms of the above inequality: \begin{align*} & \le [1-t(B+C) + tC] || \Phi - \Gamma || \\ & + [1-t(B+C)](t\delta_1) || \Phi - \Gamma || + (tC)(t\delta_1) || \Phi - \Gamma || \\ & + t\delta_2 || \Phi - \Gamma || & \quad \\ & \le \left(1-tB+t[(1-tB)\delta_1+\delta_2 ] \right) || \Phi - \Gamma || \\ \end{align*} By the choice of $\epsilon$ in (\ref{epsilon}) above, we get the inequality (\ref{delta12B}), and so the inequality above continues as: \begin{align*} & < \left( (1-tB) + t(\delta_2+\delta_2) \right) || \Phi -\Gamma ||\\ & \le 1 \cdot ||\Phi -\Gamma ||\\ \end{align*} Therefore, for any fixed $t$, $0<t< \frac{1}{B+C}$, if $\rho := \left( (1-tB) + t(\delta_2+\delta_2 ) \right)$, then $0<\rho<1$, and we showed that: $$||m(\Phi)-m(\Gamma)|| < \rho ||\Phi - \Gamma ||.$$ That is, the map $m$ is a contraction. \end{proof}\\ We finish the proof of the existence of invariant manifold (\ref{invariant-epsilon}):\\ \begin{proof} For $\epsilon$ as in the previous lemma, we showed that $$m:\mathcal{G}_L \cap B(0,r) \rightarrow \mathcal{G_L} \cap B(0,r)$$ is a contraction in a closed set of a Banach space. 
Therefore, there exists a unique fixed point $\Phi^*$ such that $m(\Phi^*) = \Phi^*$. \end{proof}\\ \begin{corollary} The invariant manifold $\Phi^*$ is within $O(\epsilon)$ of the equilibrium set $T^*$\\ \end{corollary} \begin{proof} Since $m(\Phi^*)=\Phi^*$, then the following holds: \begin{align*} \Phi^*(\eta) &= \Phi^*(\xi)+t F(\Phi^*(\xi)-T^*(\xi)+T^*(\xi), \xi) \end{align*} where $\xi = \eta+k_{\Phi^{*}}(\eta)$.\\ Using the facts that $F(T^*(\xi),\xi)=0$ and the reverse triangle inequality we find: \begin{align*} || \Phi^*(\eta)-\Phi^*(\xi)||_{\mathcal{B}} &= || t(B+C)\cdot [ \Phi^{*}(\xi) -T^*(\xi)] - tB (\overline{\Phi^*(\xi)}-\overline{T^*(\xi)})||_{\mathcal{B}} \\ &\ge \left| t(B+C)\cdot || \Phi^{*}(\xi) -T^*(\xi)||_{\mathcal{B}}- tB || \overline{\Phi^*(\xi)}-\overline{T^*(\xi)}||_{\mathcal{B}} \right| \\ & \ge tB || \Phi^*(\xi) - T^*(\xi)||_{\mathcal{B}} \end{align*} In the last step of the above estimate, we used the fact that $ \left| \overline{\Phi^*(\xi)}-\overline{T^*(\xi)} \right| \le || \Phi^*(\xi)-T^*(\xi)||_{\mathcal{B}}$.\\ Therefore, we arrive at the following estimate: \begin{align*} ||\Phi^*(\xi)-T^*(\xi)||& =\frac{1}{B} ||\Phi^*(\eta)-\Phi^*(\eta + k_\Phi^* (\eta))||\\ &\le \frac{1}{B} \cdot |\Phi^*(\eta + k_{\Phi^*}(\eta))(\eta+ k_{\Phi^*}(\eta))-T_c|\\ &\le \frac {\epsilon r}{B} \end{align*} recall that here, $r$ is the bound on the temperature profiles. \end{proof} \begin{corollary} For $\epsilon$ small, the ice free earth is unstable. \end{corollary} \begin{proof} Since $||\Phi^*-T^*||<\frac{\epsilon \cdot r}{B}$ then $|\Phi^*(\eta)(\eta)-T^*(\eta)(\eta)|<\frac{\epsilon \cdot r}{B}$ as well. So that if $\epsilon<\frac{B}{r}[T_c-T^*(1)(1) ]$, then $\Phi^*(1)(1)<T_c$, and so, ice will form at the pole and advances toward the small ice cap equilibrium. See figure (\ref{Tetaeta}) for the graph of the iceline local equilibrium temperature $T^*(\eta)(\eta)$. \\ Therefore, if $\eta(0)=1$, as $t \rightarrow \infty$, $\eta(t)$, the ice line, advances equatorward, and the planet evolves toward a non ice free earth. \end{proof} \section{Concluding Discussion} \subsection{Future Directions} There are several directions that one can explore based on this model. One immediate improvement is to compute the invariant manifold explicitly as done by Foias, Sell and Titi \cite{fst}. Another immediate improvement on this model is an extension to the southern hemisphere with two ice line and a non symmetric transport. Another direction is to explore how the change in the greenhouse gas components, which is the term $A+BT(y)$, affects the radiative forcings. A work by Andrew Hogg \cite{hogg} relates the evolution of temperature with that of carbondioxide as a response to the solar input variations caused by the Milankovitch cycle. We are interested in the possibility of coupling Budyko's model with the Hogg's model to understand the glacial cycles in the quaternary period. While North explores a similar model with only a diffusion transport \cite{n75}, \cite{n79}, \cite{n84}, the model discussed in this paper could be improved by including some difussion and averaging in the transport term. Such inclusion necesitates the consideration for the planet's heat capacity and a further explanation of the parameter $\epsilon$.\\ \subsection{Conclusion} We have shown in this paper the existence of a center stable manifold in an energy balance model with ice albedo feedback, featuring a dynamic iceline. 
\section{Introduction} Energy Balance Models (EBMs) rest on the concept that in the earth's equilibrium state the energy received from the sun's radiation is balanced by the energy re-emitted back to space at the earth's temperature. Budyko's EBM aims to model the latitudinal distribution of the surface annual mean temperature by taking into account the feedback from the ice albedo (the icecap's reflectivity factor). As a dynamical system, then, Budyko's model takes place in an infinite dimensional state space. While the discussion of climate feedbacks easily enters a realm of great complexity, the concept of the ice albedo feedback and its effect on the planet is not difficult to explain. Ice albedo is the degree to which ice reflects light or energy. Since ice cover is in general whiter than the blue ocean, it reflects more light or energy back into space. This creates a positive feedback, properly called the \textit{ice albedo feedback}, because more ice means less energy absorbed, which induces a cooler climate, favoring conditions for more ice formation, and so on. On the other hand, shrinkage of the ice mass exposes more ocean surface, which absorbs more energy, creates a warmer climate and, in turn, promotes further melting. But despite this easily explained concept, the conclusions on the stability of icecaps vary greatly, even among the simplest models. While previous results were useful in understanding steady states of the energy balance models, they did not include mechanisms for ice-water boundary or iceline movement, making it difficult to obtain rigorous results for icecap stability. In this paper, to go along with Budyko's model, we introduce an equation that describes the movement of the ice line. We call this coupled system the Dynamic Iceline Budyko's Model (DIBM); it consists of two equations: the first is a version of Budyko's model, similar to that written by Ka Kit Tung \cite{tung07}, describing the evolution of the temperature profile, and the second is the novel equation that models the iceline dynamics. \\ Our planet currently has a small ice cover surrounding both poles, and in light of the ice albedo feedback concept, many have pondered the stability of these polar ice caps. The idea of small ice cap instability, sometimes abbreviated SICI, perhaps started in 1924 with C.E.P. Brooks \cite{brooks}, \cite{notz09}, who hypothesized that on this planet, because of a mechanism later coined the ice albedo feedback, only two stable climates are possible: an ice free earth and a large ice capped earth, with the cryosphere extending from the poles past the $78^{\circ}$ latitude. In the late sixties, the Russian climatologist Mikhail Budyko proposed an ice albedo feedback model based on the energy balance principle, where transport is represented by a simple relaxation process in which the temperature of each latitude dissipates to the global average temperature. He concluded that both the ice free and the polar or small ice cap climates are unstable (see p.~618 of \cite{bud69}). Around the same time, William Sellers from the University of Arizona proposed a minimal complexity climate model which includes planetary albedo but uses separate atmospheric and oceanic transport processes \cite{sellers69}.
In the next decade, Gerald North explored in depth the diffusive transport version of the energy balance model with the ice albedo feedback, and in his works, upon varying some of the parameters, the small ice cap instability disappears \cite{n75}, \cite{n79}. \\ While the small ice cap instability discussions in the mid twentieth century were fuelled by the possibility that the planet was going into its glaciation period, the more recent discussion on the state of the cryosphere is motivated by the rapid decrease of the Arctic sea ice extent induced by global warming, and places more emphasis on the reversibility of an ice free state in a warmer climate. Recent results that attempt to answer this question have also been characterized by the use of computer simulations, such as the one developed in 2008 by Merryfield, Holland and Monahan \cite{hmm08}: a simple climate model based on the Community Climate System Model version 3 (CCSM 3), a global climate model developed by NCAR. The simple model is nonlinear and admits abrupt sea ice transitions resembling those in the CCSM 3 simulations. In early 2009, Eisenman and Wettlaufer \cite{ew09} examined an energy balance model with seasonal features and a nonlinear forcing given by the sea ice thermodynamics. Their results suggest that there exists a critical threshold in the climate warming beyond which a sudden loss of the remaining seasonal sea ice is possible. Another recent paper, published in 2009 by Notz \cite{notz09}, summarized the discussion on the existence of cryospheric ``tipping points''. \\ We will show that Dynamic Iceline Budyko's Model admits an unstable large ice cap, a stable ice covered planet, a stable small ice cap, and an unstable ice free planet, indicating polar ice cap loss reversibility. Furthermore, we show that the infinite dimensional system has a 1-D center stable manifold; hence, the dynamics are essentially one dimensional. The existence of this one dimensional attractor will also be illustrated in the computer simulations, but in the end, it is the mathematical analysis of the dynamics that gives us confidence in the numerical computations.\\ Unlike the more complex computer simulation models, our result should not be seen as predictive, but instead as a tool to understand qualitatively the mechanisms involved in climate processes. The advantage of our formulation is that its tractability allows us to isolate the effects of a single climate feedback, i.e., the ice albedo feedback. For example, we see from this result that despite being a positive feedback, the ice albedo feedback is not enough to cause a tipping point phenomenon as seen in recent studies; see \cite{ew09} and references therein. The comparison of these two results indicates the fruitfulness of exchanges between computer simulation models and theoretical analysis, hence between climate scientists and mathematicians.\\ We organize this paper as follows: in Section 2, we introduce Budyko's energy balance model along with a new feature to account for the iceline dynamics, and we discuss the equilibria of the system. Then we show some animations resulting from the numerics to illustrate the dynamics. Section 3 discusses the analysis of Dynamic Iceline Budyko's Model and its map, with detailed proofs using the graph transform method in Section 4. Section 5 provides concluding remarks and a sketch of some future directions.
\section{Dynamic Iceline Budyko's Model} The equation governing the temperature profile evolution by itself does not induce the movement of the iceline, as illustrated by computer simulations in the following section. To capture the feedback from the ice albedo, we propose a system of two time scale, integro-difference equations governing the temperature distribution and the iceline dynamics. With this proposed system, we ask the following questions: \\ \begin{itemize} \item What is the appropriate function space for the temperature profiles? \item What are the dynamics of the proposed system? \item What are the parametric conditions to have an equilibrium state at the current ice line? \item What does the model suggest if the current condition is perturbed so that the planet is ice free?\\ \end{itemize} The main results of this study are:\\ \begin{itemize} \item The identification of the function space in which the proposed system is well defined. \item The existence of an invariant manifold for the Euler approximation of the proposed system. \item A parametric condition to have an equilibrium at the present climate. \item The instability of the ice free earth.\\ \end{itemize} We first introduce the details and some background of the model; then we explore the equilibria, the invariant manifold and, finally, the instability of the ice free planet. \\ \subsection{Details of Budyko's Model} While Budyko introduced his original model as a concept, following his steps many have formulated and parametrized the energy balance and ice albedo feedback concepts. K.K. Tung has summarized the formulation as a differential equation with a nice exposition in his book \cite{tung07}: \begin{equation} R \frac {\partial}{\partial t} T(y) =Q \cdot s(y) \cdot [1 - \alpha(y)]-[A+BT(y)]-C \cdot[T(y)-\int_0^1{T(y)dy}] \end{equation} The function $T(y)$ is the annual average surface temperature at the zone $y$, and its graph over the domain of $y$ is called \textit{\textbf{the temperature profile}}. Other previous authors, e.g.\ \cite{n75}, \cite{dg81}, also treated the model as a \textit{differential equation}, hence presupposed the continuous dependence of the annual average temperature profile $T(y)(t)$ on the time variable $t$. We argue that the evolution of the yearly averaged temperature profile $T$ should be governed by a \textit{difference equation}. \\ Let $\Delta_t [f](k)$ denote the difference quotient of $f$ at time node $k$, that is, $$\Delta_t[f](k)=\frac{f(k+t)-f(k)}{t}.$$\\ We consider the following Budyko difference equation at each time node $k$: \begin{equation} R\Delta_t [T(y)](k) = Qs(y)(1-a(\eta)(y))-(B+C)T(y)(k)-A+C\overline{T}(k) \end{equation}\\ Here, the constant $R$ measures the heat capacity of the surface, and we assume that $R=1$. This assumption does not change the qualitative behavior of the system. The independent variable $y$ is $\sin(\theta)$, where $\theta$ represents some latitude in the northern hemisphere; therefore $y$ lies in the unit interval. This model assumes that the annual distribution of the surface temperature is symmetric about the equator. \\ We will now examine in detail the terms on the right hand side of the EBM equation above. The constant $Q$ is the solar constant, representing the amount of energy received from the sun at the top of the atmosphere. The function $s(y)$ is the latitudinal distribution of that energy, which can be computed from the earth's orbital elements.
Many authors, such as K.K. Tung \cite{tung07} and North \cite{n75}, used the Legendre polynomial approximation $s(y)=1-0.482\frac{3y^2-1}{2}$ for this distribution. We will use the same approximation as well. \\ The term $1-\alpha(y)$ represents the fraction of the radiative energy absorbed by the earth at location $y$. To establish the ice albedo feedback effect, it is assumed that ice is formed when the temperature at a certain location stays below a critical temperature $T_c$, taken to be $-10^{\circ}C$. Also, it is assumed that the surface is either water (ice free) or ice covered and that there is only one ice line, $\eta$. Since ice reflects more sunlight than water does, an area covered with ice has a higher albedo than one covered with water. In this paper we use the approach that the albedo function $a(y)$ at location $y$ depends on $y$ relative to the location of the iceline, and not on the temperature $T(y)$. Let $\eta \in [0,1]$ denote the location of the ice line. The albedo function we use in this paper is smooth and iceline dependent:\\ \begin{definition} {\textbf{Iceline-Dependent Smooth Albedo}}\\ Given that the ice line is at $\eta$, the albedo at $y$ is $$a(\eta)(y)=0.47+0.15 \cdot \tanh[M \cdot (y -\eta)]$$ \end{definition} Here $M$ is the parameter representing the steepness of the albedo near the iceline and is a fixed quantity, presumably dictated by the planet.\\ As in Tung \cite{tung07}, balancing the absorbed radiative energy contained in the first term are the re-emission term, $A+BT(y)$, and the transport term, $C \cdot (T(y)-\overline{T})$, where $\overline{T}$ is the global average $\int_{[0,1]} T(y)dy$. The constants $A = 202$ watts per square meter and $B=1.9$ watts per square meter per $^{\circ}C$ are derived from fitting a linear function through satellite data of Outgoing Long-wave Radiation (OLR) at the top of the atmosphere \cite{graves93}. The constant $C$ in the transport term is taken to be $1.6B=3.04$ and is chosen so that one equilibrium fits the current climate with the ice line near the pole. \\ \subsection{Dynamic Iceline} So far we have an equation that describes the evolution of the temperature profile over the northern hemisphere and takes into account the ice albedo feedback. We will add to this an equation that describes the evolution of the ice line $\eta$. Previous literature, such as Budyko \cite{bud69}, Tung \cite{tung07}, and North \cite{n75}, uses the idea that ice is formed when the temperature is below a critical threshold, $T_c$. We adapt this idea to the ice line evolution by prescribing a poleward movement of the ice line when the temperature at the ice line $T(\eta)$ is above the critical temperature $T_c$, and an equatorward movement otherwise. Also, we assume that the movement of the ice line happens at a much slower rate compared to the evolution of the temperature profile. Therefore, for $\epsilon \ll 1$ and with $T_c=-10$ as used by previous authors, the equation for the ice line evolution at time node $k$ can be written as: \begin{equation} \Delta_t [\eta](k)= \frac{\eta(k+t)-\eta(k)}{t} = \epsilon \cdot [T(\eta)(k)-T_c] \end{equation} Some recent computation has suggested that to be physically relevant, the value of $\epsilon$ is very small, possibly of the order of $10^{-12}$ \cite{mcg}. We will see in the simulation that, using a much larger $\epsilon \cong 0.025$, the attracting 1-D manifold still appears.
\\ The object of our analysis, Dynamic Iceline Budyko's Model, is therefore the following infinite dimensional, two time scale system of integro-difference equations:\\ \begin{equation} \label{budref} \begin{cases} \Delta_t [T(y)](k) = F([T(y),\eta])(k)\\ \Delta_t [\eta](k)= G([T(y),\eta])(k) \end{cases} \end{equation} with $F$ and $G$ as follows: \begin{align*} F([T(y), \eta]) &= Qs(y)(1-\alpha(\eta)(y))-(B+C)T(y)-A+C \int_0^1 T(y)dy\\ G([T(y),\eta])&= \epsilon (T(\eta)-T_c) \end{align*} \subsection{Equilibria and Animations} When the ice line $\eta$ is fixed, the \textit{\textbf{temperature profile equilibrium, with ice line at $\eta$}}, is: \[ T^*(\eta)(y) = \frac{Q \cdot s(y) \cdot(1-a(\eta)(y))-A+C\int_0^1{T^*(\eta)(y)dy}}{B+C} \] We call the set of Lipschitz continuous functions $\mathbb{T}^*:=\{ T^*(\eta): \eta \in [0,1] \}$ \textit{\textbf{the local equilibrium}} set. This set is the steady state of the first equation in (\ref{budref}). Notice that, without a mechanism specifying the movement of the iceline, the temperature profile will dissipate to the local temperature profile $T^*(\eta)(y)$ at the initial ice line $\eta$. The following figures simulate the evolution of the temperature profile $T_0(y)=14-54y^2$ following only the first equation of (\ref{budref}), with the ice lines fixed at $\eta_0=0.1,\ 0.5,\ 1$. \\ Note: in the electronic copy, the following figures show the initial temperature profile and the initial ice line, and the local equilibria at the respective ice lines. Click anywhere on the left figures to start the animations following Budyko's equations. Similar animations can also be found on the web through: {\bf \url{http://math.arizona.edu/~ewidiasih/index.html/ewidiasih/Research.html} } \begin{figure}[h] \centering \subfloat{\label{strticov}} \href{run:static-icecovered.avi}{\includegraphics[width=1.5in]{starticecovered.jpg}} \subfloat{\label{strtmid}} \href{run:static-middle.avi}{\includegraphics[width=1.5in]{startmiddle.jpg}} \subfloat{\label{strticf}} \href{run:static-icefree.avi}{\includegraphics[width=1.5in]{starticefree.jpg}} \subfloat{\label{endicov }\includegraphics[width=1.5in]{staticend-icecovered.jpg}} \subfloat{\label{endmid }\includegraphics[width=1.5in]{staticend-middle.jpg}} \subfloat{\label{endicf }\includegraphics[width=1.5in]{staticend-icefree.jpg}} \caption{The horizontal axes are the domain of the temperature profile $T$. The solid blue curve is the temperature profile, and the big red `X' represents the location of the ice line. The figures on the left are the initial temperature profile, $T(y)=14-54y^2$, with ice lines at $\eta=0.1, 0.5$ and $1$, and the figures on the right are the steady state temperature profiles $T^*(\eta)(y)$ at the respective ice lines. Clicking on the top figures will start the animations. } \end{figure} \newpage The next animations track the dynamics using the proposed system with parameter $\epsilon=0.025$ and with the same starting temperature profile and ice lines as the previous series of figures. It should be noted here that we choose this value of $\epsilon$ not based on physical considerations, but rather to produce a reasonable animation run. \\ Notice that the initial temperature profile first evolves to a temperature profile similar to the local equilibrium temperature profile, one with a large slope at some ice line $\eta$ close to the initial ice line, and eventually they together move toward an equilibrium. This suggests the existence of an invariant manifold in the phase space $(T,\eta)$.
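For readers who wish to reproduce the qualitative behavior of these animations, the following minimal sketch (in Python) implements the Euler iteration of (\ref{budref}) directly. The grid resolution, time step, iteration count, and the albedo steepness $M$ are our choices for illustration only; the text only requires $M$ large and $t<\frac{1}{B+C}$.
\begin{verbatim}
import numpy as np

# Parameters as in Section 2; epsilon = 0.025 as used for the animations.
# M (albedo steepness) is our choice here; the text only requires M large.
Q, A, B, C, Tc, M, eps = 343.0, 202.0, 1.9, 3.04, -10.0, 25.0, 0.025

y = np.linspace(0.0, 1.0, 201)              # zones y = sin(latitude)
s = 1.0 - 0.482 * (3.0 * y**2 - 1.0) / 2.0  # insolation distribution s(y)

def albedo(eta):
    # iceline-dependent smooth albedo a(eta)(y)
    return 0.47 + 0.15 * np.tanh(M * (y - eta))

T = 14.0 - 54.0 * y**2   # initial temperature profile from the text
eta = 0.5                # one of the initial ice lines used above
dt = 0.05                # time step t < 1/(B+C)

for k in range(20000):
    Tbar = T.mean()      # global mean (uniform-grid average of T over [0,1])
    F = Q * s * (1.0 - albedo(eta)) - (A + B * T) - C * (T - Tbar)
    G = eps * (np.interp(eta, y, T) - Tc)   # T(eta) by interpolation
    T, eta = T + dt * F, eta + dt * G       # Euler step of (T, eta)

print(eta)   # expected to settle near the small ice cap equilibrium, about 0.96
\end{verbatim}
Running the loop from the three initial ice lines used in the figures should reproduce the same qualitative picture: a fast collapse of $T$ onto a profile close to $T^*(\eta)$, followed by a slow drift of $\eta$ along that family.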
We can directly compute the global equilibria for the proposed system by setting the ice line temperature of the local equilibria to the critical temperature; they are: \[ (T^*(\eta_1)(y), \eta_1) \text{ where } \eta_1 \cong 0.225 \text{, and } (T^*(\eta_2)(y), \eta_2) \text{ where } \eta_2 \cong 0.962 \] \begin{figure}[h] \centering \subfloat{\label{strticecov}} \href{run:ebm-covered.avi}{\includegraphics[width=1.5in]{starticecovered.jpg}} \subfloat{\label{strtic}} \href{run:ebm-middle.avi}{\includegraphics[width=1.5in]{startmiddle.jpg}} \subfloat{\label{strticfree}} \href{run:ebm-icefree.avi}{\includegraphics[width=1.5in]{starticefree.jpg}} \subfloat{\label{endicecov}\includegraphics[width=1.5in]{ebmend-icecov.jpg}} \subfloat{\label{endic }\includegraphics[width=1.5in]{ebmend-icemiddle.jpg}} \subfloat{\label{endicfree }\includegraphics[width=1.5in]{ebmend-icefree.jpg}} \caption{The figures on the left are the initial temperature profile, $T(y)=14-54y^2$, with ice lines at $\eta_0=$ 0.1, 0.5 and 1, and the figures on the right are the steady state temperature profiles $T^*(\eta)(y)$, with the ice lines at the equator ($\eta=0$) or at $\eta \cong 0.962$. As before, clicking on the top figures will start the animations. } \label{icf} \end{figure} \newpage \section{Analysis of Dynamic Iceline Budyko's Model } Our goal is to explain the dynamics of the temperature profiles $T(y)$, coupled with the ice line $\eta$, in the simulation above. The main result is the existence of a one dimensional center stable manifold. The idea of an attracting invariant manifold in a fast-slow system has been explored and developed by many, including J. Carr \cite{carr}, S. Wiggins \cite{wiggins}, Fenichel \cite{fenichel}, and Vanderbauwhede \cite{van}, to name a few. First, we argue that it is reasonable to assume that the albedo function has bounded local variation, that is, for each fixed ice line $\eta$, the albedo function $\alpha(\eta)$ has a bounded Lipschitz constant. Let $M$ be this bound, i.e., for any real values $x$ and $z$, $$|\alpha(\eta)(x)-\alpha(\eta)(z)|<M|x-z|.$$ When the ice line is at the equator we have an ice covered planet, while when it is at the pole our planet is ice free. The forcing in Budyko's model, as explained in Section 2, is fixed to the current climate, which allows for a small ice cap as an equilibrium. As mentioned in the introduction, an important question in light of the current global climate issue is: \\ \begin{flushleft} ``What does the model suggest if this equilibrium is perturbed so that the planet suddenly becomes ice free?''\\ \end{flushleft} To be able to describe the dynamics of the ice line $\eta$ at the end points, we will embed the ice line interval and the domain of the temperature profile in the real line. We will also embed the vector field in such a way as to preserve the dynamics in the unit interval, including its end points.
This embedding allows the dynamical analysis of the polar and the equatorial ice line and is a mathematical convenience for showing the existence of an inertial manifold.\\ We thus consider the following system, similar to (\ref{budref}): \begin{equation} \label{budxt} \left( \begin{array}{cc} &\Delta_t [ T(y)]\\ &\Delta_t [\eta] \end{array} \right) := H([T(y), \eta]) = \left( \begin{array}{c} F([T(y), \eta])\\ G([T(y),\eta]) \end{array} \right) \end{equation} where we define:\\ $F([T(y),\eta])$:= \begin{equation*} \begin{cases} & Q\cdot s(0) \cdot [1-a(\eta)(0)] - [A+BT(y)] + C [\int_0^1{T(y)dy}-T(y)] \text{, when } y <0\\ & Q\cdot s(y) \cdot [1-a(\eta)(y)] - [A+BT(y)] +C [\int_0^1{T(y)dy}-T(y) ]\text{, when } 0 \leq y \leq 1\\ & Q\cdot s(1) \cdot [1-a(\eta)(1)] - [A+BT (y)] +C [\int_0^1{T(y)dy} -T(y)] \text{, when } 1<y\\ \end{cases} \end{equation*} and \\ $G([T(y),\eta]) := \epsilon [T(\eta)-T_c]$\\ Notice here that for a small $\epsilon$, the vector field $H$ separates the dynamics into two time scales: the fast dynamics for the temperature profiles, and the slow time scale for the ice line evolution. \\ Also, observe that in the extended version, any zone $y$ outside of the unit interval has the same solar forcing as the closest endpoint. The equilibria of the extended version are similar to those of the original version. The local equilibrium for the temperature profile with ice line at $\eta$ is the following Lipschitz continuous function:\\ \begin{equation*} T^*(\eta)(y)= \begin{cases} &\frac{Q \cdot s(0) \cdot(1-a(\eta)(0))-A+C\int_0^1{T^*(\eta)(y)dy}}{B+C} \text { , when } y<0\\ &\frac{Q \cdot s(y) \cdot(1-a(\eta)(y))-A+C\int_0^1{T^*(\eta)(y)dy}}{B+C} \text{ , when } 0 \leq y \leq 1\\ &\frac{Q \cdot s(1) \cdot(1-a(\eta)(1))-A+C\int_0^1{T^*(\eta)(y)dy}}{B+C} \text{ , when } 1<y\\ \end{cases} \end{equation*} \subsection{Results} The ice line interval is now $\mathbb{R}$, and we take the function space for the temperature profiles $T(y)$ to be the Banach space $$\mathcal{B}=\{ T: \mathbb{R} \rightarrow \mathbb{R}: T \text{ is bounded and continuous, with bounded Lipschitz constant} \}$$ with the norm $||T||_\mathcal{B} = ||T||_\infty $\\ \subsubsection{Inertial Manifold} We now consider the following shift operator of Dynamic Iceline Budyko's Model (\ref{budxt}) in $\mathcal{B}$: \[ m_{t}([T, \eta](k)) := \begin{cases} T(k+t) = T(k) + t \cdot F([T, \eta](k))\\ \eta (k+t) = \eta(k)+ t \cdot G([T, \eta](k)) \end{cases} \] We obtain the following results:\\ \begin{theorem} \label{invariant} For an $\epsilon$ small, there exists an attracting invariant manifold for the shift operator of Budyko's equation (\ref{budxt}), that is, \\ \begin{description} \item{1}. There exists a Lipschitz continuous map\\ $$\Phi^*:\mathbb{R} \rightarrow \mathcal{B}$$ \item{2}. There exists a closed set $\mathcal{D} \subset \mathcal{B}$, such that for any $(T_0, \eta_0) \in \mathcal{D} \times \mathbb{R}$, the distance $\mathrm{dist}[m_t([T_0, \eta_0])(k), (\Phi^*(\eta), \eta)]$ decreases exponentially as $k$ increases. \\ \end{description} \end{theorem} Let $\Phi^*$ denote this invariant manifold. As a corollary to Theorem \ref{invariant}, for $\eta \in [0,1]$ we can compute the distance of the invariant manifold $\Phi^*(\eta)$ to the local equilibrium manifold $T^*$, and more importantly, we can describe the asymptotic dynamics of the ice line. The following graph is that of the ice line temperature of the local equilibrium in the extended version, $T^*(\eta)(\eta)$, over the interval $[0, 1]$.
\\ Many authors use the critical temperature $T_c=-10$. The dynamics of the ice line are determined by the sign of the difference between the temperature at the ice line and the critical temperature, $T(\eta)-T_c$, which is a one dimensional function of the ice line $\eta$. To determine the dynamics of the ice line at the steady state temperature profile, we graph $T^*(\eta)(\eta)-T_c$ over the interval $[0,1]$. The function $T^*(\eta)(\eta)-T_c$ has two roots in the unit interval; as in the previous section, let the root closest to the equator, $y=0$, be $\eta_1$ and the other root be $\eta_2$. \\ We conclude from Theorem \ref{invariant} that the dynamics reduce to one dimension. This fact is illustrated in Figure \ref{1ddyn}. Here, we observe that when the temperature profile reaches a steady state, any ice line $\eta \in (\eta_1, \eta_2)$ moves to the right because the ice line temperature $T^*(\eta)(\eta)$ exceeds the critical temperature. On the other hand, when $\eta <\eta_1$ or $\eta>\eta_2$, the ice line temperature $T^*(\eta)(\eta)$ falls below the critical temperature, and therefore the ice line $\eta$ moves to the left. In particular, we note that when the ice line is at the pole, that is when $\eta =1$, this ice line advances toward the equator. \\ \begin{corollary} In the unit interval, the invariant manifold $\Phi^*$ is within $O(\epsilon)$ of the local equilibrium set $T^*$. Therefore, for a small $\epsilon$, the ice free planet is unstable.\\ \end{corollary} As an example, for a small $\epsilon$ the equilibrium iceline temperature $\Phi^*(\eta)(\eta)$ is within $1^{\circ}C$ of $T^*(\eta)(\eta)$, and the ice free earth is unstable. \begin{figure}[h] \centering \subfloat {\label{win1}}\href{run:ebm-middle.mp4}{\includegraphics[width=2in]{Tstar-Phistar.jpg}} \subfloat {\label{1ddyn}} \includegraphics[width=2in]{etat.pdf} \caption {The figure on the left illustrates $\Phi^*(\eta)(\eta)$ within $1^{\circ}C$ of $T^*(\eta)(\eta)$. The figure on the right is the reduced 1-D dynamics. } \end{figure} In light of the current global climate discussion, this result suggests an optimistic view. Suppose that the planet's temperature somehow rises so quickly that the ice line moves to the pole and the small ice cap disappears. If the climate parameters, i.e., the parameters $A, B, C$, are kept the same, then the temperature profile will dissipate toward that on the invariant manifold, so that the ice line temperature falls below the critical temperature, and the ice line will start moving toward the stable equilibrium, the small ice cap. \\ We obtain the following diagram, which is a bifurcation diagram of the solar radiation parameter $Q$ against the ice line $\eta$ in the equilibrium state. The dashed blue line denotes the unstable regime, the solid line the stable one, and the vertical green line is the current solar radiation used, $Q=343$. We see that the ice free state, i.e.\ the ice line at the pole or $\eta=1$, belongs to the unstable regime when the solar radiation $Q$ is near the current value, 343 watts per square meter. \\ \begin{figure} \centering \includegraphics[scale=0.35]{Q-eta-bifurcation.jpeg}\\ \caption{Bifurcation diagram for $Q$, the solar input} \end{figure} \section{Technical Details} This section presents the proof of the existence of the 1-D center stable manifold. Readers with little interest in the mathematical treatment may wish to skip this section and proceed to the discussions on some future directions and conclusion in the next section.
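Before turning to the proofs, we note that the numerical values $\eta_1 \cong 0.225$ and $\eta_2 \cong 0.962$ quoted in Section 3 can be reproduced with a few lines of code. The sketch below is our own illustration: the steepness $M$ and the root brackets are our choices (the computed roots depend mildly on $M$), and it evaluates $h(\eta)=T^*(\eta)(\eta)-T_c$ using the closed form for $T^*$ derived in the next subsection.
\begin{verbatim}
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

Q, A, B, C, Tc, M = 343.0, 202.0, 1.9, 3.04, -10.0, 25.0

s = lambda y: 1.0 - 0.482 * (3.0 * y**2 - 1.0) / 2.0
a = lambda eta, y: 0.47 + 0.15 * np.tanh(M * (y - eta))

def g(eta):
    # g(eta) = int_0^1 Q s(y) (1 - a(eta)(y)) dy
    return quad(lambda y: Q * s(y) * (1.0 - a(eta, y)), 0.0, 1.0)[0]

def h(eta):
    # h(eta) = T*(eta)(eta) - T_c, with T* in its closed form
    T_star = (Q * s(eta) * (1.0 - a(eta, eta)) - A
              + (C / B) * (g(eta) - A)) / (B + C)
    return T_star - Tc

eta1 = brentq(h, 0.05, 0.6)    # root near the equator
eta2 = brentq(h, 0.6, 0.999)   # root near the pole
print(eta1, eta2)              # expected close to 0.225 and 0.962
\end{verbatim}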
\\ We consider the extended version of the Dynamic Iceline Budyko's EBM (\ref{budxt}). \subsection{A Function Space for Temperature Profiles} We take the space for the temperature profiles to be the Banach space\\ $$ \mathcal{B} := \{T:\mathbb{R} \rightarrow \mathbb{R}: T \text{ is bounded, continuous with bounded Lipschitz constant} \} $$ with the sup norm $||T||_{\infty}$. \subsection{Equilibria} \begin{lemma}{Two Equilibria}\\ The system of difference equations (\ref{budxt}) has two equilibria.\\ \end{lemma} \begin{proof} First, we fix $\eta$, set $F(T,\eta)$ to zero, and solve for $T$; the solution $T=T(\eta)(y)$ is a function depending on $\eta$ and $y$. Let $T^*(\eta)(y)$ denote this solution.\\ \begin{equation*} T^*(\eta)(y)= \begin{cases} &\frac{Q \cdot s(0) \cdot(1-a(\eta)(0))-A+C\int_0^1{T^*(\eta)(y)dy}}{B+C} \text { , when } y<0\\ &\frac{Q \cdot s(y) \cdot(1-a(\eta)(y))-A+C\int_0^1{T^*(\eta)(y)dy}}{B+C} \text{ , when } 0 \leq y \leq 1\\ &\frac{Q \cdot s(1) \cdot(1-a(\eta)(1))-A+C\int_0^1{T^*(\eta)(y)dy}}{B+C} \text{ , when } 1<y\\ \end{cases} \end{equation*} Let $g(\eta) :=\int_0^1{Q \cdot s(y) (1-a(\eta)(y))dy}$; then, integrating the middle case over $[0,1]$ gives $(B+C)\int_0^1 T^*(\eta)(y)dy = g(\eta)-A+C\int_0^1 T^*(\eta)(y)dy$, so $\int_0^1 T^*(\eta)(y)dy=\frac{g(\eta)-A}{B}$, and substituting this into $T^*(\eta)$ we get an expression for $T^*(\eta)$ which depends only on $\eta$ and $y$. \\ \begin{equation*} T^*(\eta)(y)= \begin{cases} &\frac{Q\cdot s(0) \cdot(1-a(\eta)(0))-A+\frac{C}{B}(g(\eta)-A)} {B+C} \text { , when } y<0\\ &\frac{Q \cdot s(y) \cdot(1-a(\eta)(y))-A+\frac{C}{B}(g(\eta)-A)}{B+C} \text{ , when } 0 \leq y \leq 1\\ &\frac{Q \cdot s(1) \cdot(1-a(\eta)(1))-A+\frac{C}{B}(g(\eta)-A)}{B+C} \text{ , when } 1<y\\ \end{cases} \end{equation*} Then we use the second equation to find the ice edge equilibria; that is, we set $T^*(\eta)(\eta)-T_c$ to zero and solve for $\eta$. Let $h(\eta)$ denote the function $T^*(\eta)(\eta)-T_c$. We can eliminate the cases $\eta<0$ and $\eta>1$, since the temperature profiles for these ice lines do not intersect the critical temperature $T_c=-10$.\\ As mentioned in the introduction, we also assume that the parameter of the albedo function's maximum slope, i.e.\ the parameter $M$, is large, e.g.\ $M>10$. We need the following computational lemma:\\ \begin{lemma} For $M>10$, $T^*(\eta)(\eta)-T_c$ has two roots on the unit interval. \\ \end{lemma} Let $\eta_1$ and $\eta_2$ be these roots, and suppose that $\eta_1 < \eta_2$. The equilibria for the above difference equations are therefore $(T^*_1, \eta_1)$ and $(T^*_2, \eta_2)$, with \\ \begin{align*} T_1^*(y)&:=T^*(\eta_1)(y)=\frac{Q\cdot(1-a(\eta_1)(y))\cdot s(y)+1.6g(\eta_1)-2.6A}{B+C}\\ T_2^*(y)&:=T^*(\eta_2)(y)=\frac{Q\cdot(1-a(\eta_2)(y))\cdot s(y)+1.6g(\eta_2)-2.6A}{B+C} \end{align*} \end{proof} \subsection{Inertial Manifold} Using Hadamard's graph transform method, we will show that in an appropriate function space, for $\epsilon$ small, Dynamic Iceline Budyko's Model has an inertial manifold. The main theorem that we will prove in this section is the following:\\ \begin{theorem}{Existence of Local Inertial Manifold}\label{invariant-epsilon}\\ For $\epsilon$ small, there exists a locally attracting invariant manifold for the shift operator of Budyko's Energy Balance Model.\\ \end{theorem}
\subsection{The Space of Graphs} For each ice line $\eta \in \mathbb{R}$, we consider $(\eta,\Phi(\eta))$, where we take the space of graphs $\Phi$ to be the space $\mathcal{G}$ of bounded continuous functions, $BC^0(\mathbb{R}, \mathcal{B})$, with norm $ \| \Phi \|_{\mathcal{G}}=\sup_{\eta \in \mathbb{R}} \| \Phi(\eta)\|_{\mathcal{B}} $. We will need the following lemma: \begin{lemma} \\ The set $\mathcal{G}$ with the norm $||\cdot||_\mathcal{G}$ is a Banach space.\\ \end{lemma} \begin{lemma} \\ For each fixed $L$, the set $$\mathcal{G}_L = \lbrace \Phi \in \mathcal{G}: \forall \zeta \text{,} \eta \in \mathbb{R} \text{, }||\Phi(\zeta)-\Phi(\eta)||_\mathcal{B} \leq L |\zeta - \eta | \rbrace$$ is a closed set in the Banach space $\mathcal{G}$. \\ \end{lemma} \subsection{The albedo function $a(\eta)(y)$ as a graph} Given the location of the ice line $\eta$, the albedo function of the planet is the function $a(\eta)(y)= 0.47 + 0.15 \cdot \tanh(M\cdot(y-\eta))$. The parameter $M$ governs the steepest slope of the function, which occurs at the ice line $\eta$. Furthermore, the albedo function $a$ as a function of $\eta$ is $C^1$, with the upper bound for the first derivative being $0.15M$; therefore, $\sup_{\eta, \zeta \in \mathbb{R}} \sup_{y \in \mathbb{R}} \frac{ |a(\eta)(y) - a(\zeta)(y)|}{|\eta - \zeta|} < 0.15M$.\\ Also, for any $\eta$ and $y$, $0.32 \leq a(\eta)(y) \leq 0.62$. Therefore, if $ L \ge \max \{ 0.62, 0.15 M \}$, then the albedo function $a$ is a member of $\mathcal{G}_L$. We will use this bound as the Lipschitz bound of the space of graphs. To be precise, we define:\\ \begin{definition} {Lipschitz bound, L}\\ The Lipschitz bound for the space of graphs is a number $L$ such that $$L\ge \sup_{\eta, \zeta \in \mathbb{R}} \frac{ \|a(\eta) - a(\zeta)\|_{\mathcal{B}}}{|\eta - \zeta|} $$ In particular, $L \ge \max \{0.62, 0.15 M \}$ . \\ \end{definition} \begin{definition} {Temperature profile bound, r}\\ The bound for the space of temperatures is a number $r$ such that $$r \ge \sup_{y \in [0,1]}|Q\cdot s(y)|$$\\ \end{definition} We now consider the action of Dynamic Iceline Budyko's Model over the set $\mathcal{G}_L \cap B(0,r) $ in the space of graphs, where $B(0,r) = \{ \Phi \in \mathcal{G}: || \Phi|| \le r\}$. \subsection{The Shift Operator of Budyko's Model as a Graph Transform} We will define a transformation $m$ that extends the vector field $(F, G)$ so that its action is defined for all ice boundaries in the real line and for all temperature profiles in $\mathcal{B}$. But first, we need a lemma:\\ \begin{lemma} \label{preimage} If $\epsilon < \frac{1}{L+r}$, then for any fixed $t<1$, and for each $\eta \in \mathbb{R}$ and each $\Phi \in \mathcal{G}_L$ such that $|| \Phi ||_\mathcal{G}<r$, there exists a $\xi \in \mathbb{R}$ such that $$\eta = \xi + \epsilon \cdot t \cdot (\Phi(\xi)(\xi) - T_c)$$ \end{lemma} \begin{proof} We will show that given $\Phi \in \mathcal{G}_L$, there exists a unique $k=k_{\Phi} \in BC^0(\mathbb{R})$ such that for any given $\eta$, $\eta=(\eta + k(\eta)) + \epsilon t (\Phi(\eta+k(\eta))(\eta+k(\eta))-T_c)$.\\ We define a transformation $T$ on $BC^0(\mathbb{R})$ by $$(Tk)(\eta) = \epsilon t [T_c- \Phi(\eta+k(\eta))(\eta+k(\eta))]$$ Clearly, $||Tk|| < \infty$. We will now show that $T$ is a contraction mapping on the Banach space $BC^0(\mathbb{R})$.
Indeed, for any two bounded continuous functions $k_1 \text{ and } k_2$ with sup norm less than $r$, we have:\\ $||(Tk_1) (\eta) - (Tk_2)(\eta)||$ \begin{align} &=\epsilon t |\Phi(\eta+k_1(\eta))(\eta+k_1(\eta))-\Phi(\eta+k_2(\eta))(\eta+k_2(\eta))|\\ &\leq \epsilon t [ |\Phi(\eta+k_1(\eta))(\eta+k_1(\eta))-\Phi(\eta+k_2(\eta))(\eta+k_1(\eta))|\\ &\quad + |\Phi(\eta+k_2(\eta))(\eta+k_1(\eta))- \Phi(\eta+k_2(\eta))(\eta+k_2(\eta))|]\\ &\leq \epsilon t [|| \Phi(\eta+k_1(\eta)) - \Phi(\eta+k_2(\eta))||_{\mathcal{B}}+||\Phi(\eta+k_2(\eta))||_{\mathcal{B}}\cdot |k_1(\eta)-k_2(\eta)| ]\\ &\leq \epsilon t (L+r)|k_1(\eta)-k_2(\eta)|<\rho |k_1(\eta)-k_2(\eta)|, \end{align} with the number $\rho = \epsilon t (L+r)<1$. \\ Therefore, the transformation $T$ is a contraction on a Banach space, and by the Banach Fixed Point Theorem there exists a unique fixed point. Let $k$ be the fixed point, that is, $$k(\eta) = k_{\Phi}(\eta)=\epsilon t (T_c-\Phi(\eta+k(\eta))(\eta+k(\eta))).$$ Therefore, $$\eta+k(\eta)=\eta+\epsilon t (T_c- \Phi(\eta+k(\eta))(\eta+k(\eta))).$$ Letting $\xi=\eta+k(\eta)$ finishes the proof. \end{proof}\\ \begin{definition} {Graph Transform using Budyko's EBM}\\ Given a graph $\Phi \in \mathcal{G}_L$, we define the Budyko Graph Transform $m$ by: \[ m(\Phi)(\eta) = \Phi (\xi) + t \cdot F([\Phi(\xi), \xi]) \] with $\xi$ as in Lemma (\ref{preimage}). \end{definition}\\ To prove the existence of the inertial manifold for Dynamic Iceline Budyko's Model, we first show that $m$ is a contraction on $\mathcal{G}_L \cap B(0,r)$.\\ \begin{lemma}\label{invmfld-B} As denoted above, let $L$ be the Lipschitz bound, $r$ the temperature profile bound, and $B$ the constant from the re-emission term of the DIBM. \\ There exists an $\epsilon = \epsilon (L, r, B)>0$ small, such that for any fixed time step $t$, $0< t < \frac{1}{B+C}$, the map $m$ is a contraction on $\mathcal{G}_L \cap \overline{B(0,r)}$.\\ \end{lemma} \begin{proof} \\ We will show that for any fixed time step $t<\frac{1}{B+C}$, there exists a real number $\rho=\rho(t) < 1$ such that given $\Phi$ and $\Gamma$ in $\mathcal{G}_L \cap \overline{B(0,r)}$, we have $||m(\Phi)-m(\Gamma)|| < \rho ||\Phi - \Gamma ||$.\\ By Lemma (\ref{preimage}), there exist $\xi \text{ and } \zeta$ such that $\eta = \xi + \epsilon \cdot t (\Phi(\xi)(\xi) - T_c)$ and $\eta=\zeta + \epsilon \cdot t (\Gamma(\zeta)(\zeta)-T_c)$. First we compare the ice boundaries $|\xi-\zeta|$ to their temperatures $|\Phi(\xi)(\xi)-\Gamma(\zeta)(\zeta)|$.
\begin{align*} |\xi - \zeta | &\leq \epsilon \cdot t|\Phi(\xi)(\xi)-\Gamma(\zeta)(\zeta)| \\ &\leq \epsilon \cdot t \left[ | \Phi(\xi)(\xi)-\Gamma(\xi)(\xi)| + |\Gamma(\xi)(\xi)-\Gamma(\xi)(\zeta)| + |\Gamma(\xi)(\zeta)-\Gamma(\zeta)(\zeta)| \right] \\ &\leq \epsilon \cdot t ||\Phi - \Gamma || \qquad \text{ by the definition of the norm}\\ &\quad \quad +r \epsilon t |\xi-\zeta | \qquad \text{ since for each } \xi \text{, } \Gamma(\xi) \in \mathcal{B} \text{ and } ||\Gamma || < r\\ & \quad \quad+ L \epsilon t |\xi - \zeta | \qquad \text{ since } \Gamma \in \mathcal{G}_L\\ & \le \epsilon \cdot t ||\Phi - \Gamma || +\epsilon (L+r) |\xi - \zeta | \end{align*} Solving for $|\xi-\zeta|$ we get the inequality: \begin{equation} \label{eta-phi-estimate} |\xi-\zeta| \leq \frac{t \cdot \epsilon}{1-( L+r) \epsilon } ||\Phi-\Gamma|| \end{equation} Let \begin{equation} \label{epsilon} \epsilon \le \frac{B}{ 2( Lr+L+r)} \end{equation} and define:\\ \begin{equation} \label{delta1} \delta_1 := \frac{L \cdot \epsilon}{1-(L+r)\epsilon} \text{ and } \end{equation} \begin{equation} \label{delta2} \delta_2 := \frac{L \cdot r \cdot \epsilon}{1-(L+r)\epsilon} \end{equation} Then \begin{equation} \label{delta12B} \delta_1 < \delta_2 < \frac{B}{2} .\end{equation} We now estimate the graph transform map $m$:\\ $|m(\Phi)(\eta) (y)- m(\Gamma)(\eta)(y)|$ \begin{align*} &=|\Phi (\xi)(y) + t \cdot [Q\cdot s(y) \cdot (1-\alpha(\xi)(y)) -(B+C) \Phi (\xi) (y) + C \overline{\Phi(\xi)} - A] \\ & - \Gamma(\xi)(y) -t \cdot [Q\cdot s(y) \cdot (1-\alpha(\xi)(y)) -(B+C) \Gamma (\xi) (y) + C \overline{\Gamma(\xi)} - A] \\ & + \Gamma(\xi)(y) +t \cdot [Q\cdot s(y) \cdot (1-\alpha(\xi)(y)) -(B+C) \Gamma (\xi) (y) + C \overline{\Gamma(\xi)} - A] \\ & - \Gamma(\zeta)(y) -t \cdot [Q\cdot s(y) \cdot (1-\alpha(\zeta)(y)) -(B+C) \Gamma (\zeta) (y) + C \overline{\Gamma(\zeta)} - A] |\\ & \quad \\ \end{align*} Since $t<\frac{1}{B+C}$, and therefore $1-t(B+C)>0$, the estimate above continues as \begin{align*} & \leq [1-t(B+C)]|\Phi(\xi)(y) - \Gamma(\xi)(y)| + t \cdot C |\overline{\Phi(\xi)} - \overline{\Gamma(\xi)}|\\ & +[1-t(B+C)] |\Gamma(\xi)(y)-\Gamma(\zeta)(y)| + t \cdot C |\overline{\Gamma(\xi)} - \overline{\Gamma(\zeta)}|\\ & +|Q \cdot s(y) \cdot (\alpha(\xi)(y)-\alpha(\zeta)(y))|\\ &\quad \\ \end{align*} Using inequality (\ref{eta-phi-estimate}) we estimate the third through fifth terms of the above inequality: \begin{align*} & \le [1-t(B+C) + tC] || \Phi - \Gamma || \\ & + [1-t(B+C)](t\delta_1) || \Phi - \Gamma || + (tC)(t\delta_1) || \Phi - \Gamma || \\ & + t\delta_2 || \Phi - \Gamma || & \quad \\ & \le \left(1-tB+t[(1-tB)\delta_1+\delta_2 ] \right) || \Phi - \Gamma || \\ \end{align*} By the choice of $\epsilon$ in (\ref{epsilon}) above, we get inequality (\ref{delta12B}), and so the estimate above continues as: \begin{align*} & < \left( (1-tB) + t(\delta_2+\delta_2) \right) || \Phi -\Gamma ||\\ & \le 1 \cdot ||\Phi -\Gamma ||\\ \end{align*} Therefore, for any fixed $t$, $0<t< \frac{1}{B+C}$, if $\rho := \left( (1-tB) + t(\delta_2+\delta_2 ) \right)$, then $0<\rho<1$, and we have shown that: $$||m(\Phi)-m(\Gamma)|| < \rho ||\Phi - \Gamma ||.$$ That is, the map $m$ is a contraction. \end{proof}\\ We now finish the proof of the existence of the invariant manifold (Theorem \ref{invariant-epsilon}):\\ \begin{proof} For $\epsilon$ as in the previous lemma, we showed that $$m:\mathcal{G}_L \cap B(0,r) \rightarrow \mathcal{G}_L \cap B(0,r)$$ is a contraction on a closed subset of a Banach space.
Therefore, there exists a unique fixed point $\Phi^*$ such that $m(\Phi^*) = \Phi^*$. \end{proof}\\ \begin{corollary} The invariant manifold $\Phi^*$ is within $O(\epsilon)$ of the equilibrium set $T^*$.\\ \end{corollary} \begin{proof} Since $m(\Phi^*)=\Phi^*$, the following holds: \begin{align*} \Phi^*(\eta) &= \Phi^*(\xi)+t F(\Phi^*(\xi)-T^*(\xi)+T^*(\xi), \xi) \end{align*} where $\xi = \eta+k_{\Phi^{*}}(\eta)$.\\ Using the fact that $F(T^*(\xi),\xi)=0$ and the reverse triangle inequality, we find: \begin{align*} || \Phi^*(\eta)-\Phi^*(\xi)||_{\mathcal{B}} &= || t(B+C)\cdot [ \Phi^{*}(\xi) -T^*(\xi)] - tC (\overline{\Phi^*(\xi)}-\overline{T^*(\xi)})||_{\mathcal{B}} \\ &\ge \left| t(B+C)\cdot || \Phi^{*}(\xi) -T^*(\xi)||_{\mathcal{B}}- tC || \overline{\Phi^*(\xi)}-\overline{T^*(\xi)}||_{\mathcal{B}} \right| \\ & \ge tB || \Phi^*(\xi) - T^*(\xi)||_{\mathcal{B}} \end{align*} In the last step of the above estimate, we used the fact that $ \left| \overline{\Phi^*(\xi)}-\overline{T^*(\xi)} \right| \le || \Phi^*(\xi)-T^*(\xi)||_{\mathcal{B}}$.\\ Therefore, we arrive at the following estimate: \begin{align*} ||\Phi^*(\xi)-T^*(\xi)||& \le \frac{1}{tB} ||\Phi^*(\eta)-\Phi^*(\eta + k_{\Phi^*} (\eta))||\\ &\le \frac{1}{B} \cdot |\Phi^*(\eta + k_{\Phi^*}(\eta))(\eta+ k_{\Phi^*}(\eta))-T_c|\\ &\le \frac {\epsilon r}{B} \end{align*} Recall that here $r$ is the bound on the temperature profiles. \end{proof} \begin{corollary} For $\epsilon$ small, the ice free earth is unstable. \end{corollary} \begin{proof} Since $||\Phi^*-T^*||<\frac{\epsilon \cdot r}{B}$, we have $|\Phi^*(\eta)(\eta)-T^*(\eta)(\eta)|<\frac{\epsilon \cdot r}{B}$ as well. So if $\epsilon<\frac{B}{r}[T_c-T^*(1)(1) ]$, then $\Phi^*(1)(1)<T_c$, and so ice will form at the pole and advance toward the small ice cap equilibrium. See Figure \ref{Tetaeta} for the graph of the iceline local equilibrium temperature $T^*(\eta)(\eta)$. \\ Therefore, if $\eta(0)=1$, then as $t \rightarrow \infty$ the ice line $\eta(t)$ advances equatorward, and the planet evolves toward a non ice free earth. \end{proof} \section{Concluding Discussion} \subsection{Future Directions} There are several directions that one can explore based on this model. One immediate improvement is to compute the invariant manifold explicitly, as done by Foias, Sell and Titi \cite{fst}. Another immediate improvement on this model is an extension to the southern hemisphere with two ice lines and a nonsymmetric transport. Another direction is to explore how changes in the greenhouse gas components, which enter the re-emission term $A+BT(y)$, affect the radiative forcing. A work by Andrew Hogg \cite{hogg} relates the evolution of temperature with that of carbon dioxide as a response to the solar input variations caused by the Milankovitch cycles. We are interested in the possibility of coupling Budyko's model with Hogg's model to understand the glacial cycles in the Quaternary period. While North explores a similar model with only a diffusive transport \cite{n75}, \cite{n79}, \cite{n84}, the model discussed in this paper could be improved by including some diffusion and averaging in the transport term. Such an inclusion necessitates consideration of the planet's heat capacity and a further explanation of the parameter $\epsilon$.\\ \subsection{Conclusion} We have shown in this paper the existence of a center stable manifold in an energy balance model with ice albedo feedback, featuring a dynamic iceline.
The existence of such an invariant manifold explains the numerical experiments presented as animations and allows for a qualitative analysis of the small icecap stability. \newpage
\section{Introduction} A pair $(M,L)$ of a compact complex manifold $M$ and a positive line bundle $L$ over $M$ is called a polarized manifold. Here a positive line bundle means a holomorphic line bundle $L$ such that its first Chern class $c_1(L)$ is represented, as a de Rham class, by a positive closed $(1,1)$-form. Therefore we can find a closed 2-form $\omega$ of the form \begin{equation} \omega = \frac i{2\pi} \sum_{i,j=1}^m\,g_{i{\overline j}}\, dz^i \wedge d\bar z^j \end{equation} with $g = (g_{i{\overline j}})$ being pointwise a positive definite Hermitian matrix, and $z^1, \cdots, z^m$ local holomorphic coordinates. Then $g$ defines a Hermitian metric on $M$, and $\omega$ is regarded as its fundamental 2-form. Since $\omega$ is closed, $g$ becomes a K\"ahler metric. Hence, for a polarized manifold $(M,L)$, $c_1(L)$ is regarded as a K\"ahler class. We seek a constant scalar curvature K\"ahler (cscK) metric with its K\"ahler form in $c_1(L)$. There are known obstructions related to holomorphic vector fields. One is the reductiveness of the Lie algebra $\mathfrak h(M)$ of all holomorphic vector fields on $M$ (\cite{Lic}, \cite{matsushima57}), and the other is a certain Lie algebra character $f : \mathfrak h(M) \to {\mathbb C}$ (\cite{futaki83.1}, \cite{calabi85}). Besides these, there are obstructions related to GIT stability. A well-known conjecture due to Yau, Tian, and Donaldson says that the existence of constant scalar curvature metrics in $c_1(L)$ should be equivalent to K-(poly)stability (\cite{donaldson02}). K-stability is defined using the so-called {\it DF-invariant} as a numerical invariant for the Hilbert-Mumford criterion; see Definition \ref{DF}. At the moment of this writing, it has been proved that the existence implies K-stability (\cite{chentian04}, \cite{donaldson05}, \cite{stoppa0803}, \cite{mabuchi0910}), but it is still open whether K-stability implies the existence. Therefore K-stability is at least an obstruction to the existence. But there is another stability condition which is an obstruction to the existence of cscK metrics when the automorphism group $\mathrm{Aut}(M,L)$ is discrete. Here $\mathrm{Aut}(M,L)$ is the subgroup of the automorphism group $\mathrm{Aut}(L)$ of $L$ consisting of all automorphisms of $L$ commuting with the ${\mathbb C}^{\ast}$-action on the fibers. Notice that such automorphisms descend to automorphisms of $M$. Therefore $\mathrm{Aut}(M,L)$ is naturally identified with a subgroup of the automorphism group $\mathrm{Aut}(M)$ of $M$. From now on we regard $\mathrm{Aut}(M,L)$ as a subgroup of $\mathrm{Aut}(M)$ in this way, and also the Lie algebra $\frak h_0$ of $\mathrm{Aut}(M,L)$ as a Lie subalgebra of the Lie algebra $\mathfrak h(M)$ of $\mathrm{Aut}(M)$. The following result due to Donaldson shows that asymptotic Chow stability is in fact an obstruction to the existence of cscK metrics. \begin{theorem}[Donaldson \cite{donaldson01}]\label{Donaldson} Let $(M,L)$ be a polarized manifold with $\mathrm{Aut}(M,L)$ discrete. Suppose there exists a cscK metric in $c_1(L)$. Then $(M,L)$ is asymptotically Chow stable. \end{theorem} Note that if $(M,L^k)$ is Chow stable then there exists a ``balanced metric'' for $L^k$. Donaldson further proved in the same paper \cite{donaldson01} that as $k \to \infty$, the balanced metrics converge to the cscK metric (assuming the existence of a cscK metric).
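To fix ideas, we recall a standard example (included here for illustration; it is not taken from the works cited above). The pair $(\mathbb{CP}^m, \mathcal O(1))$ is a polarized manifold: on an affine chart with coordinates $z^1, \cdots, z^m$, the Fubini-Study form \begin{equation*} \omega_{FS} = \frac i{2\pi}\, \partial \bar{\partial} \log \Bigl( 1 + \sum_{i=1}^m |z^i|^2 \Bigr) \end{equation*} is a positive closed $(1,1)$-form representing $c_1(\mathcal O(1))$, and the associated K\"ahler metric is K\"ahler-Einstein, in particular cscK.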
Because of Donaldson's convergence result, we might expect to use the convergence of the balanced metrics as one step in a proof that stability implies existence. But the claim of this talk is that Donaldson's theorem does not hold if $\mathrm{Aut}(M,L)$ is not discrete. In fact we explain the following result. \begin{theorem}[Ono-Sano-Yotsutani \cite{OSY09}]\label{OSY} There is a toric Fano 7-manifold (suggested by Nill and Paffenholz in \cite{NillPaffen}) which is K\"ahler-Einstein but not asymptotically Chow-semistable (polystable). \end{theorem} This result relies on our earlier works \cite{futaki04-1} and \cite{FOS08}. The following result of Della Vedova and Zuddas, which is also related to our work \cite{futaki04-1}, claims that there are two dimensional examples. \begin{theorem}[Della Vedova-Zuddas \cite{DVZ10}]\label{DVZ} There are constant scalar curvature K\"ahler surfaces which admit an asymptotically Chow unstable polarization. \end{theorem} The following result of Odaka uses a formula for the DF-invariant for blow-ups along flag ideals due to Wang \cite{Xiaowei08} and Odaka \cite{Odaka09}. \begin{theorem}[Odaka \cite{Odaka10}]\label{Odaka} There are examples of K-stable polarized orbifolds which are asymptotically Chow unstable. In fact, these examples are K\"ahler-Einstein orbifolds with finite automorphisms. Hence Donaldson's theorem does not hold for orbifolds. \end{theorem} Note that there is an argument, without using balanced metrics, to show that cscK metrics minimize the K-energy when the automorphism group is not discrete; see Li \cite{LiChi10}. \section{ What is (asymptotic) Chow stability ?} Let $V_k := H^0(M,\mathcal O(L^k))^*$ be the dual of the vector space of all holomorphic sections of $L^k$, $M_k \subset {\mathbb P}(V_k)$ the image of the Kodaira embedding by $L^k$, and $d_k$ the degree of $M_k$ in ${\mathbb P}(V_k)$. Denote by $m$ the dimension of $M$: $m = \dim_{{\mathbb C}} M$. An element of ${\mathbb P}(V_k^*) \times \cdots \times {\mathbb P}(V_k^*)$ ($m+1$ times) defines $m+1$ hyperplanes $H_1,\,\cdots\, ,H_{m+1}$ in ${\mathbb P}(V_k)$. Then the set $$\{(H_1, \cdots , H_{m+1}) \in {\mathbb P}(V_k^*) \times \cdots \times {\mathbb P}(V_k^*)\ |\ H_1 \cap \cdots \cap H_{m+1} \cap M_k \ne \emptyset\}$$ becomes a divisor in ${\mathbb P}(V_k^*) \times \cdots \times {\mathbb P}(V_k^*)$, and this divisor is defined by a polynomial $${\hat M}_k \in (\mathrm{Sym}^{d_k}(V_k))^{\otimes (m+1)},$$ called the {\bf Chow form}. Consider the $SL(V_k)$-action on $(\mathrm{Sym}^{d_k}(V_k))^{\otimes (m+1)}$. The stabilizer of ${\hat M}_k$ under the $SL(V_k)$-action is $\mathrm{Aut}(M,L)$. In Theorem \ref{Donaldson} by Donaldson, ``$\mathrm{Aut}(M,L)$ is discrete'' means ``the stabilizer is finite''. \begin{definition}\label{Chow} Let $(M,L)$ be a polarized manifold. \begin{enumerate} \item[1] $M$ is said to be Chow polystable w.r.t. $L^k$ if the orbit of ${\hat M}_k$ in\\ $(\mathrm{Sym}^{d_k}(V_k))^{\otimes (m+1)}$ under the action of $\mathrm{SL}(V_k)$ is closed. \item[2] $M$ is Chow stable w.r.t. $L^k$ if $M$ is polystable and the stabilizer at ${\hat M}_k$ of the action of $\mathrm{SL}(V_k)$ is finite. \item[3] $M$ is Chow semistable w.r.t. $L^k$ if the closure of the orbit of ${\hat M}_k$ in\\ $(\mathrm{Sym}^{d_k}(V_k))^{\otimes (m+1)}$ under the action of $\mathrm{SL}(V_k)$ does not contain the origin\\ ${\bf 0} \in (\mathrm{Sym}^{d_k}(V_k))^{\otimes (m+1)}$. \item[4] $M$ is asymptotically Chow polystable (resp. stable or semistable) w.r.t.
$L$ if there exists a $k_0 > 0$ such that $M$ is Chow polystable (resp. stable or semistable) w.r.t.\ $L^k$ for all $k \ge k_0$. \end{enumerate} \end{definition} In the case when $\mathrm{Aut}(M,L)$ is not discrete, Mabuchi tried to extend Theorem \ref{Donaldson} by Donaldson. He first showed that in this case there is an obstruction to asymptotic Chow semistability: \begin{theorem}[Mabuchi \cite{mabuchi-a}]\label{mabuchi-a} Let $(M,L)$ be a polarized manifold. If $\mathrm{Aut}(M,L)$ is not discrete then there is an obstruction to asymptotic Chow semistability. \end{theorem} This obstruction is expressed in the paper \cite{futaki04-1} as a series of integral invariants, which are explained in the next section. Mabuchi then proved the following result. \begin{theorem}[Mabuchi \cite{mabuchi-c}]\label{mabuchi-c} Let $(M,L)$ be a polarized manifold, and suppose $\mathrm{Aut}(M,L)$ is not discrete. If there exists a constant scalar curvature K\"ahler metric in $c_1(L)$ and if the obstruction in Theorem \ref{mabuchi-a} vanishes then $(M,L)$ is asymptotically Chow polystable. \end{theorem} \section{Obstructions to asymptotic Chow semistability} The Lie algebra $\frak h_0$ of $\mathrm{Aut}(M,L)$ can be expressed in various ways. Recall that $\frak h(M)$ is the Lie algebra of all holomorphic vector fields on $M$, which is the Lie algebra of $\mathrm{Aut}(M)$. First of all, it can be expressed as $$ \mathfrak h_0 = \{X \in \frak h(M)\ |\ \mathrm{zero}(X) \ne \emptyset\}. $$ Secondly, it can also be expressed as $$ \mathfrak h_0 = \{X \in \frak h(M)\ |\ \exists u \in C^{\infty}(M)\otimes{\mathbb C} \ \mathrm{s.t.}\ X = \mathrm{grad}'u = g^{i{\overline j}}\frac{\partial u}{\partial {\overline z}^j}\frac{\partial}{\partial z^i}\}.$$ Or we may say that $\mathrm{Aut}(M,L)$ is the linear algebraic part of $\mathrm{Aut}(M)$. Mabuchi's obstruction to asymptotic Chow semistability can be re-stated in terms of integral invariants $\mathcal F_{\mathrm{Td^i}}$, which are explained below, as follows. \begin{theorem}[\cite{futaki04-1}]\label{Futaki04} Let $(M,L)$ be a polarized manifold with $\dim_{{\mathbb C}} M = m$. \begin{enumerate} \item[(a)]\ \ The vanishing of Mabuchi's obstruction is equivalent to the vanishing of the Lie algebra characters $\mathcal F_{\mathrm{Td^i}} : \mathfrak h_0 \to {\mathbb C}$, for $i = 1, \cdots, m.$ \item[(b)]\ \ $\mathcal F_{\mathrm{Td^1}}$ is an obstruction to the existence of a constant scalar curvature K\"ahler metric in $c_1(L)$, which is sometimes called the classical Futaki invariant. \end{enumerate} \end{theorem} The Lie algebra characters $\mathcal F_{\mathrm{Td^i}}$ are defined as follows. For $X \in \mathfrak h_0$ we have $$ i(X) \omega = - \bar{\partial}\,u_X. $$ Assume the normalization \begin{equation}\label{normalization} \int_M u_X\ \omega^m = 0. \end{equation} Choose a type $(1,0)$-connection $\nabla$ in $T'M$. Put $$ L(X) = \nabla_X - L_X \in \Gamma(\mathrm{End}(T'M))$$ and let $$ \Theta \in \Gamma(\Omega^{1,1}(M)\otimes\mathrm{End}(T'M))$$ be the $(1,1)$-part of the curvature form of $\nabla$. \begin{definition}\label{character}\ \ For $\phi \in I^p(GL(m,{\mathbb C}))$, we define \begin{eqnarray*} {\mathcal F}_{\phi}(X) &=& (m-p+1) \int_M \phi(\Theta) \wedge u_X\,\omega^{m-p} \\ & & + \int_M \phi(L(X) + \Theta) \wedge \omega^{m-p+1}.\nonumber \end{eqnarray*} \end{definition} Notice that ${\mathcal F}_{\phi}(X)$ is linear in $X$. One can show that ${\mathcal F}_{\phi}$ is independent of the choices of $\omega$ and $\nabla$,
from which it follows that ${\mathcal F}_{\phi}$ is invariant under the adjoint action of $\mathrm{Aut}(M)$. In particular ${\mathcal F}_{\phi}$ is a Lie algebra character. \begin{proof}[Outline of the proof of Theorem \ref{Futaki04}]\ \ To show (a), suppose we have a ${\mathbb C}^\ast$-action on $M$. Asymptotic Chow semistability implies that there is a lift of the ${\mathbb C}^\ast$-action to $L$ such that it induces an $SL(H^0(L^k))$-action for all $k$. So, the weight $w_k$ of the action on $H^0(L^k)$ is zero for all $k$. But $w_k$ can be expressed using the equivariant index formula. The coefficient of $k^j$ is ${\mathcal F}_{\mathrm{Td^j}}(X)$ where $X$ is the infinitesimal generator of the ${\mathbb C}^{\ast}$-action. To show (b), recall that the first Todd class $\mathrm{Td^1}$ is equal to $\frac12 c_1$. Thus it corresponds to one half of the trace. Hence the second term of ${\mathcal F}_{\mathrm{Td^1}}(X)$ in Definition \ref{character} is one half of the integral of the divergence of $X$, which of course vanishes by the divergence theorem. Hence we have $$ {\mathcal F}_{\mathrm{Td^1}}(X) = \frac m2 \int_M u_X c_1 \wedge \omega^{m-1} $$ where $c_1$ denotes the first Chern form, or the Ricci form. Since $m c_1 \wedge \omega^{m-1} = S \omega^m$ where $S$ is the scalar curvature, the last integral becomes zero if $S$ is constant because of the normalization (\ref{normalization}). This completes the outline of the proof of Theorem \ref{Futaki04}. See \cite{futaki04-1} or \cite{FOS08} for the details of the proof. \end{proof} Now we have natural questions: Question (a)\ \ In Theorem \ref{mabuchi-c}, can we omit the assumption of the vanishing of the obstruction? That is to say, if there exists a constant scalar curvature K\"ahler metric in $c_1(L)$, does the obstruction necessarily vanish? Question (b)\ \ In Theorem \ref{Futaki04}, if $\mathcal F_{\mathrm{Td^1}} = 0$, do we then have $\mathcal F_{\mathrm{Td^2}}= \cdots = \mathcal F_{\mathrm{Td^m}} = 0$? \bigskip In \cite{FOS08} we studied the characters $\mathcal F_{\mathrm{Td^i}}$ in terms of Hilbert series for toric Fano manifolds. We showed that the linear span of $\mathcal F_{\mathrm{Td^1}}, \cdots, \mathcal F_{\mathrm{Td^m}}$ coincides with the linear span of the characters obtained as derivatives of the Hilbert series. Note that the derivatives of the Hilbert series are computed by inputting toric data into a computer. We saw that, up to dimension three among toric Fano manifolds, there are no counterexamples to Question (b). But later a seven-dimensional example of Nill and Paffenholz \cite{NillPaffen} appeared, and Ono, Sano and Yotsutani \cite{OSY09} checked that this example shows that the answers to Questions (a) and (b) are both negative. Now we turn to the Hilbert series. \section{Hilbert series} Let $M$ be a toric Fano manifold with $\dim_{\mathbb C} M = m$. We take $L = K_M^{-1}$. Then $L$ is a very ample line bundle. Since $M$ is toric, the real $m$-dimensional torus $T^m$ acts on $M$ effectively. Since we have a natural $S^1$-action on $K_M^{-1}$, the real $(m+1)$-dimensional torus $T^{m+1}$ acts on $K^{-1}_M$ effectively so that $K_M^{-1}$ is also toric. For $g \in T^{m+1}$, we put $$ L(g) := \sum_{k=0}^\infty \mathrm{Tr}(g|_{H^0(M,K_M^{-k})}).$$ Because of the Kodaira vanishing theorem we may regard $L(g)$ as a formal sum of the Lefschetz numbers. We may analytically continue $L(g)$ to the algebraic torus $T_{{\mathbb C}}^{m+1}$, and write it as $L({\bf x})$ for an element ${\bf x} \in T_{{\mathbb C}}^{m+1}$.
Let $\{v_j \in {\mathbb Z}^m\}_j$ be the generators of the fan of $M$. Then the moment polytope of $M$ can be expressed as $$P^{\ast} := \{w \in {\mathbb R}^m | v_j\cdot w \ge -1, \forall j\}.$$ Let $$C^{\ast} \subset {\mathbb R}^{m+1} (= \mathrm{Lie}(T^{m+1}))^{\ast}$$ be the cone over $P^{\ast}$. The integral points in $C^{\ast}$ at height $k$ correspond bijectively to the elements of a natural basis of $H^0(M,K_M^{-k})$, for every $k$. For ${\bf x} \in T_{{\mathbb C}}^{m+1}$ and ${\bf a} = (w,k) \in {\mathbb Z}^{m+1} \cap C^{\ast}$, we put $$ {\bf x}^{{\bf a}} = x_1^{a_1} \cdots x_{m+1}^{a_{m+1}}.$$ \begin{definition}\ \ The Hilbert series $\mathcal C({\bf x}, C^{\ast})$ is defined by $$\mathcal C({\bf x}, C^{\ast}) := \sum_{{\bf a} \in C^{\ast}\cap {\mathbb Z}^{m+1}} {\bf x}^{{\bf a}}.$$ \end{definition} The following fact is nontrivial, but is well-known in combinatorics. \begin{fact}\label{rational}\ \ $\mathcal C({\bf x}, C^{\ast})$ is a rational function of ${\bf x}$. \end{fact} It is easy to show the following lemma. \begin{lemma}\ \ $\mathcal C({\bf x}, C^{\ast}) = L({\bf x})$. \end{lemma} For ${\bf b} \in {\mathbb R}^{m+1} \cong \mathfrak g = \mathrm{Lie}(T^{m+1})$, put $$e^{-t{\bf b}} := (e^{-b_1t},\cdots, e^{-b_{m+1} t}).$$ Then we have $$\mathcal C(e^{-t{\bf b}}, C^{\ast}) = \sum_{{\bf a} \in C^{\ast}\cap {\mathbb Z}^{m+1}} e^{-t{\bf a}\cdot{\bf b}}.$$ This is a rational function in $t$ by Fact \ref{rational}. Let $P$ be the dual polytope of $P^{\ast}$, and put $$C_{R} := \{(b_1, \cdots, b_m, m+1) | (b_1, \cdots, b_m) \in (m+1)P\} \subset \mathfrak g.$$ An intrinsic meaning of $C_{R}$ can be explained as follows. The unit circle bundle associated with $K_M$ is considered as a Sasaki manifold with the regular Reeb vector field. But the Reeb vector field can be deformed in $\mathfrak g$. The subset $C_{R}$ consists of those Reeb vector fields which are critical points of the volume functional when the variation is taken to be a constant multiple of the Reeb vector field itself (see \cite{MSY2}). In other words, $C_{R}$ is a natural deformation space of the Reeb vector fields of the toric Sasaki manifold. Put ${\bf b} = (0,\cdots,0,m+1)$. \begin{theorem}[\cite{FOS08}]\ \ As ${\bf c}$ runs over ${\mathbb R}^{m+1}$, the coefficients of the Laurent series in $t$ of the rational function $\frac d{ds}|_{s=0}\mathcal C(e^{-t({\bf b} + s{\bf c})}, C^{\ast})$ span the linear space spanned by $\mathcal F_{\mathrm{Td}^1}, \cdots,\mathcal F_{\mathrm{Td}^m}$. \end{theorem} \noindent This theorem is a generalization of a result of Martelli, Sparks and Yau \cite{MSY2}, which says that the classical Futaki invariant is obtained as a derivative of the Hilbert series. Our computations show that Question (b) is closely related to a question raised by Batyrev and Selivanova: Is a toric Fano manifold with vanishing $f (= \mathcal F_{\mathrm{Td}^1})$ for the anticanonical class necessarily symmetric? Recall that a toric Fano manifold $M$ is said to be symmetric if the trivial character is the only fixed point of the action of the Weyl group on the space of all algebraic characters of the maximal torus in $\mathrm{Aut}(M)$. The question of Batyrev and Selivanova is natural because it is proved by Batyrev and Selivanova \cite{batyrev-selivanova} that if a toric Fano manifold is symmetric then there exists a K\"ahler-Einstein metric. Later Wang and Zhu \cite{Wang-Zhu} proved that a toric Fano manifold admits a K\"ahler-Einstein metric if and only if $f (= \mathcal F_{\mathrm{Td}^1})$ vanishes.
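As a toy illustration of such computer checks (ours, not taken from \cite{FOS08}), the following sketch differentiates a truncated Hilbert series for $M = {\mathbb C}{\mathbb P}^1$, where the cone over $P^{\ast} = [-1,1]$ consists of the lattice points $(w,k)$ with $|w| \le k$. Since ${\mathbb C}{\mathbb P}^1$ is symmetric, the derivative in a torus direction vanishes identically, consistent with the vanishing of the characters $\mathcal F_{\mathrm{Td}^i}$; genuine checks, such as the seven-dimensional example of \cite{OSY09}, instead work with the closed rational form of the series.

\begin{verbatim}
# Toy check (not from [FOS08]): for M = CP^1, L = K^{-1}, the cone C* over
# P* = [-1,1] consists of the lattice points (w,k) with |w| <= k.  Deform the
# Reeb direction b = (0, m+1) = (0,2) by s*c with c = (1,0) a torus direction
# and differentiate the (truncated) Hilbert series at s = 0.
import sympy as sp

t, s = sp.symbols('t s')
K = 12  # truncation height; the vanishing holds at every height
C_trunc = sum(sp.exp(-t*(2*k + s*w))   # x^a = exp(-t <a, b + s c>)
              for k in range(K + 1) for w in range(-k, k + 1))
dC = sp.diff(C_trunc, s).subs(s, 0)
print(sp.simplify(dC))  # prints 0: CP^1 is symmetric, the characters vanish
\end{verbatim}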
Nill and Paffenholz \cite{NillPaffen} gave a counterexample to the question of Batyrev-Selivanova. Namely they gave an example of a non-symmetric seven-dimensional toric K\"ahler-Einstein Fano manifold, on which in particular $\mathcal F_{\mathrm{Td}^1}=0$. Ono, Sano and Yotsutani showed that, in this example, the other $\mathcal F_{\mathrm{Td}^i}$ are non-zero and mutually proportional. \section{Higher integral invariants and higher CM lines} The invariant $\mathcal F_{\mathrm{Td}^1}$ is considered as the Mumford weight of the CM line $\lambda_{CM}$ on the Hilbert scheme $\mathcal H$ of subschemes of ${\mathbb P}^N$ with Hilbert polynomial $\chi$ as shown by Paul and Tian \cite{paultianCM1}, \cite{paultianCM2}. Recently Della Vedova and Zuddas showed that the same is true for the higher $\mathcal F_{\mathrm{Td}^i}$. This section is based on their paper \cite{DVZ10}. Let $(M,L)$ be an $m$-dimensional polarized variety or scheme. For a one parameter subgroup $\rho : {\mathbb C}^{\ast} \to \mathrm{Aut}(M,L)$ with a lifting to an action $\tilde\rho : {\mathbb C}^{\ast} \to \mathrm{Aut}(L)$ on $L$ we denote by $w(M,L)$ the weight of the induced action on the determinant line $\otimes_{i=0}^m (\det H^i(M,L))^{(-1)^i}$, and by $\chi(M,L)$ the Euler-Poincar\'e characteristic $\sum_{i=0}^m (-1)^i \dim H^i(M,L)$. Of course if we replace $L$ by a sufficiently high power we may assume $H^i(M,L) = 0$ for $i > 0$. It is known by the general theory that we have polynomial expansions \begin{equation}\label{chi} \chi(M,L^k) = a_0(M,L)k^m + a_1(M,L)k^{m-1} + \cdots + a_m(M,L), \end{equation} \begin{equation}\label{weight} w(M,L^k) = b_0(M,L)k^{m+1} + b_1(M,L)k^m + \cdots + b_{m+1}(M,L). \end{equation} We define the Chow weight $ \mathrm{Chow}(M,L^k)$ of $(M,L^k)$ by $$ \mathrm{Chow}(M,L^k) = \frac{w(M,L^k)}{k\chi(M,L^k)} - \frac{b_0(M,L)}{a_0(M,L)} .$$ One easily gets \begin{eqnarray*} \mathrm{Chow}(M,L^k) &=& \frac{b_{m+1}(M,L)}{k\chi(M,L^k)} \\ && + \frac{a_0(M,L)}{k\chi(M,L^k)} \sum_{\ell=1}^m \frac{a_0(M,L)b_\ell(M,L)-b_0(M,L)a_\ell(M,L)}{a_0(M,L)^2} k^{m+1- \ell}. \end{eqnarray*} The first term $b_{m+1}$ is known to vanish in the smooth case, see \cite{futaki88}. We then define $F_\ell(M,L)$ by $$ F_\ell(M,L) = \frac {a_0(M,L)b_\ell(M,L) - b_0(M,L)a_\ell(M,L)}{a_0(M,L)^2}. $$ If $M$ is smooth, $\chi(M,L)$ is expressed using Todd classes and $c_1(L)$ by the Riemann-Roch theorem, and $w(M,L)$ is expressed using Todd classes, $c_1(L)$, and connections in the tangent bundle of $M$ and in $L$ together with the infinitesimal action of $X$. The connection term in $L$ makes its appearance as the Hamiltonian function $u_X$ in Definition \ref{character} of $\mathcal F_\phi(X)$. Hence the terms $a_i(M,L)$ and $b_j(M,L)$ are written in terms of those classes and connections. Della Vedova and Zuddas show that $F_\ell(M,L)$ is independent of the choice of a lifting $\tilde\rho : {\mathbb C}^{\ast} \to \mathrm{Aut}(L)$ of $\rho$ and that \begin{equation}\label{DVZformula} F_\ell(M,L) = \frac{1}{vol(M, L)}\mathcal F_{\mathrm{Td}^\ell}(X) \end{equation} when $M$ is smooth and $X$ is the infinitesimal generator of the action $\rho : {\mathbb C}^{\ast} \to \mathrm{Aut}(M,L)$. We give here the case $\ell = 1$; refer to \cite{DVZ10} for general $\ell$. \begin{lemma}[\cite{donaldson02}]\ \ If $M$ is a nonsingular projective variety then $$ F_1(M,L) = \frac{1}{vol(M, L)}\mathcal F_{\mathrm{Td}^1}(X) $$ where $X$ is the infinitesimal generator of the ${\mathbb C}^{\ast}$-action. \end{lemma} \begin{proof}\ \ Let us denote by $m$ the complex dimension of $M$.
Expand $h^0(L^k)$ and $w(k)$ as $$ h^0(L^k) = a_0k^m + a_1k^{m-1}+ \cdots,$$ $$ w(k) = b_0k^{m+1} + b_1k^m + \cdots.$$ Then by the Riemann-Roch and the equivariant Riemann-Roch formulae $$ a_0 = \frac 1{m!}\int_M c_1(L)^m = vol(M), $$ $$ a_1 = \frac 1{2(m-1)!} \int_M \rho \wedge c_1(L)^{m-1} = \frac 1{2m!} \int_M \sigma \omega^m, $$ $$ b_0 = \frac 1{(m+1)!}\int_M (m+1) u_X \omega^m, $$ $$ b_1 = \frac 1{m!} \int_M m u_X \omega^{m-1} \wedge \frac 12 c_1(M) + \frac 1{m!} \int_M \operatorname{div}X\ \omega^m ,$$ where $\rho$ denotes the Ricci form and $\sigma$ the scalar curvature. The last term in the expression for $b_1$ vanishes by the divergence theorem. Thus $$ \frac {w(k)}{kh^0(k)} = \frac{b_0}{a_0}(1 + (\frac{b_1}{b_0} - \frac{a_1}{a_0})k^{-1} + \cdots ) $$ from which we have \begin{eqnarray*} F_1(M,L) &=& \frac{b_0}{a_0}(\frac{b_1}{b_0} - \frac{a_1}{a_0}) = \frac 1{a_0^2}(a_0b_1 - a_1b_0)\\ &=& \frac 1{2vol(M, L)}\int_M u_X(\sigma - \frac 1{vol(M, L)}\int_M \sigma \frac{\omega^m}{m!})\frac{\omega^m}{m!}\\ &=& \frac{1}{vol(M, L)}\mathcal F_{\mathrm{Td}^1}(X) \end{eqnarray*} \end{proof} \begin{definition}\label{DF} Let $(M,L)$ be a polarized scheme. We call $F_1(M,L)$ the DF-invariant of $(M,L)$. \end{definition} The DF-invariant $F_1(M,L)$ is used as a numerical invariant to define K-stability, see the next section for the details. The idea is similar to the following Hilbert-Mumford criterion for Chow stability. Let $f : \mathcal U \to \mathcal H$ be the universal flat family over the Hilbert scheme $\mathcal H$ of subschemes of ${\mathbb P}^N$ with Hilbert polynomial $\chi$, and $\iota : \mathcal U \to \mathcal H \times {\mathbb C}{\mathbb P}^N$ be the natural embedding. Then we have $f = \mathrm{pr}_{\mathcal H}\circ \iota$. Let $\mathcal L = \iota^\ast \mathrm{pr}_{{\mathbb C}{\mathbb P}^N}^{\ast} \mathcal O(1)$ be the relatively ample line bundle over $\mathcal U$. For $k$ sufficiently large we have $\mathrm{rank}\, f_\ast (\mathcal L^k) = \dim H^0(\mathcal U_x, \mathcal L_x^k)$ and $\det f_\ast(\mathcal L^k) = \det H^0(\mathcal U_x, \mathcal L_x^k)$ for all $x \in \mathcal H$. Hence we have \begin{equation}\label{rank} \mathrm{rank}\, f_\ast (\mathcal L^k) = a_0 k^m + a_1 k^{m-1} + \cdots + a_m. \end{equation} Considering the determinant we see from \cite{KnudsenMumford} that there are ${\mathbb Q}$-line bundles $\mu_0, \cdots, \mu_{m+1}$ such that \begin{equation}\label{det} \mathrm{det} f_\ast (\mathcal L^k) = \mu_0^{k^{m+1}}\otimes \mu_1^{k^m} \otimes \cdots \otimes \mu_{m+1}. \end{equation} By definition the {\it Chow line} is the ${\mathbb Q}$-line bundle $\lambda_{Chow} (\mathcal H, \mathcal L)$ over $\mathcal H$ \begin{equation}\label{Chow-line} \lambda_{Chow} (\mathcal H, \mathcal L) = \det f_\ast(\mathcal L^k) ^{\frac 1{k\,\mathrm{rank}f_\ast(\mathcal L^k)} }\otimes \mu_0 ^{- \frac 1 {a_0}}. \end{equation} It is easy to see that $ \mathrm{Chow}(M,L)$ is the Mumford weight of the Chow line $\lambda_{Chow} (\mathcal H, \mathcal L)$. By (\ref{rank}) and (\ref{det}) one can show \begin{equation}\label{Chow-line2} \lambda_{Chow} (\mathcal H, \mathcal L) = \mu_{m+1}^{\frac 1{k\chi(k)}} \otimes \left(\bigotimes_{\ell = 1}^m \left(\mu_\ell^{\frac1{a_0}} \otimes \mu_0^{-\frac{a_\ell}{a_0^2}}\right)^{\frac{a_0k^{m+1-\ell}}{\chi(k)}}\right).
\end{equation} We define the $\ell$-th CM-line $ \lambda_{\mathrm{CM}, \ell}(\mathcal H, \mathcal L) $ on the Hilbert scheme $\mathcal H$ by $$ \lambda_{\mathrm{CM}, \ell}(\mathcal H, \mathcal L) = \mu_\ell^{\frac 1{a_0}} \otimes \mu_0^{- \frac {a_\ell}{a_0^2}}.$$ It is also easy to see that $F_\ell(M,L)$ is the weight of the $\ell$-th CM-line $ \lambda_{\mathrm{CM}, \ell}(\mathcal H, \mathcal L) $. Della Vedova and Zuddas then compute $ \mathrm{Chow}(M,L)$ and $F_\ell(M,L)$ for projective bundles over curves and for polarized manifolds blown up at finitely many points. Let $\Sigma$ be a genus $g$ smooth curve and $E$ a rank $n \ge 2$ vector bundle over $\Sigma$. Let $M = {\mathbb P}(E)$ be the projective bundle associated to $E$ and denote by $\pi : M \to \Sigma$ the projection. A line bundle $L$ on $M$ is of the form $L = \mathcal O_{{\mathbb P}(E)}(r) \otimes \pi^\ast B$ where $B$ is a line bundle over $\Sigma$. We assume that $L$ is ample. We also assume that $E$ is decomposed as $E = E_1 \oplus \cdots \oplus E_s$ into indecomposable components $E_i$, and that we are given a ${\mathbb C}^\ast$ action on $E$ written in terms of this decomposition $$ t\cdot (e_1, \cdots, e_s) = (t^{\lambda_1}e_1, \cdots, t^{\lambda_s} e_s). $$ In this situation $\mathrm{Chow}(M,L^k)$ is given by \begin{eqnarray}\label{Chow-bundle} \mathrm{Chow}(M,L^k) &=& \frac{\binom{m-1+kr}{m}}{m+1} \frac{\chi(\Sigma, \det(E\otimes B^{-\frac1r}))}{\mu(E\otimes B^{-\frac1r})\chi (\Sigma, S^{kr}(E^\ast\otimes B^{\frac1r}))} \\ & & \qquad \cdot\sum_{j=1}^s \lambda_j \mathrm{rank}(E_j)(\mu(E_j) - \mu(E)) \nonumber \end{eqnarray} where $\mu(F) = \mathrm{deg}(F) / \mathrm{rank}(F)$ is the slope of the bundle $F$. On the other hand $F_\ell(M,L)$ is computed, for some positive rational number $C_\ell$ depending only on $m$, as \begin{equation}\label{F-ell-bundle} F_\ell(M,L^k) = - C_\ell\ \frac{\chi(\Sigma, \det(E\otimes B^{-\frac1r}))}{\mu(E\otimes B^{-\frac1r})^2} \sum_{j=1}^s \lambda_j \mathrm{rank}(E_j)(\mu(E_j) - \mu(E)). \end{equation} By (\ref{Chow-bundle}) and (\ref{F-ell-bundle}) we see that the $F_\ell(M,L^k)$ are proportional for all $\ell$, that they vanish if and only if $\mu(E_j) = \mu(E)$ for all $j = 1, \cdots, s$, and that $\mathrm{Chow}(M,L^k) = 0$ if and only if $F_\ell(M,L^k) = 0$ for some (and hence any) $\ell$. The slope stability of $E$ is related to the existence of cscK metrics as in the following theorem. \begin{theorem}[\cite{ACGT0905}]\label{ACGTtheorem} A projective bundle ${\mathbb P}(E)$ over a smooth curve of genus $g \ge 2$ admits a K\"ahler metric of constant scalar curvature in some (and hence any) K\"ahler class if and only if $E$ is slope polystable. \end{theorem} We will not reproduce the formulas of $ \mathrm{Chow}(M,L)$ and $F_\ell(M,L)$ for polarized manifolds obtained by blowing up finitely many points, but the consequences of the formulas are summarized as follows. By a result of LeBrun and Simanca \cite{lebrunsimanca93} the cone $\mathcal E$ of extremal K\"ahler classes is open in the K\"ahler cone, and the locus in $\mathcal E$ where the Futaki invariant $F_1$ vanishes is the set $\mathcal C$ of all cscK classes. By the results of Arezzo and Pacard \cite{arezzopacard06}, \cite{arezzopacard09} there is a non-empty open set of cscK classes under mild conditions. Under such conditions we may be able to show that the locus $\mathcal Z$ where $F_2 = \cdots = F_m = 0$ is a Zariski closed subset in $\mathcal C$. Then a rational point in $\mathcal C \backslash \mathcal Z$ will be a cscK but asymptotically Chow unstable polarization.
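Returning to the projective-bundle case, the common factor $\sum_j \lambda_j \mathrm{rank}(E_j)(\mu(E_j) - \mu(E))$ in (\ref{Chow-bundle}) and (\ref{F-ell-bundle}) is elementary to evaluate; the following sketch (ours, with a hypothetical split bundle and weights) illustrates that it vanishes exactly when all slopes agree.

\begin{verbatim}
# Toy evaluation of sum_j lambda_j rank(E_j)(mu(E_j) - mu(E)) for a hypothetical
# split bundle E = O(d1) + O(d2) on a curve, with C*-weights (lam1, lam2).
def slope(deg, rk):
    return deg / rk

def common_factor(summands, weights):
    # summands: list of (degree, rank) of the indecomposable pieces E_j
    deg_E = sum(d for d, _ in summands)
    rk_E = sum(r for _, r in summands)
    mu_E = slope(deg_E, rk_E)
    return sum(lam * r * (slope(d, r) - mu_E)
               for (d, r), lam in zip(summands, weights))

print(common_factor([(3, 1), (3, 1)], (1, -1)))  # 0.0: equal slopes, all F_ell = 0
print(common_factor([(4, 1), (2, 1)], (1, -1)))  # 2.0: unequal slopes, F_ell != 0
\end{verbatim}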
This strategy works for the blow-up of ${\mathbb C}{\mathbb P}^2$ at four points, all but one of which are aligned. See \cite{DVZ10} for the details. \section{Toric case} In this section we compare H.~Ono's paper \cite{Ono10-1} with the work of Della Vedova and Zuddas \cite{DVZ10}. Let $\Delta \subset {\mathbb R}^m$ be an $m$-dimensional integral Delzant polytope. Namely, \\ (i) $\Delta$ has integral vertices ${\bf w}_1, \cdots, {\bf w}_d$, \\ (ii) $m$ edges of $\Delta$ emanate from each vertex ${\bf w}_i$, and \\ (iii) primitive vectors along those edges generate the lattice ${\mathbb Z}^m \subset {\mathbb R}^m$. \\ To a Delzant polytope there correspond a nonsingular toric variety $M_\Delta$ and an ample line bundle $L_\Delta$. The Ehrhart polynomial of $\Delta$ \begin{equation}\label{Ehrhart} E_\Delta(k) = \mathrm{Vol}(\Delta)k^m + \sum_{j=0}^{m-1} E_{\Delta,j}k^j \end{equation} has the property that $$E_\Delta(i) = \sharp (i\Delta\cap{\mathbb Z}^m).$$ It is also known that there exists an ${\mathbb R}^m$-valued polynomial \begin{equation}\label{s(k)} {\bf s}_\Delta(k) = k^{m+1} \int_\Delta {\bf x}\,dv + \sum_{j=1}^m k^j\,{\bf s}_{\Delta, j} \end{equation} such that \begin{equation} {\bf s}_\Delta (i) = \sum_{{\bf a} \in i\Delta \cap {\mathbb Z}^m} {\bf a}. \end{equation} Then Ono \cite{Ono10-1} proves that if, for each $i$, $(M_\Delta, L_{\Delta}^i)$ is (not necessarily asymptotically) Chow semistable, we have \begin{equation}\label{Onoformula} {\bf s}_\Delta (i) = \frac{E_\Delta(i)}{\mathrm{Vol}(i\Delta)}\int_{i\Delta} {\bf x}\,dv. \end{equation} Hence if $(M_\Delta, L_{\Delta})$ is asymptotically Chow semistable, we have the equality \begin{equation}\label{Onoformula2} \mathrm{Vol}(\Delta){\bf s}_\Delta (k) - kE_\Delta(k)\int_{\Delta} {\bf x}\,dv = \sum_{j=1}^m k^j \left(\mathrm{Vol}(\Delta){\bf s}_{\Delta,j} - E_{\Delta,j-1}\int_\Delta {\bf x} \,dv\right) = 0 \end{equation} as a polynomial in $k$. But the Ehrhart polynomial is equal to the Hilbert polynomial $\chi(M_\Delta, L_{\Delta}^k)$. Moreover, ${\bf s}_\Delta (k)$ can be regarded as a character of the torus and gives the weight $w(M_\Delta, L_\Delta^k)$ on $L_{\Delta}^k$ when restricted to a one parameter subgroup. Therefore, as a character, $$\mathrm{Vol}(\Delta){\bf s}_\Delta (k) - kE_\Delta(k)\int_{\Delta} {\bf x}\,dv$$ is equal to $$\mathrm{Vol}(\Delta)w(M_\Delta, L_\Delta^k) - k\chi(M_\Delta, L_{\Delta}^k)\int_{M_\Delta} u_X \omega^m$$ when restricted to the one parameter group generated by an infinitesimal generator $X$. Put \begin{equation}\label{Onoformula3} \mathcal{F}_{\Delta,j} := \mathrm{Vol}(\Delta) {\bf s}_{\Delta,j} - E_{\Delta, j-1}\int_\Delta {\bf x}\,dv \in {\mathbb R}^m. \end{equation} By (\ref{Onoformula2}), $\mathcal{F}_{\Delta,j}$ vanishes if $(M_\Delta, L_{\Delta}^i)$ is Chow semistable for each $i$. But (\ref{DVZformula}) shows \begin{equation}\label{Onoconjecture} \mathrm{Lin}_{\mathbb C}\{\mathcal F_{\Delta,j},\ j=1, \cdots, m \} = \mathrm{Lin}_{\mathbb C}\{\mathcal F_{\mathrm{Td}^{p}}|_{{\mathbb C}^m},\ p=1,\cdots, m\} \end{equation} where $\mathrm{Lin}_{\mathbb C}$ stands for the linear hull in ${\mathbb C}^m$. This gives a proof of Conjecture 1.6 in \cite{Ono10-1}. In \cite{Ono10-2}, Ono further gives a necessary and sufficient condition for Chow semistability of $(M_\Delta, L_\Delta^i)$ in terms of toric data. Shelukhin \cite{Shelukhin09} also expresses $\mathcal F_1(M,-K_M)$ for a toric Fano manifold $M$ in terms of the toric data of $M$.
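Identity (\ref{Onoformula}) is easy to test numerically for simple polytopes; the following sketch (ours) checks it for the one-dimensional Delzant polytope $\Delta = [0,1]$, corresponding to $({\mathbb C}{\mathbb P}^1, \mathcal O(1))$, which is Chow semistable for every power.

\begin{verbatim}
# Toy check of Ono's identity (Onoformula) for Delta = [0,1], m = 1:
#   s_Delta(i) = E_Delta(i)/Vol(i Delta) * int_{i Delta} x dx .
for i in range(1, 8):
    pts = list(range(0, i + 1))      # i*Delta cap Z = {0, 1, ..., i}
    E = len(pts)                     # Ehrhart count E_Delta(i) = i + 1
    s = sum(pts)                     # s_Delta(i) = i(i+1)/2
    rhs = E / i * (i**2 / 2)         # Vol(i Delta) = i, int_0^i x dx = i^2/2
    assert abs(s - rhs) < 1e-9
print("Ono's identity holds for (CP^1, O(1)), i = 1..7")
\end{verbatim}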
\section{K-stability} The notion of K-stability was first introduced for Fano manifolds by Tian in \cite{tian97}, where he proved that if a Fano manifold $M$ carries a K\"ahler-Einstein metric then $M$ is weakly K-stable. Tian's K-stability considers the degenerations of $M$ to normal varieties and uses a generalized version of the invariant $\mathcal F_1$ which was defined for normal varieties. Donaldson re-defined in \cite{donaldson02} the invariant $\mathcal F_1$ for general polarized varieties (or even projective schemes) as introduced in the previous section, and also re-defined the notion of K-stability for a polarized manifold $(M, L)$. For a polarized variety $(M, L)$, a test configuration of degree $r$ consists of the following.\\ (a)\ \ A flat family of schemes $\pi : {\mathcal M} \to {\mathbb C}$;\\ (b)\ \ a ${\mathbb C}^*$-action on ${\mathcal M}$ such that $\pi : {\mathcal M} \to {\mathbb C}$ is ${\mathbb C}^\ast$-equivariant with respect to the usual ${\mathbb C}^*$-action on ${\mathbb C}$;\\ (c)\ \ a ${\mathbb C}^*$-equivariant relatively ample line bundle $\mathcal{L} \to {\mathcal M}$ such that for $t \ne 0$ one has $M_t = \pi^{-1}(t) \cong M$ and $(M_t, {\mathcal L}|_{M_t}) \cong (M, L^r)$. The ${\mathbb C}^*$-action on $(\mathcal M, \mathcal L)$ induces a ${\mathbb C}^*$-action on the central fiber $L_0 \to M_0 = \pi^{-1}(0)$. Moreover if $(M,L)$ admits a ${\mathbb C}^*$-action, then one obtains a test configuration by taking the direct product $L^r \times {\mathbb C} \to M \times {\mathbb C}$. This is called a product configuration. A product configuration endowed with the trivial ${\mathbb C}^\ast$-action is called the trivial configuration. \begin{definition}Let $(M,L)$ be a polarized variety, and $(\mathcal M, \mathcal L)$ a test configuration of $(M,L)$. We define the DF-invariant $DF(\mathcal M, \mathcal L)$ to be the DF-invariant $F_1(M_0,L_0)$ of the central fiber $(M_0, L_0)$. \end{definition} \begin{definition}A polarized variety $(M,L)$ is said to be K-polystable (resp. K-stable) if the DF-invariant $DF(\mathcal M, \mathcal L)$ is nonpositive for all test configurations $(\mathcal M, \mathcal L)$, and equality occurs only if the test configuration is a product configuration (resp. the trivial configuration). \end{definition} \noindent {\bf Conjecture} (\cite{donaldson02}): Let $(M,L)$ be a nonsingular polarized variety. Then a K\"ahler metric of constant scalar curvature exists in the K\"ahler class $c_1(L)$ if and only if $(M,L)$ is K-polystable. \bigskip Let us recall the following general terminology. Let $V$ be a vector space over ${\mathbb C}$ and $\rho$ a one parameter subgroup of $SL(V)$. Let $[v] \in {\mathbb P}(V)$ and $\lambda \in {\mathbb C}^{\ast}$. Suppose $[\rho(\lambda)v] \to [v_0] \in {\mathbb P}(V)$ as $ \lambda \to 0$. Then $\rho(\lambda)$ preserves the line ${\mathbb C} v_0$ and thus defines an endomorphism $\rho(\lambda):{\mathbb C} v_0 \to {\mathbb C} v_0$. The weight of this endomorphism is called the Mumford weight of $(v,\rho)$ and is denoted by $\mu(v,\rho)$. We say that $[v] \in {\mathbb P}(V)$ is semistable (resp. stable) with respect to $\rho$ iff $\mu(v,\rho) \le 0$ (resp. $\mu(v,\rho) < 0$). We also say that $[v] \in {\mathbb P}(V)$ is polystable iff $\mu(v,\rho) < 0$ or $\rho({\mathbb C}^{\ast})$ is contained in $\mathrm{Stab}(v)$. The Hilbert-Mumford criterion says that $[v] \in {\mathbb P}(V)$ is semistable (resp. polystable) with respect to a subgroup $G$ of $SL(V)$ iff $[v] \in {\mathbb P}(V)$ is semistable (resp. polystable) with respect to every one parameter subgroup of $G$.
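For a diagonal one parameter subgroup the Mumford weight is immediate to compute by hand; the following sketch (ours, with a hypothetical $1$-PS of $SL(3)$) spells out the rule: as $\lambda \to 0$ only the components of $v$ of minimal weight survive projectively, so $\mu(v,\rho) = \min\{w_i : v_i \ne 0\}$.

\begin{verbatim}
# Toy Mumford weight for rho(lam) = diag(lam^{w_0}, ..., lam^{w_N}) in SL(V),
# i.e. with sum w_i = 0: mu(v, rho) = min{w_i : v_i != 0}.
def mumford_weight(v, w):
    return min(wi for vi, wi in zip(v, w) if vi != 0)

w = [1, 0, -1]                        # a hypothetical 1-PS of SL(3)
print(mumford_weight([0, 0, 5], w))   # -1: semistable (indeed stable) w.r.t. rho
print(mumford_weight([3, 0, 1], w))   # -1: the lam^{-1}-component dominates
print(mumford_weight([3, 0, 0], w))   # +1: unstable w.r.t. rho
\end{verbatim}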
Let us define Hilbert stability of a polarized variety $(M,L)$. Suppose $L^r$ is a very ample line bundle with $h^i(L^r) = 0$ for $i > 0$. Then $\chi(r) := h^0(L^r)$ can be computed by the Riemann-Roch theorem. If we fix an isomorphism $H^0(L^r) \cong {\mathbb C}^{\chi(r)}$ this gives an embedding $\Phi_{|L^r|} : M \to {\mathbb P}^{\chi(r) -1}$. A different choice of the isomorphism gives a transformation by an element of $SL(\chi(r))$. When $k$ is sufficiently large we have an exact sequence $$ 0 \to I_k \to S^kH^0_M(L^r) \to H^0_M(L^{kr}) \to 0, $$ where $I_k$ denotes the set of all polynomials of degree $k$ vanishing along the image of $M$. The $k$-th Hilbert point of $(M,L^r)$ is the point in the Grassmannian $$ x_{r,k} \in G = G(S^k{\mathbb C}^{\chi(r)\ast};\chi(rk))$$ determined by the identification $H^0_M(L^r) \cong {\mathbb C}^{\chi(r)}$. We say that $(M,L)$ is Hilbert (semi)stable with respect to $r$ iff the image of $x_{r,k} \in G$ under the Pl\"ucker embedding $G \to {\mathbb P}^{\binom{\chi(r)+k-1}{\chi(rk)}}$ is (semi)stable for all large $k$. \begin{fact}[cf. \cite{mumford}, Proposition 2.1]\ \ Let $L$ be a very ample line bundle with $h^i(L) = 0$ for $i > 0$, and $\rho$ a one parameter subgroup of $SL(h^0(L))$. Let ${\widetilde w}$ be the Mumford weight of the Hilbert point $x_k \in G(S^k{\mathbb C}^{h^0(L)\ast};\chi(k))$ with respect to $\rho$, and $e$ be the Mumford weight of the Chow point of $(M,L)$ with respect to $\rho$. Then we have $$ {\widetilde w} (k) = Cek^{m+1} + O(k^m)$$ with a positive constant $C$. \end{fact} This says that if $e < 0$ then ${\widetilde w}(k) <0$ for large $k$, namely Chow stability implies Hilbert stability; and if ${\widetilde w}(k) \le 0$ for all $k$, then $e \le 0$, namely Hilbert semistability implies Chow semistability. Now let ${\widetilde w}(r,k)$ be the Mumford weight of $x_{r,k}$. We wish to express this in terms of $w(r)$, which was the weight for $H^0(L^r)$ of the one parameter group $\rho$ in $SL(h^0(L))$. As $\rho$ lies in $SL(h^0(L))$ we have to renormalize the one parameter group so that it lies in $SL(h^0(L^r))$. After this renormalization we find, by putting $s = rk$, \begin{eqnarray*} {\widetilde w}(r,k) &=& - w(s) + \frac{w(r)}{r\chi(r)}s\chi(s)\\ &=& s\chi(s)( \frac{w(r)}{r\chi(r)} - \frac{w(s)}{s\chi(s)} )\\ &=& s\chi(s)(F_1(r^{-1} - s^{-1}) + O(r^{-2} - s^{-2})). \end{eqnarray*} \begin{theorem}[\cite{ross03}, \cite{rossthomas06}]\ \ If we put $ {\widetilde w}(r,k) = \frac 1{r\chi(r)} \sum_{i,j = 0}^{m+1} a_{i,j}r^{i+j}k^j $ then \begin{enumerate} \item $a_{m+1, m+1} = 0$; \item The Chow weight $e_r := \mathrm{Chow}(M,L^r)$ of $(M,L^r)$ is given by $$e_r = \frac{Cr^m}{\chi(r)} \sum_{i=0}^m a_{i,m+1}r^i$$ with a positive constant $C$; \item $a_{m,m+1}$ and $F_1(M,L)$ have the same sign. \end{enumerate} \end{theorem} This result says that if $e_r \le 0$ for all large $r$ then $F_1(M,L) \le 0$, namely that asymptotic Chow semistability implies K-semistability. Now we turn to the computation of $F_1(M,L)$. The following result of Wang gives a way of computing it. Note that the sign convention for the DF-invariant is opposite in \cite{Xiaowei08}, \cite{Odaka09} and \cite{Odaka10}. \begin{theorem}[Wang \cite{Xiaowei08}]\label{Wang} For any test configuration $(\mathcal M, \mathcal L)$ of a polarized variety $(M,L)$ we consider its natural compactification $(\overline{\mathcal M}, \overline{\mathcal L})$. Then $DF(\mathcal M, \mathcal L)$ is computed by \begin{equation*} DF(\mathcal M, \mathcal L) = \frac {-1}{2(m!)((m+1)!)} ( - m(L^{m-1}.
K_M)(\overline{\mathcal L}^{m+1}) + (m+1)(L^m)(\overline{\mathcal L}^m.K_{\overline{\mathcal M}/{\mathbb P}^1})) \end{equation*} where $K_{\overline{\mathcal M}/{\mathbb P}^1} = K_{\overline{\mathcal M}} - f^\ast K_{{\mathbb P}^1}$ with the projection $f : \overline{\mathcal M} \to {\mathbb P}^1$. The notation $(L^m)$ means the intersection number $L\ldots L$ ($m$ times) in $M$, and so on. \end{theorem} With different technicalities, Odaka extends and applies this result to the semi test configuration $\mathcal B := Bl_{\mathcal J}(M\times {\mathbb C})$ obtained by blowing up the flag ideal $\mathcal J \subset \mathcal O_{M \times {\mathbb C}}$ of the form $$ \mathcal J = I_0 + I_1 t + I_2 t^2 + \cdots + I_{N-1} t^{N-1} + (t^N) $$ where $I_0 \subset I_1 \subset \cdots \subset I_{N-1} \subset \mathcal O_M$ is a sequence of coherent ideals of $M$. Denote this blow-up by $\Pi : \mathcal B \to M \times {\mathbb C}$ and by $E$ the exceptional divisor, i.e., $\mathcal O(-E) = \Pi^{-1}\mathcal J$. We also put $\mathcal L := p_1^\ast L$ where $p_i $ is the projection of $M\times {\mathbb C}$ or $M \times {\mathbb P}^1$ to the $i$-th factor. We assume that the restriction of $\mathcal L(-E)$ to $\mathcal B$ is relatively semiample, and hence we have a semi test configuration $(\mathcal B, \mathcal L(-E)|_{\mathcal B})$. $\mathcal B$ is compactified to $\overline{\mathcal B}:= Bl_{\mathcal J}(M \times {\mathbb P}^1)$. Then the DF-invariant $DF(\mathcal B, \mathcal L(-E))$ is computed as follows. \begin{theorem}[Odaka \cite{Odaka09}]\label{Odaka1} \begin{eqnarray*} & &DF(\mathcal B, \mathcal L(-E)) = \frac {-1}{2(m!)((m+1)!)} ( - m(L^{m-1}. K_M)(\mathcal L(-E))^{m+1} \\ &&\qquad \qquad + (m+1)(L^m)(\mathcal L(-E)^m.p_1^\ast K_M) + (m+1)(L^m)(\mathcal L(-E)^m.K_{\mathcal B/M\times {\mathbb C}})) \end{eqnarray*} where the intersection numbers are taken on $M$ or $\overline{\mathcal B}$. \end{theorem} The next theorem shows that this computation is sufficient to check K-(semi)stability. \begin{theorem}[Odaka \cite{Odaka09}]\label{Odaka2} The negativity (resp. nonpositivity) of all the DF-invariants of the semi test configurations of the above blow-up type $(\mathcal B, \mathcal L(-E))$ with $\mathcal B$ Gorenstein in codimension 1 is equivalent to K-stability (resp. K-semistability) of $(M,L)$. \end{theorem} In \cite{Odaka10}, Odaka proves Theorem \ref{Odaka} using Theorems \ref{Odaka1} and \ref{Odaka2}. He also proves in \cite{Odaka10}\\ $\bullet$\ A semi-log-canonical canonically polarized variety $(X,\mathcal O_X(mK_X))$ with $m \in {\mathbb Z}_{>0}$ is K-stable.\\ $\bullet$\ A log-terminal polarized variety $(X,L)$ with numerically trivial canonical divisor $K_X$ is K-stable.\\ These results are expected to be true because of the Calabi-Yau theorem \cite{yau78}. In \cite{OdakaSano10}, Odaka and Sano give an algebro-geometric proof of the fact that if the alpha invariant of a Fano manifold $M$, which is equal to the log canonical threshold, is bigger than $m/(m+1)$ then $(M,- K_M)$ is K-stable. This is of course another proof of a consequence of a theorem of Tian \cite{tian87}. \bibliographystyle{amsalpha}
\section{Introduction} \label{sec:intro} All the photons which we receive have been emitted on our past light cone. In cosmology, looking far away always also means looking into the past. If the redshift of the objects under consideration is small, $z\ll 1$, and evolution is relevant only on cosmological time scales, this effect is small and may be neglected. However for redshifts of order unity or larger, the fact that we are not observing a spatial hypersurface but a part of the background lightcone becomes relevant. If we observe the large scale distribution of galaxies, we usually compare the true, observed distribution with the one of an unperturbed universe with background density $\bar\rho$ and measure its fluctuations, $\delta({\mathbf x})=(\rho({\mathbf x})-\bar\rho)/\bar\rho$, where $\bar\rho$ is usually the mean observed galaxy density. This is then cast in the power spectrum, $$ \langle \delta({\mathbf k})\delta({\mathbf k}')\rangle = (2\pi)^3\delta({\mathbf k}-{\mathbf k}')P_\delta(k) \;,$$ where $\delta({\mathbf k})$ is the Fourier transform of $\delta({\mathbf x})$ and we assume statistical homogeneity and isotropy. For small galaxy catalogs one may assume that we measure the density fluctuation today, $\delta({\mathbf x})=\delta({\mathbf x},t_0)$, but already for the Sloan Digital Sky Survey (SDSS), which determines the galaxy distribution out to $z\sim 0.2$ or $0.5$ (for Luminous Red Galaxies, LRG's), it is no longer a good approximation to compare the observed power spectrum with the above defined $P_\delta(t_0)$. Time evolution of $P_\delta$ can be taken into account by multiplying the power spectrum with a growth factor. In addition to this there is the issue of gauge. The density fluctuation $\delta({\mathbf x},t)$ which we calculate in a given Friedmann background is not gauge invariant. It depends on the background Friedmann universe we compare the observed $\rho({\mathbf x},t)$ with. This is the cosmological gauge problem~\cite{mybook}. There are several attempts in the literature to deal with these issues, but they are so far incomplete. People have considered individual observational effects like redshift space distortions~\cite{rsd}, the Alcock-Paczynski effect~\cite{AP} or lensing. A first full treatment is attempted in~\cite{Yoo}. In the present paper we shall go beyond this work and determine the spectrum truly in terms of directly observable quantities. We derive gauge invariant expressions which are correct to first order in perturbation theory and which are straightforward to compare with observations. This is an important first step for this problem as the gauge issue is mainly relevant on very large scales, where perturbations are small so that first order perturbation theory is justified. Our results will be most significant for future galaxy catalogs like BOSS~\cite{boss}, DES~\cite{DES}, PanStarrs~\cite{Pan} or, especially, Euclid~\cite{Euclid}, but also an analysis of SLOAN-7~\cite{SDSS} along the lines outlined here is interesting. \vspace{0.1cm} {\bf Notation:} We work with a flat Friedmann background and in conformal time, $t$, such that $$ ds^2 =a^2(t)\left( -dt^2+\delta_{ij}dx^idx^j\right) \,.$$ A photon geodesic in this background which arrives at position ${\mathbf x}_O$ at time $t_O$ and which has been emitted at affine parameter $\lambda=0$ at time $t_S$, moving in direction ${\mathbf n}$, is then given by $(x^\mu(\lambda)) =(\lambda+t_S, {\mathbf x}_O+(\lambda+t_S-t_O){\mathbf n})$.
Here $\lambda = t-t_S = r_S-r$, where $r$ is the comoving distance $r=|{\mathbf x}(\lambda) -{\mathbf x}_O|$, hence $dr=-d\lambda$. We can of course choose ${\mathbf x}_O=0$. \section{The matter fluctuation spectrum in redshift space} In a galaxy redshift survey, the observers measure the number of galaxies in direction ${\mathbf n}$ at redshift $z$; let us call this $N({\mathbf n},z)d\Omega_{{\mathbf n}}dz$. They then average over angles to obtain their redshift distribution, $\langle N\rangle(z)dz$. From this they can build directly the redshift density perturbation \footnote{This is not what is done in practice, where the observed 'point process', {\em i.e.\ }the observed distribution of galaxies, is compared to a random one with the same redshift distribution (but usually with many more galaxies, to reduce scatter~\cite{SDSS}).}, {\em i.e.\ }the perturbation variable \begin{eqnarray} \delta_z({\mathbf n},z) &=& \frac{\rho({\mathbf n},z)-\langle\rho\rangle(z)}{\langle\rho\rangle(z)} =\frac{\frac{N({\mathbf n},z)}{V({\mathbf n},z)}-\frac{\langle N\rangle(z)}{V(z)}} {\frac{\langle N\rangle(z)}{V(z)}} \nonumber\\ &=& \frac{N({\mathbf n},z)-\langle N\rangle(z)}{\langle N\rangle(z)}-\frac{\delta V({\mathbf n},z)}{V(z)}~. \end{eqnarray} Here $V({\mathbf n},z)$ is the physical survey volume density per redshift bin, per solid angle. The volume is also a perturbed quantity since the solid angle of observation as well as the redshift bin are distorted between the source and the observer. Hence $V({\mathbf n},z)=V(z)+\delta V({\mathbf n},z)$. The truly observed quantity is the perturbation in the number density of galaxies \begin{equation} \label{Npert} \frac{N({\mathbf n},z)-\langle N\rangle(z)}{\langle N\rangle(z)}=\delta_z({\mathbf n},z)+\frac{\delta V({\mathbf n},z)}{V(z)} \equiv \Delta({\mathbf n},z) \end{equation} which therefore must be gauge invariant. Actually, as we shall see, both $\delta_z({\mathbf n},z)$ and $\delta V({\mathbf n},z)/V(z)$ are gauge invariant. This is not surprising, as we could measure the volume perturbation also with other tracers than galaxies; it is therefore measurable by itself and hence gauge invariant. We neglect biasing in our treatment as we want to keep the expressions as model independent as possible. We shall add only some comments on how simple linear biasing could be included. \subsection{Computation of $\delta_z({\mathbf n},z)$} Let us first relate $\delta_z({\mathbf n},z)$ to the well-known gauge-dependent quantity $\delta({\mathbf x},t)$. For this we note that to first order \begin{eqnarray} \delta_z({\mathbf n},z)&=&\frac{\rho({\mathbf n},z)-\bar \rho(z)}{\bar \rho(z)}= \frac{\bar{\rho}(\bar{z})+\delta\rho({\mathbf n},z)-\bar\rho(z)}{\bar\rho(z)}\nonumber\\ &=&\frac{\bar{\rho}(z-\delta z)+\delta\rho({\mathbf n},z)-\bar\rho(z)}{\bar\rho(z)} \nonumber \\ \label{e:rhonz} &=&\frac{\delta\rho({\mathbf n},z)}{\bar\rho(\bar z)}-\frac{d\bar \rho}{d \bar z} \frac{\delta z({\mathbf n},z)}{\bar\rho(\bar z)} . \end{eqnarray} Here $\bar z =\bar z(t)$ is the redshift of a background Friedmann universe we compare our perturbation with and $\delta z$ is the redshift perturbation to this universe. Moreover $\rho({\mathbf n},\bar z(t)) =\bar\rho(t) +\delta\rho({\mathbf n},t)$, where the time is obtained by solving the background relation $\bar z=\bar z(t)$. Note that $\bar\rho(z) = \bar\rho(\bar z+\delta z)$ deviates to first order from $\bar\rho(\bar z)$. Clearly, both $\delta z$ and $\delta\rho$ depend on the chosen background and are hence gauge dependent.
However their combination in Eq.~(\ref{e:rhonz}) must turn out to be gauge invariant, as it is in principle observable. Let us first compute the redshift in a perturbed Friedmann universe with metric \begin{eqnarray} ds^2 &=&a^2(t) \Big[-(1+2A)dt^2 -2B_idtdx^i + \\ && +[(1+2H_L)\delta_{ij}+ 2H_{Tij} + 2H_{ij}]dx^idx^j\Big]~.\nonumber \end{eqnarray} Here $H_{ij}$ is the transverse traceless gravitational wave term and $A$, $B_i$, $H_L$ and $H_{Tij}$ are scalar degrees of freedom, two of which can be removed by gauge transformations. In Fourier space $B_i = -\hat k_i B$ and $H_{Tij} = (\hat k_i \hat k_j -\delta_{ij}/3)H_T$. For simplicity, we shall neglect the contribution from gravitational waves in the main text. In the appendix we include also these terms. We consider a photon emitted from a galaxy, the source, $S$, which is moving in direction ${\mathbf n}$ (hence, to lowest order, it is seen under the direction $-{\mathbf n}$ from the observer $O$). We denote the peculiar velocities of the source and observer by ${\mathbf v}_S$ and ${\mathbf v}_O$. The observer receives the photon redshifted by a factor \begin{equation} 1+z = \frac{(n\cdot u)_S}{(n\cdot u)_O} \;. \end{equation} We solve the equation for the photon geodesic $n = a^{-2}(1+\delta n^0,{\mathbf n} +\delta{\mathbf n})$, where ${\mathbf n}$ denotes the unperturbed photon direction at the observer. Using that $u =a^{-1}(1-A,{\mathbf v})$, where ${\mathbf v}$ is the peculiar velocity, we find by the same calculation which leads to Eq.~(2.228) in~\cite{mybook} (see also~\cite{myrev}) \begin{eqnarray}\label{z} 1+z &=& \frac{a(t_O)}{a(t_S)}\Big\{1 +\Big[H_L+\frac{1}{3}H_T + {\mathbf n}\cdot{\mathbf V} + \Phi+\Psi\Big]_{t_S}^{t_O} \nonumber\\ && \qquad - \int_{r_S}^{0}(\dot\Phi+\dot\Psi)d\lambda \Big\} \;. \end{eqnarray} The first term is simply $1+\bar z$. Here $t$ denotes conformal time, $\Psi$ and $\Phi$ are the Bardeen potentials and ${\mathbf V}$ is the gauge invariant velocity perturbation which corresponds to the ordinary velocity perturbation in longitudinal gauge. For more details see~\cite{myrev,mybook} and Appendix~\ref{app:a}. In this redshift perturbation the dipole term ${\mathbf n}\cdot{\mathbf V}({\mathbf x}_O,t_O)$ is the only term in the square bracket in (\ref{z}) which depends on direction when evaluated at ${\mathbf x}_O$. The terms evaluated at the emission point of course do all depend on ${\mathbf n}$ via the position of the emission point which, to lowest order, is simply ${\mathbf x}_S={\mathbf x}_O-{\mathbf n}(t_O-t(\bar z_S))$. The integral extends along the unperturbed photon trajectory from the emission point, where we set $\lambda=0$, to our position where $\lambda = t_O-t_S = r_S$. Eq.~(\ref{z}) implies that the redshift perturbation is \begin{eqnarray}\label{e:dez} \delta z &=& z-\bar{z} = \nonumber\\ && -(1+z)\Big[\big(H_L +\frac{1}{3}H_T + {\mathbf n}\cdot{\mathbf V} + \Phi+\Psi\big)({\mathbf n},z) \nonumber \\ && \qquad + \int_{0}^{r_S}(\dot\Phi+\dot\Psi)d\lambda \Big] , \end{eqnarray} where we have neglected the unmeasurable monopole term and the dipole term from the observer position. We indicate the source position by the direction it is seen under, $-{\mathbf n}$, and its observed redshift $z$. To lowest order ${\mathbf x}({\mathbf n},z) = -r_S(z){\mathbf n}$. To obtain the density fluctuation in redshift space, we now use that $\frac{d\bar \rho}{d \bar z} = 3\frac{\bar \rho}{1+\bar z}$.
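The last relation is a one-line consequence of $\bar\rho \propto a^{-3}$ for pressureless matter with $a = 1/(1+\bar z)$; as a trivial symbolic check (ours):

\begin{verbatim}
# Check d(bar rho)/d(bar z) = 3 bar rho/(1 + bar z) for bar rho ~ a^{-3},
# a = 1/(1 + bar z).
import sympy as sp

z, rho0 = sp.symbols('z rho0', positive=True)
rho = rho0 * (1 + z)**3          # bar rho = rho0 * a^{-3}
print(sp.simplify(sp.diff(rho, z) - 3*rho/(1 + z)))  # prints 0
\end{verbatim}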
With this we obtain \begin{eqnarray} \delta_z({\mathbf n},z) &=& D_g({\mathbf n},z) +3({\mathbf V}\cdot{\mathbf n})({\mathbf n},z) +3(\Psi+\Phi)({\mathbf n},z) \nonumber \\ && \qquad + 3\int_{t_S}^{t_O}(\dot\Psi+\dot\Phi)({\mathbf n},z(t))dt \;. \label{dez2} \end{eqnarray} Here we relate a perturbation variable in direction ${\mathbf n}$ at redshift $z$ to its unperturbed position and time, $f({\mathbf n},z)=f\left({\mathbf x}({\mathbf n},z),t(z)\right)$, and overdots are partial derivatives with respect to $t$, the second argument in $f({\mathbf x},t)$. $D_g$ is the density fluctuation on the uniform curvature hypersurface. It is related to the density fluctuation in co-moving gauge, $D_{cm}$, by~\cite{myrev,mybook} $$ D_{cm} \equiv D = D_g +3\Phi + 3 k^{-1}{\cal H} V .$$ If we wanted to introduce a bias between the matter density and the galaxy density, it would probably be most physical to assume that both galaxies and dark matter follow the same velocity field, as they experience the same gravitational acceleration. We then expect that biasing should be applied to the density fluctuation in co-moving gauge, $D_{cm}$, not to $D_g$. On small scales such differences are irrelevant, but on large scales they do become relevant, as becomes clear when considering the (linear) power spectra for the different density fluctuation variables, see Fig.~\ref{fig:Dens}. \begin{figure}[!h] \centerline{\epsfig{figure=Pspecs2.eps,height=4.5cm}} \caption{ \label{fig:Dens} The (linear) matter power spectrum on the uniform curvature hypersurface (top curve, green), in longitudinal gauge (middle curve, red) and in co-moving gauge (bottom curve, blue). } \end{figure} \subsection{Volume perturbations} As a next step we compute the volume perturbation $\delta V/V$ in Eq.~(\ref{Npert}), which must be gauge invariant since also $\delta_z$ is gauge invariant by itself. This is not surprising as it would in principle be a measurable quantity if we had an 'unbiased tracer' of the volume. We consider a small volume element at the source position. By this we mean the spatial volume seen by a source with 4-velocity $u^\mu$. This is given by \begin{equation} dV=\sqrt{-g}\;\epsilon_{\mu\nu\alpha\beta}\;u^\mu \mathrm{d}x^\nu \mathrm{d}x^\alpha \mathrm{d}x^\beta . \end{equation} We want to express the volume element in terms of the polar angles at the observer position, $\theta_O$ and $\varphi_O$, and the observed redshift $z$. We have \begin{eqnarray} dV &=&\sqrt{-g}\;\epsilon_{\mu\nu\alpha\beta}u^\mu\! \frac{\partial x^\nu}{\partial z} \! \frac{\partial x^\alpha}{\partial \theta_S}\! \frac{\partial x^\beta}{\partial \varphi_S}\! \left| \frac{\partial (\theta_S,\varphi_S)}{\partial (\theta_O,\varphi_O)} \right|\! \mathrm{d}z \mathrm{d}\theta_O\mathrm{d}\varphi_O \nonumber \\ \label{eq:vol} &\equiv& v(z,\theta_O,\varphi_O)\mathrm{d}z\mathrm{d}\theta_O\mathrm{d}\varphi_O ~, \end{eqnarray} where we have introduced the density $v$ which determines the volume perturbation, $$ \frac{\delta V}{V} = \frac{v -\bar v}{\bar v} = \frac{\delta v}{\bar v} . $$ $\left| \frac{\partial (\theta_S,\varphi_S)}{\partial (\theta_O,\varphi_O)} \right|$ is the determinant of the Jacobian of the transformation from the angles at the source to the angles at the observer. Eq.~(\ref{eq:vol}) is still exact. In a homogeneous and isotropic universe geodesics are straight lines and $\theta_S=\theta_O$ and $\varphi_S=\varphi_O$.
In a perturbed universe the angles at the source are perturbed with respect to the angles at the observer and we have $\theta_S=\theta_O+\delta \theta$ and $\varphi_S=\varphi_O+\delta \varphi$. Hence to first order the Jacobian determinant becomes \begin{equation} \left| \frac{\partial (\theta_S,\varphi_S)}{\partial (\theta_O,\varphi_O)} \right|=1+ \frac{\partial \delta \theta}{\partial \theta}+\frac{\partial \delta \varphi}{\partial \varphi} . \end{equation} Using the expression for the metric determinant, $\sqrt{-g}=a^4(1+A+3H_L)$, and the 4-velocity of the source, $u=\frac{1}{a}(1-A,v^i)$, we find to first order \begin{eqnarray} v&=&a^3(1+A+3H_L)\Bigg[ \frac{d r}{d z} r^2 \sin\theta_S \left(1+\frac{\partial \delta \theta}{\partial \theta}+\frac{\partial \delta \varphi}{\partial \varphi}\right)\nonumber\\ &-&\left(A\frac{d \bar r}{d\bar z}+v_r\frac{d t}{d z}\right) \bar r^2\sin\theta_O\Bigg] . \end{eqnarray} Here $ dr/dz$ is to be understood as the change in comoving distance $r$ with redshift along the photon geodesic. At linear order we can write (the distinction between $z$ and $\bar z$ is only relevant for background quantities) \begin{equation}\label{eq:12} \frac{d r}{d z}=\frac{d\bar r}{d\bar z}+\frac{d \delta r}{d\bar z}- \frac{d \delta z}{d\bar z}\frac{d \bar r}{d \bar z}=\left(\frac{d \bar r}{dt}+ \frac{d \delta r}{d\lambda}-\frac{d\delta z}{d\lambda}\frac{d\bar r}{d \bar z} \right)\frac{dt}{d\bar z} , \end{equation} where we have used that for first order quantities we can set $dt=d\lambda$ when we have to take the derivative along the photon geodesic. The last term of Eq.~(\ref{eq:12}) contains the redshift space distortion, which will turn out to be the biggest correction to the power spectrum. To lowest order along a photon geodesic $-d\bar r/d\bar z = dt/d\bar z =-H^{-1}= -a/{\cal H}$, where $H$ is the physical Hubble parameter and ${\cal H} = aH$ is the comoving Hubble parameter. With this the volume element becomes \begin{eqnarray} \label{Vt} v&=&\frac{a^4 \bar r^2 \sin \theta_O}{{\cal H}}\Bigg[1+3H_L + \left( \cot\theta_O+\frac{\partial}{\partial \theta}\right)\delta \theta +\frac{\partial \delta \varphi}{\partial \varphi} \nonumber\\ &&- {\mathbf v}\cdot {\mathbf n}+\frac{2\delta r}{r}-\frac{d\delta r}{d\lambda}+\frac{a}{{\cal H}}\frac{d \delta z}{d\lambda}\Bigg]~. \end{eqnarray} To obtain the fluctuation of $v$ we subtract the unperturbed part $\bar v(z)$. Note, however, that we evaluate this at the observed redshift, $z=\bar z +\delta z$. Hence $$ \bar v(z) = \bar v(\bar z) + \frac{d\bar v}{d\bar z}\delta z . $$ From the unperturbed expression with $a=1/(\bar z+1)$, \begin{equation} \bar v(\bar z) = \frac{\sin\theta_O \bar r^2}{(1+\bar z)^4 {\cal H}} \end{equation} one infers \begin{equation} \label{dVz} \frac{d\bar{v}}{d \bar z}=\bar v(\bar z) \left(-4+\frac{2}{\bar r_S{\cal H}}+\frac{\dot{{\cal H}}}{{\cal H}^2} \right)\frac{1}{1+\bar z}~.
\end{equation} Combining Eq.~(\ref{Vt}) and (\ref{dVz}) we obtain for the perturbation of the volume element \begin{eqnarray} \lefteqn{\frac{\delta v}{\bar v}({\mathbf n},z) = \frac{v(z) -\bar v(z)}{\bar v(z)} = } \\ && \hspace*{-2mm}3H_L+\left( \cot\theta_O+\frac{\partial}{\partial \theta}\right)\delta \theta + \frac{\partial \delta \varphi}{\partial \varphi}- {\mathbf v}\cdot {\mathbf n}+\frac{2\delta r}{r}\nonumber\\ &&\hspace*{-2mm}-\frac{d\delta r}{d\lambda}+\frac{1}{{\cal H}(1+\bar z)}\frac{d \delta z}{d\lambda} -\left(-4+\frac{2}{\bar r{\cal H}}+\frac{\dot{{\cal H}}}{{\cal H}^2} \right)\frac{\delta z}{1+\bar z} .\nonumber \label{e:volpert} \end{eqnarray} In order to express these quantities in terms of the perturbed metric and the peculiar velocity of observer and emitter, we need to compute the deviation vector that relates the perturbed geodesic to the unperturbed one, $\delta x^\mu(\lambda)=x^\mu(\lambda)-\bar x^\mu(\lambda)$. Here we give only the main steps. More details on the derivation can be found in the appendix. We use \begin{equation} \frac{d x^\mu}{dt}=\frac{d x^\mu}{d\lambda}\frac{d\lambda}{dt}=\frac{n^\mu}{n^0} \end{equation} which leads to \begin{eqnarray} x^0(t_S)&=&t_S = t_O - r_S \hspace{0.3cm} \mbox{at every order}\\ x^i(t_S)&=&-(t_O-t_S)\bar n^i-\int_{0}^{r_S} d\lambda(\delta n^i - \bar n^i\delta n^0) \end{eqnarray} to first order. In the following we neglect perturbations at the observer position since, as already mentioned, they give rise only to an unmeasurable monopole term or a dipole term. Using the null geodesic equation for $n^\mu$ we find \begin{eqnarray} \delta x^i(t_S)&=&+\int_{0}^{r_S} d\lambda \Big( h_{\alpha i}{\bar n}^\alpha + h_{0\alpha}{\bar n}^i{\bar n}^\alpha \Big)\\ &+&\frac{1}{2}\int_{0}^{r_S} d\lambda(r_S-r) \Big( h_{\alpha\beta,i}+ \dot{h}_{\alpha\beta}{\bar n}^i\Big){\bar n}^\alpha{\bar n}^\beta\nonumber , \end{eqnarray} where $r(\lambda) = r_S - \lambda$ is the comoving distance from the observer. From this we obtain \begin{eqnarray} \delta r &\equiv& \delta x^i e_{r i}=-\delta x^i {\bar n}_i=-\frac{1}{2}\int_{0}^{r_S} d\lambda h_{\alpha\beta}{\bar n}^\alpha{\bar n}^\beta \nonumber \\ \label{dr_gi} &=&\int_{0}^{r_S} \hspace{-3mm}d\lambda (\Phi+\Psi)+\frac{B}{k}+ \frac{1}{k^2}\!\left(\!\frac{dH_T}{d\lambda}-2\dot{H}_T\! \right) . \end{eqnarray} We also use that ${\bar n}^i\partial_i+\partial_t=\frac{d}{d\lambda}=\frac{d}{dt}$ and $r_S=t_O-t_S$ to lowest order. For the derivative of $\delta r$ we obtain \begin{equation} \label{drdt_gi} \frac{d\delta r}{d\lambda}=-(\Phi+\Psi)+\frac{1}{k}\frac{dB}{d\lambda}+ \frac{1}{k^2}\left(\frac{d^2H_T}{d\lambda^2}-2\frac{d\dot{H}_T}{d\lambda} \right) . \end{equation} Similarly we find for the perturbed angles \begin{eqnarray} \delta \theta&\equiv&\frac{\delta x^ie_{\theta i}}{r_S}=\frac{1}{r_S}\int_{0}^{r_S} d\lambda \nonumber\\ &&\times\Big(h_{\alpha i}{\bar n}^\alpha e_\theta^i +\frac{1}{2}(r_S-r)h_{\alpha\beta,i}e_\theta^i{\bar n}^\alpha{\bar n}^\beta \Big) \label{dtheta}~,\\ \delta \varphi&\equiv&\frac{\delta x^ie_{\varphi i}}{r_S\sin\theta_O}= \frac{1}{r_S\sin\theta_O}\int_{0}^{r_S} d\lambda\nonumber\\ && \times\Big(h_{\alpha i}{\bar n}^\alpha e_\varphi^i +\frac{1}{2}(r_S-r)h_{\alpha\beta,i}e_\varphi^i{\bar n}^\alpha{\bar n}^\beta \Big). \end{eqnarray} We have used that ${\bar n}^i e_{\theta i}={\bar n}^i e_{\varphi i}=0$.
The second term of the integral in Eq.~(\ref{dtheta}) can be rewritten as \begin{eqnarray} h_{\alpha\beta,i}e_\theta^i{\bar n}^\alpha{\bar n}^\beta&=&\frac{1}{r}\partial_\theta(h_{\alpha\beta}){\bar n}^\alpha{\bar n}^\beta\\ &=&\frac{1}{r}\Big[\partial_\theta(h_{\alpha\beta}{\bar n}^\alpha{\bar n}^\beta)-h_{\alpha\beta}\partial_\theta({\bar n}^\alpha{\bar n}^\beta)\Big]~,\nonumber \end{eqnarray} where $\partial_\theta{\bar n}^\alpha=-e_\theta^i\delta_{i\alpha}$, and analogously for $\varphi$. The angular contribution to the volume then reads \begin{eqnarray} \lefteqn{(\cot\theta+\partial_\theta) \delta\theta +\partial_\varphi\delta\varphi= \int_{0}^{r_S} d\lambda\frac{(r_S-r)}{2r_Sr}}\nonumber\\ && \hspace{-0.1cm}\times \Big[ \cot\theta\partial_\theta+\partial^2_\theta+\frac{1}{\sin^2\theta} \partial^2_\varphi \Big]h_{\alpha\beta}{\bar n}^\alpha{\bar n}^\beta \nonumber\\ && \hspace{-1cm} +\int_{0}^{r_S} d\lambda \frac{1}{r}\Big[ (\cot\theta+\partial_\theta)h_{i\alpha}e_\theta^i{\bar n}^\alpha +\frac{\partial_\varphi}{\sin\theta}h_{i\alpha}e_\varphi^i{\bar n}^\alpha\Big]\nonumber \\ \label{angle} &=& \frac{-1}{r_S}\int_{0}^{r_S} d\lambda\frac{(r_S-r)}{r}\Delta_\Omega(\Phi+\Psi) \nonumber \\ && \qquad - \frac{\Delta_\Omega H_T(t_S)}{k^2r_S^2}~, \label{angle_gi} \end{eqnarray} where $\Delta_\Omega$ denotes the angular part of the Laplacian \begin{equation} \Delta_\Omega\equiv\Big( \cot\theta\partial_\theta+\partial^2_\theta+\frac{1}{\sin^2\theta} \partial^2_\varphi\Big)~. \end{equation} It is interesting to note that the angular part of the volume perturbation is not a gauge-invariant quantity by itself. If $H_T\neq 0$ the angular and radial directions are mixed in a non-trivial way. This is not really surprising since the angular volume distortion is not a measurable quantity by itself. On the other hand, the convergence $\kappa$ (or the magnification $\mu$), which is observable, contains in addition to the angular volume distortion other perturbations (see~\cite{bonvin}, \cite{bernardeau}) and is consequently gauge invariant. The redshift contribution to the volume perturbation is obtained by differentiating Eq.~(\ref{e:dez}): \begin{eqnarray} \label{dzdt_gi} \lefteqn{\frac{1}{{\cal H}(1+\bar z)}\frac{d\delta z}{d\lambda}=}\\ &&\Phi+\Psi +H_L+\frac{H_T}{3}+{\mathbf V}\cdot{\mathbf n} +\int_{0}^{r_S} d\lambda(\dot\Phi+\dot\Psi)\nonumber\\ &&-\frac{1}{{\cal H}}\left(\bar n^i\partial_i(\Phi+\Psi)+\frac{dH_L}{d\lambda}+\frac{1}{3}\frac{dH_T}{d\lambda} +\frac{d({\mathbf V}\cdot{\mathbf n})}{d\lambda} \right) . \nonumber \end{eqnarray} Putting everything together, after several integrations by parts, in which a total angular Laplacian acting on $H_T$ cancels the factor $1/k^2$, we find the following expression for the volume density perturbation \begin{eqnarray} \lefteqn{\frac{\delta v}{v}=-2(\Psi+\Phi) -4{\mathbf V}\cdot{\mathbf n} +\frac{1}{{\cal H}} \left[\dot\Phi+\partial_r\Psi-\frac{d({\mathbf V}\cdot{\mathbf n})}{d\lambda}\right]} \nonumber\\ &&+\left(\frac{\dot{{\cal H}}}{{\cal H}^2}+\frac{2}{r_S{\cal H}}\right) \left(\Psi+{\mathbf V}\cdot{\mathbf n}+ \int_{0}^{r_S} d\lambda(\dot{\Phi}+\dot{\Psi})\right)\nonumber\\ &&-3\int_{0}^{r_S} d\lambda(\dot{\Phi}+\dot{\Psi})+ \frac{2}{r_S}\int_{0}^{r_S} d\lambda (\Phi+\Psi)\nonumber\\ &&- \frac{1}{r_S}\int_{0}^{r_S} d\lambda\frac{r_S-r}{r} \Delta_\Omega(\Phi+\Psi)~.
\label{dev} \end{eqnarray} Here and in the following, the functions without argument are to be evaluated at the source position ${\mathbf x}_S = {\mathbf x}_O -{\mathbf n}(t_O-t_S)$ and at the source time $t_S$. More details of the derivation of this result are given in the appendix. Adding the results (\ref{dez2}) and (\ref{dev}) we obtain the galaxy number density fluctuation in redshift space as defined in Eq.~(\ref{Npert}), \begin{eqnarray} \Delta({\mathbf n},z) &=& D_g + \Phi + \Psi + \frac{1}{{\cal H}} \left[\dot\Phi+\partial_r({\mathbf V}\cdot{\mathbf n})\right] \nonumber \\ && \hspace{-1.9cm}+ \left(\frac{\dot{{\cal H}}}{{\cal H}^2}+\frac{2}{r_S{\cal H}}\right)\left(\Psi+{\mathbf V}\cdot{\mathbf n}+ \int_0^{r_S}\hspace{-3mm}d\lambda(\dot{\Phi}+\dot{\Psi})\right) \nonumber \\ && \label{Dez} \hspace{-0.8cm} +\frac{1}{r_S}\int_0^{r_S}\hspace{-3mm}d\lambda \left[2 - \frac{r_S-r}{r}\Delta_\Omega\right] (\Phi+\Psi) . \end{eqnarray} Here we have used that pressureless matter also moves along geodesics, so that $$ {\mathbf n}\cdot\dot{\mathbf V} +{\cal H}{\mathbf n}\cdot{\mathbf V} -\partial_r\Psi =0\,.$$ Equation~(\ref{Dez}), together with (\ref{dez2}) and (\ref{dev}), is our first main result. The first term in~(\ref{Dez}) is the gauge invariant density fluctuation $D_g$, i.e.\ the density fluctuation in the flat slicing. It is related to the density perturbation in Newtonian gauge by $D_g = D_s -3\Phi$. In terms of $D_s$, the first three contributions combine to $ D_g + \Phi + \Psi = D_s - 2\Phi + \Psi$. The term ${\cal H}^{-1}\partial_r({\mathbf n}\cdot{\mathbf V})$ is the redshift space distortion. As we shall see in the next section, this is the largest single correction on intermediate scales. The second line comes from the redshift perturbation of the volume. It contains a Doppler term and the ordinary and integrated Sachs-Wolfe terms. The third line represents the radial and angular volume distortions. The second term in the integral on the third line is especially relevant on large scales; it is the lensing distortion. \section{The angular power spectrum of the galaxy density fluctuations} For fixed redshift, $\Delta({\mathbf n},z)$ is a function on the sphere and it is most natural to expand it in spherical harmonics. Let us do this with the result (\ref{Dez}): \begin{equation} \Delta({\mathbf n},z) =\sum_{\ell m}a_{\ell m}(z)Y_{\ell m}({\mathbf n}) , \quad C_\ell (z) = \langle |a_{\ell m}|^2 \rangle . \end{equation} The coefficients $a_{\ell m}(z)$ are given by \begin{equation} a_{\ell m}(z) = \int d\Omega_{{\mathbf n}}Y_{\ell m}^*({\mathbf n})\Delta({\mathbf n},z). \end{equation} The star indicates complex conjugation. The different terms in $\Delta({\mathbf n},z)$ are either a perturbation variable evaluated at the source position or an integral of a perturbation variable over the unperturbed photon trajectory. Let us first consider a contribution from a perturbation variable at the source position, e.g.\ $\Psi$. We want to relate the $C_\ell(z)$ spectra to the usual power spectrum $P_\Psi(k,t)$, which is defined by $$ \langle \Psi({\mathbf k},t)\Psi^*({\mathbf k}',t)\rangle =(2\pi)^3\delta({\mathbf k}-{\mathbf k}')P_\Psi(k,t) . $$ The delta function and the fact that $P_\Psi$ depends only on the modulus of ${\mathbf k}$, $k\equiv |{\mathbf k}|$, are a consequence of statistical homogeneity and isotropy.
Expressing $\Psi$ in terms of its Fourier transform, $$ \Psi({\mathbf x},t) = \frac{1}{(2\pi)^3}\int d^3k \Psi({\mathbf k},t)e^{-i({\mathbf k}\cdot{\mathbf x})} , $$ a short calculation (see e.g.~\cite{mybook}) gives \begin{equation} a^\Psi_{\ell m}(z_S) = \frac{i^{\ell}}{2\pi^2}\int d^3k j_{\ell}(kr_S)\Psi({\mathbf k},t_S)Y^*_{\ell m}(\hat{{\mathbf k}}) . \end{equation} Here $j_\ell$ is the spherical Bessel function of order $\ell$, see~\cite{AS}. Correspondingly, the contribution from an integral $\int_{0}^{r_S}f({\mathbf x}(\lambda),t(\lambda)) d\lambda$ becomes \begin{equation} a^{\int\! \! f}_{\ell m}(z_S) = \frac{i^{\ell}}{2\pi^2}\int_{0}^{r_S}\hspace{-2mm} d\lambda\int d^3k j_{\ell}(k\lambda)f({\mathbf k},t)Y^*_{\ell m}(\hat{{\mathbf k}}) . \end{equation} For a velocity term ${\mathbf V}\cdot{\mathbf n}$ we use that ${\mathbf V}({\mathbf k},t) = i\hat{\mathbf k} V$, so that $ {\mathbf V}\cdot{\mathbf n}\exp[i({\mathbf k}\cdot{\mathbf n})r] = V\partial_{kr}\exp[i({\mathbf k}\cdot{\mathbf n})r]$. With this one obtains \begin{equation} a^{{\mathbf V}{\mathbf n} }_{\ell m}(z_S) = \frac{i^{\ell}}{2\pi^2}\int d^3k j'_{\ell}(kr_S)V({\mathbf k},t)Y^*_{\ell m}(\hat{{\mathbf k}}) . \end{equation} The prime on $j_\ell$ denotes the derivative with respect to the argument. Finally, for the redshift space distortion, $\partial_r({\mathbf V}\cdot{\mathbf n}) = -{\mathbf n}\cdot{\bm{\nabla}}({\mathbf V}\cdot{\mathbf n})$, we have to use the above identity twice and arrive at \begin{equation} a^{\partial_r({\mathbf V}{\mathbf n}) }_{\ell m}(z_S) = \frac{i^{\ell}}{2\pi^2}\int d^3k\, k\, j''_{\ell}(kr_S) V({\mathbf k},t)Y^*_{\ell m}(\hat{{\mathbf k}}) . \end{equation} One can now write down the $C_\ell(z)$'s for one's theory of choice for the background and the perturbations, {\em e.g.} for modified gravity or a quintessence model. So far the derivation has been completely general. We have not used Einstein's equation. The only assumptions are that galaxies follow the distribution of matter, which consists of non-relativistic particles moving along geodesics, and that photons move along null geodesics. To proceed further, we have to be more specific. Here we just study the simplest model of purely {\em scalar adiabatic perturbations}, which have been generated at some early time in the past (e.g. inflation). If further modes, e.g.\ isocurvature modes, are present, the subsequent calculation has to be repeated for them. In the case of one adiabatic mode, all the perturbation variables are given by transfer functions from some initial random variable that we take to be the Bardeen potential $\Psi$. Hence \begin{eqnarray} \Psi({\mathbf k},t) &=& T_\Psi(k,t)\Psi_{\rm in}({\mathbf k}) \\ \Phi({\mathbf k},t) &=& T_\Phi(k,t)\Psi_{\rm in}({\mathbf k}) \\ D_g({\mathbf k},t) &=& T_D(k,t)\Psi_{\rm in}({\mathbf k}) \\ V({\mathbf k},t) &=& T_V(k,t)\Psi_{\rm in}({\mathbf k}) . \end{eqnarray} The transfer functions $T_{\small\bullet}$ depend on the matter content and the evolution history of the Universe and on the theory of gravity which relates matter and metric degrees of freedom. What is important for us is that they are deterministic functions and do not depend on the direction of ${\mathbf k}$. We characterize the initial power spectrum by a spectral index, $n$, and an amplitude, $A$, \begin{equation} k^3\langle \Psi_{\rm in}({\mathbf k})\Psi^*_{\rm in}({\mathbf k}')\rangle =(2\pi)^3\delta({\mathbf k}-{\mathbf k}')A(kt_O)^{n-1} . \end{equation} We have introduced the present time, $t_O$, in order to keep $A$ dimensionless.
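For a numerical implementation, the radial kernels entering these building blocks, $j_\ell$, $j_\ell'$ and $j_\ell''$, are readily evaluated with standard libraries. The following short Python sketch (the function names are ours and only serve illustration) uses SciPy and obtains the second derivative from the spherical Bessel differential equation:

\begin{verbatim}
import numpy as np
from scipy.special import spherical_jn

def j_ell(ell, x):
    # spherical Bessel function j_ell(x)
    return spherical_jn(ell, x)

def dj_ell(ell, x):
    # first derivative j_ell'(x) with respect to the argument
    return spherical_jn(ell, x, derivative=True)

def d2j_ell(ell, x):
    # second derivative from the spherical Bessel ODE (valid for x > 0):
    #   j'' = -(2/x) j' + (ell(ell+1)/x^2 - 1) j
    x = np.asarray(x, dtype=float)
    return ((-2.0 / x) * dj_ell(ell, x)
            + (ell * (ell + 1) / x**2 - 1.0) * j_ell(ell, x))
\end{verbatim}

These three kernels are all that is needed to assemble the contributions $a^\Psi_{\ell m}$, $a^{\int f}_{\ell m}$, $a^{{\mathbf V}{\mathbf n}}_{\ell m}$ and $a^{\partial_r({\mathbf V}{\mathbf n})}_{\ell m}$ above.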
From CMB observations we know that the amplitude is of the order of $A\sim 10^{-8}$. With these identifications we can now relate $C_\ell(z)$ to the initial power spectrum $A (kt_O)^{n-1}$. Inserting the above into expression (\ref{Dez}), a short calculation gives \begin{equation} C_\ell(z_S) = \frac{2A}{\pi}\int\frac{dk}{k}(kt_O)^{n-1} \left|F_\ell(k,z_S)\right|^2 \end{equation} with \onecolumngrid \begin{eqnarray} F_\ell(k,z_S) &=& j_\ell(kr_S)\!\left[\!T_D +\left(\!1\! +\!\frac{\dot {\cal H}}{{\cal H}^2} \!+ \!\frac{2}{r_S{\cal H}}\!\right)T_\Psi \!+ \! T_\Phi \!+ \! \frac{1}{{\cal H}}\dot T_\Phi \!\right] + j_\ell'(kr_S)\left(\!\frac{\dot {\cal H}}{{\cal H}^2} \! +\! \frac{2}{r_S{\cal H}} \right)T_V \!+\! \frac{k}{{\cal H}}T_V j_\ell''(kr_S) \nonumber \\ && \hspace{-1.2cm} + \frac{1}{r_S}\int_{0}^{r_S}j_\ell(k\lambda)\left(2 +\frac{r_S-\lambda}{\lambda}\ell(\ell+1)\right) (T_\Psi + T_\Phi)d\lambda + \left(\frac{\dot {\cal H}}{{\cal H}^2} + \frac{2}{r_S{\cal H}}\right) \int_{0}^{r_S}j_\ell(k\lambda)(\dot T_\Psi +\dot T_\Phi)d\lambda . \label{Flkz} \end{eqnarray} \twocolumngrid Here $r_S=t_O-t_S$ is the comoving distance to the source. We now evaluate and compare the amplitude of the different terms in a $\Lambda$CDM universe. Rather than performing a precise numerical evaluation, we estimate the terms by using approximations for the transfer functions. This will help us to gain insight into the importance of the different terms. A full numerical evaluation, which can be used to estimate cosmological parameters, is planned for future work. From the first order Einstein equations, neglecting anisotropic stresses from neutrinos, we can relate the transfer functions $T_D, T_V$ and $T_\Phi$ to $T_\Psi$. We find \begin{eqnarray} T_\Phi&=&T_\Psi \label{TPhi}\\ T_D&=& -\frac{2 a}{3\Omega_m}\left(\frac{k}{{\cal H}_0}\right)^2T_\Psi-3T_\Psi-3\frac{{\cal H}}{k}T_V\label{TD}\\ T_V&=&\frac{2 a}{3\Omega_m}\frac{k}{{\cal H}_0^2}\left({\cal H} T_\Psi+\dot{T}_\Psi \right) . \label{TV} \end{eqnarray} Using the notation of \cite{dod} (see also \cite{chris}), we decompose the transfer function $T_\Psi(k,t)$ into a growth factor $D_1(a)$ and a time-independent transfer function $T(k)$ such that \begin{equation}\label{TPsi} T_\Psi(k,t)=\frac{9}{10}\frac{D_1(a)}{a}T(k), \end{equation} and we use the CAMB code~\cite{CAMB} to compute $T(k)$. The amplitude of the power spectrum can be expressed as~\cite{dod} $A=\frac{50\pi^2}{9}\delta_H^2\left(\frac{\Omega_m}{D_1(a=1)} \right)^2$. We choose $\Omega_m=0.24$, $\Omega_\Lambda=0.76$ and $\sigma_8=0.75$, leading to $\delta_H=5.6\cdot 10^{-5}$. \subsection{The transversal power spectrum} \begin{figure}[ht] \centerline{\epsfig{figure=ctot.eps,height=5.3cm}} \centerline{\epsfig{figure=cratio.eps,height=5.3cm}} \caption{ \label{f:tot} Top panel: The transversal power spectrum at (from top to bottom) $z_S=0.1$, $z_S=0.5$, $z_S=1$ and $z_S=3$. \\ Bottom panel: The ratio between the new contributions (lensing+potential) and the total angular power spectrum at (from top to bottom) $z_S=3$, $z_S=1$, $z_S=0.5$ and $z_S=0.1$. Solid lines denote positive contributions whereas dashed lines denote negative contributions.} \end{figure} Let us first determine the $C_\ell$'s at fixed redshift. They provide the transversal power spectrum, i.e.\ correlations transverse to the line of sight. Of course, for the intrinsic density fluctuations these are not different from correlations in any other direction, but the observational effects on them are different.
For example, since we can only observe on the background lightcone, all the fluctuations we see on this sphere lie at the same look-back time; we cannot see fluctuations at a different radial distance from us. On the other hand, equal redshift does in general not imply equal look-back time, since both quantities are perturbed in different ways. \begin{figure}[!h] \centerline{\epsfig{figure=cz01.eps,height=5cm}} \centerline{\epsfig{figure=cz05.eps,height=5cm}} \centerline{\epsfig{figure=cz1_long.eps,height=5cm}} \centerline{\epsfig{figure=cz3.eps,height=5cm}} \caption{ \label{f:z01to3} The dominant terms at redshifts (from top to bottom) $z_S~=~0.1,~0.5,~1$ and $3$: density (red), redshift space distortion (green), the correlation of density with redshift space distortion (blue), lensing (magenta), Doppler (cyan), see Table~\ref{t:color}. The potential terms are too small to appear on our log-plot.} \end{figure} \begin{figure}[ht] \centerline{\epsfig{figure=czl20.eps,height=5cm}} \centerline{\epsfig{figure=cratiozl20.eps,height=5cm}} \caption{ \label{f:zl20} Top panel: The various terms as a function of $z_S$ for fixed value of $\ell=20$: density (red), redshift space distortion (green), the correlation of density with redshift space distortion (blue), lensing (magenta), Doppler (cyan) and potential (black), see Table~\ref{t:color}. Solid lines denote positive contributions whereas dashed lines denote negative contributions.\\ Bottom panel: The ratio between the new contributions (lensing+potential) and the total angular power spectrum as a function of $z_S$ for fixed value of $\ell=20$.} \end{figure} \begin{table} \begin{tabular}{|l|c|c|} \hline Density & $D$ & {\color{red}\bf red} \\ \hline redshift space & & \\ distortion & ${\cal H}^{-1}\partial_r({\mathbf V}\cdot{\mathbf n})$& {\color{green}\bf green} \\ \hline lensing & $\frac{-1}{r_S}\int_0^{r_S}\hspace{-1mm}d\lambda\frac{r_S-r}{r}\Delta_\Omega(\Phi+\Psi)$& {\color{magenta}\bf magenta}\\ \hline correlation & $2D{\cal H}^{-1}\partial_r({\mathbf V}\cdot{\mathbf n})$& {\color{blue}\bf blue} \\ \hline Doppler & $\left(\frac{\dot{{\cal H}}}{{\cal H}^2}+\frac{2}{r_S{\cal H}}\right){\mathbf V}\cdot{\mathbf n}$& {\color{cyan}\bf cyan}\\ \hline potential &$ \Psi - 2\Phi +\frac{2}{r_S}\int_0^{r_S}\hspace{-1mm}d\lambda(\Phi+\Psi)+ $ & \\ & $\left(\frac{\dot{{\cal H}}}{{\cal H}^2}+\frac{2}{r_S{\cal H}}\right)\left[\Psi+ \int_0^{r_S}\hspace{-1mm}d\lambda(\dot{\Phi}+\dot{\Psi})\right] $ & \\ & $ -\frac{2a}{\Omega_m}\left( \frac{{\cal H}}{{\cal H}_0}\right)^2\left(\Psi+\frac{\dot{\Phi}}{{\cal H}} \right)$ & {\bf black}\\ \hline \end{tabular} \caption{\label{t:color}The color coding of the different terms of Eq.~(\ref{Dez}) in the angular power spectrum of $\Delta({\mathbf n},z)$ as shown in Figs.~\ref{f:z01to3} to~\ref{f:z01to3l20} and \ref{f:win}, \ref{f:zwinl20}, \ref{f:winz01to3l20}. In addition to the term given in the second column, all its correlations with the terms in the lines above are also included. Only the most dominant correlation between density and redshift space distortion is shown separately in blue. In Figs.~\ref{f:z01to3l20} and \ref{f:winz01to3l20} the 'standard terms', i.e. the top three lines, are represented together as the blue line. } \end{table} In Fig.~\ref{f:tot} (top panel) we show the total transversal power spectrum at redshifts $z_S=0.1,~0.5,~1$ and $3$.
Note that the amplitude of the linear power spectrum from $z_S=0.1$ to $z_S=0.5$ is reduced by a factor 6 at $\ell \sim 100$ and by a factor 20 at $\ell\stackrel{<}{\sim} 10$. This comes from the following fact: the transversal power spectrum is dominated by the density fluctuation and the redshift space distortion, which are proportional to integrals of the form $$ \int \frac{dk}{k}\left(\frac{k}{{\cal H}_0} \right)^4T^2(k)j_\ell^2(kr_S) \,. $$ At $x=kr_S=\ell$, this term goes like $\ell^4$ and it is therefore expected to dominate at large $\ell$. However, since this integral would diverge for a constant transfer function, it is dominated by the maximum of the transfer function, which lies roughly at $k_{\rm eq}$. For $z\stackrel{>}{\sim} 0.5$ we have $k_{\rm eq}r_S \stackrel{>}{\sim} \ell$, so that $j_\ell^2(k_{\rm eq}r_S) \propto 1/(k_{\rm eq}r_S)^2$, which decreases like $1/r_S^2$. Already this simple observation tells us that the amplitude of the transversal power spectrum at different redshifts might offer a possibility to constrain $r_S(z)$ and the growth factor, which both depend on cosmological parameters in different ways. On the other hand, this is complicated by non-linear effects and biasing, which are not accounted for in this work. The different contributions to the power spectrum at different redshifts are shown in more detail in Fig.~\ref{f:z01to3}. For $z_S=1$, we show the spectrum up to $\ell=600$, while for the other redshifts we stop at $\ell = 100$, beyond which the structure does not change anymore. We denote by $D$ the density term in co-moving gauge, $$D = D_g + 3\Phi + 3\frac{{\cal H}}{k}V \,,$$ by $z$ the redshift space distortion, by $L$ the lensing term, by $V$ the Doppler terms and by $\Psi$ the gravitational potential terms (see Table~\ref{t:color} for a definition of each term). $C^{DD}_\ell$ represents for example the contribution from the density term alone and $C^{Dz}_\ell$ the correlation between the density and redshift space distortion. Apart from the correlation between the density and the redshift space distortion, which we show individually, we include each correlation in the smaller of the two contributions involved. Note that usually the correlations between lensing, Doppler and gravitational potential terms are negligible, and we neglect them except when explicitly specified. Therefore, when we plot for example the lensing term (magenta), it contains $C^{LL}_\ell+2C^{LD}_\ell+2C^{Lz}_\ell$. The formulae for the dominant $C_\ell$'s are given in Appendix~\ref{app:Cls}. \begin{figure}[!h] \centerline{\epsfig{figure=czz01tot_l20.eps,height=5cm}} \centerline{\epsfig{figure=czz05tot_l20.eps,height=5cm}} \centerline{\epsfig{figure=czz1tot_l20.eps,height=5cm}} \centerline{\epsfig{figure=czz3tot_l20.eps,height=5cm}} \caption{ \label{f:z01to3l20} Different terms of $C_\ell(z_S,z_{S'})$ at $\ell=20$ for redshifts (from top to bottom) $z_S=0.1,~0.5,~1$ and $3$, plotted as a function of $z_{S'}$: standard term, {\em i.e. } $C^{DD}_\ell+C^{zz}_\ell+2C^{Dz}_\ell$ (blue), lensing (magenta), Doppler (cyan), potential (black), see Table~\ref{t:color}.
Solid lines denote positive contributions whereas dashed lines denote negative contributions.} \end{figure} \begin{figure}[ht] \centerline{\epsfig{figure=windensz01.eps,height=5cm}} \centerline{\epsfig{figure=winredz01.eps,height=5cm}} \centerline{\epsfig{figure=winlensz01.eps,height=5cm}} \caption{ \label{f:effwind} The effect of a window function on the density contribution (top panel), redshift space distortion (middle panel) and lensing contribution (bottom panel). We have chosen $z_S=0.1$ and $\Delta z_S=0$ (no window, top curve, red), $\Delta z_S=0.002$ (middle curve, green) and $\Delta z_S=0.01$ (bottom curve, blue).} \vspace{-0.5cm} \end{figure} The lensing term scales like $\ell^4$ and is in principle of the same order as the density and redshift space distortion terms. However, it is given by an integral of the form (see Appendix~\ref{app:Cls}) $$\frac{\ell^2(\ell+1)^2}{r_S^2}\int\frac{dk}{k}T^2(k) \left[\int_{0}^{r_S}d\lambda \frac{r_S-r}{r}j_\ell(kr)\right]^2$$ which does converge when integrated over $k$. It is therefore dominated by wavenumbers $k\simeq\ell/r$. (We have used Limber's approximation~\cite{loverde} to evaluate this integral; we have tested the approximation numerically and found it to be of excellent accuracy.) The contribution of the lensing term becomes more important at larger source redshift for small $\ell$. But it always remains subdominant in the transversal power spectrum. In the bottom panel of Fig.~\ref{f:tot} we plot the ratio between the new contributions, i.e.\ the lensing plus potential terms, and the total angular power spectrum. We see that neglecting the new contributions for $z_S\le 1$ represents an error of no more than 0.1 percent, whereas for $z_S=3$ the error amounts to a few percent. Note that we do not include the Doppler terms in the new contributions since they appear already in the original Kaiser formula \cite{kaiser} (even though there the term from the expansion, $\propto \dot{\cal H}/{\cal H}^2$, which is of the same order for redshifts $z\ge 1$, is not considered). In Fig.~\ref{f:zl20} (top panel) we depict the redshift dependence of all the terms for a fixed value of $\ell=20$. The lensing and potential terms are both negative at small redshift and become positive at large redshift. This is due to the fact that at small redshift the dominant contribution comes from their correlation with the density, which is negative, whereas at large redshift the dominant contribution is their auto-correlation, $C_\ell^{LL}$, respectively $C_\ell^{\Psi\Psi}$. The bottom panel of Fig.~\ref{f:zl20} shows the ratio between the new contributions and the total angular power spectrum. The error induced by neglecting the new terms increases with redshift and reaches a few percent at high redshift. \subsection{The radial power spectrum} \begin{figure}[H] \centerline{\epsfig{figure=cwins001z01.eps,height=5cm}} \centerline{\epsfig{figure=cwins005z05.eps,height=5cm}} \centerline{\epsfig{figure=cwins01z1.eps,height=5cm}} \centerline{\epsfig{figure=cwins03z3.eps,height=5cm}} \caption{ \label{f:win} The effect of a window function with width $\Delta z_S=0.1z_S$ on the power spectrum $C_\ell(z_S)$ for redshifts (from top to bottom) $z_S=0.1,~0.5,~1$ and $3$. The different curves are: density (red), redshift space distortion (green), the correlation of density with redshift space distortion (blue), lensing (magenta), Doppler (cyan) and gravitational potential (black), see Table~\ref{t:color}.
Solid lines denote positive contributions whereas dashed lines denote negative contributions.} \end{figure} The results above give us the transversal power spectrum at fixed redshift. But of course there is also a radial power spectrum, which correlates fluctuations at different distances from us. This encodes different information and it is important to study them both. From the fact that the transfer function is not direction dependent, we infer that \begin{equation} \langle a_{\ell m}(z_S)a_{\ell' m'}(z_{S'})\rangle = \delta_{\ell,\ell'}\delta_{m,m'}C_\ell(z_S,z_{S'}) \,. \end{equation} Hence the radial power spectrum is given by \begin{equation} C_\ell(z_S,z_{S'}) = \frac{2A}{\pi}\int\frac{dk}{k}(kt_O)^{n-1} F_\ell(k,z_S)F^*_\ell(k,z_{S'}) \,. \end{equation} Here an interesting new phenomenon occurs: since we evaluate $F_\ell(k,z_S)$ at different redshifts, we also evaluate the Bessel functions $j_\ell(kr_S)$ at different distances $r_S$. This leads to a suppression of the result due to oscillations if the region in $k$-space where the integrand dominates has $kr_S> \ell$. As we discussed above, this is the case for the $k^2$--term of the density fluctuations and for the redshift space distortion, the terms which dominate the transversal power spectrum. These terms are therefore substantially suppressed in the radial power spectrum. All other terms have convergent integrals of $j^2_\ell(kr_S)$ already when neglecting the turnover of the transfer function; hence they are suppressed by powers of $\ell$ with respect to the lensing term. Therefore the lensing term dominates the radial power spectrum at low $\ell$. This is precisely what one sees in Fig.~\ref{f:z01to3l20}, where the lensing term (magenta) dominates for $z_{S'}$ significantly larger than $z_S$. As in Fig.~\ref{f:zl20}, at small redshifts $z_S=0.1$ and $z_S=0.5$ the density-lensing correlation dominates (and is negative), whereas at large redshift $z_S=3$ the lensing-lensing term dominates. It is interesting to note how constant the lensing term remains, while the density term and the redshift space distortion decay very rapidly with growing redshift difference. At $z_S=1$ the lensing-lensing term and its correlation with the density are of the same order of magnitude, which explains the change of sign as $z_{S'}$ increases. Finally, at $z_S=0.1$ the Doppler term dominates over the standard term for some very specific values of $z_{S'}$. The first of them is the zero of the real-space correlation function, which at redshift $z_S=0.1$ corresponds to $\delta z = 0.011$. \begin{figure}[!h] \centerline{\epsfig{figure=ctotwin.eps,height=5cm}} \centerline{\epsfig{figure=cratiowin.eps,height=5cm}} \caption{ \label{f:totwin} Top panel: The total power spectrum at redshifts (from top to bottom) $z_S=0.1$, $z_S=0.5$, $z_S=1$ and $z_S=3$ smeared by a window function with width $\Delta z_S =0.1z_S$.\\ Bottom panel: The ratio between the new contributions (lensing+potential) and the total angular power spectrum at (from top to bottom) $z_S=3$, $z_S=1$, $z_S=0.5$ and $z_S=0.1$.
Solid lines denote positive contributions whereas dashed lines denote negative contributions.} \end{figure} \begin{figure}[!h] \centerline{\epsfig{figure=cwins01zl20.eps,height=5cm}} \centerline{\epsfig{figure=cratiowinzl20.eps,height=5cm}} \caption{ \label{f:zwinl20} Top panel: The various terms as a function of $z_S$ for fixed value of $\ell=20$ and smeared by a window function with width $\Delta z_S =0.1z_S$: density (red), redshift space distortion (green), the correlation of density with redshift space distortion (blue), lensing (magenta), Doppler (cyan) and potential (black), see Table~\ref{t:color}. Here the correlations between the lensing and Doppler and the lensing and potential terms are not negligible at large $z_S$, and they are included in the Doppler (cyan), respectively potential (black) curves. Solid lines denote positive contributions, dashed lines denote negative contributions.\\ Bottom panel: The ratio between the new contributions (lensing+potential) and the total angular power spectrum, smeared by a window function with width $\Delta z_S =0.1z_S$, plotted as a function of $z_S$ for fixed value of $\ell=20$.} \end{figure} An alternative way to measure radial correlations is to introduce a window function $W(z,z')$, which corresponds to a smearing of fluctuations on scales smaller than some width $\Delta z_S$. We use a Gaussian window around some mean redshift $z_S$ with width $\Delta z_S$. This suppresses power which comes from values of $k$ with $k\Delta r_S>\ell$, where $\Delta r_S = r(z_S+\Delta z_S)-r(z_S)$. This is also a more realistic case, since we can measure the galaxy distribution only in redshift bins of some finite width. Already a small width substantially affects the resulting spectrum of the density (Fig.~\ref{f:effwind}, top panel) and of the redshift space distortion (middle panel). As expected, the lensing term is insensitive to this smearing (bottom panel). In Fig.~\ref{f:win} we show the effect of a 10\% window on the different terms at different redshifts. As before, the terms which we indicate by 'lensing term', 'Doppler term' and 'gravitational potential contributions' in the figure are not only the corresponding terms themselves but also their correlations with all other terms. If the latter dominate, such a contribution can become negative. For example, the lensing contribution for $z_S=1$ changes sign at $\ell=28$. For $\ell >28$ it is dominated by negative correlations with the density, while for $\ell<28$ the positive autocorrelation dominates. Since the power from scales smaller than $\Delta r_S$ is removed, the power at $\ell$ truly corresponds to that at $k = \ell/r(z)$ in the power spectrum. The 'wiggles' in the density and in the velocity terms for $z_S=0.1$ are the baryon acoustic oscillations (BAOs), the first of which appears at $\ell \simeq 15$. They are also visible in the anti-correlation of the lensing term with the density for $z_S=0.5$ and $z_S=1$, but these terms are probably too small to be detected in real data. In Fig.~\ref{f:totwin} (top panel) we show the total $C_\ell(z_S)$'s smeared with a 10\% window function. Comparing it with Fig.~\ref{f:tot}, we mainly notice that the power is reduced significantly, by nearly 1.5 orders of magnitude. Furthermore, at $z_S=0.1$, the BAOs are clearly visible. In the presence of a window, different terms can dominate at different redshifts and for different values of $\ell$.
In the bottom panel of Fig.~\ref{f:totwin}, we depict the ratio between the new contributions and the total angular power spectrum. Neglecting the new contributions induces an error of a few percent already at redshift 1, and this error increases to roughly 50 percent at redshift 3. Note that this ratio depends strongly on the width of the window function, and that a wider window would lead to a larger error. In Fig.~\ref{f:zwinl20} (top panel) we plot the different terms as a function of redshift, for a fixed value of $\ell=20$. Contrary to Fig.~\ref{f:zl20}, where the lensing term always remains subdominant with respect to the density and redshift space distortion terms, we see in Fig.~\ref{f:zwinl20} that for $z_S>2.4$ the lensing term dominates over the standard contribution. The redshift at which this dominance takes place depends of course on the chosen window function: for larger $\Delta z_S$, the lensing term starts to dominate at smaller redshift. In the bottom panel of Fig.~\ref{f:zwinl20} we show the ratio between the new contributions and the total angular power spectrum as a function of redshift for $\ell=20$. From this figure we understand why in Fig.~\ref{f:totwin} (bottom panel) the ratio at $z_S=1$ is not significantly larger than at $z_S=0.5$: the lensing contribution changes sign around $z_S=0.9$ and is consequently still small at $z_S=1$. At a redshift of $z_S=1.5$, however, the error induced by neglecting the new terms is already of the order of 10 percent. In Fig.~\ref{f:winz01to3l20} we show correlations between different redshift bins (with a $10 \%$ window function), for a fixed value of $\ell=20$. As in Fig.~\ref{f:z01to3l20}, we see that the lensing term becomes dominant when the redshift separation between the bins increases. At large redshift, $z_S=3$, the lensing term is always dominant. The individual behaviour of each contribution is, however, quite different from Fig.~\ref{f:z01to3l20}, which is due to the smearing introduced by the window function. Note that comparing the second panel in Fig.~\ref{f:winz01to3l20} with the results in~\cite{Thomas}, we see that the redshift separation between their 4 different bins (their Fig.~13) is too small for the lensing contribution to be relevant. However, a similar measurement with one of the bins situated around $z_S=0.7$ would already allow the detection of the lensing contribution. Finally, we plot in Fig.~\ref{f:cintz} the angular power spectrum integrated from the observer up to a maximum redshift $z_{\max}$. This corresponds to the situation where the redshifts of individual galaxies are unknown but obey a given redshift distribution. Consequently, only the integrated spectrum can be measured. We assume a flat distribution of galaxies between $z_{\min}=0.1$ and $z_{\max}=2$ with Gaussian tails at both ends. In Fig.~\ref{f:cintz} we see that the only relevant contributions are the density and the lensing, more precisely the cross-correlation between the lensing and the density, which is negative. The redshift space distortion contribution is, as expected, completely negligible when the galaxy redshifts are unknown. The lensing contribution, however, is very relevant; it reduces the result by roughly $40 \%$ of the contribution from the density alone.
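The redshift binning used throughout this section amounts to smearing the radial spectrum with a window, $C_\ell(z_S,\Delta z_S)=\int dz\, dz'\, W(z,z')\, C_\ell(z,z')$. A minimal numerical sketch of this smearing in Python, assuming a separable Gaussian window $W(z,z')=W(z)W(z')$ and a precomputed matrix of $C_\ell(z,z')$ values on a redshift grid (all names are ours):

\begin{verbatim}
import numpy as np

def gaussian_window(z_grid, z_mean, dz):
    # normalized Gaussian window W(z) with mean z_mean and width dz
    w = np.exp(-0.5 * ((z_grid - z_mean) / dz) ** 2)
    return w / np.trapz(w, z_grid)

def smeared_cl(cl_zz, z_grid, z_mean, dz):
    # C_l(z_S, Dz_S) = int dz dz' W(z) W(z') C_l(z, z'),
    # with cl_zz[i, j] = C_l(z_i, z_j) precomputed on z_grid
    w = gaussian_window(z_grid, z_mean, dz)
    inner = np.trapz(cl_zz * w[None, :], z_grid, axis=1)
    return np.trapz(w * inner, z_grid)
\end{verbatim}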
\begin{figure}[!h] \centerline{\epsfig{figure=cwinzz01tot_l20.eps,height=4.9cm}} \centerline{\epsfig{figure=cwinzz05tot_l20.eps,height=4.9cm}} \centerline{\epsfig{figure=cwinzz1tot_l20.eps,height=4.9cm}} \centerline{\epsfig{figure=cwinzz3tot_l20.eps,height=4.9cm}} \caption{ \label{f:winz01to3l20} Cross-correlations between different redshift bins $C_\ell(z_S,z_{S'})$ at $\ell=20$ with a $10\%$ window function, plotted as a function of $z_{S'}$. From top to bottom $z_S=0.1,~0.5,~1$ and $3$. Standard term, {\em i.e. } $C^{DD}_\ell+C^{zz}_\ell+2C^{Dz}_\ell$ (blue), lensing (magenta), Doppler (cyan), potential (black), see Table~\ref{t:color}. Solid lines denote positive contributions whereas dashed lines denote negative contributions.} \end{figure} \begin{figure}[!ht] \centerline{\epsfig{figure=cintz.eps,height=5cm}} \caption{ \label{f:cintz} Integrated power spectrum with a flat distribution between $z_{\min}=0.1$ and $z_{\max}=2$ with Gaussian tails at both ends. The density term is plotted in red and the lensing term in magenta. Note that the lensing term is completely dominated by its anti-correlation with the density and hence is negative. } \end{figure} \section{Conclusions} In this paper we have derived expressions for the transversal and radial galaxy power spectra, $C_\ell(z_S)$ and $C_\ell(z_S,z_{S'})$, taking into account not only redshift space distortions, which have also been studied e.g. in~\cite{Thomas}, but also all other relativistic effects to first order in perturbation theory. Within our accuracy we are in reasonable agreement with the simulated results of Ref.~\cite{Thomas} (their Fig.~4), which analyzes the SDSS data taking into account redshift space distortion but not the other terms, {\em e.g.} the lensing, appearing in our formula (\ref{Dez}). They also take into account non-linearities in the matter power spectrum by using halofit~\cite{CAMB}. This enhances their results with respect to ours. We have seen that by measuring $C_\ell(z_S,z_{S'})$ for different redshift differences and different $\ell$'s we can measure different combinations of terms which depend on cosmological parameters in a variety of ways. Alternatively, one may measure the $C_\ell$'s smeared over a given redshift bin, $\Delta z_S$, $$ C_\ell(z_S,\Delta z_S) = \int dz dz' W(z,z')C_\ell(z,z')$$ where $W$ is a window function centred at $z_S$ with width $\Delta z_S$. Without smearing, the density contribution and the redshift space distortion always dominate. When smearing is included, these terms are reduced and the lensing term can dominate. The method outlined in this paper represents a very flexible new path to estimate cosmological parameters and to test the consistency of the concordance model of cosmology. Of course, to do this we must master possible degeneracies not only with biasing but also with evolutionary effects, which have not been discussed in this work and which may become relevant at redshifts larger than 1; see~\cite{AA} for a discussion. A detailed parameter estimation forecast, e.g.\ for Euclid, is left as a future project. \FloatBarrier \acknowledgments{We thank Anthony Lewis and Anthony Challinor, who were coincidentally working on a very similar project~\cite{AA}. We compared our results with theirs and, although the derivation is different, the analytical results agree completely. Anthony Lewis also shared their numerical results with us. This comparison helped us considerably, and we also agree on the numerical part.
We acknowledge useful discussions with Francis Bernardeau, Chiara Caprini, Chris Clarkson, Martin Kunz, Roy Maartens and Francesco Montanari. We thank the referee for useful suggestions. RD is supported by the Swiss National Science Foundation. CB is supported by a Herchel Smith Postdoctoral Fellowship and by King's College Cambridge.}
\section{A ray-optics theory of the Eaton lens} A detailed description of the ray-optics theory of the Eaton lens can be found in Reference \cite{leonhardt2}. We briefly repeat it here for the reader's convenience. In geometric optics, there are two different but equivalent ways to describe the trajectory of a light ray. The first one is the Newtonian Euler-Lagrange equation \begin{equation} \frac{d^{2}\mathbf{r}}{d\xi^{2}}=\frac{\nabla n^{2}(\mathbf{r})}{2}, \label{ge1} \end{equation} where $n$ is the refractive index and the parameter $\xi$ is given by $d\xi=dr/n$. We can interpret the above equation by using Newton's law, $m\mathbf{a}=-\nabla U$, for a mechanical particle with unit mass moving in ``time'' $\xi$ under the influence of the potential $U=-n^{2}/2+E$, with $E$ being an arbitrary constant. The second way is based on Hamilton's equations \begin{equation} \frac{d\mathbf{r}}{dt}=\frac{c}{n}\frac{\mathbf{k}}{k}, \:\:\frac{d\mathbf{k}}{dt}=\frac{ck}{n^{2}}\nabla n(\mathbf{r}), \label{ge2} \end{equation} with $\mathbf{k}$ being the wave vector and $c$ being the speed of light in free space. Notice that by treating the frequency $\omega=ck/n$ as the Hamiltonian, the above equations resemble the standard form of Hamilton's equations. We can define an angular momentum as \begin{equation} \mathbf{L}=\mathbf{r}\times\frac{d\mathbf{r}}{d\xi}=\frac{n}{k}\mathbf{r}\times\mathbf{k}, \label{ge3} \end{equation} which leads to \begin{equation} \frac{d\mathbf{L}}{d\xi}=\mathbf{r}\times\frac{d^{2}\mathbf{r}}{d\xi^{2}}=\frac{1}{2}\mathbf{r}\times\nabla n^{2}=\frac{dn^{2}}{dr}\frac{\mathbf{r}\times\mathbf{r}}{2r}=0, \label{ge4} \end{equation} when the refractive-index profile $n(r)$ is spherically symmetric. The above equation shows that the angular momentum $\mathbf{L}$ is conserved. Hence, light rays that initially propagate in the $xy$ plane will always stay in that plane. This fact implies that a two-dimensional Eaton lens with the same refractive-index profile $n(r)$ functions identically to the three-dimensional one. To solve the two-dimensional Newtonian Euler-Lagrange equation, it is convenient to introduce the complex number $z=x+iy$ and to reformulate the equation as \begin{equation} \frac{d^{2}z}{d\xi^{2}}=\frac{z}{2r}\frac{dn^{2}}{dr}=-\frac{1}{r^{3}}z=-\frac{1}{|z|^{3}}z, \label{ge5} \end{equation} by substituting the refractive index of the Eaton lens, $ n(r)=\sqrt{(2-r)/r}$. The solution, following Equations (6.13) and (6.14) of Reference \cite{leonhardt2}, can be expressed as \begin{equation} z=e^{i\alpha}\left[\cos(2\xi')+i\sin\gamma\sin(2\xi')+\cos\gamma\right],\:\: d\xi=2|z|d\xi', \end{equation} which describes displaced ellipses rotated by the angle $\alpha$. \section{Maxwell-Garnett Formula} Consider a two-component mixture composed of inclusions embedded in an otherwise homogeneous matrix, where $\epsilon_{m}$ and $\epsilon_{d}$ are their respective dielectric functions. The average electric field $\langle\mathbf{E}\rangle$ over one unit area surrounding the point $\mathbf{x}$ is defined as \begin{equation} \langle\mathbf{E}(\mathbf{x})\rangle=\frac{1}{A}\int_{A}\mathbf{E}(\mathbf{x}')d\mathbf{x}'=f\langle\mathbf{E}_{m}(\mathbf{x})\rangle+(1-f)\langle\mathbf{E}_{d}(\mathbf{x})\rangle, \end{equation} with $f$ being the volume fraction of the inclusions. A similar expression can be obtained for the average polarization \begin{equation} \langle\mathbf{P}(\mathbf{x})\rangle=f\langle\mathbf{P}_{m}(\mathbf{x})\rangle+(1-f)\langle\mathbf{P}_{d}(\mathbf{x})\rangle.
\end{equation} We further assume that the following constitutive relations are valid \begin{equation} \langle\mathbf{P}_{m}(\mathbf{x})\rangle=\epsilon_{0}(\epsilon_{m}-1)\langle\mathbf{E}_{m}(\mathbf{x})\rangle, \:\:\:\langle\mathbf{P}_{d}(\mathbf{x})\rangle=\epsilon_{0}(\epsilon_{d}-1)\langle\mathbf{E}_{d}(\mathbf{x})\rangle, \end{equation} and the average permittivity tensor of the composite medium is defined by \begin{equation} \langle\mathbf{P}(\mathbf{x})\rangle=\epsilon_{0}(\overline{\epsilon}_{e}-\mathbf{\overline{I}})\cdot\langle\mathbf{E}(\mathbf{x})\rangle. \end{equation} Combining the above equations we can obtain the effective permittivity $\overline{\epsilon}_{e}$. Clearly, the resultant $\overline{\epsilon}_{e}$ depends on the relationship between $\langle\mathbf{E}_{m}(\mathbf{x})\rangle$ and $\langle\mathbf{E}_{d}(\mathbf{x})\rangle$ \cite{bohren}. We now assume that the inclusion has the shape of a cylinder whose radius is far smaller than the wavelength, so that its optical properties can be well described by the electrostatic equation \begin{equation} \nabla\cdot(\epsilon(\mathbf{r})\nabla\phi)=0. \end{equation} By matching the boundary conditions we can prove that $\phi_{m}/\phi_{0}=2\epsilon_{d}/(\epsilon_{d}+\epsilon_{m})$, where $\phi_{m}$ is the total potential inside the cylinder when the external electric field $-\nabla\phi_{0}$ is homogeneous. This relation is further used to obtain the electric field \cite{yong2}. It is finally found that the average permittivity is a scalar and can be expressed as \begin{equation} \epsilon_{e}=\epsilon_{d}\frac{(1-f)\epsilon_{d}+(1+f)\epsilon_{m}}{(1-f)\epsilon_{m}+(1+f)\epsilon_{d}}, \end{equation} consistent with the Maxwell-Garnett dielectric function. Equivalently, we can express the filling fraction as \begin{equation} f(r)=\frac{(\epsilon_{e}-\epsilon_{d})(\epsilon_{m}+\epsilon_{d})}{(\epsilon_{e}+\epsilon_{d})(\epsilon_{m}-\epsilon_{d})}. \end{equation}
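The two formulas above are algebraic inverses of each other, which is easy to verify numerically. The following minimal Python sketch (the function names and the material parameters $\epsilon_{m}$, $\epsilon_{d}$ are ours, chosen purely for illustration) computes the filling fraction needed to realize the Eaton-lens profile $\epsilon_{e}(r)=n^{2}(r)=(2-r)/r$ of the previous section:

\begin{verbatim}
import numpy as np

def maxwell_garnett(f, eps_m, eps_d):
    # effective permittivity of cylindrical inclusions (volume fraction f)
    # embedded in a host medium of permittivity eps_d
    return (eps_d * ((1 - f) * eps_d + (1 + f) * eps_m)
            / ((1 - f) * eps_m + (1 + f) * eps_d))

def filling_fraction(eps_e, eps_m, eps_d):
    # inverse of the Maxwell-Garnett formula, cf. the last equation above
    return (((eps_e - eps_d) * (eps_m + eps_d))
            / ((eps_e + eps_d) * (eps_m - eps_d)))

r = np.linspace(0.2, 1.0, 5)      # radii inside the lens
eps_e = (2.0 - r) / r             # Eaton profile eps_e = n^2
eps_m, eps_d = 10.0, 1.0          # illustrative permittivities
f = filling_fraction(eps_e, eps_m, eps_d)
assert np.allclose(maxwell_garnett(f, eps_m, eps_d), eps_e)
\end{verbatim}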
\section{Introduction} The \emph{Multiobjective Spanner (MSp)} problem is a multiobjective generalization of the \emph{Minimum t-Spanner} problem. Given a connected, simple graph $G=(V,E)$ where every edge has a cost and length of $1$, a subset of edges $S$ is a \emph{t-spanner} of $G$ if for every pair of vertices $u,v \in V$, $\frac{d^S(u,v)}{d^E(u,v)} \leq t$ holds, with $d^S(u,v)$ being the distance from $u$ to $v$ in $S$ and $d^E(u,v)$ their respective distance in $E$ \cite{peleg1989}. For a given graph, the problem of finding the cheapest t-Spanner, with regard to the sum over all edge-costs, is commonly known as the Minimum t-Spanner problem. We refer to $t$ as the \emph{stretch factor}. The MSp generalizes the Minimum t-Spanner problem by introducing two edge-weight functions, allowing us to assign each edge a cost independent of its length. Furthermore, in contrast to the Minimum t-Spanner problem, the goal of the MSp is not to find a minimum weight spanner for a given stretch factor. Instead, the stretch factor is another objective we aim to minimize. The stretch factor is an interesting objective function in itself that is to be minimized in, e.g., the \emph{Minimum Max-Stretch Spanning Tree (MMST)} problem \cite{corneil1995}. Feasible solutions for the MSp are defined less restrictive than t-Spanners. We define a \emph{spanner} of a connected, undirected graph $G=(V,E)$, as a subset of edges $S \subseteq E$, such that $G'=(V,S)$ is a connected subgraph. Since the MSp is a \emph{Multiobjective Combinatorial Optimization (MOCO)} problem, solutions are mapped to a \emph{value vector} instead of a single value. Due to conflicts among objectives, there does not necessarily exist a solution that achieves the best value in all objective functions simultaneously. Instead, we look for value vectors for which there are no other value vectors that dominate them. This set of value vectors is called \emph{non-dominated set} (or Pareto-front) $\mathcal{Y}_N$. \begin{definition}[Multiobjective Spanner (MSp) Problem] The input is a connected, undirected graph $G=(V,E)$ and edge-weight functions $c_1\colon E \rightarrow \mathbb{Z}$ and $c_2\colon E \rightarrow \mathbb{N}_+$. Feasible solutions are spanners $S$ of $G$ and are assessed based on the two objective functions \[ f_1(S)= \sum_{e \in S} c_1(e) \text{ and } f_2(S) = \max_{u,v \in V} \frac{d^S_{c_2}(u,v)}{d^E_{c_2}(u,v)},\] with $d^S_{c_2}(u,v)$ and $d^E_{c_2}(u,v)$ being the length of the shortest u-v-path in $S$ and $E$ respectively, regarding the $c_2$-length. We consider an instance of the MSp to be solved if we output its non-dominated set $\mathcal{Y}_N$. \end{definition} A possible field of application for the MSp is the planning of emergency infrastructure. Natural disasters require a significant logistical effort to provide relief to victims and to distribute equipment and humanitarian goods. The efficient design of emergency infrastructure therefore forms the basis for initial responses, as well as for long-term measures taken to stabilise affected communities. Many optimization models used in emergency logistics only focus on either cost-effectiveness or responsiveness \cite{chen2020, boonmee2017, ozdamar2004}, as multiobjective approaches are considered too computationally expensive to solve. A recent literature review by Caunhye et al. 
\cite{caunhye2012} concluded that these singleobjective models may hamper relief services by causing an oversupply of resources, leading to difficulty with coordination, greater traffic, and complex scheduling. Therefore, the MSp could help to address this shortcoming. The concept of t-Spanners and their related problems were first introduced by Peleg, Schäffer and Ullman in the context of synchronization in distributed systems and communication networks \cite{peleg1989, Ullman1989} and have since been explored in a variety of publications. A greedy algorithm with one of the best cost-guarantees was developed by Althöfer \cite{althofer1993}. For a graph $G$ and a stretch factor of $t=2k-1$ $(k \in \mathbb{N}_{\geq 1})$, it creates a t-Spanner $S$ of $G$ containing $\mathcal{O}(n^{1+1/k})$ edges in time $\mathcal{O}(m(n^{1+1/k}+n \log n))$ (a minimal sketch of this greedy construction is given below). This algorithm can even be applied to the \emph{Weighted Minimum t-Spanner} problem, where every edge is assigned an arbitrary positive cost. Then the algorithm additionally guarantees that $S$ has a cost of at most $\mathcal{O}(n/k)$ times the cost of the minimum spanning tree of $G$. For undirected Minimum 2-Spanners, Kortsarz and Peleg \cite{kortsarz1994} published an $\mathcal{O}(\log(m/n))$-approximation with a theoretical running time of $\mathcal{O}(m^2 n^2 \log(n^2 /m))$. Baswana and Sen \cite{baswana2007} give a method that, for a weighted, undirected graph, computes a $t=2k-1$ spanner $S$ that contains at most $\mathcal{O}(kn^{1+1/k})$ edges in an expected running time of $\mathcal{O}(km)$, but with no cost guarantee. For unweighted graphs, the size of $S$ is bounded by $\mathcal{O}(n^{1+1/k}+kn)$. Cai and Keil \cite{CaiKeil1994} focused on the complexity of the Minimum t-Spanner problem for degree bounded graphs and showed, among other results, that if the maximum degree of the graph is at most $4$, the Minimum 2-Spanner problem can be solved in linear time, whereas the problem is \textbf{NP}-hard even if the maximum degree is at most $9$. A recent paper by Kobayashi \cite{kobayashi2018} focuses on the complexity of the Minimum t-Spanner problem in planar graphs and, as a byproduct, improves the degree bounds for \textbf{NP}-hardness found by Cai and Keil. As many decisions require the consideration of multiple goals and conflicting demands, MOCO problems are an important modelling tool in a variety of fields. Practical applications include routing problems in public transport \cite{delling2015, wagner2017}, the planning of radiotherapy \cite{hamacher1999, thieke2007, giantsoudi2013} and the determination of control strategies for vaccine administration in COVID-19 pandemic treatment \cite{libotte2020}. In the multiobjective context, a problem is called \emph{intractable} if there is no algorithm capable of solving it in polynomial time \cite{ehrgott2005}. Due to the exponential size of their non-dominated sets, many interesting MOCO problems are intractable, e.g., multiobjective variants of the Traveling Salesperson \cite{emelichev1992}, Shortest Path \cite{hansen1980} or Spanning Tree \cite{hamacher1994} problem. Therefore, it makes sense to consider a complexity class that distinguishes between problems that cannot be solved in polynomial time due to the size of their output and the ones that are genuinely hard to solve. Moreover, in experimental studies, the observed non-dominated sets are much smaller than this worst case (e.g., \cite{BC20}).
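To make the greedy construction of \cite{althofer1993} mentioned above concrete, the following Python sketch implements the classic greedy t-spanner (the naming is ours, and shortest paths are recomputed from scratch for clarity; this is not the $\mathcal{O}(m(n^{1+1/k}+n \log n))$ implementation):

\begin{verbatim}
import heapq

def dijkstra(adj, source):
    # single-source shortest paths; adj maps v -> list of (w, weight)
    dist = {v: float("inf") for v in adj}
    dist[source] = 0.0
    queue = [(0.0, source)]
    while queue:
        d, u = heapq.heappop(queue)
        if d > dist[u]:
            continue
        for w, c in adj[u]:
            if d + c < dist[w]:
                dist[w] = d + c
                heapq.heappush(queue, (dist[w], w))
    return dist

def greedy_spanner(vertices, edges, t):
    # scan edges (u, v, c) by nondecreasing weight; keep an edge only if
    # the spanner built so far violates the stretch factor t for it
    adj = {v: [] for v in vertices}
    spanner = []
    for u, v, c in sorted(edges, key=lambda e: e[2]):
        if dijkstra(adj, u)[v] > t * c:
            spanner.append((u, v, c))
            adj[u].append((v, c))
            adj[v].append((u, c))
    return spanner
\end{verbatim}

After the scan, every edge $\{u,v\}$ of weight $c$ satisfies $d^S(u,v) \leq t \cdot c$, which already bounds the stretch of all vertex pairs (cf.\ the argument of \Cref{lemma:check_t-spanner} below).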
There is also a theoretical reason for these small non-dominated sets: in a smoothed analysis setting, Brunsch and Röglin showed that the size of the non-dominated set is at most polynomial in the input size for each fixed number of objectives \cite{BR15}. We say a MOCO problem $O$ is solvable in \emph{output-polynomial time} if there is an algorithm that, for any given instance $I$ of $O$, outputs every $y \in \mathcal{Y}_N$ exactly once, in time polynomial in the size of the input $I$ and the output $\mathcal{Y}_N$ \cite{johnson1988}. Such an algorithm is called \emph{output-polynomial}. We denote the class of problems for which an output-polynomial algorithm exists as \textbf{OP}. An interesting subset of the non-dominated set is the set of extreme points $\mathcal{Y}_X$. Since making decisions based on a potentially exponentially sized non-dominated set is generally not practical, many MOCO problems are approached by combining all the objective functions into one singleobjective scalar (or preference) function. One method to accomplish this is called \emph{weighted sum scalarization (WSS)}, where each objective function is weighted according to its importance. Note that, in general, not all non-dominated points can be found in this way. The extreme points of a MOCO problem instance are exactly the value vectors that can arise as optimal solutions of a WSS of the instance. Therefore, if every decision maker has a linear preference function, computing the extreme points suffices. Note, however, that determining weights which accurately reflect the preferences of the decision makers is not trivial. Since every extreme point is non-dominated, while not every non-dominated value vector is an extreme point, solving the MSp could be hard while computing only the set of extreme points could be in \textbf{OP}. This is the case for, e.g., \emph{Multiobjective Shortest Path} \cite{Bokler2015, Bokler2017}. For more information on MOCOs and related topics, cf. the book by M. Ehrgott \cite{ehrgott2005}. \vspace{-3pt} \subsubsection{Contribution and Organisation.} In the remainder of this paper, we first give some definitions and establish basic concepts and results in \Cref{section:Preliminaries}. In \Cref{section:Intractability}, we study the classic tractability of MSp. \begin{theorem}\label{theorem:MSp_intractable} MSp is intractable even on degree-3 bounded outerplanar graphs. \end{theorem} This is an interesting result, as there are non-trivial stretch factors for which the Minimum t-Spanner problem is solvable in linear time under such restrictions. In \Cref{section:Non-Dominated Set}, we first consider the output-sensitive complexity of computing the non-dominated set for unweighted instances of the MSp, where each edge has a cost and length of $1$. \begin{theorem}\label{theorem:unweighted_MSp_OP} If \textbf{P} $\neq$ \textbf{NP}, then MSp $\notin$ \textbf{OP}, even for unweighted instances. \end{theorem} Afterwards, we consider the \emph{BUCO} problem, which can be interpreted as an unrestricted version of the Knapsack problem, and discuss the output-sensitive complexity of computing the non-dominated set for degree bounded outerplanar instances of the MSp. While BUCO appears to be a straightforward problem, it is currently unknown whether it can be solved in output-polynomial time \cite{BUCO_complexity2020}. However, it has been shown that there are other problems of unknown output-sensitive complexity that the BUCO problem can be reduced to.
This motivated the introduction of the complexity class of \textbf{BUCO}-hard problems \cite{boklerDIS}. \begin{theorem}\label{theorem:MSp_BUCO_hard} MSp is \textbf{BUCO}-hard even on degree-3 bounded outerplanar graphs. \end{theorem} Moreover, this theorem implies that if there is a polynomial time algorithm for the minimum $t$-spanner problem on degree-3 bounded outerplanar graphs, where $t>1$ is part of the input, then BUCO can be solved in output-polynomial time. As \Cref{theorem:unweighted_MSp_OP} states that we cannot compute the entire non-dominated set of unweighted MSp instances in output-polynomial time, in \Cref{section:Extreme Points} we define the problem of computing the set of extreme points for instances of the MSp (MSp\textsuperscript{YEx}) and show its hardness with regard to output-sensitive complexity. \begin{theorem}\label{theorem:MSP_YEx_notin_OP} If \textbf{P} $\neq$ \textbf{NP}, then MSp\textsuperscript{YEx} $\notin$ \textbf{OP}, even for unweighted instances. \end{theorem} Finally, \Cref{section:Conclusion} contains concluding remarks. More details can be found in the appendix. Note that we also define a directed version of the MSp (diMSp), for which the same results are proven. The only exception is \Cref{theorem:MSp_BUCO_hard}: the diMSp is \textbf{BUCO}-hard even for degree-4 bounded outerplanar instances. \section{Preliminaries}\label{section:Preliminaries} We denote $\mathbb{N}=\{0,1,2,...\}$ and $\mathbb{N}_+\coloneqq \mathbb{N} \setminus \{0\}$, and write $\mathbb{R}_{\geq}$ for the non-negative real numbers. For $n \in \mathbb{N}$, we denote the set $\{1,...,n\}$ as $[n]$. For a graph $G=(V,E)$, an edge $\{u,v\} \in E$ and an edge-weight function $c_i\colon E \rightarrow \mathbb{Z}$, $i\in [2]$, we abbreviate $c_i(\{u,v\})$ by $c_i(u,v)$. Furthermore, for a set of edges $S \subseteq E$, we denote $c_i(S)=\sum_{e \in S} c_i(e)$. In order to simplify the input of the (di)MSp, we sometimes combine the edge-weight functions $c_1$ and $c_2$ into one function $c\colon E \rightarrow \mathbb{Z} \times \mathbb{N}_+$ with $c(e)=(c_1(e), c_2(e))^\mathsf{T}$. Similarly, for an instance of the (di)MSp and a feasible (directed) spanner $S$, we combine the two objective functions $f_1(S)$ and $f_2(S)$ into one single function $f(S)=(f_1(S),f_2(S))^\mathsf{T}$ that directly maps $S$ to its value vector. We sometimes refer to $c\colon E \rightarrow \{ (1,1)^\mathsf{T} \}$ with $c(e)=(c_1(e), c_2(e))^\mathsf{T}=(1,1)^\mathsf{T}$ for all $e \in E$ as the \emph{trivial edge-weight function} and call instances of the MSp with these edge-weight functions \emph{unweighted}. The degree of a vertex in an undirected graph is the number of vertices it is adjacent to. The degree of a vertex in a directed graph is the number of its in- and out-going edges. For $\delta \in \mathbb{N}$, we call a graph $G=(V,E)$ \emph{degree-$\delta$ bounded} if every $v \in V$ has degree at most $\delta$. We call graphs \emph{outerplanar} if they have a drawing in which every vertex lies on the boundary of the outer face. We call undirected graphs \emph{connected} if they are non-empty and any two of their vertices are linked by a path. A directed graph is called \emph{weakly connected} if replacing all of its arcs with undirected edges results in a connected (undirected) graph. See also \cite{diestel2017, bang2008}. For an instance of a MOCO problem with an objective function $f$, we denote the set of all its value vectors as $\mathcal{Y}$.
For unequal value vectors $y,y'\in \mathcal{Y}$, we say $y$ is dominated by $y'$ if $y'$ is componentwise less than or equal to $y$. Analogously, for feasible solutions $S, S'$ we say $S$ is dominated by $S'$ if $f(S)$ is dominated by $f(S')$. If a value vector is not dominated by any value vector, the associated solution is called \emph{Pareto-optimal}. For a weakly connected, directed graph $G=(V,A)$, we call a subset of arcs $S\subseteq A$ a \emph{directed spanner} of $G$ if $G'=(V,S)$ is a subgraph such that, for every pair of vertices $u,v \in V$, if there is a directed u-v-path in $A$, there is one in $S$ as well. Note that the definition of a (directed) spanner does not require the resulting subgraph to be acyclic. Analogously to the MSp, we define the \emph{Directed Multiobjective Spanner (diMSp)} problem. The only differences are that the input is a weakly connected, directed graph, that solutions are now directed spanners, and that in the second objective function we only consider pairs of vertices that are connected in the initial graph. This guarantees the well-definedness of the objective function values. \begin{lemma}\label{lemma:poly_restricted_of} For a set of (di)MSp instances $\mathcal{I}$, if there is a polynomial $p\colon \mathbb{N}\rightarrow \mathbb{N}$, such that for every instance $I \in \mathcal{I}$ and its set of solutions $\mathcal{S}\colon |f_i(\mathcal{S})| \leq p(|I|)$ for $i=1$ or $i=2$, then $|\mathcal{Y}_N| \leq p(|I|)$. \end{lemma} \begin{proof} Without loss of generality, we can assume that $f_1$ only has polynomially many different values in its image. For every $a \in f_1(\mathcal{S})$, there is one $s' \in \mathcal{S}$ with $f_1(s')=a$ and $f_2(s')\leq f_2(s)$ for all $s \in \mathcal{S}$ with $f_1(s)=a$. Hence, $(a,f_2(s'))^\mathsf{T}$ dominates or equals $(a,f_2(s))^\mathsf{T}$ for all $s \in \mathcal{S}$ with $f_1(s)=a$. Thus, for each $a \in f_1(\mathcal{S})$ there is at most one non-dominated value vector. \hfill \qed \end{proof} \begin{observation}\label{observation:spanner_adding_edges_with_c1=0} It is clear that adding edges to a spanner never increases its stretch factor and that therefore, for every non-dominated value vector $y \in \mathcal{Y}_N$, there is a spanner $S$ with $f(S)=y$ and $e \in S$ for all $e \in E$ with $c_1(e) = 0$. \end{observation} We call the decision problem corresponding to the Minimum t-Spanner problem t-Spanner\textsuperscript{DEC}. For it, verifying the stretch factor $t$ of a spanner only requires considering pairs of vertices that are adjacent in the underlying graph \cite{peleg1989}. In case of the MSp an analogous statement can be made. \begin{lemma}\label{lemma:check_t-spanner} Let $I=(G=(V,E),c_1,c_2)$ be a MSp instance with a connected, undirected graph $G$ and edge-weight functions $c_1$ and $c_2$. For any spanner $S$ of $G$, $f_2(S) =\max_{u,v \in V} \frac{d^S_{c_2}(u,v)}{d^E_{c_2}(u,v)}=\max_{\{u,v\}\in E} \frac{d^S_{c_2}(u,v)}{d^E_{c_2}(u,v)}$ holds. \end{lemma} \begin{proof} Let $S$ be a spanner of $G$ and assume $f_2(S)=\frac{d^S_{c_2}(r,z)}{d^E_{c_2}(r,z)}$ with $\{r,z\} \notin E$. Let $\{r=u_0,u_1\}, \{u_1,u_2\},...,\{u_{m-1},u_m=z\}$ be the shortest r-z-path in $E$. Denote the set of pairs of vertices $(u_i, u_{i+1})$, $0\leq i \leq m-1$, as $U$.
We get \begin{align*} \frac{d^{S}_{c_2}(r,z)}{d^{E}_{c_2}(r,z)} & \leq \frac{\sum_{i=0}^{m-1} d^{S}_{c_2}(u_i, u_{i+1})}{ \sum_{i=0}^{m-1} d^{E}_{c_2}(u_i, u_{i+1})} \leq \max_{(u_i, u_{i+1}) \in U} \frac{d^{S}_{c_2}(u_i, u_{i+1})}{d^{E}_{c_2}(u_i, u_{i+1})} \cdot \frac{\sum_{i=0}^{m-1} d^{E}_{c_2}(u_i, u_{i+1})}{ \sum_{i=0}^{m-1} d^{E}_{c_2}(u_i, u_{i+1})}\\ & = \max_{(u_i, u_{i+1}) \in U} \frac{d^{S}_{c_2}(u_i, u_{i+1})}{d^{E}_{c_2}(u_i, u_{i+1})}. \qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad \qquad ~\qed \end{align*} \end{proof} It is clear that the same arguments hold for the directed case. \section{Intractability}\label{section:Intractability} We begin by proving that no algorithm is capable of solving the MSp in polynomial time, even if we restrict the considered graphs to be both degree-3 bounded and outerplanar. We do this by showing that there is a family of instances, complying with these restrictions, for which the size of the non-dominated set $\mathcal{Y}_N$ is exponential in the size of the instance, proving the intractability of the MSp. Consider the following family of instances, for $2\leq n \in \mathbb{N}$, consisting of a connected, undirected graph $G=(V,E)$ and edge-weight functions $c_1\colon E \rightarrow \mathbb{Z}$ and $c_2\colon E \rightarrow \mathbb{N}_+$. For every $i \in [n]$, we create vertices $v_i, v_i'$ and $w_i$, and add edges $\{v_i, w_i\}$ with weights $(2^{i}, 2^{i})^\mathsf{T}$, as well as edges $\{v_i, v_i'\}$ and $\{v_i', w_i\}$ with respective weights of $(0, 2^i)^\mathsf{T}$. Furthermore, we introduce edges $\{w_i,v_{i+1}\}$ with weights $(0,1)^\mathsf{T}$, for $i \in [n-1]$. Finally, we define $v_1\coloneqq s$ and $w_n\coloneqq t$ and add the edge $\{s,t\}$ with weights $(2^{n+1}, 1)^\mathsf{T}$. An example of this construction can be seen in \Cref{figure:Intractability_deg3}. Note that every graph constructed in this way is degree-$3$ bounded and outerplanar. \begin{center} \begin{figure}[b] \includegraphics[width=\textwidth]{Intractability_unger_deg3} \caption{The family of instances constructed for \Cref{theorem:MSp_intractable}. Thick edges hold weights $(0,1)^\mathsf{T}$.} \label{figure:Intractability_deg3} \end{figure} \end{center} Note that the graph $G$ contains at least $2^n$ spanners that do not contain the edge $\{s,t\}$. With \Cref{observation:spanner_adding_edges_with_c1=0}, we know that for every non-dominated value vector $y$, there is a Pareto-optimal spanner $S$ of $G$ with $f(S)=y$ that contains every $e \in E$ with $c_1(e)=0$. Define $X$ as the set of all feasible spanners $S$ of $G$ that contain every $e \in E$ with $c_1(e)=0$ and do not contain the edge $\{s,t\}$. We now simplify the second objective function $f_2(S)$, for every $ S \in X$, using \Cref{lemma:check_t-spanner}. \begin{lemma}\label{lemma:intract_f_2} For all spanners $S \in X$, $f_2(S)=\max_{u,v \in V} \frac{d^{S}_{c_2}(u,v)}{d^{E}_{c_2}(u,v)}=\frac{d^{S}_{c_2}(s,t)}{d^{E}_{c_2}(s,t)}$. \end{lemma} \begin{proof} Let $S \in X$ be a spanner. With \Cref{lemma:check_t-spanner}, we know that we only have to consider pairs of vertices $u,v \in V$ with $\{u,v\} \in E$ and $\{u,v\} \notin S$ in order to determine $f_2(S)$. We know that $\{s,t\} \notin S$ holds. Thus, \[ \frac{d^{S}_{c_2}(s,t)}{d^{E}_{c_2}(s,t)} \geq \frac{ \Bigl( \sum_{i=1}^{n}c_2(v_i,w_i)\Bigr) + \left( \sum_{i=1}^{n-1}c_2(w_i,v_{i+1}) \right)}{1}= 2^{n+1}+n-3.\] The only other pairs of vertices $u,v \in V$ with $\{u,v\} \in E$ for which $\{u,v\} \notin S$ might hold are $v_i,w_i$, for every $i \in [n]$.
We get \[\frac{d^{S}_{c_2}(v_i,w_i)}{d^{E}_{c_2}(v_i,w_i)} \leq \frac{c_2(v_i,v_i')+c_2(v_i',w_i)}{c_2(v_i,w_i)} = \frac{2 \cdot 2^i}{2^i}=2 < 2^{n+1}+n-3\leq \frac{d^{S}_{c_2}(s,t)}{d^{E}_{c_2}(s,t)}.\qquad \qed\] \end{proof} We now conduct a proof by contradiction to show that two different spanners $S,S' \in X$ neither dominate each other nor have the same value vector. An expanded proof can be found in \Cref{apendix:intractability}. \begin{lemma} For all $S, S' \in X$ with $S \neq S'$, $S$ and $S'$ do not dominate each other and have different value vectors. \end{lemma} \begin{proof} Let $S, S' \in X$ be two different spanners and assume $S$ dominates $S'$. Then either $f_1(S)<f_1(S')$ or $f_1(S)=f_1(S')$ holds. We begin by considering the first case. Let $j \in [n]$ be the greatest index at which the shortest s-t-paths in $S$ and $S'$ differ. Since $f_1(S)<f_1(S')$ holds, $S$ must not contain the edge $\{v_j, w_j\}$, while $S'$ has to contain it. Let $P$ be the remaining path that is identical for $S$ and $S'$. Thus, with \Cref{lemma:intract_f_2}, \begin{align*} f_2(S')& \leq \left( \sum_{i=1}^{j-1} c_2(v_i, v_i')+c_2(v_i', w_i)+c_2(w_i,v_{i+1}) \right) +c_2(v_j, w_j)+c_2(P)\\ & = \left( \sum_{i=1}^{j-1} 2 \cdot 2^i +1 \right)+2^j +c_2(P) < \left( \sum_{i=1}^{j-1} 2^i +1 \right)+2^j+2^j+c_2(P)\\ & = \left(\sum_{i=1}^{j-1} c_2(v_i, w_i)+c_2(w_i,v_{i+1}) \right)+c_2(v_j, v_j')+c_2(v_j', w_j)+c_2(P)\\ & =f_2(S). \end{align*} This contradicts the assumed domination. Let us now consider the second case, in which $f_1(S)=f_1(S')$ holds. Then, in order for $S$ to dominate $S'$, $f_2(S)<f_2(S')$ must hold as well. By design of the $c_1$-edge-weights, we know that in order for $f_1(S)=f_1(S')$ to hold, it is true for every edge $\{v_i, w_i\} \in E$ that $\{v_i, w_i\} \in S \Leftrightarrow \{v_i, w_i\} \in S'$. This claim can be verified by considering that every edge $\{v_i, w_i\}$ has a unique $c_1$-cost that cannot be reproduced by any combination of edges $e \in E \setminus \{v_i, w_i\}$. Consequently, the shortest s-t-paths in $S$ and $S'$ are exactly the same and therefore $f_2(S)=f_2(S')$ holds, which contradicts the assumed domination. \hfill \qed \end{proof} Finally, consider that no $S \in X$ can be dominated by any $\widehat{S} \notin X$. This is clearly the case due to the high $c_1$-cost of the edge $\{s,t\}$. In conclusion, we have shown that the set $X$ contains $2^n$ spanners, all of which have different, non-dominated value vectors. Thus, $|\mathcal{Y}_N|\geq |X|=2^n$ and, consequently, \Cref{theorem:MSp_intractable} holds. Note that an analogous proof can be conducted for the diMSp. The only difference to the undirected case lies in the construction of the family of instances. We turn the family of MSp instances into a family of diMSp instances. For every $i \in [n]$, we replace edges $\{v_i, w_i\}$ with arcs $(v_i, w_i)$, edges $\{v_i, v_i'\}$ with arcs $(v_i, v_i')$, and edges $\{v_i', w_i\}$ with arcs $(v_i', w_i)$. Additionally, we replace edges $\{w_i,v_{i+1}\}$ with arcs $(w_i,v_{i+1})$, for $i \in [n-1]$. Finally, we replace $\{s,t\}$ with $(s,t)$. Every arc holds the same edge-weights as the undirected edge it replaced.
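To make the (undirected) construction from the beginning of this section concrete, a short illustrative Python sketch that enumerates its edge list (the vertex names are ours) is given below:
\begin{verbatim}
def build_instance(n):
    # Edge list of the degree-3 bounded, outerplanar family;
    # each weight is the pair (c1, c2).
    assert n >= 2
    edges = {}
    for i in range(1, n + 1):
        edges[(f"v{i}", f"w{i}")] = (2**i, 2**i)
        edges[(f"v{i}", f"v{i}'")] = (0, 2**i)
        edges[(f"v{i}'", f"w{i}")] = (0, 2**i)
    for i in range(1, n):
        edges[(f"w{i}", f"v{i+1}")] = (0, 1)
    edges[("v1", f"w{n}")] = (2**(n + 1), 1)  # s = v1, t = w_n
    return edges

print(len(build_instance(3)))  # 3*3 + 2 + 1 = 12 edges
\end{verbatim}
Each of the $2^n$ choices of which edges $\{v_i,w_i\}$ to keep yields its own non-dominated value vector, which is exactly the source of the exponential blow-up.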
\section{Non-Dominated Set}\label{section:Non-Dominated Set} In this section, we first consider the output-sensitive complexity of computing the non-dominated set of unweighted (di)MSp instances. Afterwards, we prove that if the (di)MSp can be solved by an output-polynomial algorithm, one can also solve the BUCO problem in output-polynomial time, thereby proving that the (di)MSp is \emph{\textbf{BUCO}-hard}. \begin{observation}\label{observation:trivial_ewf_restricted_YN} As the trivial edge-weight function only allows linearly many values in the range of either of the two objective functions, with \Cref{lemma:poly_restricted_of}, we can infer that the non-dominated set of unweighted MSp instances is only polynomially sized. \end{observation} This observation directly implies that, assuming \textbf{P} $\neq$ \textbf{NP}, the non-dominated set of the (di)MSp cannot be computed in output-polynomial time, as this would enable us to solve the (directed) t-Spanner\textsuperscript{DEC} in polynomial time. Thus, \Cref{theorem:unweighted_MSp_OP} holds. We now study the output-sensitive complexity of degree bounded outerplanar instances. For a set of vectors $M$, $\min M$ refers to its non-dominated subset. \begin{definition}[Biobjective Unconstrained Optimization (BUCO) Problem \cite{boklerDIS}] The input consists of vectors $c^1, c^2 \in \mathbb{N}^n$. A feasible solution is an element of $\{0,1\}^n$. The goal is to find the set of the non-dominated vectors \[ \mathcal{Y}_N = \min \left\{ \left( \begin{array}{c} -c^{1^\mathsf{T}} \\ c^{2^\mathsf{T}} \end{array} \right) x ~ \middle| ~ x \in \{0,1\}^n \right\}. \] \end{definition} The BUCO problem can be interpreted as an unrestricted Knapsack problem. Without loss of generality, we can assume $c_i^1>0$ for every $i \in [n]$, since an item that does not contribute value is never part of a viable solution. Similarly, we can assume $c_i^2>0$ for every $i \in [n]$. We prove that the MSp is \textbf{BUCO}-hard by showing that if there were an output-polynomial algorithm $\mathcal{A}$ for the MSp, we could use it to solve the BUCO problem in output-polynomial time. We start by constructing an algorithm that transforms any BUCO instance $I$ into a valid MSp instance $I'$ in polynomial time. Subsequently, we show that the set of non-dominated value vectors of the constructed MSp instance $I'$, which can be found using the algorithm $\mathcal{A}$, can be transformed into the set of non-dominated value vectors of the BUCO instance $I$ using an output-polynomial filter-algorithm. Let $I$ be an instance of the BUCO problem given by $c^1, c^2 \in \mathbb{N}^n_+$. We construct an instance $I'$ of the MSp with a connected, undirected graph $G$ and edge-weight functions $c_1\colon E \rightarrow \mathbb{Z}$ and $c_2\colon E \rightarrow \mathbb{N}_+$. Define the constants $C^1 \coloneqq \sum_{i=1}^{n}c_i^1$ and $M \coloneqq C^1 +1$ and construct the graph $G=(V,E)$ in the following way: Create vertices $v_i$, $w_i$ and $v_i'$ for $i \in [n]$ and connect them with edges $\{v_i,w_i\}$ with weights $(0, c_i^2 +2)^\mathsf{T}$, edges $\{v_i,v_i'\}$ with weights $(0,1)^\mathsf{T}$ and edges $\{v_i',w_i\}$ with weights $(c_i^1, 1)^\mathsf{T}$. Furthermore, we add edges $\{w_i,v_{i+1}\}$ with weights $(0,1)^\mathsf{T}$ for $i \in [n-1]$. Define $v_1\coloneqq s$ and $w_n\coloneqq t$. Finally, we add the edge $\{s,t\}$ with weights $(M, 1)^\mathsf{T}$. An example of this construction can be seen in \Cref{figure:OS_BUCO_hard}.
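This construction is mechanical enough to state as a few lines of illustrative Python (vertex names are ours):
\begin{verbatim}
def buco_to_msp(c1, c2):
    # MSp instance for the BUCO instance (c1, c2);
    # weights are (c1, c2) pairs, s = v1, t = w_n.
    n = len(c1)
    M = sum(c1) + 1                        # M = C^1 + 1
    edges = {}
    for i in range(1, n + 1):
        edges[(f"v{i}", f"w{i}")] = (0, c2[i - 1] + 2)
        edges[(f"v{i}", f"v{i}'")] = (0, 1)
        edges[(f"v{i}'", f"w{i}")] = (c1[i - 1], 1)
    for i in range(1, n):
        edges[(f"w{i}", f"v{i+1}")] = (0, 1)
    edges[("v1", f"w{n}")] = (M, 1)
    return edges
\end{verbatim}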
\begin{center} \begin{figure}[bt] \includegraphics[width=1\textwidth]{OS_BUCO_hard_deg3_1_nn} \caption{Showing the reduction for \Cref{theorem:MSp_BUCO_hard}. Thick edges hold weights $(0,1)^\mathsf{T}$.} \label{figure:OS_BUCO_hard} \end{figure} \end{center} We observe that every instance constructed this way is a valid MSp instance and that all steps can be performed in time polynomial in the size of the instance $I$. Clearly, the constructed graph $G$ is degree-3 bounded and outerplanar. In order to show that the reduction is correct, we now have to prove that the non-dominated set of the constructed MSp instance can be transformed into the non-dominated set of the initial BUCO instance in output-polynomial time with regard to the instance $I$. Let $y$ be a value vector of a BUCO instance given by $c^1, c^2 \in \mathbb{N}^n_+$ and let $x \in \{0,1\}^n$ be a solution that is mapped to $y$ with $y= ( -c^{1^\mathsf{T}}x,c^{2^\mathsf{T}}x )^\mathsf{T}$. There is a spanner $S_x$ of $G$ with the following properties: For every $i \in [n]$, $S_x$ contains the edges $\{v_i,w_i\}$ and $\{v_i,v_i'\}$, as well as the edges $\{w_i,v_{i+1}\}$ for all $i \in [n-1]$. In addition, if $x_i=0$, $S_x$ contains the edge $\{v_i',w_i\}$. We observe that for every $x \in \{0,1\}^n$ the resulting set of edges $S_x$ is a feasible spanner of $G$ and that none of these spanners contains the edge $\{s,t\}$. Denote the set of all the spanners generated this way as $X$. With \Cref{observation:spanner_adding_edges_with_c1=0}, we know that for every non-dominated value vector $y$, there is a Pareto-optimal spanner $S$ of $G$ that contains every $e \in E$ with $c_1(e)=0$, with $f(S)=y$. Thus, analogously to \Cref{lemma:intract_f_2}, $f_2(S)=\max_{u,v \in V} \frac{d^{S}_{c_2}(u,v)}{d^{E}_{c_2}(u,v)}=\frac{d^{S}_{c_2}(s,t)}{d^{E}_{c_2}(s,t)}$ holds for every spanner $S \in X$. We denote the set of edges $\{w_i,v_{i+1}\}$ for $i \in [n-1]$ as $W$. We now examine how the value vector $y$ of a BUCO solution $x \in \{0,1\}^n$ is connected to the value vector $f(S_x)$ of the corresponding spanner $S_x$. \begin{lemma}\label{lemma:BUCO-Spanner-value} For any value vector $y=(-c^{1^\mathsf{T}}x, c^{2^\mathsf{T}}x)^\mathsf{T}$ of a BUCO instance and its associated solution $x \in \{0,1\}^n$, for the constructed corresponding spanner $S_x \in X$, $f_1(S_x)= C^1 + y^1$ and $f_2(S_x)=y^2 +3n-1$ hold. \end{lemma} \begin{proof} Let $y=(-c^{1^\mathsf{T}}x, c^{2^\mathsf{T}}x)^\mathsf{T}$ be a value vector of a BUCO instance and let $x \in \{0,1\}^n$ be its associated solution. Let $S_x \in X$ be the spanner in the constructed MSp instance that is based on $x$. Consider the two objective functions: \begin{align*} f_1(S_x) &= \left(\sum_{i=1}^{n} c_1(v_i',w_i)\cdot (1-x_i) \right) = \left(\sum_{i=1}^{n} c_i^1 \cdot (1-x_i)\right) = C^1 + \sum_{i=1}^{n} -c_i^1 x_i \\ &=C^1 +y^1 \end{align*} \begin{align*} f_2(S_x)& = \left( \sum_{i=1}^{n} c_2(v_i,w_i) \cdot x_i+ (c_2(v_i,v_i')+c_2(v_i',w_i))(1-x_i) \right) +c_2(W)\\ & = \left( \sum_{i=1}^{n}(c_i^2+2) x_i + (1+1) (1-x_i) \right)+(n-1) \cdot 1\\ & = \left( \sum_{i=1}^{n} c_i^2 x_i +2 \right)+n-1 = y^2 +3n-1 \qquad \qquad \qquad \qquad \qquad \qquad ~~~ \qed \end{align*} \end{proof}
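This correspondence is easy to check numerically. The following illustrative sketch builds $S_x$ directly and compares $f(S_x)$ with the closed forms of the lemma (it assumes the \texttt{networkx} package; since $d^E_{c_2}(s,t)=1$, the stretch equals the $s$-$t$ path length in $S_x$):
\begin{verbatim}
import random
import networkx as nx

def check_lemma(c1, c2, x):
    n = len(c1)
    S = nx.Graph()
    for i in range(1, n + 1):
        S.add_edge(f"v{i}", f"w{i}", weight=c2[i - 1] + 2)
        S.add_edge(f"v{i}", f"v{i}'", weight=1)
        if x[i - 1] == 0:                  # item i not selected
            S.add_edge(f"v{i}'", f"w{i}", weight=1)
    for i in range(1, n):
        S.add_edge(f"w{i}", f"v{i+1}", weight=1)
    f1 = sum(c1[i] for i in range(n) if x[i] == 0)
    f2 = nx.shortest_path_length(S, "v1", f"w{n}", weight="weight")
    y1 = -sum(a * b for a, b in zip(c1, x))
    y2 = sum(a * b for a, b in zip(c2, x))
    assert f1 == sum(c1) + y1 and f2 == y2 + 3 * n - 1

check_lemma([3, 1, 4], [2, 7, 1],
            [random.randint(0, 1) for _ in range(3)])
\end{verbatim}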
Let us now consider the relationship between spanners $S_x \in X$ and spanners $S$ with $\{s,t\} \in S$. \begin{lemma}\label{lemma:BUCO_ndom_MSp_ndom} If $x \in \{0,1\}^n$ is a Pareto-optimal solution for the BUCO instance, its associated spanner $S_x$ is a Pareto-optimal solution for the constructed MSp instance. \end{lemma} \begin{proof} Let $x \in \{0,1\}^n$ be a Pareto-optimal solution for the BUCO instance and let $y=( - c^{1^\mathsf{T}}x, c^{2^\mathsf{T}}x)^\mathsf{T}$ be the corresponding value vector. Let $S_x \in X$ be the spanner generated according to the algorithm defined above. Assume there is a feasible spanner $\widehat{S}$ that dominates $S_x$. It is clear that, due to the $c_1$-cost of the edge $\{s,t\}$, if $\{s,t\} \in \widehat{S}$ holds, $\widehat{S}$ does not dominate $S_x$. Therefore, we can assume that $\{s,t\} \notin \widehat{S}$ holds. Furthermore, with \Cref{observation:spanner_adding_edges_with_c1=0} and w.l.o.g., we can assume that $\widehat{S}$ contains the edges $\{v_i,v_i'\}$, $\{v_i,w_i\}$ for all $i \in [n]$ and $\{w_i,v_{i+1}\}$ for $i \in [n-1]$. Based on these observations, we can say that $\widehat{S}$ meets the specifications of a spanner that corresponds to a BUCO solution. Let us denote this BUCO solution as $\widehat{x}$ and its value vector as $\widehat{y}$. From now on we refer to $\widehat{S}$ as $S_{\widehat{x}}$. We use \Cref{lemma:BUCO-Spanner-value} to show that if $S_x$ is dominated by $S_{\widehat{x}}$, then $\widehat{x}$ dominates $x$, causing a contradiction. Assume $S_{\widehat{x}}$ dominates $S_x$; then $f_1(S_{\widehat{x}}) \leq f_1(S_x)$ and $f_2(S_{\widehat{x}}) \leq f_2(S_x)$ hold. We know that for all spanners $S \in X$ based on a BUCO solution $x'$ with value vector $y_{x'}$, $f_1(S)=C^1 + y^1_{x'} \text{ and } f_2(S)=y^2_{x'} +3n-1$ hold. In the first objective function, we therefore get: \[f_1(S_{\widehat{x}}) \leq f_1(S_x) \Leftrightarrow C^1+ \widehat{y}^1 \leq C^1 + y^1 \Leftrightarrow \widehat{y}^1 \leq y^1.\] In the second objective function, we get: \[f_2(S_{\widehat{x}}) \leq f_2(S_x) \Leftrightarrow \widehat{y}^2+3n-1 \leq y^2 +3n-1 \Leftrightarrow \widehat{y}^2 \leq y^2.\] Therefore, either $\widehat{x}$ dominates $x$, which contradicts the assumption that $x$ is Pareto-optimal, or $\widehat{x}$ has the same evaluation as $x$; but in that case $S_{\widehat{x}}$ and $S_x$ also have the same evaluation. Hence, the value vector of $S_{\widehat{x}}$ does not dominate the value vector of $S_x$. Therefore, $S_x$ is Pareto-optimal. \hfill \qed \end{proof} In order to complete the verification of this reduction, we now prove that there are only polynomially many non-dominated value vectors in the non-dominated set of $I'$ that are not based on a Pareto-optimal solution of the BUCO instance. Consider that the only way a spanner $S$ can deviate from the form of a spanner based on a BUCO solution, without its value vector being dominated, is by containing the edge $\{s,t\}$. Hence, combining \Cref{lemma:poly_restricted_of} with the following lemma proves the aforementioned claim. \begin{lemma}\label{lemma:v_l} For every Pareto-optimal spanner $S$ with $\{s,t\} \in S$ and $l \in [n]$ being the index of the BUCO item with the greatest $c_2$-weight for which $\{v_l',w_l\} \notin S$ holds, $f_2(S)=\frac{d^{S}_{c_2}(v_l',w_l)}{d^{E}_{c_2}(v_l',w_l)}=d^{S}_{c_2}(v_l',w_l)$. \end{lemma} \begin{proof} Let $S$ with $\{s,t\} \in S$ be a Pareto-optimal spanner and let $l \in [n]$ be the index of the BUCO item with the greatest $c_2$-weight for which $\{v_l',w_l\} \notin S$ holds. With \Cref{lemma:check_t-spanner}, and since $\{s,t\} \in S$ and \Cref{observation:spanner_adding_edges_with_c1=0} hold, we know that in order to determine $f_2(S)$, we only have to consider the edges $\{v_i',w_i\}$, for all $i \in [n]$.
Thus, \begin{align*} f_2(S) =\max_{u,v \in V} \frac{d^{S}_{c_2}(u,v)}{d^{E}_{c_2}(u,v)}=\max_{v_i',w_i \in V} \frac{d^{S}_{c_2}(v_i',w_i)}{d^{E}_{c_2}(v_i',w_i)} = \max_{v_i',w_i \in V} \frac{d^{S}_{c_2}(v_i',w_i)}{1} = d^{S}_{c_2}(v_l',w_l). \end{align*} Note that there is an additional edge case, in which the spanner $S$ contains every $e \in E$ and thus $f_2(S)=1$. Hence, there are only $n+1$ possible values in the range of the second objective function if the considered spanner contains the edge $\{s,t\}$. \hfill \qed \end{proof} With \Cref{lemma:poly_restricted_of}, we infer that there are at most $n+1$ non-dominated value vectors in the non-dominated set of the constructed MSp instance that do not correspond to a Pareto-optimal BUCO solution. Finally, we describe the algorithm that solves any BUCO instance in output-polynomial time, assuming that there is an output-polynomial algorithm $\mathcal{A}$ capable of solving any MSp instance. Let $I$ be a BUCO instance, and let $\mathcal{A}$ be an algorithm capable of solving a MSp instance in output-polynomial time. We begin by using the algorithm described above to transform $I$ into the corresponding MSp instance $I'$. Subsequently, we solve $I'$ using the algorithm $\mathcal{A}$ and receive the set of non-dominated value vectors $\mathcal{Y}_N^{\text{MSp}}$. We know that for every Pareto-optimal solution $x \in \{0,1\}^n$ of the BUCO instance, the constructed MSp instance contains a corresponding Pareto-optimal spanner $S_x$. Therefore, we can assume that for every such $x$, $f(S_x)=(f_1(S_x),f_2(S_x))^\mathsf{T} \in \mathcal{Y}_N^{\text{MSp}}$ holds. Now, we have to filter out all the non-dominated value vectors that do not correspond to a feasible BUCO solution. We do this by inspecting the $y^1$ value of each $y \in \mathcal{Y}_N^{\text{MSp}}$. If $y^1 \geq M$ holds, then the spanner corresponding to $y$ contains the edge $\{s,t\}$ and consequently is not based on a feasible BUCO solution. If $y^1 \leq C^1 < M$ holds, we transform $y$ according to \Cref{lemma:BUCO-Spanner-value}, so that its values match those of the corresponding BUCO solution. We construct $\hat{y}=( y^1-C^1 ,\, y^2 -3n+1)^\mathsf{T}$ and add it to the set of non-dominated value vectors of the initial BUCO instance, $\mathcal{Y}_N^{\text{BUCO}}$. All of these steps are output-polynomial with regard to the BUCO instance $I$ and therefore, the existence of an output-polynomial algorithm for the MSp directly implies the existence of an output-polynomial algorithm for the BUCO problem. Thus, \Cref{theorem:MSp_BUCO_hard} holds. An analogous reduction can be conducted for the diMSp by replacing every undirected edge with an arc. This transformation works similarly to the one conducted at the end of \Cref{section:Intractability}. Note that the diMSp instances require an additional arc $(v_i',v_i)$ with edge-weights $(0,1)^\mathsf{T}$, for every $i \in [n]$. These arcs ensure that the directed spanners constructed during the reduction are feasible. Observe that the resulting diMSp instances remain outerplanar but are only degree-4 bounded.
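In code, the filter step at the heart of this reduction is a one-pass scan (an illustrative sketch, where \texttt{Y\_msp} stands for the output of the hypothetical algorithm $\mathcal{A}$ on $I'$):
\begin{verbatim}
def filter_buco(Y_msp, c1, n):
    # Map the non-dominated set of the constructed MSp
    # instance back to that of the BUCO instance.
    C1 = sum(c1)
    M = C1 + 1
    Y_buco = []
    for y1, y2 in Y_msp:
        if y1 >= M:       # spanner contains {s,t}: skip
            continue
        Y_buco.append((y1 - C1, y2 - 3 * n + 1))
    return Y_buco
\end{verbatim}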
\section{Extreme Points}\label{section:Extreme Points} In this section we define the problem of determining the set of extreme points of a given (di)MSp instance and consider its output-sensitive complexity. For all $y' \in \mathcal{Y}$, define $W(y')$ as the set of vectors $\lambda \in \mathbb{R}^d_{\geq}, \lambda \neq 0$, so that $\min_{y \in \mathcal{Y}} \lambda^\mathsf{T} y= \lambda^\mathsf{T} y'$. A value vector $y' \in \mathcal{Y}$ is called an \emph{extreme point} if there is a $\lambda \in \mathbb{R}^d_{\geq}, \lambda \neq 0$ with $\lambda \in W(y')$ and $\forall y \in \mathcal{Y}_N \setminus \{y'\}\colon \lambda \notin W(y)$ \cite[Definition 8.7]{ehrgott2005}. For an instance of the (di)MSp, we define \emph{(di)MSp\textsuperscript{YEx}} to be the problem of computing its set of extreme points $\mathcal{Y}_X$. We now show that if \textbf{P} $\neq$ \textbf{NP}, even the unweighted MSp\textsuperscript{YEx} cannot be solved in output-polynomial time. With \Cref{theorem:unweighted_MSp_OP}, we know that we cannot compute the entire non-dominated set of MSp instances in output-polynomial time. However, this does not determine the output-sensitive complexity of computing their set of extreme points. We show the claim by conducting an indirect reduction from 3SAT. Consider Cai's reduction of 3SAT to t-Spanner\textsuperscript{DEC}~\cite{CAI1994187} for every $2\leq t \in \mathbb{N}$. We turn Cai's constructed 2-Spanner\textsuperscript{DEC} instances into MSp\textsuperscript{YEx} instances and show that if and only if the initial 3SAT instance is a yes-instance, the yes-witness for the 2-Spanner\textsuperscript{DEC} instance creates an extreme point in the corresponding MSp\textsuperscript{YEx} instance. In consideration of \Cref{observation:trivial_ewf_restricted_YN}, any algorithm capable of solving unweighted MSp\textsuperscript{YEx} instances in output-polynomial time could then solve 3SAT in polynomial time. We begin with a quick summary of Cai's proof for the special case of $t=2$. \subsubsection{Revisiting Cai's Proof.}\label{subsubsection:Cais_proof} Cai transforms 3SAT to 2-Spanner\textsuperscript{DEC}: given an instance $I=(U, C)$ of 3SAT, consisting of a set $U$ of $n$ distinct variables and a collection $C$ of $m$ 3-element clauses over $U$, they construct a 2-Spanner\textsuperscript{DEC} instance $\hat{I}$, with a graph $G=(V,E)$ and a positive integer $K \in \mathbb{N}_+$, such that $G$ contains a 2-spanner with at most $K$ edges if and only if $C$ is satisfiable. They define a \emph{2-path} as a path with 2 edges. One can force an edge to be in any minimum 2-spanner of a graph by the addition of two distinct 2-paths between the two ends of the edge \cite[Lemma 3]{CAI1994187}. This operation is called \emph{forcing an edge}; such an edge is called a \emph{forced edge} and the two 2-paths are called \emph{forcing paths}. They construct the \emph{truth-setting component} $T$ as follows: They take five vertices $z$, the literal vertices $x$ and $\bar{x}$, and the y-type vertices $y$ and $y'$, and join $z$ to each of the remaining four vertices by an edge. Finally, they add the forced edges $\{x,\bar{x}\}, \{x,y\}, \{x,y'\}, \{\bar{x},y\}, \{\bar{x},y'\}$. They assign each variable $u_i \in U$, $i \in [n]$ a distinct copy $T_i$ of $T$ and identify all vertices $z_i$ into a single vertex $z$ to form a subgraph $T'$ of $G$. To finish the construction of $G$, they create a new vertex $v_i$ for each clause $c_i \in C$, $i \in [m]$, join it to vertex $z$ with an edge and add a forced edge between $v_i$ and each of the three literal vertices in $T'$ corresponding to the three literals of $c_i$. They finish the construction of the 2-Spanner\textsuperscript{DEC} instance by setting $K=16n+9m$. For the constructed graph $G$, it holds that any minimum 2-spanner $S$ of $G$ contains at least $K$ edges.
Furthermore, if $S$ contains exactly $K$ edges, then for each $T_i$, $i \in [n]$ exactly one of the two literal edges $\{z,x_i\}$ and $\{z,\bar{x_i}\}$ belongs to $S$ \cite[Lemma 4]{CAI1994187}. Examples of the described constructions can be seen in Figures \ref{apendix:figure:WSS_Cai_T2} and \ref{apendix:figure:WSS_Cai_complete} in the appendix. Now, suppose that $C$ is satisfiable and let $\phi$ be a satisfying truth assignment for $C$. They construct a yes-witness-spanner $S_w$ as follows: put every forced edge in $S_w$. For each forcing path, put one of the two edges in $S_w$. Finally, for each variable $u_i \in U$, if $u_i$ is \enquote{true} under $\phi$, then put the edge $\{z,x_i\}$ in $S_w$, else put the edge $\{z,\bar{x_i}\}$ in $S_w$. The complete proof that this is a correct reduction goes beyond the scope of this paper and can be found in the original paper \cite{CAI1994187}. Instead, let us now construct an equivalent MSp\textsuperscript{YEx} instance and prove that if the initial 2-Spanner\textsuperscript{DEC} instance is a yes-instance, the value vector of the yes-witness-spanner is an extreme point. \subsubsection{The Associated MSp\textsuperscript{YEx} Instance.} First, let $I$ be a 3SAT instance and let $\hat{I}=(G=(V,E),K)$ be the associated 2-Spanner\textsuperscript{DEC} instance, constructed according to the algorithm described in Cai's proof. We turn $\hat{I}$ into an instance $I'=(G,c)$ of the unweighted MSp\textsuperscript{YEx} by copying $G$ and adding the trivial edge-weight function $c\colon E\rightarrow \{(1,1)^\mathsf{T}\}$, $c(e)=(1,1)^\mathsf{T}$ for all $e \in E$. Clearly, this can be done in polynomial time. Note that $I'$ is a valid, unweighted MSp\textsuperscript{YEx} instance. Now, let $\hat{I}$ be a yes-instance and let $S_w$ be the yes-witness-spanner. It is clear that the same spanner exists in $I'$ and that \[f_1(S_w)=\sum_{e \in S_w}c_1(e)=\sum_{e \in S_w} 1 = |S_w|=K \text{ and } f_2(S_w)=\max_{u,v \in V} \frac{d^{S_w}_{c_2}(u,v)}{d^{E}_{c_2}(u,v)}=2\] hold. We now show that the value vector of the yes-witness-spanner $f(S_w)$ is an extreme point by finding the non-dominated value vectors $\mathcal{Y}_N$ and showing that there is a $\lambda_{w} \in \mathbb{R}^d_{\geq}, \lambda_{w} \neq 0$, so that $\lambda_{w} \in W(f(S_w)) \text{ and } \forall y \in \mathcal{Y}_N \setminus \{f(S_w)\}\colon \lambda_{w} \notin W(y)$ hold. Note that $f(S_w)$ dominates or is equal to every value vector $y \in \mathcal{Y}$ with $y^1\geq K$ and $y^2 \geq 2$. Let us first find the non-dominated value vectors $y \in \mathcal{Y}_N$ with $y^2<2$. We begin by constructing the spanner $S_1 \subseteq E$ that contains the fewest edges for which $f_2(S_1) < f_2(S_w)=2$ holds. Consider $S_1=E$ and observe that if we remove any edge $e \in E$ from $S_1$, $f_2(S_1 \setminus \{e\})=2$ holds. Hence, $S_1=E$, and we conclude that for every feasible spanner $S \subseteq E$ with $K<f_1(S)<|E|$, $f(S_w)$ dominates $f(S)$. Consider the number of edges in $G$. For every truth-setting component $T_i$, $i \in [n]$ there are $20$ forcing edges, $5$ forced edges and $4$ edges connecting to the vertex $z$. Furthermore, for each clause $c_i \in C$, $i \in [m]$ there are $12$ forcing edges, $3$ forced edges and one edge connecting to $z$. Hence, \[f_1(S_1)=|E|=29n + 16m \text{ and } f_2(S_1)=1.\] We now construct a hypothetical value vector $y_h$ that either dominates or is equal to every value vector $f(S)$ of feasible spanners $S$ with $f_1(S)<K$. We begin by constructing $y^1_h$.
Consider the number of vertices in $G$. For every truth-setting component $T_i$, $i \in [n]$ there are $10$ vertices that are part of forcing paths and $4$ vertices $x_i,\bar{x_i},y_i,y_i'$. Furthermore, for each clause $c_i \in C$, $i\in [m]$ there is one vertex $v_i$ and $6$ vertices that are part of forcing paths. Finally, there is the vertex $z$. Thus, there are $14n+7m+1$ vertices in $G$. Therefore, for every feasible spanner $S$ of $G$, $f_1(S)=|S| \geq 14n+7m$, since a spanner of a connected graph is itself connected and must thus contain at least $|V|-1$ edges. Hence, we set $y^1_h= 14n+7m$. Let us now focus on $y^2_h$. We know that there are no spanners $S$ that contain fewer edges than $S_w$ for which $f_2(S)\leq 2$ holds. Hence, we set $y^2_h=3$. Thus, \[ y_h^1= 14n+7m \text{ and } y_h^2=3.\] Clearly, for every value vector $f(S)$ of a feasible spanner $S$ with $f_1(S)<K$, $y^1_h\leq f_1(S)$ and $y^2_h\leq f_2(S)$ hold. Hence, $y_h$ either dominates or is equal to every such value vector and thus, for every $\lambda \in \mathbb{R}^d_{\geq}, \lambda \neq 0$, $\lambda^\mathsf{T} y_h \leq \lambda^\mathsf{T} \cdot f(S)$ holds. An expanded proof for the following lemma can be found in \Cref{appendix:Extreme_Points}. \begin{lemma} The value vector of the yes-witness-spanner is an extreme point. \end{lemma} \begin{proof} Consider the vector $\lambda_{w}=(2,15n+9m)^\mathsf{T} \in \mathbb{R}^d_{\geq}, \lambda_{w} \neq 0$. We show that $\lambda_{w} \in W(f(S_w))$ holds and that for all $y \in \mathcal{Y}_N \setminus \{f(S_w)\}\colon \lambda_{w} \notin W(y)$. \begin{gather*} \lambda_{w}^\mathsf{T} \cdot f(S_1) = 73n+41m = \lambda_{w}^\mathsf{T} y_h\\ \lambda_{w}^\mathsf{T} \cdot f(S_w) = 62n+36m < 73n+41m = \lambda_{w}^\mathsf{T} \cdot f(S_1) = \lambda_{w}^\mathsf{T} y_h, \end{gather*} for $n,m >0$. Hence, $\lambda_{w} \in W(f(S_w))$ and for all $y \in \mathcal{Y}_N \setminus \{f(S_w)\} \colon \lambda_{w} \notin W(y)$. \hfill \qed \end{proof} A sketch of the relevant value vectors can be seen in \Cref{apendix:figure:WSS_Punkte} in the appendix. Finally, let us consider the entire reduction. Suppose there is an algorithm $\mathcal{A}$ capable of solving unweighted MSp\textsuperscript{YEx} instances in output-polynomial time. We know that if and only if the initial 3SAT instance $I$ is a yes-instance, the 2-Spanner\textsuperscript{DEC} instance $\hat{I}$ constructed according to Cai's proof is a yes-instance too. Therefore, the yes-witness-spanner $S_w$ exists in $\hat{I}$ and thus also in the associated MSp\textsuperscript{YEx} instance $I'$. Since the value vector $f(S_w)$ of $S_w$ is an extreme point, it is part of the output $\mathcal{Y}_X$ of algorithm $\mathcal{A}$ when applied to $I'$. In consideration of $\mathcal{Y}_X \subseteq \mathcal{Y}_N$ and \Cref{lemma:poly_restricted_of}, we infer that solving $I'$ with $\mathcal{A}$ and checking whether $f(S_w) \in \mathcal{Y}_X$ holds is possible in polynomial time in the size of $I$. In conclusion, if $\mathcal{A}$ existed, we could solve 3SAT in polynomial time. Thus, \Cref{theorem:MSP_YEx_notin_OP} holds. Similarly, based on Cai's reduction of 3SAT to directed 2-Spanner\textsuperscript{DEC} \cite[Section 3]{CAI1994187}, we can show the same results for the diMSp\textsuperscript{YEx}. The only difference to the undirected case lies in the construction of the directed 2-Spanner\textsuperscript{DEC} instance by Cai. These differences in turn cause slightly different values in the objective functions of the considered spanners in the diMSp\textsuperscript{YEx} instance that we construct analogously to the undirected case. It is easy to see that these differences have no influence on the validity of the statement. Due to the analogy of the proofs, we leave the details to the reader.
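As an aside, the inner products used in this section can be checked mechanically; a small \texttt{sympy} sketch (not part of the argument) is:
\begin{verbatim}
from sympy import symbols, expand

n, m = symbols("n m", positive=True)
lam  = (2, 15 * n + 9 * m)           # lambda_w
f_S1 = (29 * n + 16 * m, 1)          # f(S_1) = (|E|, 1)
y_h  = (14 * n + 7 * m, 3)           # hypothetical value vector
f_Sw = (16 * n + 9 * m, 2)           # f(S_w) = (K, 2)

dot = lambda a, b: expand(a[0] * b[0] + a[1] * b[1])
assert dot(lam, f_S1) == 73 * n + 41 * m
assert dot(lam, y_h)  == 73 * n + 41 * m
assert dot(lam, f_Sw) == 62 * n + 36 * m   # < 73n + 41m
\end{verbatim}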
\section{Conclusion}\label{section:Conclusion} What remains open is the output-sensitive complexity of computing the set of extreme points for degree-3 bounded, outerplanar instances, as it is currently unknown whether there is a stretch factor $t$ such that the t-Spanner\textsuperscript{DEC} problem is \textbf{NP}-hard under these restrictions. Thus, such a proof requires a different approach than the one used in \Cref{section:Extreme Points}. Future work might include the development of approximation techniques for the MSp and related problems, as well as investigating which existing approaches can be applied to them. \printbibliography \newpage \begin{appendix} \section{Intractability}\label{apendix:intractability} \begin{customlemma}{4} For all $S, S' \in X$ with $S \neq S'$, $S$ and $S'$ do not dominate each other and have different value vectors. \end{customlemma} \begin{proof} Let $S, S' \in X$ be two different spanners and assume $S$ dominates $S'$. Then either $f_1(S)<f_1(S')$ or $f_1(S)=f_1(S')$ holds. We begin by considering the first case. Let $j \in [n]$ be the greatest index at which the shortest s-t-paths in $S$ and $S'$ differ. Since $f_1(S)<f_1(S')$ holds, $S$ must not contain the edge $\{v_j, w_j\}$, while $S'$ has to contain it. Let $P$ be the remaining path that is identical for $S$ and $S'$. Thus, \begin{align*} f_2(S')&= d^{S'}_{c_2}(s,t)\\ & \leq \left( \sum_{i=1}^{j-1} c_2(v_i, v_i')+c_2(v_i', w_i)+c_2(w_i,v_{i+1}) \right) +c_2(v_j, w_j)+c_2(P)\\ & = \left( \sum_{i=1}^{j-1} 2 \cdot 2^i +1 \right)+2^j +c_2(P) = \left( \sum_{i=2}^{j} 2^i +1 \right)+2^j +c_2(P)\\ & < \left( \sum_{i=1}^{j-1} 2^i +1 \right)+2^j+2^j+c_2(P)\\ & = \left(\sum_{i=1}^{j-1} c_2(v_i, w_i)+c_2(w_i,v_{i+1}) \right)+c_2(v_j, v_j')+c_2(v_j', w_j)+c_2(P)\\ & \leq d^{S}_{c_2}(s,t)=f_2(S). \end{align*} This contradicts the assumed domination. Let us now consider the second case, in which $f_1(S)=f_1(S')$ holds. Then, in order for $S$ to dominate $S'$, $f_2(S)<f_2(S')$ must hold as well. By design of the $c_1$-edge-weights, we know that in order for $f_1(S)=f_1(S')$ to hold, it is true for every edge $\{v_i, w_i\} \in E$ that $\{v_i, w_i\} \in S \Leftrightarrow \{v_i, w_i\} \in S'$. This claim can be verified by considering that every edge $\{v_i, w_i\}$ has a unique $c_1$-cost that cannot be reproduced by any combination of edges $e \in E \setminus \{v_i, w_i\}$. Consequently, the shortest s-t-paths in $S$ and $S'$ are exactly the same and therefore $f_2(S)=f_2(S')$ holds, which contradicts the assumed domination. \hfill \qed \end{proof} \section{Extreme Points}\label{appendix:Extreme_Points} \begin{customlemma}{8} The value vector of the yes-witness-spanner $f(S_w)$ is an extreme point. \end{customlemma} \begin{proof} Consider the vector $\lambda_{w}=(2,15n+9m)^\mathsf{T} \in \mathbb{R}^d_{\geq}, \lambda_{w} \neq 0$. We show that $\lambda_{w} \in W(f(S_w))$ holds and that for all $y \in \mathcal{Y}_N \setminus \{f(S_w)\}\colon \lambda_{w} \notin W(y)$. Begin by considering $\lambda_{w}^\mathsf{T} \cdot f(S_1)$ and $\lambda_{w}^\mathsf{T} y_h$.
\begin{align*} \lambda_{w}^\mathsf{T} \cdot f(S_1) & = \begin{pmatrix} 2 & 15n+9m \end{pmatrix} \cdot \begin{pmatrix}29n + 16m \\ 1 \end{pmatrix} = 73n+41m\\ & = \begin{pmatrix} 2 & 15n+9m \end{pmatrix} \cdot \begin{pmatrix}14n+7m \\ 3\end{pmatrix} = \lambda_{w}^\mathsf{T} y_h \end{align*} Now, consider $\lambda_{w}^\mathsf{T} \cdot f(S_w)$. We get \begin{align*} \lambda_{w}^\mathsf{T} \cdot f(S_w) & = \begin{pmatrix} 2 & 15n+9m \end{pmatrix} \cdot \begin{pmatrix} K \\ 2\end{pmatrix} = \begin{pmatrix} 2 & 15n+9m \end{pmatrix} \cdot \begin{pmatrix} 16n+9m \\ 2\end{pmatrix} \\ & = 62n+36m < 73n+41m = \lambda_{w}^\mathsf{T} \cdot f(S_1) = \lambda_{w}^\mathsf{T} y_h, \end{align*} for $n,m >0$. Hence, $\lambda_{w} \in W(f(S_w))$ and for all $y \in \mathcal{Y}_N \setminus \{f(S_w)\} \colon \lambda_{w} \notin W(y)$. \hfill \qed \end{proof} \begin{center} \begin{figure} \centering \includegraphics[scale=0.4]{WSS_Cai_T2} \caption{A truth-setting component $T$ and its symbolic representation. Thick edges indicate forced edges. For clarity, forcing paths have been omitted from the figure.} \label{apendix:figure:WSS_Cai_T2} \end{figure} \end{center} \begin{center} \begin{figure} \centering \includegraphics[scale=0.5]{WSS_Cai_complete} \caption{The graph $G$ for $C=\{ \{\bar{u_1}, u_2, u_3\}, \{u_1, u_2, \bar{u_3} \} \}$. Thick edges indicate forced edges. For clarity, forcing paths have been omitted from the figure.} \label{apendix:figure:WSS_Cai_complete} \end{figure} \end{center} \begin{center} \begin{figure} \centering \includegraphics[scale=0.5]{WSS_Punkte} \caption{The value vectors $y_h,f(S_w), f(S_1)$ and slopes visualized.} \label{apendix:figure:WSS_Punkte} \end{figure} \end{center} \end{appendix} \end{document}
\section{Introduction} \IEEEPARstart{T}{he} reconstruction of 3D models has been explored for decades. Its development has followed the trend towards low-cost, high-quality sensors and efficient computation hardware, and was boosted in recent years by the progress in Deep Learning. In the field of reconstruction, most attention has been drawn to global optimization with bundle adjustment and loop closure~\cite{cao2018real,dai2017bundlefusion,whelan2015elasticfusion}. Reconstructions using the Signed Distance Function (SDF) as a representation~\cite{curless1996volumetric} have been widely accepted as a fundamental basis since KinectFusion~\cite{newcombe2011kinectfusion} and VoxelHashing~\cite{niessner2013real}. Recently, this basis has been challenged by the new trend of Deep Learning, as the conventional approaches have issues with memory requirements and with the quality of incomplete scans. Relying on the high modeling capacity of deep neural networks, DeepSDF~\cite{park2019deepsdf} and Occupancy Networks~\cite{mescheder2019occupancy} propose implicit geometric representations that represent the shape in continuous space and are thus able to extract maps at arbitrary resolution. Similar to deep local descriptors~\cite{yuan2021self}, which use a parametric function to encode the geometry, the deep implicit model further supports prediction of the fields. These deep implicit representations have been widely sought after for their flexible use in shape reconstruction~\cite{park2019deepsdf}, shape generation~\cite{chen2019learning} and more general tasks. By operating on the voxel level, \cite{chabra2020deep,jiang2020local} even provide semantics-agnostic high-quality reconstructions. Building on the success of deep implicit representations, in 2021, DI-Fusion~\cite{huang2021di} and NeuralBlox~\cite{lionar2021neuralblox} first proposed incremental neural implicit maps for 3D reconstruction. DI-Fusion in particular introduced a novel reconstruction pipeline that effectively combines the efficiency of the neural implicit representation with the robustness of field-based registration. Different from DI-Fusion, which still uses the conventional SLAM pipeline, \cite{sucar2021imap, zhu2021nice} proposed live optimization with an implicit representation for reconstruction, which is also able to complete unseen surfaces. However, there is still a severe limitation of implicit representations compared with the common (point cloud, TSDF) methods: implicit representations do not support transformation. It is this limitation that makes it hard to implement relocalization and remapping for neural implicit map reconstructions. Yuan et al.\ propose indirect registration to evade the transformation of the field during registration~\cite{yuan2022indirect}. However, rotation and translation are inevitable for a remapping function. \begin{figure}[t!] \centering \psfragfig[width=1\linewidth]{im/eps/twobranch2}{ \psfrag{A}{$\mathbf T_g$} \psfrag{B}{$f_{encoder}$} \psfrag{C}{$f_{encoder}$} \psfrag{D}{$\mathbf S_g$} \psfrag{E}{voxel $v \in \{1,\cdots\}$} \psfrag{P}{$ \mathbf P$} \psfrag{Pb}{$\mathbf P^{'}$} \psfrag{Fb}{$\mathbf F_v^{'}$} \psfrag{F}{$\mathbf F_v$} } \caption{Two flow paths to SE(3)-transform and deep-encode a point cloud. The {\color{nb}solid line} indicates the transform-encoding path that generates the implicit map of the $\mathbf T_g$-transformed point cloud $\mathbf P$.
The {\color{yb}dashed line} shows the encoding-transform path, which transforms the map of features with the transformation $\mathbf S_g$ and is introduced in this paper.} \label{fig:twobranch} \vspace{-.4cm} \end{figure} In this paper, we propose a transformation algorithm for neural implicit maps to fill this gap. As shown in \cref{fig:twobranch}, encoding the transformed point cloud is equivalent to first encoding the points and then transforming the resulting neural implicit map, where $\mathbf S_g$ is the transformation on the neural implicit map corresponding to the Euclidean-space transformation $\mathbf T_g$. The main challenge is the transformation in feature space that corresponds to the alignment of the original point set. Thus, in this paper, we draw on the topic of equivariant representations~\cite{thomas2018tensor,fuchs2020se,deng2021vector} to implement the implicit map transformation. As the focus of SE(3)-equivariant research is not the transformation itself, it is not directly adequate for realizing a full transformation in feature space. In addition, the recent approaches work only on small examples with simple structures. Thus, our proposed model works on a map of neural implicits, i.e., a set of neural implicit functions, instead of one holistic implicit function, to evade both limitations. The transformed implicit map produces a result very close to the implicit map of the transformed point cloud. To demonstrate the advantage of our mapping model, we also embed it into a SLAM algorithm equipped with loop closure~\cite{mur2017orb}. The contributions of this paper are as follows: \begin{itemize} \item We propose a transformation algorithm for neural implicit maps. \item We implement a 3D reconstruction system with this remapping model. \end{itemize} In the following, we first briefly describe the related work on implicit functions and equivariant features. Then, we introduce our transformation algorithm for neural implicit maps. After that, experiments demonstrate the performance, and we conclude this work. \section{Related Work} \subsection{Deep Implicit Representations} Algorithms for implicit representations can be divided into two categories. The first and most widely used branch is DeepSDF~\cite{park2019deepsdf}. For SDF-based methods, the geometry prior is encoded with an MLP into a latent code, which is then fed into another model together with the query points to extract the signed distance values of a discretized distance field. A mesh is then extracted using the Marching Cubes algorithm~\cite{lorensen1987marching}. For non-closed shapes, which are more general for point cloud data, a neural unsigned distance field model has been introduced that does not indicate inside versus outside~\cite{chibane2020neural}. The second category is Occupancy Networks~\cite{mescheder2019occupancy}. For Occupancy Networks, different from the distances in the SDF model, the probability of occupancy at a certain position is estimated by the implicit function. Then a Multiresolution IsoSurface Extraction (MISE) is implemented to obtain meshes. To efficiently reconstruct intricate surfaces, DeepLS~\cite{chabra2020deep} introduces a local deep geometry prior and performs the reconstruction with a set of locally learned continuous SDFs. Similarly, \cite{jiang2020local} proposes a local implicit grid for reconstruction. Note that one advantage of the local implicit model is that it relieves the pressure on the encoding model.
As a local surface is much simpler than a whole complicated scene, such a local strategy can be trained on a simple synthetic object dataset and then generalizes to real, complex scenes. In 2021, DI-Fusion~\cite{huang2021di} moved one step further and led this line of research to the reconstruction of real scenes. It is the first implicit-function work that realizes the incremental reconstruction of a scene. More importantly, they alleviate the memory inefficiency of the SDF representation by updating a map of latent features while extracting distance values only for registration and visualization, yielding a new direction of 3D reconstruction with Deep Learning. Similarly, NeuralBlox~\cite{lionar2021neuralblox} also proposes to fuse the grid of latent features. Given an external state estimation with noise, its latent code fusion still shows robust performance. iMAP~\cite{sucar2021imap} proposes a non-conventional SLAM pipeline with a neural implicit representation for incremental reconstruction. Its main component is a differentiable rendering model. With online optimization, the reconstruction is optimized by repeatedly minimizing the difference between rendered and observed images. \begin{figure}[!] \centering \includegraphics[width=.8\linewidth]{im/PLV.png} \caption{PLIVox representation from DI-Fusion~\cite{huang2021di}.} \label{fig:PLIVox} \vspace{-0.5cm} \end{figure} In this work, we build on top of DI-Fusion by using their PLIVox representation as in \cref{fig:PLIVox}. \subsection{Equivariant Feature} Equivariance is a novel concept for 3D point clouds. The target is a universal representation of objects under different poses, avoiding exhaustive data augmentation. We follow the definition in SE(3)-Transformers~\cite{fuchs2020se}: Given a set of transformations $\mathbf T_g: \mathcal V\rightarrow \mathcal V$ for $g \in G$, where $G$ is an abstract group, a function $\phi:\mathcal V \rightarrow \mathcal Y$ is equivariant if for each $g$, there exists a transformation $\mathbf S_g: \mathcal Y \rightarrow \mathcal Y$ such that \begin{align} \mathbf S_g[\phi(v)] = \phi( \mathbf T_g[v]) \text{\ \ for all\ } g\in G,v\in \mathcal V. \end{align} \begin{figure}[b!] \vspace{-0.5cm} \centering \psfragfig[width=1\linewidth]{im/eps/SO3_2}{ \psfrag{R}{$\mathbf R$} } \caption{SO(3)-equivariant representation for point clouds.} \label{fig:SO3} \end{figure} \begin{figure*}[] \centering \includegraphics[width=.9\linewidth]{im/pipeline_neuralImplicit.png} \caption{Pipeline for SLAM embedding our mapping module. The {\color{nb}SLAM module} provides the point cloud $P_i^{'}$ and the pose table $\{T_1,\cdots,T_i\}$ for the {\color{yb} mapping module} at keyframe $i$.} \label{fig:pipeline} \vspace{-0.5cm} \end{figure*} From this definition, an SO(3)-equivariant function $\phi$ satisfies $\mathbf S_{\mathbf R}\phi(v) = \phi(\mathbf R[v])$, where $\mathbf S_{\mathbf R}$ is an operation that produces the same result as first rotating the point cloud. For translation equivariance, the $\mathbf S$ operation is, for convenience, usually defined as the identity mapping. As translation can be removed by working with relative positions, most works mainly focus on SO(3)-equivariant representations~\cite{thomas2018tensor, kondor2018clebsch, esteves2018learning, weiler20183d} with steerable kernel bases. SE(3)-Transformers~\cite{fuchs2020se} leverage the advance of self-attention for large point sets and graphs with varying numbers of points.
Realizing equivariance by learning, those works are restricted to using convolutions and relative positions of neighboring points. Vector Neurons (VNN) first introduced a whole group of network layers that produce SO(3)-equivariant features~\cite{deng2021vector}; PointNet~\cite{qi2017pointnet} and DGCNN~\cite{phan2018dgcnn} can be flexibly reconstructed with VNN layers. This provides us with a good basis, as it can function as a point encoder. In this work, we mainly use three rotation-equivariant operations: VNLinear, VNLeakyReLU, and mean-pooling. For example, with the input $\mathbf V\in\mathbb{R}^{C\times 3}$ and the VNLinear parameter $\mathbf W_l\in\mathbb{R}^{C^{'}\times C}$, VNLinear produces the output $\mathbf W_l\mathbf V$, which is rotation-equivariant since $(\mathbf W_l\mathbf V)\mathbf{R}=\mathbf W_l(\mathbf V\mathbf R)$. VNLeakyReLU produces each output vector-neuron $\mathbf v^{'}_c\in \mathbf V^{'}=f_{\text{VNLeakyReLU}}(\mathbf V)$ with separate parameters $\mathbf W_c\in \mathbb{R}^{1\times C}$ and $\mathbf U_c\in \mathbb{R}^{1\times C}$, where $c\in \{1,\cdots, C\}$. For each vector-neuron $\mathbf v^{'}_c\in \mathbb{R}^{1\times 3}$, it maps the input feature $\mathbf V$ to the feature $\mathbf q_c = \mathbf W_c \mathbf V\in \mathbb{R}^{1\times 3}$ and the direction $\mathbf k_c = \mathbf U_c \mathbf V \in \mathbb{R}^{1\times 3}$. Thus it produces \begin{equation} \mathbf v^{'}_c = \begin{cases} \mathbf q_c & \text{if\ } \langle \mathbf q_c,\mathbf k_c \rangle \geqslant 0 \\ \mathbf q_c - \left\langle \mathbf q_c,\frac{\mathbf k_c}{\|\mathbf k_c\|} \right\rangle \frac{\mathbf k_c}{\|\mathbf k_c\|} & \text{otherwise,} \end{cases} \label{eq:VN-ReLU} \end{equation} with the output $\mathbf V^{'}=[\mathbf v^{'}_c]_{c=1}^C$~\cite{deng2021vector}. Mean-pooling is an average over all points in the same feature dimension and is therefore naturally rotation-equivariant.
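To make these operations tangible, here is a minimal PyTorch sketch of the two layers together with a numeric check of the equivariance property (our own simplified re-implementation following the formulas above and \cite{deng2021vector}, not the authors' code):
\begin{verbatim}
import math
import torch
import torch.nn as nn

class VNLinear(nn.Module):
    # Maps C vector-features V in R^{C x 3} to C' features W V,
    # so that (W V) R = W (V R) for any rotation R.
    def __init__(self, c_in, c_out):
        super().__init__()
        self.map = nn.Linear(c_in, c_out, bias=False)

    def forward(self, V):                  # V: (..., C, 3)
        return self.map(V.transpose(-1, -2)).transpose(-1, -2)

class VNLeakyReLU(nn.Module):
    # Nonlinearity of Eq. (VN-ReLU), shown here in its slope-0
    # form: keep q_c if it lies in the half-space of k_c,
    # otherwise project it onto the boundary plane.
    def __init__(self, c):
        super().__init__()
        self.q_map, self.k_map = VNLinear(c, c), VNLinear(c, c)

    def forward(self, V):
        q, k = self.q_map(V), self.k_map(V)
        k = k / (k.norm(dim=-1, keepdim=True) + 1e-8)
        dot = (q * k).sum(dim=-1, keepdim=True)
        return torch.where(dot >= 0, q, q - dot * k)

def rot_z(theta):                          # rotation about z
    c, s = math.cos(theta), math.sin(theta)
    return torch.tensor([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

phi = nn.Sequential(VNLinear(8, 16), VNLeakyReLU(16))
V, R = torch.randn(8, 3), rot_z(0.7)
print((phi(V @ R) - phi(V) @ R).abs().max())  # ~1e-7
\end{verbatim}
The check passes because each building block commutes with the rotation: the linear map acts on the channel dimension only, and both the inner product and the norm in the nonlinearity are invariant under rotation.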
$\mathbf p$ are point for inference and $\mu$, $\sigma$ are estimated distance value and its standard derivation.} \label{fig:point_encoder} \end{figure} Our encoder-decoder neural network $\Phi$ follows the design of encoder-decoder in DI-Fusion~\cite{huang2021di}. $\phi_e$ encodes points in a PLIVox, and $\phi_d$ predicts distance mean and standard deviation for query points. But different from DI-Fusion that is using a simple PointNet structure, to realize the transformation on the neural implicit map, we propose to use VNN layers~\cite{deng2021vector} to build an SO(3)-equivariant encoder. For each local voxel, it uses the points $\mathbf P_m$ and the norm $\mathbf S_m$ as an input. Two branches of VNN-MLPs are respectively applied on $\mathbf P_m$ and $\mathbf S_m$ and produce features $\mathbf F_{\mathbf P_m}\in \mathbb R^{l\times 3}$ and $\mathbf F_{\mathbf S_m} \in \mathbb R^{l\times 3}$. Then by concatenating $\mathbf F_{\mathbf P_m}$ and $\mathbf F_{\mathbf S_m}$ along the $l$ axis, it achieves $\mathbf F_m\in \mathbb R^{2l\times 3}$. A point encoder $\phi_p$ is given in \cref{fig:point_encoder}. The local point set encoder $\phi_e$ produces the mean-pooling of the $\phi_p$ output in $\mathbf P_m$. In this encoder-decoder we changed the encoder with VNN to realize the SO(3)-equivariant functionality. For more details about the decoder network and Conditional Neural Processes-style training, please refer to DI-Fusion~\cite{huang2021di}. \subsection{Neural Implicit Mapping Module} Neural Implicit Mapping Module consists of Encoding~(\cref{sec::feature}), Fusion~(\cref{sec:map_removal_fusion}), Removal~(\cref{sec:map_removal_fusion}), and Transforming~(\cref{sec::map::trans}) functions. The input frame to this module is firstly encoded into a local neural implicit map and fused to the global map. When a loop is detected, remapping a certain frame requires removing, transforming, and fusing the corresponding local neural implicit map. A diagram of our mapping module is given in \cref{fig:pipeline}. The neural implicit mapping module serves as a mapping module for SLAM. \subsubsection{Transformation to Global Coordinate} \label{sec::map::trans} Our transformation algorithm of Neural Implicit Map consists of two steps: the grid transformation and the feature rotation. As demonstrated in \cref{fig:transform}, \begin{figure}[b!] \centering \psfragfig[width=1.\linewidth]{im/eps/transform}{ \psfrag{R}{$\mathbf R$} \psfrag{F}{$(\mathbf c_m, \mathbf F_m)$} \psfrag{T}{$\mathbf T$} } \caption{Demonstration of the transformation on the neural implicit map. The voxel grid is transformed to a new position. $\mathbf F_m$ rotates since $\mathbf F_m$ is still positioned at the center of a voxel in the grid after the transformation (center is transformed).} \label{fig:transform} \end{figure} given the transformation $\mathbf T$ of the local map $\mathbf V_l$ to global coordinates, the update is actually on the center $\mathbf c$ and its corresponding implicit feature $\mathbf F$ for each PLIVox. For PLIVox voxel $\mathbf v_m\in \mathbf V_l$, center (grid coordinate) $\mathbf c_m$ directly transforms \begin{align} \mathbf c_m \leftarrow \mathbf T\cdot \mathbf c_m. \end{align} Afterwards, for the feature $\mathbf F_m\in \mathbb{R}^{2l\times3}$, as the feature is always positioned at the voxel center, the rotation is left to solve. Thus \begin{align} \mathbf F_m \leftarrow ( \mathbf R \cdot \mathbf F_m^{T} )^T. 
\subsection{Neural Implicit Mapping Module} The neural implicit mapping module consists of encoding~(\cref{sec::feature}), fusion~(\cref{sec:map_removal_fusion}), removal~(\cref{sec:map_removal_fusion}) and transformation~(\cref{sec::map::trans}) functions. The input frame to this module is first encoded into a local neural implicit map and fused into the global map. When a loop is detected, remapping a certain frame requires removing, transforming and re-fusing the corresponding local neural implicit map. A diagram of our mapping module is given in \cref{fig:pipeline}. The neural implicit mapping module serves as the mapping module for SLAM. \subsubsection{Transformation to Global Coordinate} \label{sec::map::trans} Our transformation algorithm for neural implicit maps consists of two steps: the grid transformation and the feature rotation. As demonstrated in \cref{fig:transform}, \begin{figure}[b!] \centering \psfragfig[width=1.\linewidth]{im/eps/transform}{ \psfrag{R}{$\mathbf R$} \psfrag{F}{$(\mathbf c_m, \mathbf F_m)$} \psfrag{T}{$\mathbf T$} } \caption{Demonstration of the transformation on the neural implicit map. The voxel grid is transformed to a new position. $\mathbf F_m$ rotates, since $\mathbf F_m$ remains positioned at the center of a voxel in the grid after the transformation (the center itself is transformed).} \label{fig:transform} \end{figure} given the transformation $\mathbf T$ of the local map $\mathbf V_l$ to global coordinates, the update acts on the center $\mathbf c$ and the corresponding implicit feature $\mathbf F$ of each PLIVox. For a PLIVox $\mathbf v_m\in \mathbf V_l$, the center (grid coordinate) $\mathbf c_m$ transforms directly: \begin{align} \mathbf c_m \leftarrow \mathbf T\cdot \mathbf c_m. \end{align} Afterwards, for the feature $\mathbf F_m\in \mathbb{R}^{2l\times3}$, as the feature is always positioned at the voxel center, only the rotation is left to apply. Thus \begin{align} \mathbf F_m \leftarrow ( \mathbf R \cdot \mathbf F_m^{T} )^T. \label{eq:feature_transform} \end{align} However, transforming only the local neural implicit frame is not sufficient to update the global map, because the transformed voxel grid of the local frame may not be consistent with the grid of the global map. \subsubsection{Interpolation to Global Grid} \label{sec::map::interp} As shown in \cref{fig:transformed}, there is a small gap between global and local grid coordinates. Therefore, we need to additionally interpolate the local features on the global grid. \begin{figure}[b!] \vspace{-0.6cm} \centering \psfragfig[width=.6\linewidth]{im/eps/grid_align}{ \psfrag{T}{$\mathbf V_g$} \psfrag{V}{$\mathbf T_i \mathbf V_i$} } \caption{The transformed local grid is not well-fitted to the global grid. For better demonstration, only the centers of the grid are shown.} \label{fig:transformed} \end{figure} Since the distance between the local and the target voxel is small, and since, in our implementation, the points involved in encoding lie in a region of twice the voxel length around each voxel, we propose to align the voxels by linearizing the function $\phi_e(\mathbf P+\mathbf t)$. Each $\mathbf v_m$ in the local grid contributes to its close neighbors $\mathbf v_{n}$ in the global grid: \begin{align} \mathbf F_{\mathbf v_{n}} = \phi_e(\mathbf P_m + \mathbf t_{m,n} ), \ \ \ \mathbf t_{m,n} = \mathbf c_{n}-\mathbf c_m. \label{eq:f_nc} \end{align} Linearizing the right side of \cref{eq:f_nc} yields \begin{align} \mathbf F_{\mathbf v_{n}} = \mathbf F_{\mathbf v_m} + \frac{\partial}{\partial \mathbf t} [\phi_e(\mathbf P_m )] \mathbf t_{m,n}. \label{eq:f_lin} \end{align} Here we denote the Jacobian as $\mathbf{J}_m = \frac{\partial}{\partial \mathbf t} [\phi_e(\mathbf P_m )]$, for which $\mathbf J_m\in \mathbb{R}^{2l\times3\times3}$. Following the feature metric of PointNetLK~\cite{aoki2019pointnetlk}, we approximate each column of the Jacobian using finite differences, \begin{align} \mathbf J_{m,p}\approx \frac{ \phi_e( \mathbf P_m+\mathbf t_p)-\phi_e(\mathbf P_m)}{\Delta t} \in \mathbb{R}^{2l\times 3}, \label{eq:Jacobian_1} \end{align} where $\mathbf t_p \in \{[\Delta t,0,0],[0,\Delta t,0], [0,0,\Delta t]\}$. The Jacobian of the feature with respect to the translation is then \begin{align} \mathbf J_m = [\mathbf J_{m,1}\ \mathbf{J}_{m,2} \ \mathbf {J}_{m,3}]. \label{eq:Jacobian_2} \end{align} Note that the Jacobian is computed together with the implicit feature, i.e., \emph{before} the transformation with $\mathbf T$. Thus, each column of the Jacobian needs a pre-transformation $\mathbf J_{m,p}\leftarrow \mathbf J_{m,p}\mathbf R^T$, together with the feature transformation in \cref{eq:feature_transform}. In addition, the translation offset $\mathbf t_{m,n}$ in \cref{eq:f_lin} cannot be directly multiplied with $\mathbf J_m$; an inverse rotation is required, that is, $\mathbf J_m \mathbf R^T \mathbf t_{m,n}$. We keep the formulation of \cref{eq:f_lin} valid by rewriting the Jacobian as $\mathbf J_m\leftarrow \mathbf J_m \mathbf R^T$. In our implementation, for each target voxel $\mathbf v_n$ in $\mathbf V_g$ that has neighbors with center distance smaller than the voxel size, we find its $K$ nearest neighbors $\mathbf v_m$, $m\in\{c_1,\cdots, c_K \}$, in the local grid, where $c_i$ denotes the PLIVox index of the $i$-th neighbor. Then we interpolate \begin{align} \mathbf F_{n} = \sum_{m\in\{c_1,\cdots c_K \}} s_{n,m}( \mathbf F_m + \mathbf J_m \mathbf t_{m,n} ), \end{align} where $s_{n,m} = \frac{\exp(-\|\mathbf t_{m,n}\|^2)}{\sum_{m'\in\{c_1,\cdots c_K \}}\exp( -\| \mathbf t_{m',n}\|^2)}$.
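The following NumPy sketch summarizes the transformation and interpolation steps of this subsection (illustrative pseudocode of our description; \texttt{jacs} denotes the finite-difference Jacobians of \cref{eq:Jacobian_1}, computed before the transformation):
\begin{verbatim}
import numpy as np

def transform_local_map(centers, feats, jacs, T):
    # centers: (M,3); feats: (M,2l,3); jacs: (M,2l,3,3); T: (4,4)
    R = T[:3, :3]
    centers = centers @ R.T + T[:3, 3]   # c_m <- T c_m
    feats = feats @ R.T                  # F_m <- (R F_m^T)^T
    jacs = jacs @ R.T                    # J_m <- J_m R^T
    return centers, feats, jacs

def interpolate_to_global(centers, feats, jacs, global_centers, K=8):
    F_g = np.zeros((len(global_centers),) + feats.shape[1:])
    for n, c_n in enumerate(global_centers):
        d = np.linalg.norm(centers - c_n, axis=1)
        knn = np.argsort(d)[:K]          # K nearest local voxels
        t = c_n - centers[knn]           # t_{m,n} = c_n - c_m
        w = np.exp(-(t ** 2).sum(1))
        w /= w.sum()                     # weights s_{n,m}
        for m, t_mn, s in zip(knn, t, w):
            F_g[n] += s * (feats[m] + jacs[m] @ t_mn)
    return F_g
\end{verbatim}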
Moreover, the voxel point count $w$ is required for the subsequent updating of the global neural implicit map (\cref{sec:map_removal_fusion}) and should therefore also be recorded: \begin{align} w_{n} = \sum_m s_{n,m} \cdot w_{m}. \end{align} \subsubsection{Map Removal \& Fusion} \label{sec:map_removal_fusion} DI-Fusion provides an update of the neural implicit map in a voxel-to-voxel manner. As we transform and fit the local grid to the global grid in \cref{sec::map::trans} and \cref{sec::map::interp}, we are now ready for the neural implicit map update. Since the local map $\mathbf V_{k}$ has previously been fused into the global map, after a pose update the global map $\mathbf V_g$ requires a local removal and afterwards a local fusion with the updated local neural implicit map. We formulate the removal analogously to the fusion; the removal (upper sign) and the fusion (lower sign) are done as follows: \begin{align} \mathbf F_m\leftarrow \frac{\mathbf F_m w_m\mp \mathbf F_m^k w_m^k}{w_m \mp w_m^k},\quad w_m\leftarrow w_m\mp w_m^k \end{align}
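A compact sketch of this voxel-wise update (our illustrative Python, mirroring the weighted-average formula above):
\begin{verbatim}
def update_voxel(F_m, w_m, F_k, w_k, sign=+1):
    # sign=+1 fuses a local voxel into the global one,
    # sign=-1 removes a previously fused contribution.
    w_new = w_m + sign * w_k
    if w_new <= 0:
        return None, 0            # voxel becomes unobserved
    F_new = (F_m * w_m + sign * F_k * w_k) / w_new
    return F_new, w_new
\end{verbatim}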
\section{Experiments} \subsection{Setting} \subsubsection{Datasets} Three datasets are used in our experiments. The object dataset ShapeNet~\cite{chang2015shapenet} is used for training, while the RGB-D datasets ICL-NUIM~\cite{handa2014benchmark} and Replica~\cite{straub2019replica} are used for quantitative evaluation. \paragraph{ShapeNet~\cite{chang2015shapenet}} ShapeNet is a richly annotated 3D shape dataset with a large variety of objects. We follow \cite{huang2021di} and select 6 categories (bookshelf, display, sofa, chair, lamp, and table) and 100 samples to train the encoder and decoder models. For more details on the preprocessing of this data, please refer to \cite{huang2021di}. \paragraph{ICL-NUIM~\cite{handa2014benchmark}} ICL-NUIM is a widely used RGB-D dataset for SLAM and reconstruction. It contains living-room and office-room scenes, of which the living-room scene provides a ground-truth surface model and is therefore widely used for surface comparison. We use lr-kt[0-3] with synthetic noise for the standard surface comparison. \paragraph{Replica~\cite{straub2019replica}} Replica is a highly photo-realistic 3D indoor scene reconstruction dataset. We use iMAP's~\cite{sucar2021imap} 8 sequences (5 offices and 3 rooms) from Replica, each containing 2000 rendered RGB-D frames. Unlike the ICL-NUIM sequences, which do not repeatedly record the same views, the Replica sequences, due to iMAP's live optimization, cover each direction and surface multiple times. We extensively evaluate our model on this dataset to demonstrate its reconstruction quality. \subsubsection{Implementation details} All experiments are run on a NUC computer (CPU-i7-10710U 1.10GHz, 32GB memory, GTX2080Ti-12GB). We follow DI-Fusion in setting the PLIVox parameters: voxel size $=0.1m$. For mesh extraction, we also use the same $\sigma$ threshold $\sigma_D=0.06$ to compare fairly with DI-Fusion. For encoding, we set the feature length to $9\times 3\times2$ for the VNN SO(3)-equivariant feature. Three VN-linear operations are applied to each point and its normal to obtain two $9 \times 3$ features per point in a PLIVox. More specifically, the encoder applies VNLinear(1,32)$\rightarrow$VNLeakyReLU$\rightarrow$VNLinear(32, 32)$\rightarrow$VNLeakyReLU$\rightarrow$VNLinear(32,9) sequentially, which yields a $9\times 3$ feature in the point and the normal branch, respectively. Then, by mean-pooling and concatenating, one $18 \times 3$ point-set feature is obtained for that PLIVox; a PyTorch-style sketch of these layers is given at the end of this subsection. For the decoder and the optimization loss, we follow DI-Fusion in predicting both mean and variance, and train the whole encoder-decoder network with a strategy similar to Conditional Neural Processes~\cite{garnelo2018conditional}. For interpolation, we set the candidate number $K=8$. For mesh extraction, a space-grid resolution of $4$ is used in each PLIVox. For testing on the Replica dataset, we set $\sigma_D=0.15$ and resolution $=3$.
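The following PyTorch sketch shows one way to realize these layers. The layer definitions follow the common vector-neuron formulation from the VNN literature; they are illustrative rather than our exact training code.

\begin{verbatim}
import torch
import torch.nn as nn

class VNLinear(nn.Module):
    # Linear map over the channel dimension of vector features (N, C, 3).
    # Mixing channels of 3D vectors commutes with rotations: W(X R) = (W X) R.
    def __init__(self, c_in, c_out):
        super().__init__()
        self.map = nn.Linear(c_in, c_out, bias=False)
    def forward(self, x):                      # x: (N, C, 3)
        return self.map(x.transpose(-1, -2)).transpose(-1, -2)

class VNLeakyReLU(nn.Module):
    # Direction-based leaky ReLU: a channel q is (partially) projected
    # away from a learned direction d when <q, d> < 0.
    def __init__(self, c, slope=0.2):
        super().__init__()
        self.dir = VNLinear(c, c)
        self.slope = slope
    def forward(self, x):                      # x: (N, C, 3)
        d = self.dir(x)
        dot = (x * d).sum(-1, keepdim=True)
        proj = x - dot / (d.norm(dim=-1, keepdim=True) ** 2 + 1e-8) * d
        relu = torch.where(dot >= 0, x, proj)
        return self.slope * x + (1.0 - self.slope) * relu

def vn_branch():
    return nn.Sequential(VNLinear(1, 32), VNLeakyReLU(32),
                         VNLinear(32, 32), VNLeakyReLU(32),
                         VNLinear(32, 9))

# One PLIVox: per-point features are mean-pooled, then the point and
# normal branches are concatenated into the (18, 3) equivariant feature.
point_branch, normal_branch = vn_branch(), vn_branch()
points = torch.randn(128, 1, 3)    # 128 points in one PLIVox
normals = torch.randn(128, 1, 3)   # their normals
f = torch.cat([point_branch(points).mean(0),
               normal_branch(normals).mean(0)], dim=0)  # (18, 3)
\end{verbatim}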
\subsubsection{Training} Despite the different encoder structure, our model is trained with the same settings as DI-Fusion~\cite{huang2021di} on the ShapeNet data. \begin{figure}[b!] \vspace{-0.5cm} \centering \includegraphics[width=.8\linewidth,height=.4\linewidth]{im/transform/completeness.png} \includegraphics[width=.8\linewidth,height=.4\linewidth]{im/transform/accuracy.png} \caption{Accuracy and completeness. Accuracy measures how close the points extracted in the {\color{YellowOrange}encode-transform} branch are to the points of the {\color{blue}transform-encode} branch, while completeness measures how close the points extracted in the {\color{blue}transform-encode} branch are to the points of the {\color{YellowOrange}encode-transform} branch.} \label{fig:exp:transform} \end{figure} \subsubsection{Testing} During testing, the pre-trained encoder-decoder is transferred to unseen scenes for indoor-scale reconstruction. We run two tests: one on the functionality of the transformation and one on incremental reconstruction. For the first test, in \cref{sec::exp::trans}, we use the mapping module on a single frame. For the second test, in \cref{sec::exp::recons}, ORB-SLAM2 RGB-D is used to provide the localization. We then use two benchmarks to evaluate the performance: ICL-NUIM and Replica~\cite{straub2019replica}. ICL-NUIM is the most widely used standard benchmark, providing a ground-truth surface model and metric tools for comparison. On the standard ICL-NUIM benchmark, we compare with DVO-SLAM~\cite{kerl2013dense}, Surfel Tracking~\cite{keller2013real}, ElasticFusion~\cite{whelan2015elasticfusion}, BundleFusion~\cite{dai2017bundlefusion}, and DI-Fusion~\cite{huang2021di}. On the Replica dataset, we follow iMAP~\cite{sucar2021imap} in selecting the data and compare against iMAP as the baseline; values are taken from the iMAP paper since its source code is not released. \subsection{Evaluating the Functionality of Transformation on Neural Implicit Maps} \label{sec::exp::trans} As illustrated in \cref{fig:twobranch}, there are two paths to encode a point cloud into a transformed neural implicit map. To evaluate the functionality of our transformation algorithm, we generate neural implicit maps along both branches and then measure accuracy and completeness between the reconstructions obtained from the resulting maps. Accuracy is the average distance between the sampled reconstruction points of the encode-transform path and the nearest points of the transform-encode path; completeness is the average distance between the sampled points of the transform-encode reconstruction and the nearest points of the encode-transform path. We select lr-kt[0-3] as the test sequences and use the ground-truth trajectory to provide the transformations. After generating the neural implicit maps, the decoder is used to produce the signed distance field, and the Marching Cubes algorithm is used to generate the surface. Each frame is recorded separately to compute the surface error; the per-frame error distribution is shown in \cref{fig:exp:transform}. It is clear that our method retains a similar reconstruction when the transformation is applied directly to the neural implicit maps. For completeness especially, the very low error shows that transforming on the implicit representation reconstructs the surface region as well as transforming first and then encoding. However, an error of ${\sim}3cm$ remains. The success of the incremental reconstruction in the following experiment indicates that this is mainly a small surface-generation effect within the whole reconstruction. \begin{table}[] { \small \centering \setlength{\tabcolsep}{6.0pt} \caption{Comparison of surface error on the ICL-NUIM~\cite{handa2014benchmark} benchmark (measured in centimeters).} \label{tbl:icl-surface} \begin{tabularx}{\linewidth}{X|cccc} \toprule & lr kt0 & lr kt1 & lr kt2 & lr kt3 \\ \midrule DVO-SLAM~\cite{kerl2013dense} & 3.2 & 6.1 & 11.9 & {5.3} \\ RGB-D SLAM~\cite{endres2012evaluation} & 4.4 & 3.2 & 3.1 & 16.7\\ MRSMap~\cite{stuckler2014multi}& 6.1 & 14 & 9.8 & 24.8\\ Kintinuous~\cite{whelan2015real} &1.1&0.8& 0.9& 15.0\\ ElasticFusion~\cite{whelan2015elasticfusion} &0.7 & 0.7& \textbf{0.8}&2.8\\ DI-Fusion~\cite{huang2021di} & \textbf{0.6} & 1.5 & 1.1 & 4.5 \\ \midrule Ours &1.2 & 1.58 & 1.0 & \textbf{1.2} \\ \bottomrule \end{tabularx} } \vspace{-0.4cm} \end{table} \subsection{Evaluating the Incremental Reconstruction Performance} \label{sec::exp::recons} \subsubsection{ICL-NUIM test} \label{sec::exp::recons::ICL} In this part, we evaluate our model on the ICL-NUIM benchmark with synthetic noise added. We use the surface error as the metric for the difference between the reconstruction and the ground-truth model. The quantitative evaluation is given in \cref{tbl:icl-surface}. We observe that the neural-implicit-map-based algorithms achieve high accuracy compared to the others. However, on lr-kt3, which contains loops, DI-Fusion does not exceed ElasticFusion, whereas our model obtains the best score of $1.2cm$: it is able to detect and remap the start-end loop of lr-kt3, which is reflected in this score. This demonstrates that our model \emph{addresses the incompatibility of DI-Fusion with a loop closure module}. Our model scores similarly to DI-Fusion on lr-kt1 and lr-kt2, which also confirms that our VNN encoder represents the features well while retaining the SO(3)-equivariant functionality.
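For reference, the accuracy and completeness values used above can be computed as one-directional nearest-neighbor distances between sampled surface points. The following is our own illustration, not the benchmarks' official tooling; the point arrays \texttt{et\_points} and \texttt{te\_points} are hypothetical inputs.

\begin{verbatim}
import numpy as np
from scipy.spatial import cKDTree

def one_directional_error(src_pts, ref_pts):
    # Mean distance from each source point to its nearest reference point.
    dist, _ = cKDTree(ref_pts).query(src_pts, k=1)
    return dist.mean()

# et_points: points sampled from the encode-transform reconstruction
# te_points: points sampled from the transform-encode reconstruction
acc = one_directional_error(et_points, te_points)    # accuracy
comp = one_directional_error(te_points, et_points)   # completeness
\end{verbatim}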
\begin{table*}[!] \centering \caption{Reconstruction Test on Replica Dataset~\cite{straub2019replica}.} \label{tab:replica} \begin{adjustbox}{max width = \linewidth} \begin{threeparttable} \setlength{\tabcolsep}{0.36em} \begin{tabular}{clccccccccc} \toprule & & \tt{room-0} & \tt{room-1} & \tt{room-2} & \tt{office-0} & \tt{office-1} & \tt{office-2} & \tt{office-3} & \tt{office-4} & Avg. \\ \midrule \multirow{3}{*}{iMAP$^*$~\cite{sucar2021imap} } & {\bf Acc.} [cm] $\downarrow$ & 3.58 & 3.69 & 4.68 & 5.87 & 3.71 & 4.81 & 4.27 & 4.83 & 4.43 \\ & {\bf Comp.} [cm] $\downarrow$ & 5.06 & 4.87 & 5.51 & \textbf{6.11} &\textbf{ 5.26} & \textbf{5.65} & 5.45 & 6.59 & \textbf{5.56}\\ & {\bf Comp. Ratio} [$<$ 5cm \%] $\uparrow$ & 83.91 & {83.45} & 75.53 & 77.71& \textbf{79.64} & 77.22& 77.34 & \textbf{77.63} & 79.06\\ \midrule \multirow{3}{*}{\textbf{Ours}} % & {\bf Acc. } [cm] $\downarrow$ &\textbf{ 2.05} & \textbf{1.74}&\textbf{1.97} & \textbf{2.03 }& \textbf{1.63} & \textbf{2.10 }& \textbf{2.75} & \textbf{3.07} & \textbf{2.17} \\ & {\bf Comp. } [cm] $\downarrow$ &\textbf{3.75} &\textbf{3.41} &\textbf{4.60} & 9.68 & 8.73 & 5.67 & \textbf{4.77} & \textbf{5.14} & 5.72 \\ & {\bf Comp. Ratio } [$<$ 5cm \%] $\uparrow$ & \textbf{86.59} & \textbf{87.60 }& \textbf{83.57}& \textbf{79.28} & 78.14 & \textbf{77.76} &\textbf{78.19} & 74.16 & \textbf{80.66} \\ \bottomrule \end{tabular}% \begin{tablenotes} \item[$^*$] Values taken from \cite{sucar2021imap}. \end{tablenotes} \end{threeparttable} % \end{adjustbox} \vspace*{-2mm} \end{table*} \begin{figure*}[t!] \centering \begin{subfigure}{.2\textwidth} \centering \includegraphics[width=1.\linewidth]{im/textured/office0.png} \caption{office0} \end{subfigure}% \begin{subfigure}{.2\textwidth} \centering \includegraphics[width=1.\linewidth]{im/textured/office1.png} \caption{office1} \end{subfigure}% \begin{subfigure}{.2\textwidth} \centering \includegraphics[width=1.\linewidth]{im/textured/office2.png} \caption{office2} \end{subfigure}% \begin{subfigure}{.2\textwidth} \centering \includegraphics[width=1.\linewidth]{im/textured/office3.png} \caption{office3} \end{subfigure}% \\ \begin{subfigure}{.2\textwidth} \centering \includegraphics[width=1.\linewidth]{im/textured/office4.png} \caption{office4} \end{subfigure}% \begin{subfigure}{.2\textwidth} \centering \includegraphics[width=1.\linewidth]{im/textured/room0.png} \caption{room0} \end{subfigure}% \begin{subfigure}{.2\textwidth} \centering \includegraphics[width=1.\linewidth]{im/textured/room1.png} \caption{room1} \end{subfigure}% \begin{subfigure}{.2\textwidth} \centering \includegraphics[width=1.\linewidth]{im/textured/room2.png} \caption{room2} \end{subfigure}% \caption{Reconstruction Demonstration. Our post-processed textures are averaged from projected image colors.} \label{fig:draw_demo} \vspace*{-6mm} \end{figure*} \subsubsection{Replica Test} We also evaluate our model on the iMAP~\cite{sucar2021imap} Replica dataset sequences. The metrics follow iMAP: accuracy, completion, and completion ratio. The completion ratio is an important metric because the ground-truth model contains the ceiling, which is mostly unobserved in the data sequences. In \cref{tab:replica}, we see that our model scores best on all accuracy tests and best on most completion and completion-ratio tests. On the average scores, our model is best in accuracy and completion ratio, while our average completion does not exceed iMAP's.
Note that iMAP is a live-training model with differentiable rendering and is therefore naturally capable of completing occluded regions of the point clouds. Our model does not have this point-cloud completion capability, which explains why iMAP exceeds ours in completion. However, iMAP's higher completion combined with its lower or similar completion ratio means that its guesses of unobserved surfaces usually fail. Some results are textured and shown in \cref{fig:draw_demo}. \subsection{Efficiency Test} \subsubsection{Time Cost} The time efficiency of encoding and remapping directly determines the usability of our model in online reconstruction. We therefore recorded the time cost of the lr-kt3 test from \cref{sec::exp::recons::ICL}, which contains a large loop. Each frame fed into the mapping model is encoded as a local neural implicit map. The recorded encoding time is $\mathbf{0.0077s\pm0.0019s}$ per frame. When a loop is detected, certain frames need to be removed, transformed, and fused again into the global map. This is accomplished directly on the neural implicit maps. Per-frame removal from the global map takes $\mathbf{0.0040s\pm0.00038s}$, per-frame transformation and interpolation take $\mathbf{0.018s\pm0.0039s}$, and per-frame fusion into the global map takes $\mathbf{0.0032s\pm0.00044s}$. Thus, encoding runs at around $\mathbf{130Hz}$ and remapping at around $\mathbf{50Hz}$ on our NUC computer, which makes the remapping well suited for online applications. \subsubsection{Space Cost} The space cost mainly consists of the network (encoder, decoder), the neural implicit map, and the meshing. We evaluate it by saving the network parameters and neural implicit maps to files. The parameter files take $31.5kB$ for the encoder and $207kB$ for the decoder. Saving the full result map of the lr-kt3 test with \texttt{torch.save} takes merely $29.2MB$. During encoding, the fetched network parameters take $26.5kB$. Points are passed to our VNN encoder; given the layer specification above, the space cost is $n\times \max\{1,32,32,9\}\times 3\times2=192n$ \texttt{float32} values, as intermediate buffers are not retained. Counting the points passed to the encoder, the encoding buffer amounts to $105MB\pm59MB$. We do not count the space cost of mesh extraction since, as shown in \cref{fig:pipeline}, it serves visualization and can be run externally. \subsection{Demonstration on Campus-scale Reconstruction} We are also interested in scenarios that other methods cannot handle. For indoor scenes, many sequences do not contain large loops that require front-end restructuring. In outdoor LiDAR SLAM, however, such as on KITTI-odometry~\cite{geiger2012we}, loop closure is vital for removing the accumulated error of a long trajectory. We therefore produce a neural implicit map of such a scene to further demonstrate the capability of our algorithm. The LiDAR localization model PyICP-SLAM\footnote{\url{https://github.com/gisbi-kim/PyICP-SLAM}} is used to provide the tracking estimation and the pose graph. Due to the scale difference with respect to indoor scenes and the limited available memory, we use a voxel size of $4m$; the other hyperparameters are preserved. A reconstruction of KITTI-odometry sequence $00$ is shown in \cref{fig:kitti}. \begin{figure}[!]
\centering \includegraphics[height=.7\linewidth]{im/FMT_kitti.png} \caption{Incremental reconstruction result on KITTI-odometry sequence 00.} \label{fig:kitti} \vspace{-.5cm} \end{figure} \section{Conclusion} In this paper, we have presented a neural implicit mapping module that supports loop closure. By utilizing an SO(3)-equivariant encoder, we are able to apply SE(3) transformations directly to the neural implicit maps. In combination with an interpolation step, our mapping module supports updating the neural implicit map when the pose of a certain frame changes, without touching the original 3D point cloud. In addition, our experiments showed that the SO(3)-equivariant encoder reliably generates neural implicit maps and that, building on this, our transformation module functions well and provides high-quality reconstructions with and without loop closure. \vspace{-.3cm} {\small \bibliographystyle{IEEEtran}
\section{Introduction} \label{sec:introduction} \normalem \IEEEPARstart{M}{any} real-world continuous optimization problems involve the optimization of multiple, often conflicting objectives, and constraints that need to be respected~\cite{Ma2019}. Such problems are known as \emph{constrained multiobjective optimization problems} (CMOPs) and have recently gained much interest in the evolutionary computation community. Indeed, several novel techniques for constraint handling and new test suites of CMOPs have been proposed recently (e.g.,~\cite{Fan2019b, Zhu2020, Ma2021a, Zhi-Zhong2021}). Despite the large number of recently published articles in the field of constrained multiobjective optimization, the CMOPs for benchmarking \emph{multiobjective evolutionary algorithms (MOEAs)} and the corresponding \emph{constraint handling techniques (CHTs)} are still unsatisfactorily understood and characterized~\cite{Picard2021, Vodopija22}. Consequently, the selection of appropriate CMOPs for benchmarking is difficult and lacks a formal background. Under these circumstances, preparing a sound and well-designed experimental setup for constrained multiobjective optimization is a challenging task, and a poorly designed benchmark might lead to inadequate conclusions about CMOP landscapes and MOEA performance~\cite{Vodopija22}. According to~\cite{Bartz-Beielstein2020}, there are two main options for characterizing and evaluating the quality of optimization problems, namely through the \emph{feature space} and the \emph{performance space}. The feature space can be seen as a space of problem characteristics, including basic characteristics such as problem dimensionality and the type of objective and constraint functions, as well as more advanced characteristics derived using methods developed in the field of \emph{exploratory landscape analysis (ELA)}~\cite{Mersmann2011}. The performance space, on the other hand, represents the problems based on the algorithm performance (behavior) observed while solving them. Similar to the feature space, basic statistics can be used, such as the mean or median algorithm performance, as well as more advanced methods, e.g., \emph{data profiles}~\cite{More2009} or \emph{empirical cumulative distribution functions (ECDFs)}~\cite{Hansen21, Hansen2022}. In contrast to aggregated values (means, medians, etc.), the latter two methods consider the progress of the whole algorithm run and thus provide more comprehensive information about the algorithm behavior. In our previous work~\cite{Vodopija22}, we provided an extensive study of characterizing CMOPs through the feature space, while, to the best of our knowledge, the performance space has not yet been addressed in sufficient depth. In the literature, the performance indicators used in constrained multiobjective optimization are the same as those used in unconstrained multiobjective optimization; they are simply applied only to feasible solutions. The most frequently employed indicators are the \emph{hypervolume indicator}~\cite{Zitzler1999} and the \emph{inverted generational distance}~\cite{Bosman2003}, since they provide information about both the convergence and the diversity of the obtained Pareto front approximation. For monitoring the performance during the run, one can use convergence graphs, data profiles or ECDFs. However, none of these techniques provides relevant information until feasible solutions are discovered.
As a result, essential insights about the algorithm behavior and CMOP characteristics are missed. To mitigate this, some papers also report the progress in constraint satisfaction~\cite{Fan2019a}. Nevertheless, to the best of our knowledge, no method from the literature simultaneously measures both the convergence towards the Pareto front and the constraint satisfaction, making the experimental analysis incomplete. Moreover, we are aware of only a single work, conducted in 2017, that analyzes CMOPs from the performance space perspective~\cite{Tanabe2017}. The authors used five CHTs to characterize five artificial and seven real-world test problems. The results revealed that only a single artificial test problem was suitable for benchmarking algorithms, since the other four problems could be solved even without employing a CHT. Additionally, the studied real-world problems were inadequate since they could not differentiate among MOEAs, which is a desired property of a test problem as it provides relevant information for algorithm designers~\cite{Bartz-Beielstein2020}. Since 2017, several novel test suites of CMOPs have been proposed, and their ability to differentiate among MOEAs has not been investigated yet~\cite{Filipic21}. In this paper, we present a novel anytime performance assessment approach specifically designed for constrained multiobjective optimization. It simultaneously monitors both the Pareto front approximation and the constraint satisfaction. The approach is inspired by the anytime performance assessment of algorithms on unconstrained bi-objective optimization problems used in COCO (COmparing Continuous Optimizers)~\cite{Hansen21}. In addition, we propose an approach to measure the capability of a given problem to differentiate among MOEAs. The resulting measure is then used to evaluate the most frequently used artificial test suites of CMOPs. Because of space limitations, we present only selected results in this paper; the complete results are available online\footnote{https://vodopijaaljosa.github.io/cmop-web/}. The rest of this paper is organized as follows. In Section~\ref{sec:theoretical_background}, we provide the theoretical background for constrained multiobjective optimization and introduce the COCO platform. Then, in Section~\ref{sec:methodology}, we extend the performance assessment from the COCO platform to CMOPs and propose an approach to characterize CMOPs based on the algorithm performance. Section~\ref{sec:experiments} provides details on the experimental setup, while Section~\ref{sec:results} presents the results, evaluates the existing test suites of CMOPs, and discusses some limitations of the proposed methodology. Finally, a summary of the findings and ideas for future work are given in Section~\ref{sec:conclusions}. \section{Background} \label{sec:theoretical_background} In this section, we provide the theoretical background for this work. After the definitions of CMOPs and constraint violation, we describe the performance assessment approach for multiobjective optimization used in the COCO platform.
\subsection{Constrained Multiobjective Optimization Problems} A CMOP is, without loss of generality, formulated as \begin{equation} \label{eq:cmop} \begin{split} \text{minimize} \quad &f_m(x), \quad m = 1, \dots, M \\ \text{subject to} \quad &g_i(x) \leq 0, \quad i = 1, \dots, I\\ \end{split} \end{equation} where $x = (x_1, \dots, x_D)$ is a \emph{search vector}, $f_m: S \rightarrow \R{}$ are \emph{objective functions}, $g_i: S \rightarrow \R{}$ are \emph{constraint functions}, $S \subseteq \R{D}$ is a \emph{search space} of dimension $D$, and $M$ and $I$ are the numbers of objectives and constraints, respectively. In particular, when $M=1$, the corresponding problem is a \emph{constrained single-objective optimization problem (CSOP)}. To differentiate between CMOPs and CSOPs, in the latter case we omit the index $m$ ($f = f_1$). Additionally, if the problem has no constraints, it is called a \emph{single-objective optimization problem (SOP)} when $M = 1$, and a \emph{multiobjective optimization problem (MOP)} when $M > 1$. One of the most important concepts in constrained optimization is the notion of \emph{constraint violation}. For a single constraint $g_i$ it is defined as $v_i(x) = \max(0, g_i(x))$, and for all constraints together it is combined into the \emph{overall constraint violation} \begin{equation} v(x) = \sum_{i=1}^{I} v_i(x). \end{equation} A solution $x$ is feasible iff its overall constraint violation equals zero ($v(x) = 0$). Note that other definitions of the overall constraint violation exist, and their use would impact the analysis performed in this study. However, the above definition is by far the most commonly adopted in constrained optimization~\cite{Filipic21} and, as such, represents the most appropriate choice. A feasible solution $x \in S$ \emph{dominates} another solution $y \in S$ iff $f_m(x) \leq f_m(y)$ for all $1 \leq m \leq M$ and $f_m(x) < f_m(y)$ for at least one $1 \leq m \leq M$. Additionally, a solution $x^* \in S$ is \emph{Pareto optimal} if there is no solution $x \in S$ that dominates $x^*$. Pareto dominance can be generalized to sets: a set $X$ dominates a set $Y$ iff for each $y \in Y$ there exists at least one solution $x \in X$ that dominates $y$. The set of all feasible solutions is called the \emph{feasible region} and is denoted by $F = \{x \in S \mid v(x) = 0\}$. All nondominated feasible solutions constitute the \emph{Pareto-optimal set}, $S_\text{o}$. The image of the Pareto-optimal set in the objective space is the \emph{Pareto front}, denoted here by $P_\text{o} = \{f(x) \mid x \in S_\text{o}\}$. The \emph{ideal objective vector}, $z^{\mathrm{ide}}$, is the vector in the objective space that contains the optimal objective value for each objective separately: \begin{equation} z^{\mathrm{ide}} = \left(\inf_{x \in F}f_1(x), \dots, \inf_{x \in F}f_M(x)\right). \end{equation} Additionally, the \emph{nadir objective vector}, $z^{\mathrm{nad}}$, consists in each objective of the worst value obtained by any Pareto-optimal solution: \begin{equation} z^{\mathrm{nad}} = \left(\sup_{x \in P_\text{o}}f_1(x), \dots, \sup_{x \in P_\text{o}}f_M(x)\right). \end{equation} An additional important concept is the region of interest in the objective space, $Z$, which is the set of objective vectors bounded by the ideal and nadir objective vectors.
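These basic notions translate directly into code. The following minimal Python sketch is our own illustration, not part of any benchmark tooling; it implements the overall constraint violation and the Pareto dominance check for minimization:

\begin{verbatim}
import numpy as np

def violation(g):
    # Overall constraint violation: v(x) = sum_i max(0, g_i(x)).
    return np.maximum(np.asarray(g), 0.0).sum()

def dominates(f_x, f_y):
    # Pareto dominance (minimization): f_x is nowhere worse and
    # strictly better in at least one objective.
    f_x, f_y = np.asarray(f_x), np.asarray(f_y)
    return bool(np.all(f_x <= f_y) and np.any(f_x < f_y))

print(violation([-0.3, 0.2]))             # 0.2 -> infeasible
print(dominates([0.2, 0.5], [0.4, 0.5]))  # True
\end{verbatim}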
If good approximations of the ideal and nadir objective vectors are known, the objective functions can be normalized to \begin{equation} \frac{f_m(x) - z^{\mathrm{ide}}_m}{z^{\mathrm{nad}}_m - z^{\mathrm{ide}}_m}. \end{equation} This way, the objective values are of approximately the same magnitude, and the range of the objective values of Pareto-optimal solutions is $[0,1]$. Note that after normalization $z^{\mathrm{ide}}$ consists of $M$ zeros ($z^{\mathrm{ide}} = (0, \dots, 0)$), $z^{\mathrm{nad}}$ of $M$ ones ($z^{\mathrm{nad}} = (1, \dots, 1)$), and the region of interest $Z$ equals $[0,1]^M$. In particular, for SOPs and CSOPs the normalization results in $f(x^*) = 0$. In the rest of this paper, we assume that all the objective functions are normalized. \subsection{Empirical Runtime Distributions (ERDs)} \label{sec:coco} The performance measurement approach used in the COCO framework~\cite{Hansen21, Hansen2022} relies on the number of function evaluations\footnote{When we refer to a function evaluation, we actually mean the evaluation of all the objective and constraint functions. For example, for a bi-objective problem with three constraints we need to perform five evaluations; however, we count this as a single function evaluation.}---called \emph{runtime}---needed for an algorithm, $a$, to reach predefined quality indicator targets. More precisely, we can represent an algorithm run after performing $T$ function evaluations as a sequence of candidate solutions, $A^T(a) = \{x^1(a), \dots, x^T(a)\}$. Within this framework, a quality indicator, $I$, is defined as a function mapping $A^T(a)$ to a real value. Here, we assume that low quality indicator values indicate better sequences of candidate solutions, and vice versa. Additionally, the runtime for a given quality indicator target equals the lowest $T$ for which $I(A^T(a))$ reaches the given target precision value, $\tau$. Note that in the following, if there is no ambiguity, we drop the algorithm notation $a$ from $A^T(a)$. In practice, we define several target precision values to understand the algorithm behavior throughout the entire run. Runtimes can be formally defined as random variables and displayed as an empirical cumulative distribution function---called an empirical runtime distribution (ERD) in the COCO framework. ERDs display the proportion of target values reached within a specified budget and can be easily aggregated over multiple restarts, runs or even multiple problems. For more details on ERDs, see~\cite{Hansen21, Hansen2022}. The runtime data set for an algorithm $a$ and all targets $\tau$ is denoted by $\{T_a(\tau)\}_{\tau}$. Finally, the runtimes in COCO are usually studied on a logarithmic scale, and this perspective is used throughout this paper as well. \subsection{Quality Indicators} Depending on the nature of the optimization problem, various quality indicators are used by the COCO framework. Those relevant for this work are as follows. \subsubsection{Single-objective optimization} In this case, the quality indicator is the best objective function value observed so far: \begin{equation} \label{eq:isop} I^\mathrm{SOP}(A^T) = \min_{x \in A^T} f(x). \end{equation} \subsubsection{Constrained single-objective optimization} The quality indicator for unconstrained problems (\ref{eq:isop}) is extended by adding the overall constraint violation as follows: \begin{equation} \label{eq:icsop} I^\mathrm{CSOP}(A^T) = \min_{x \in A^T} f(x) + v(x).
\end{equation} \subsubsection{Multiobjective optimization} The quality indicator for MOPs consists of two parts. When no solution from the sequence $A^T$ dominates the nadir point (the reference point), the distance to the region of interest $Z$ is used to measure the quality of the solutions (see Fig.~\ref{fig:1b}). In contrast, when at least one of the solutions dominates the nadir point, the hypervolume indicator is used instead (see Fig.~\ref{fig:1c}). This quality indicator can be expressed as \begin{equation} \label{eq:imop} I^\mathrm{MOP}(A^T) = \begin{cases} - I^{\mathrm{HV}}(A^T) & \text{if } A^T \preceq \{ z^{\mathrm{nad}} \} \\ d(A^T, Z) & \text{otherwise}. \end{cases} \end{equation} Here, \begin{equation} I^{\mathrm{HV}}(A^T) = V \left(\bigcup_{x \in \mathcal{N}(A^T)} [f_1(x), 1] \times \cdots \times [f_M(x), 1] \right) \end{equation} is the hypervolume of the archive $A^T$, $\mathcal{N}(A^T)$ is the set of all points from $A^T$ dominating the reference point $(1, \dots, 1)$, and \begin{equation} \label{eq:dist} d(A^T, Z) = \inf_{(x,z) \in A^T \times Z} \norm{f(x) - z} \end{equation} is the smallest Euclidean distance between the archive and the region of interest $Z$. Additional information on this quality indicator can be found in~\cite{Hansen2022}. \subsection{Target Precision Values for MOPs} For each problem, a set of quality indicator target values is chosen, which is used to measure algorithm runtimes and, in turn, to calculate ERDs. The target values take the form $\tau(\varepsilon) = \tau^\mathrm{ref} + \varepsilon$, where $\tau^\mathrm{ref}$ is a reference $I^\mathrm{MOP}$ value, based either on the hypervolume of the true Pareto front or on an estimate of it. In COCO, 51 positive target precision values $\varepsilon \in \{10^{-5}, 10^{-4.9}, \dots, 10^{-0.1}, 10^{0}\}$ are chosen\footnote{In COCO, negative values are also introduced in case the algorithms find better Pareto front approximations than the available ones. However, in our situation this cannot happen, as all the Pareto fronts are known in advance.}. Note that it is not uncommon that the quality indicator value of an algorithm never reaches some of these target values, which leads to missing runtime measurements. \begin{figure*}[!t] \centering \subfloat[]{\includegraphics[width=0.31\textwidth]{fig1a}% \label{fig:1a}} \hspace{0.025\textwidth} \subfloat[]{\includegraphics[width=0.31\textwidth]{fig1b}% \label{fig:1b}} \hspace{0.025\textwidth} \subfloat[]{\includegraphics[width=0.31\textwidth]{fig1c}% \label{fig:1c}} \caption{The quality indicator $I^\mathrm{CMOP}$ at three stages of the algorithm search: (a) All the solutions belong to the infeasible space (areas in gray) and the quality indicator relies on the overall constraint violation. (b) There exists at least one feasible solution, but no solution dominates the reference point $z^{\mathrm{nad}}$. The quality indicator relies on the distance to the region of interest $Z$ (area bounded by the dotted lines and the coordinate axes). (c) There exists at least one feasible solution dominating the reference point. The quality indicator is based on the hypervolume (area depicted with a mesh).} \label{fig:1} \end{figure*} \section{Methodology} \label{sec:methodology} This section provides an extension of ERDs to constrained multiobjective optimization. It also discusses an approach to measuring a problem's effectiveness in distinguishing algorithms based on the distance between their ERDs.
\subsection{Quality Indicator for CMOPs} There are two main paradigms for approaching constrained optimization problems. The first one is applicable when the constraints must be satisfied at any cost, while the second one allows for partial violation of constraints if the objective values of a solution are of good quality. Although both paradigms have their pros and cons, we study the former one, as this is the prevalent approach in the literature\footnote{The COCO framework uses the second paradigm for CSOPs.}. Furthermore, in many real-world scenarios the objective values cannot be calculated if the solution is infeasible~\cite{Eiben2003}. This often happens in simulation-based optimization, where the simulator cannot return meaningful results if some of the constraints are not satisfied. Consequently, our main assumption in constructing a quality indicator for constrained multiobjective optimization is that an infeasible solution is strictly worse than any feasible solution regardless of the quality of its objective values---this is exactly the pillar of the first paradigm. For example, in Fig.~\ref{fig:1c}, solution $z^4$ has better objective values than $z^5$; indeed, if no constraints were considered, $z^4$ would dominate $z^5$. Nevertheless, $z^5$ is considered to be strictly better than $z^4$. This desired property of a quality indicator can be expressed as \begin{equation} \label{eq:ic} I(x) < I(y) \quad \text{ for all } (x,y) \in F \times S \backslash F. \end{equation} The quality indicator for CSOPs (\ref{eq:icsop}) does not satisfy this property. The biggest disadvantage of an indicator not satisfying (\ref{eq:ic}) is that, no matter how small the quality indicator value is, we cannot know whether there exists a feasible solution in $A^T$. For example, for a certain CSOP there might exist a solution in $A^T$ with $f(x) = 0$ and an arbitrarily small overall constraint violation. In other words, unless $I^\mathrm{CSOP}$ equals zero ($x^* \in A^T$), we cannot know for certain whether a feasible solution has been found by relying solely on the quality indicator values. From a practical point of view, we wish for a quality indicator involving a threshold that unequivocally indicates when the algorithm has reached the feasible space. Considering this, we propose an extension of the quality indicator for MOPs (\ref{eq:imop}) as follows: \begin{equation} \label{eq:icmop} I^\mathrm{CMOP}(A^T) = \begin{cases} \min(I^\mathrm{MOP}(A^T \cap F), \tau^*) & A^T \cap F \neq \emptyset\\ \min_{x \in A^T} v(x) + \tau^* & \text{otherwise} \end{cases} \end{equation} where $\tau^*$ is a threshold indicating that the feasible space was reached. For example, in the COCO framework, it can be set to the largest considered target for MOPs, which equals 1. It is easy to see that $I^\mathrm{CMOP}$ satisfies the property (\ref{eq:ic}). The behavior of the proposed quality indicator is illustrated in Fig.~\ref{fig:1} and sketched in code below. Additionally, note that in (\ref{eq:icmop}) only feasible solutions are considered in the calculation of $I^\mathrm{MOP}$. This can be seen in Figs.~\ref{fig:1b}~and~\ref{fig:1c}, where the infeasible solutions are not considered in the calculation of the distance and the hypervolume, respectively, once feasible solutions have been found.
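The following Python sketch illustrates the case distinction in (\ref{eq:icmop}). It is our own illustration; the callable \texttt{i\_mop}, standing in for $I^\mathrm{MOP}$, is assumed to be given.

\begin{verbatim}
import numpy as np

TAU_STAR = 1.0  # feasibility threshold tau*

def i_cmop(objs, viol, i_mop):
    # objs: (T, M) normalized objective vectors of the archive A^T
    # viol: (T,) overall constraint violations v(x)
    # i_mop: callable computing I^MOP on a set of objective vectors
    objs, viol = np.asarray(objs), np.asarray(viol)
    feasible = viol == 0.0
    if feasible.any():
        # Only feasible solutions enter I^MOP; the value is capped at tau*.
        return min(i_mop(objs[feasible]), TAU_STAR)
    # No feasible solution yet: smallest violation, shifted above tau*.
    return viol.min() + TAU_STAR
\end{verbatim}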
\subsection{Performance Space Comparison} \label{sec:performance_space_comparison} According to~\cite{Bartz-Beielstein2020}, a good test suite should include problems that ``enable the user to tell the algorithms apart in the performance space''. To measure the ability of a given problem to differentiate among MOEAs, we rely on the area between the corresponding ERDs (see Fig.~\ref{fig:2}, area in gray). The intuition is that a large area between ERDs indicates large differences between the runtimes and, in turn, between the algorithms. \begin{figure}[!t] \centering \includegraphics[width=0.62\columnwidth]{delta-new} \caption{ERDs corresponding to algorithms $a$ (solid line) and $b$ (dashed line). The area between the lines (in gray) represents the difference in algorithm performance, $\Delta(a,b)$.} \label{fig:2} \end{figure} Based on the area between two ERDs, we propose a metric, $\Delta$, that summarizes with a single number the similarity between the algorithms' performance. Assume we are dealing with two algorithms $a$ and $b$, and we have their runtime data sets $\{T_a(\tau)\}_{\tau}$ and $\{T_b(\tau)\}_{\tau}$ for a certain optimization problem. The area of a single segment between the runtimes (on the logarithmic scale) for the same target is then (see Fig.~\ref{fig:2}, the bold line between runtimes) \begin{equation} \label{eq:segment} \frac{\abs{\log\left(T_a(\tau)\right) - \log\left(T_b(\tau)\right)}}{\ntarget{}}, \end{equation} where $\ntarget{}$ is the number of targets. When a certain runtime is missing, we set its value to the maximal budget (number of function evaluations); this is done for calculation purposes only and has no particular meaning. Using (\ref{eq:segment}), the area bounded by the two ERDs, and thus $\Delta$, can be expressed as the sum of these segment areas over all targets: \begin{equation} \Delta(a, b) = \frac{\sum_{\tau}\abs{\log\left(\frac{T_a(\tau)}{T_b(\tau)}\right)}}{\ntarget{} \log N_{f}}, \end{equation} where $N_{f}$ is the number of function evaluations. The formula is additionally divided by $\log N_f$ for normalization purposes, so that $\Delta(a, b) \in [0,1]$ for all algorithms and problems. Small values indicate similar behavior of the chosen algorithms, and vice versa. In one extreme case, algorithm $a$ solves all the targets within a single evaluation while algorithm $b$ does not reach any target; then $\Delta(a, b)=1$. In the other extreme case, all the runtimes coincide and $\Delta(a, b) = 0$. The $\Delta(a, b)$ metric can additionally be decomposed as \begin{equation} \Delta(a, b) = \frac{\ntarget{-}}{\ntarget{}} \Delta^-(a, b) + \frac{\ntarget{+}}{\ntarget{}} \Delta^+(a, b) \end{equation} where $\Delta^-$ and $\Delta^+$ represent the sum of segments (\ref{eq:segment}) over the targets measuring constraint satisfaction, $\tau^-$, and the targets expressing the algorithm's effectiveness in approximating the Pareto front (called front approximation for short), $\tau^+$, respectively. Additionally, $\ntarget{+}$ is the number of $\tau^+$ targets, and $\ntarget{-}$ the number of $\tau^-$ targets. In particular, $\Delta^-$ can be seen as a measure of algorithm differences in constraint satisfaction, while $\Delta^+$ measures differences in front approximation. A code sketch of the $\Delta$ computation is given below.
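The sketch below is our own illustration of this computation; \texttt{np.nan} is used here to mark unreached targets.

\begin{verbatim}
import numpy as np

def delta(runtimes_a, runtimes_b, max_budget):
    # runtimes_*: one runtime per target; np.nan marks an unreached
    # target and is replaced by the maximal budget, as in the text.
    Ta = np.where(np.isnan(runtimes_a), max_budget, np.asarray(runtimes_a))
    Tb = np.where(np.isnan(runtimes_b), max_budget, np.asarray(runtimes_b))
    segments = np.abs(np.log(Ta) - np.log(Tb))
    # Average over targets, normalized by log(N_f) so that 0 <= delta <= 1.
    return segments.mean() / np.log(max_budget)
\end{verbatim}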
\section{Experimental Analysis} \label{sec:experiments} This section introduces the test suites of CMOPs used for the experiments, discusses the chosen MOEAs and their CHTs, and provides the parameter and implementation details. \subsection{Test Suites} \label{sec:test_suites} The most notable artificial test suites of CMOPs used to assess the performance of constrained multiobjective optimization algorithms are CTP~\cite{Deb2001}, CF~\cite{Zhang2008}, C-DTLZ~\cite{Jain2014}, NCTP~\cite{Li2016}, DC-DTLZ~\cite{Li2019}, LIR-CMOP~\cite{Fan2019a}, DAS-CMOP~\cite{Fan2019b}, and MW~\cite{Ma2019}. The basic characteristics of the test suites are summarized in Table~\ref{tab:suites}. \begin{table} \centering \caption{Characteristics of the test suites: number of problems, dimension of the search space $D$, number of objectives $M$, and number of constraints $I$.} \label{tab:suites} \begin{tabular}{lllll} \hline Test suite & \#problems & $D$ & $M$ & $I$ \\ \hline CTP~\cite{Deb2001} & \phantom{0}8 & * & 2 & 2, 3 \\ CF~\cite{Zhang2008} & 10 & * & 2, 3 & 1, 2 \\ C-DTLZ~\cite{Jain2014} & \phantom{0}6 & * & * & 1, * \\ NCTP~\cite{Li2016} & 18 & * & 2 & 1, 2 \\ DC-DTLZ~\cite{Li2019} & \phantom{0}6 & * & * & 1, * \\ DAS-CMOP~\cite{Fan2019b} & \phantom{0}9 & * & 2, 3 & 7, 11 \\ LIR-CMOP~\cite{Fan2019a} & 14 & * & 2, 3 & 2, 3 \\ MW~\cite{Ma2019} & 14 & * & 2, * & 1--4 \\ \hline \multicolumn{5}{l}{*Scalable parameter.}\\ \end{tabular} \end{table} \subsection{Multiobjective Evolutionary Optimization Algorithms} \label{sec:moea} Three well-known MOEAs were used to investigate the proposed assessment methodology and to compare the test suites: NSGA-III~\cite{Deb14, Jain2014}, C-TAEA~\cite{Li2019} and MOEA/D-IEpsilon~\cite{Fan2019a}, all equipped with their default CHTs. NSGA-III is a well-known algorithm that uses the constrained domination principle (CDP)~\cite{Deb02} as its CHT. This principle is an extension of the dominance relation and is the most widely used technique for solving CMOPs. It strictly favors feasible solutions over infeasible ones: while feasible solutions are compared based on Pareto dominance ($\preceq$), infeasible solutions are compared according to the overall constraint violation. The formal definition of CDP, as presented in~\cite{Zhu2020}, is \begin{equation} x \preceq_{\mathrm{CDP}} y \Leftrightarrow \begin{cases} x \preceq y & \text{if } v(x) = v(y) = 0 \\ v(x) < v(y) & \text{otherwise} \\ \end{cases}. \end{equation} Next, the CHT used in MOEA/D-IEpsilon is based on the $\varepsilon$-constraint relation \begin{equation} x \preceq_{\varepsilon} y \Leftrightarrow \begin{cases} g^{\mathrm{tc}}(x) < g^{\mathrm{tc}}(y) & \text{if } (v(x) \leq \varepsilon \text{ and } v(y) \leq \varepsilon) \\ & \text{or } v(x) = v(y) \\ v(x) < v(y) & \text{otherwise} \\ \end{cases}, \end{equation} where \begin{equation} g^{\mathrm{tc}}(x \mid \nu)=\max_{1 \leq m \leq M} \{\nu_m\abs{f_m(x) - z^*_m}\} \end{equation} is the Tchebycheff aggregation function.
The comparison level $\varepsilon_t$ is updated in each generation according to \begin{equation} \label{eq:eps_update} \varepsilon_{t} = \begin{cases} v(x^{\theta}) & \text{if } t = 0 \\ (1 - \tau) \varepsilon_{t - 1} & \text{if } \rho_{\mathrm{F}}(P_t) < \alpha \text{ and } t < T_{\mathrm{c}} \\ (1 + \tau) v_{\mathrm{max}} & \text{if } \rho_{\mathrm{F}}(P_t) \geq \alpha \text{ and } t < T_{\mathrm{c}} \\ 0 & \text{if } t \geq T_{\mathrm{c}} \\ \end{cases} \end{equation} where $t$ is the generation counter, $\tau$, $\alpha$ and $T_{\mathrm{c}}$ are user-defined parameters, $v(x^{\theta})$ is the overall constraint violation of the top $\theta$-th individual (according to the overall constraint violation value) in the initial population, and $\rho_{\mathrm{F}}(P_t)$ is the proportion of feasible solutions in the current population $P_t$. Additional details on MOEA/D-IEpsilon can be found in~\cite{Fan2019a}. Finally, the main idea behind C-TAEA is the maintenance of two separate archives: one archive promotes convergence, while the other maintains diversity. In addition, a restricted mating approach is employed to balance between the two archives. The CHT used by C-TAEA is incorporated in the update of the convergence archive. Similarly to CDP, this CHT strictly favors feasible solutions, which are compared based on Pareto dominance, while the infeasible solutions are ranked using nondominated sorting on a custom bi-objective problem expressed as \begin{equation} \label{eq:cht_ctaea} \text{minimize} \quad (v(x), g^{\mathrm{tc}}(x \mid \nu)). \end{equation} The convergence archive is updated with all feasible solutions and the best infeasible solutions according to the Pareto ranking applied in~(\ref{eq:cht_ctaea}). The diversity archive, in contrast, does not consider feasibility at all, allowing infeasible solutions to persist in the population. More information on this method is available in~\cite{Li2019}. We chose three MOEAs with complementary CHTs: (i) the CDP employed in NSGA-III strictly favors feasible solutions, (ii) the diversity archive in C-TAEA allows infeasible solutions to remain in the population, and (iii) MOEA/D-IEpsilon adaptively updates the comparison level in each generation following~(\ref{eq:eps_update}); when the feasibility ratio of the current population becomes large, $\varepsilon_t$ increases and progressively more solutions (infeasible ones included) are compared solely according to their objective values. \subsection{Parameter Settings} \label{sec:experimental_setup} The proposed performance assessment is demonstrated on the test suites listed in Table~\ref{tab:suites}. In particular, three-objective C-DTLZ and DC-DTLZ problems were considered with the default number of constraints. Additionally, a difficulty triplet of (0.5, 0.5, 0.5) was used for the DAS-CMOP suite, as this is by far the most frequently used difficulty triplet in the literature. Three dimensions of the search space, $D \in \{5, 10, 30\}$, were used to evaluate the proposed performance assessment methodology and compare the test suites. All the Pareto fronts can be expressed analytically, and the corresponding hypervolume values can be calculated exactly.
\begin{table} \centering \caption{The population size $N_{\mathrm{p}}$ and number of generations $N_{\mathrm{g}}$ used in the experimental analysis, based on the number of objectives $M$ and the dimension of the search space $D$.} \label{tab:settings} \scriptsize \begin{tabular}{l@{\hspace*{0.035\textwidth}}lll@{\hspace*{0.035\textwidth}}lll} \hline & \multicolumn{3}{c}{$M=2$} & \multicolumn{3}{c}{$M=3$} \\ \cline{2-7} $D$ & \phantom{00}5 & \phantom{0}10 & \phantom{00}30 & \phantom{00}5 & \phantom{0}10 & \phantom{00}30 \\ \hline $N_{\mathrm{p}}$ & 200 & 200 & \phantom{0}200 & 300 & 300 & \phantom{0}300 \\ $N_{\mathrm{g}}$ & 300 & 600 & 1800 & 200 & 400 & 1200 \\ \hline \end{tabular} \end{table} All MOEAs were run with an equal population size, $N_{\mathrm{p}}$, and the same number of generations, $N_{\mathrm{g}}$. In particular, the population size was set to $N_{\mathrm{p}} = 100M$. The number of generations was set to $N_{\mathrm{g}} = 120D/M$, which was selected as approximately the minimal value needed to obtain convergence for all the MOEAs (in total, $12000D$ function evaluations). Note that the division by $M$ in the expression for $N_{\mathrm{g}}$ is necessary to enable aggregation over CMOPs with different numbers of objectives. The resulting values of $N_{\mathrm{p}}$ and $N_{\mathrm{g}}$ are shown in Table~\ref{tab:settings}. The other parameters of the algorithms and their operators were set to their default values~\cite{Deb14, Li2019, Fan2019a}: Polynomial mutation was used in all the MOEAs, with the mutation probability set to $1/D$ and the distribution index to 20. Simulated binary crossover was used in NSGA-III and C-TAEA with a crossover probability of 1 and a distribution index of 30. In contrast, a differential-evolution-based crossover was used in MOEA/D with a crossover probability of 1 and a scaling factor of 0.5. Additionally, in MOEA/D, the neighborhood size was set to 30, the probability of neighborhood mating to 0.9, the maximal number of solutions replaced by a child to 2, $\tau$ to 0.1, $\alpha$ to 0.95, $T_{\mathrm{c}}$ to 0.8, and $\theta$ to $0.05N_{\mathrm{p}}$. Finally, ERDs were computed without employing restarts or bootstrapping. \subsection{Target Precision Values for CMOPs} The values of the distance metric $d$ defined in (\ref{eq:dist}), as well as those of the overall constraint violation $v$, can be of different magnitudes for different CMOPs. Consequently, it is impossible to define a single set of target precision values that would provide meaningful results for all the studied CMOPs. As we wanted to compare different CMOPs and suites, we first sampled 100 solutions $\{x^i\}_i$ for each CMOP and normalized $d$ and $v$ as follows: \begin{equation} \widetilde{d} = d /10^{\lceil\log(\text{med}(\{d^i\}_i))\rceil} \end{equation} and \begin{equation} \widetilde{v} = v /10^{\lceil\log(\text{med}(\{v^i\}_i))\rceil} \end{equation} where $\text{med}(\{d^i\}_i)$ and $\text{med}(\{v^i\}_i)$ are the median values of the sets $\{d^i\}_i=\{d(x^i, Z)\}_i$ and $\{v^i\}_i=\{v(x^i)\}_i$, respectively. After applying this procedure and performing several experiments, we set $\tau^*$ to 1. Additionally, a good set of target precision values for $I^\mathrm{CMOP}$ corresponds to $\tau(\varepsilon) = \tau^\mathrm{ref} + \varepsilon$, where $\varepsilon \in \{10^k \mid k \in \{-5, -4.9, \dots, 0\}\} \cup \{1 + 10^k \mid k \in \{-5, -4.9, \dots, 0\}\}$. The first half of these target precision values, $\tau^+$, applies to feasible solutions and represents how well the algorithm approximates the Pareto front, while the second half, $\tau^-$, is used to understand the algorithm performance in satisfying the constraints; a sketch of this target grid is given below.
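A minimal sketch generating this grid (our own illustration; \texttt{tau\_ref} is the per-problem reference value and is set to 0 here only as a placeholder):

\begin{verbatim}
import numpy as np

tau_ref = 0.0  # per-problem reference indicator value (assumed given)

# 51 exponents from -5 to 0 in steps of 0.1.
eps_front = 10.0 ** np.arange(-5.0, 0.05, 0.1)  # tau^+ (front approx.)
eps_constr = 1.0 + eps_front                    # tau^- (shifted by tau*)
targets = tau_ref + np.concatenate([eps_front, eps_constr])
\end{verbatim}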
\subsection{Implementation Details} \label{sec:implementation_details} All the CMOPs, MOEAs and the performance measurement procedure were implemented in the Python programming language~\cite{Rossum09}. We used the \texttt{pymoo}~\cite{Blank20} implementations of CTP, DAS-CMOP, MW, NSGA-III, C-TAEA and the hypervolume calculation. The rest of the suites, MOEA/D-IEpsilon and the other functionalities were implemented from scratch. \section{Results and Discussion} \label{sec:results} In this section, we first present the experimental results. Next, we discuss the existing test suites of CMOPs and their potential in differentiating algorithms. Finally, we present some limitations of the proposed methodology. \subsection{Results} Let us first look at the results for a single problem, taking MW12 as an example. Fig.~\ref{fig:ecdfs_mw13} shows the ERDs for this problem aggregated over 30 runs for each of the three algorithms. To aid visualization and comparison, the function evaluations ($x$-axis) are divided by the problem dimension and shown in logarithmic scale. The horizontal dashed line divides the targets into $\tau^-$ and $\tau^+$. Intuitively, an ERD crossing this line indicates that a feasible solution has been found; note that this intuition is true only for a single run without aggregation. Nevertheless, if an ERD (single or aggregated) starts above this line, then all the algorithm runs started in the feasible region---the first initialized solution is feasible. On the other hand, if an ERD starts below the dashed line and never crosses it, then the corresponding algorithm runs never reached the feasible region. Moreover, the line is thicker when some, but not all, runs have found feasible solutions. As we can see, all the MOEAs reach all the targets for the two smaller dimensions, while NSGA-III fails to reach some targets on the 30-$D$ problem. All the algorithms are able to find feasible solutions in all of the runs, with NSGA-III being generally the quickest in this regard. \begin{figure*}[!t] \centering \subfloat[MW12 ($D=5$)]{\includegraphics[clip, trim={0, 10pt, 0, 25pt}, width=0.33\textwidth]{ECDF_MW12-5}% \label{fig:ecdfs_mw12_d5}} \hfil \subfloat[MW12 ($D=10$)]{\includegraphics[clip, trim={0, 10pt, 0, 25pt}, width=0.33\textwidth]{ECDF_MW12-10}% \label{fig:ecdfs_mw12_d10}} \hfil \subfloat[MW12 ($D=30$)]{\includegraphics[clip, trim={0, 10pt, 0, 25pt}, width=0.33\textwidth]{ECDF_MW12-30}% \label{fig:ecdfs_mw12_d30}} \caption{Empirical runtime distributions aggregated over multiple runs for the MW12 problem for the three MOEAs in dimension 5 (left), 10 (center) and 30 (right).} \label{fig:ecdfs_mw13} \end{figure*} The aggregated results over all problems of a suite are shown in Figs.~\ref{fig:ecdfs_1}~and~\ref{fig:ecdfs_2} on the left-hand side of each subfigure. For example, Fig.~\ref{fig:CTP_d5} shows the ERDs for the selected MOEAs aggregated over all problems from the CTP suite in 5-$D$. On the right-hand side of each subfigure we see violin plots of the distributions of the $\Delta^+$ (top left), $\Delta^-$ (bottom left), and $\Delta$ (right) values. Each of these values was computed for 30 runs of each pair of algorithms on each problem and is represented in the plot as a black dot.
The first column shows the distributions for all the considered CMOPs and the rest of the columns correspond to each suite separately. The $y$-axis depicts the $\Delta^+$, $\Delta^-$, or $\Delta$ value, while the $x$-axis has no specific meaning and is used solely for better visualization. The violin plot (colored area) approximates the probability density function for $\Delta^+$, $\Delta^-$, or $\Delta$ values. For example, Fig.~\ref{fig:MW_d5} shows there are more problem instances in the MW suite with $\Delta \approx 0.05$ than those with $\Delta \approx 0.10$. In addition, there are no problem instances with $\Delta > 0.25$. \begin{figure*}[!t] \centering \subfloat[CTP ($D=5$)]{\includegraphics[width=0.33\textwidth]{plot-CTP-5}% \label{fig:CTP_d5}} \hfil \subfloat[CTP ($D=10$)]{\includegraphics[width=0.33\textwidth]{plot-CTP-10}% \label{fig:CTP_d10}} \hfil \subfloat[CTP ($D=30$)]{\includegraphics[width=0.33\textwidth]{plot-CTP-30}% \label{fig:CTP_d30}} \hfil \subfloat[CF ($D=5$)]{\includegraphics[width=0.33\textwidth]{plot-CF-5}% \label{fig:CF_d5}} \hfil \subfloat[CF ($D=10$)]{\includegraphics[width=0.33\textwidth]{plot-CF-10}% \label{fig:CF_d10}} \hfil \subfloat[CF ($D=30$)]{\includegraphics[width=0.33\textwidth]{plot-CF-30}% \label{fig:CF_d30}} \hfil \subfloat[C-DTLZ ($D=5$)]{\includegraphics[width=0.33\textwidth]{plot-C-DTLZ-5}% \label{fig:C-DTLZ_d5}} \hfil \subfloat[C-DTLZ ($D=10$)]{\includegraphics[width=0.33\textwidth]{plot-C-DTLZ-10}% \label{fig:C-DTLZ_d10}} \hfil \subfloat[C-DTLZ ($D=30$)]{\includegraphics[width=0.33\textwidth]{plot-C-DTLZ-30}% \label{fig:C-DTLZ_d30}} \hfil \subfloat[NCTP ($D=5$)]{\includegraphics[width=0.33\textwidth]{plot-NCTP-5}% \label{fig:NCTP_d5}} \hfil \subfloat[NCTP ($D=10$)]{\includegraphics[width=0.33\textwidth]{plot-NCTP-10}% \label{fig:NCTP_d10}} \hfil \subfloat[NCTP ($D=30$)]{\includegraphics[width=0.33\textwidth]{plot-NCTP-30}% \label{fig:NCTP_d30}} \hfil \caption{Results of the three MOEAs on CMOPs from CTP, CF, C-DTLZ, and NCTP suites. The left plot of each subfigure shows empirical runtime distribution aggregated over all CMOPs in the suite and all targets in dimension 5 (left), 10 (center) and 30 (right). On the right of each subfigure, violin plots depict distributions of $\Delta^+$ (top left), $\Delta^-$ (bottom left), and $\Delta$ (right) values. 
The larger the diversity, the better.} \label{fig:ecdfs_1} \end{figure*} \begin{figure*}[!t] \centering \subfloat[DC-DTLZ ($D=5$)]{\includegraphics[width=0.33\textwidth]{plot-DC-DTLZ-5}% \label{fig:DC-DTLZ_d5}} \hfil \subfloat[DC-DTLZ ($D=10$)]{\includegraphics[width=0.33\textwidth]{plot-DC-DTLZ-10}% \label{fig:DC-DTLZ_d10}} \hfil \subfloat[DC-DTLZ ($D=30$)]{\includegraphics[width=0.33\textwidth]{plot-DC-DTLZ-30}% \label{fig:DC-DTLZ_d30}} \hfil \subfloat[DAS-CMOP ($D=5$)]{\includegraphics[width=0.33\textwidth]{plot-DAS-CMOP-5}% \label{fig:DAS-CMOP_d5}} \hfil \subfloat[DAS-CMOP ($D=10$)]{\includegraphics[width=0.33\textwidth]{plot-DAS-CMOP-10}% \label{fig:DAS-CMOP_d10}} \hfil \subfloat[DAS-CMOP ($D=30$)]{\includegraphics[width=0.33\textwidth]{plot-DAS-CMOP-30}% \label{fig:DAS-CMOP_d30}} \hfil \subfloat[LIR-CMOP ($D=5$)]{\includegraphics[width=0.33\textwidth]{plot-LIR-CMOP-5}% \label{fig:LIR-CMOP_d5}} \hfil \subfloat[LIR-CMOP ($D=10$)]{\includegraphics[width=0.33\textwidth]{plot-LIR-CMOP-10}% \label{fig:LIR-CMOP_d10}} \hfil \subfloat[LIR-CMOP ($D=30$)]{\includegraphics[width=0.33\textwidth]{plot-LIR-CMOP-30}% \label{fig:LIR-CMOP_d30}} \hfil \subfloat[MW ($D=5$)]{\includegraphics[width=0.33\textwidth]{plot-MW-5}% \label{fig:MW_d5}} \hfil \subfloat[MW ($D=10$)]{\includegraphics[width=0.33\textwidth]{plot-MW-10}% \label{fig:MW_d10}} \hfil \subfloat[MW ($D=30$)]{\includegraphics[width=0.33\textwidth]{plot-MW-30}% \label{fig:MW_d30}} \hfil \caption{Results of the three MOEAs on CMOPs from DC-DTLZ, DAS-CMOP, LIR-CMOP, and MW suites. The left plot of each subfigure shows empirical runtime distribution aggregated over all CMOPs in the suite and all targets in dimension 5 (left), 10 (center) and 30 (right). On the right of each subfigure, violin plots depict distributions of $\Delta^+$ (top left), $\Delta^-$ (bottom left), and $\Delta$ (right) values. The larger the diversity, the better.} \label{fig:ecdfs_2} \end{figure*} \subsection{Test Suites Evaluation} As already discussed in Section~\ref{sec:performance_space_comparison}, a well-designed test suite should include a wide variety of problems that can differentiate among MOEAs ($\Delta$). Since we are dealing with constrained problems, we are particularly interested in evaluating the ability of the problems to differentiate among the algorithms with respect to constraint handling ($\Delta^-$). \begin{itemize} \item CTP: As we can see, for all problems the algorithms start in the feasible space (Figs.~\ref{fig:CTP_d5}, \ref{fig:CTP_d10}, and \ref{fig:CTP_d30}). The main difficulty they face is front approximation. This is additionally confirmed by the violin plots showing that $\Delta^-$ equals 0 for all dimensions. \item CF: Unlike for CTPs, the MOEAs find no feasible solutions at the very beginning of the evolution process (Figs.~\ref{fig:CF_d5}, \ref{fig:CF_d10}, and \ref{fig:CF_d30}). Interestingly, the difference in algorithm performance originates mainly in constraint satisfaction. \item C-DTLZ: From the performance space perspective, this suite is well-designed. The algorithms struggle to find feasible solutions in the initial phase of the evolution process (Figs.~\ref{fig:C-DTLZ_d5}, \ref{fig:C-DTLZ_d10}, and \ref{fig:C-DTLZ_d30}). In addition, the suite can differentiate between algorithms in both constraint satisfaction and front approximation. 
\item NCTP: Although the three MOEAs need a large number of function evaluations to reach a feasible region, their main challenge is front approximation (Figs.~\ref{fig:NCTP_d5}, \ref{fig:NCTP_d10}, and \ref{fig:NCTP_d30}). The vast majority of the difference in algorithm performance also comes from front approximation. \item DC-DTLZ: Figs.~\ref{fig:DC-DTLZ_d5}, \ref{fig:DC-DTLZ_d10}, and \ref{fig:DC-DTLZ_d30} show that all three MOEAs struggle to obtain feasible solutions, which are discovered only late in the evolution process. Like CFs, the DC-DTLZ suite is especially good at differentiating the constraint satisfaction part of algorithm performance. \item DAS-CMOP: As we can see, for all the DAS-CMOPs the NSGA-III algorithm always starts with a feasible solution, while this is not true for the other two MOEAs (Figs.~\ref{fig:DAS-CMOP_d5}, \ref{fig:DAS-CMOP_d10}, and \ref{fig:DAS-CMOP_d30}). Nevertheless, feasible solutions are easily discovered by all the algorithms. Moreover, algorithm performance differences arise almost exclusively in front approximation, since $\Delta^- \approx 0$ for all problems and MOEAs. \item LIR-CMOP: The performance space characteristics of this suite are very similar to those of NCTPs. Although the studied algorithms need some time to find feasible solutions, the main difference in algorithm performance lies in the front approximation phase (Figs.~\ref{fig:LIR-CMOP_d5}, \ref{fig:LIR-CMOP_d10}, and \ref{fig:LIR-CMOP_d30}). \item MW: From the performance space perspective, MW is one of the most versatile and well-designed artificial test suites found in the literature. It is the best among the studied suites in differentiating the three MOEAs (Figs.~\ref{fig:MW_d5}, \ref{fig:MW_d10}, and \ref{fig:MW_d30}). Moreover, as shown by the violin plots, the algorithm performance is diverse in both constraint satisfaction and front approximation. \end{itemize}
\subsection{Limitations} We see two potential limitations of the proposed methodology for evaluating the performance space. Firstly, the results can be severely affected by the selection of the algorithms and their budgets; secondly, the choice of target precision values also has a great impact on the outcome. To alleviate the first issue, we selected three different MOEAs equipped with distinct CHTs. Additionally, the number of function evaluations was set large enough to ensure convergence, thus revealing the differences between the algorithms. Finally, the logarithmic scale was applied to the budget so as not to bias the results towards the tail of the convergence graphs, where the algorithms have already converged. On the other hand, we were not able to satisfactorily address the issue of some targets having a greater impact than others. For example, there is no assurance that progressing from target $1 + 10^{-4.2}$ to target $1 + 10^{-4.3}$ is as important or as difficult as progressing from target $1 + 10^{-4.3}$ to target $1 + 10^{-4.4}$ for all the problems and algorithms at hand. Nevertheless, using the target approach with logarithmically spaced targets is argued to be much more effective for comparing algorithm performance than relying on regular convergence graphs alone~\cite{Hansen2022}.
\section{Conclusions} \label{sec:conclusions} This paper presents a holistic investigation of the existing artificial test CMOPs from a performance space perspective.
Firstly, we have proposed a performance assessment methodology capable of simultaneously monitoring both front approximation and constraint satisfaction. This methodology is an extension of the approach used by the COCO platform for unconstrained bi-objective optimization problems. Next, the resulting performance assessment methodology has been used to analyze and contrast eight artificial test suites. In particular, the test suites have been assessed with respect to their effectiveness in differentiating between three well-known MOEAs. Finally, the paper discusses the advantages and drawbacks of the existing artificial test suites and discloses some limitations of the proposed methodology. The experimental results show that the CF, DC-DTLZ, and especially MW suites have the greatest potential in differentiating the three MOEAs. They all include multiple CMOPs that can separate the MOEAs in both front approximation and constraint satisfaction. Additionally, our findings indicate that half of the artificial test suites fail to satisfactorily differentiate among the three MOEAs. This suggests that CMOPs from those suites provide limited information for the algorithm designer and are thus of little value for benchmarking purposes. Finally, we saw that the predominant source of complexity in artificial test CMOPs is front approximation. As for future work, we suggest extending the proposed methodology to CMOPs with more than three objectives. In particular, since the hypervolume calculation is expensive in high-dimensional objective spaces, one could investigate the effect of using different performance indicators, e.g., inverted generational distance, epsilon indicator, etc. Additionally, the potential of the proposed methodology in studying algorithm behavior while solving real-world problems should also be addressed. Measuring algorithm performance in this case is especially challenging, as in a real-world scenario the Pareto front is usually unknown. Finally, the proposed methodology should be tested with additional MOEAs to further support our findings. \ifCLASSOPTIONcaptionsoff \newpage \fi \bibliographystyle{IEEEtran}
\section{Introduction} The Fabry–P\'{e}rot cavity-type multilayer photonic structures (PSs) based on distributed Bragg mirrors with a defect layer between them, which are characterized by the appearance of transmission peaks in the band gaps, are promising optical materials for creating functional elements of nanophotonic and optoelectronic devices \cite{1,2}. These peaks correspond to the spatial distributions of light fields called localized (defect) modes. In general, all-dielectric devices of this type cannot operate simultaneously in the transmission and reflection regimes, since these regimes complement each other: at the mode frequencies, the transmittance peaks correspond to dips against a wide reflection band coinciding with the photonic band gap (PBG). The use of ultrathin metallic films with a thickness much less than the light wavelength as additional elements of multilayer structures allows one not only to optimize their reflectivity \cite{3,4,5} but also to create fundamentally new structures capable of operating in the selective transmission and reflection modes simultaneously. In particular, based on an asymmetric dielectric Fabry–P\'{e}rot cavity containing an ultrathin metallic film with the refractive index $n_{m}=n-ik$, the real and imaginary parts of which should satisfy the condition $n\approx k$, a filter operating in both regimes was developed \cite{6}. The theoretical and experimental studies showed that the reflectance and transmittance are maximum at a central wavelength of $\lambda_{0}=700$~nm. At the other wavelengths, the metal, being an effective absorber, ensured a high rejection level in reflection. The possibility of the angular tuning of the properties of the new reflection-and-transmission filter was theoretically investigated in \cite{7}. It was demonstrated that, as the angle of incidence of the probe radiation increases, the reflection and transmission peaks synchronously shift toward shorter wavelengths for both the TM and TE polarizations. However, for mode frequency tuning, field control techniques are more suitable in nanophotonic and optoelectronic devices. In practice, photonic structures with liquid-crystal (LC) components are highly promising. The properties of LCs, including their wide spectral transparency range, large birefringence, and high sensitivity to external factors (temperature, magnetic, and electric fields), open up wide opportunities for efficient control of the spectral and optical characteristics of PSs \ins{\cite{Kitz,Oz1,Oz2,8,9,10}}. Among LCs, of particular interest are the dual-frequency cholesteric and nematic mixtures, which conventionally exhibit positive dielectric anisotropy at low frequencies and negative dielectric anisotropy above a certain frequency of an applied electric field. Therefore, the alignment of the molecules and, consequently, the optical properties of the LC layers can be significantly changed by switching the frequency of an applied voltage \cite{11,12}, which makes it possible to use LCs as controllable elements in reflection-and-transmission PSs. At the same time, the features of the field-induced structural transformations in the LC layer impose certain restrictions on its thickness. Layers with a thickness of several microns are considered to be optimal \cite{13}. It should be noted that the approach developed in \cite{6,7} was implemented primarily for single-mode PSs, in which the optical thickness of the structural defect is that of a half-wave layer.
In this case, a single transmission peak and a single reflection peak arise within the band gap, at a frequency that coincides with the Bragg frequency of the periodic structure of the mirrors. Meanwhile, the applicability of this approach is unobvious in microcavities, which are multimode structures with a defect layer thickness of several microns, since in this case a series of modes for any polarization inevitably arises within the band gap. Here, the Bragg frequency does not necessarily coincide with the frequency of any of the modes; in particular, it can be located in the free spectral range (the wavelength separation between two successive reflected or transmitted optical intensity maxima). At present, multilayer photonic structures with a metal film are investigated within the concept of a Tamm plasmon-polariton (TPP) \cite{14,15,16}. \ins{Its hybridization with microcavity modes is still an exotic object of study \cite{Kal,Bruk,Pankin}. Moreover, to the best of our knowledge, hybrid states of a broadband TPP and microcavity modes have not yet been investigated.} This approach could provide new insight into the optical properties of reflection-and-transmission PSs and expand the range of their applications, in particular, for observing new physical effects and phenomena. In \cite{14, 15}, based on the coupled mode theory, the possibility of localized state formation at the interface between a Bragg mirror and a thin metallic layer was theoretically predicted and experimentally demonstrated. As was shown in \cite{16}, the use of chromium as a metal leads to the excitation of a broadband TPP. In view of the aforesaid, in this study, we investigate the possibility of synchronous tuning of optical modes in the transmittance and reflectance spectra of a multimode reflection-and-transmission PS consisting of an asymmetric dielectric Fabry–P\'{e}rot microcavity and an ultrathin chromium film with the optical constants $n\approx k$. A dual-frequency nematic mixture is used as an electrically controlled structural element. The spectra of the eigenmodes corresponding to the extraordinary waves are detected in reflection and transmission simultaneously. The experimental data are compared with the results \ins{of the numerical simulation by the Berreman $4\times4$-matrix technique}.
\section{Experimental} The investigated PS consisted of two Bragg mirrors with an ultrathin metallic chromium (Cr) film ($\sim12$~nm) on the input mirror and a nematic LC layer between them as a structural defect: Sub/Cr(LH)$^m$--D--(HL)$^n$H (Fig.~\ref{fig1}a), where L (SiO$_2$) and H (ZrO$_2$) are dielectric optically isotropic layers with a low ($n_{\text{L}} = 1.45$) and a high ($n_{\text{H}} = 2.05$) refractive index, and $m = 3$ and $n = 5$ are the numbers of the LH and HL bilayers (periods) to the left and right of defect layer D, respectively. Quartz glass substrates were used. A multilayer structure of mirrors was formed by alternate vacuum deposition of the ZrO$_2$ and SiO$_2$ oxides onto a substrate. According to the TEM data, the layer thicknesses were $d_{\text{L}} = (89 \pm 5)$~nm and $d_{\text{H}} = (66 \pm 5)$~nm. The periodicity of the structure produces a photonic band gap in the transmittance spectrum in the wavelength range of $420\div615$~nm. The presence of a metallic film on the input mirror forms a specific rejection band in the reflectance spectrum, the edges of which coincide with the PBG edges.
Therefore, the overall reflectance profile $R(\lambda)$ of the PS acquires the form characteristic of the transmittance spectrum $T(\lambda)$. In turn, the periodicity violation leads to the appearance of resonance transmission and reflection peaks in both bands at the same wavelengths, which correspond to the optical modes localized on the defect layer. A dual-frequency nematic mixture MLC-2048 (Merck) was used as defect layer D. The dielectric anisotropy of this LC at $20^\circ$C is $\Delta\epsilon = +3.2$ at an applied electric field frequency of $f_1 = 1$~kHz and $\Delta\epsilon=-3.1$ at $f_2 = 50$~kHz \cite{12}. The nematic layer thickness was $d_{\text{LC}} = (7.0 \pm0.2)$~$\mu$m. In the initial state, a hybrid configuration of director \textbf{n} was formed in the nematic layer (Fig.~\ref{fig1}a). The hybrid alignment of director \textbf{n} is the distorted configuration created by the planar (P) orientation on the input mirror and the homeotropic (H) orientation on the other mirror. To obtain this configuration, the mirrors were coated with polyvinyl alcohol (PVA) (Sigma Aldrich) and the surfactant N1,N2-Didodecyl-N1,N1,N2,N2-tetra-methylethane-1,2-diammoniumdibromide (Belarusian State Technological University), respectively. Transparent indium tin oxide (ITO) electrodes with a thickness of $\sim150$~nm deposited onto the surface of the multilayers make it possible to control the structural transformations in the nematic by an electric field directed along the normal to the layer plane.
\begin{figure} \centerline{\includegraphics[width=6cm]{fig1.png}} \caption{ (a) Configuration of the Cr-PS/LC cell. The LC hybrid configuration is implemented in the initial state ($U = 0$~V). Nematic mixture molecules are shown by elongated ellipsoids. (b) Scheme of the experimental setup for simultaneous recording of the transmittance and reflectance spectra of the photonic structure. The applied electric field \textbf{E} is directed along the $z$-axis; P is the polarizer. } \label{fig1} \end{figure}
The spectral positions of the transmission and reflection peaks of the Cr-PS/LC cell eigenmodes in the regimes of the structural transformations “hybrid configuration -- quasi-homeotropic state” and “hybrid configuration -- quasi-planar state” in the nematic layer were experimentally investigated using the setup shown schematically in Fig.~\ref{fig1}b. The thermostated sample was mounted such that the nematic director \textbf{n} on the input mirror was aligned along the $x$-axis of the laboratory system of coordinates ($x, y, z$). An ac voltage was applied to the Cr-PS/LC cell using an AHP-3122 function generator (AKTAKOM). The voltage $U$ applied to the sample was measured with a 34465A digital multimeter (Keysight Technologies). The polarized transmittance and reflectance spectra of the Cr-PS/LC cell were recorded using an Ocean Optics HR4000 spectrometer at a constant temperature of $t = (23.0\pm0.2)^\circ$C. A Glan prism used as a polarizer was installed in front of the sample such that the electric field vector \textbf{E} of the incident light wave was parallel to the director \textbf{n} in the nematic surface layer at the input mirror. In this case, the resonance transmittance and reflectance peaks corresponding to the optical \textit{re}-modes are detected.
These modes correspond to traveling extraordinary (\textit{e}) waves with the refractive index changing along the propagation direction,
\begin{equation} n_e(z)=\frac{n_\parallel n_\perp} {\sqrt{n^2_\parallel\cos^2\theta(z)+n^2_\perp\sin^2\theta(z)}}. \label{eq1} \end{equation}
Here, $n_\parallel$ and $n_\perp$ are the refractive indices of the LC for the incident radiation polarized parallel ($\parallel$) and perpendicular ($\perp$) to the director \textbf{n} of a homogeneous nematic layer, and $\theta(z)$ is the angle between the wave vector $\textbf{k} \parallel z$ and the local direction of the director \textbf{n}. Since, during the structural transformations in the nematic, the distribution of the angle $\theta(z)$ and the effective refractive index of the LC medium change, the \textit{re}-modes of the Cr-PS/LC cell, in contrast to the \textit{ro}-modes ($n_o=n_\perp=\text{const}$), are sensitive to the electric field. Below, we only consider the behavior of the modes of this type. The radiation of a broadband light source was introduced into the sample at an angle of $\sim4^\circ$ and coupled out to the spectrometer using optical fibers equipped with collimators. A small angle of incidence of the probe radiation ensured the simultaneous recording of the transmittance and reflectance spectra of the PS.
\section{Results and Discussions} An ac voltage of different frequencies in the range of ($0-10$)~V applied to the sample induces a thresholdless reorientation of the director \textbf{n} in the LC bulk from the initial hybrid configuration shown in Fig.~\ref{fig2}a (top row). In particular, the low-frequency ($f = 1$~kHz) voltage $U_\text{L}$, due to the positive dielectric anisotropy of the nematic, reorients the director \textbf{n} along the electric field orthogonally to the multilayers and, at the maximum voltage of $U = 10$~V, the director configuration has the form shown in Fig.~\ref{fig2}b (top row). The reorientation of the director perpendicular to the electric field occurs at the high-frequency ($f = 50$~kHz) voltage $U_\text{H}$, because, at this frequency, the nematic has negative dielectric anisotropy (Fig.~\ref{fig2}c, top row). The orientation effects modify the optical response of the Cr-PS/LC cell, which can be seen in microphotographs of the PS optical textures in the crossed polarizer geometry (Fig.~\ref{fig2}, bottom row). The texture in Fig.~\ref{fig2}b at a voltage of $U_\text{L}$ = 10~V corresponds to the quasi-homeotropic state, and the texture in Fig.~\ref{fig2}c at a voltage of $U_\text{H}$ = 10~V corresponds to the quasi-planar state. Thus, the color of each texture is determined by the director configuration, taking into account the general features of light transmission through a PS with a PBG. The homogeneity of the textures shown in Fig.~\ref{fig2}, as well as of the textures obtained at intermediate voltages, shows that the director alignment changes uniformly with increasing electric field, in one plane perpendicular to the mirrors, over the entire bulk of the LC layer. Figure~\ref{fig3} shows the experimental (Fig.~\ref{fig3}a) and calculated (Fig.~\ref{fig3}b) polarized transmittance $T(\lambda)$ and reflectance $R(\lambda)$ spectra of the Cr-PS/LC cell at $U = 0$~V, when the orientation of the MLC-2048 nematic mixture corresponds to the hybrid configuration of the director (Fig.~\ref{fig1}a). It can be seen that the spectra of the investigated structure are sets of resonance peaks within the coinciding rejection band and PBG.
The $T$- and $R$-spectra simultaneously recorded in the experiment (Fig.~\ref{fig3}a) show that each transmission peak coincides with the corresponding reflection peak. This means that both peaks correspond to the same $re$-mode, which resonates in the defect layer at a certain frequency.
\begin{figure} \centerline{\includegraphics[width=6.4cm]{fig2.png}} \caption{ Configurations of LC director \textbf{n} (top row) and microphotographs of optical textures of the Cr-PS/LC cell in the crossed polarizers geometry (bottom row): (a) hybrid configuration, (b) quasi-homeotropic state ($f = 1$~kHz), and (c) quasi-planar state ($f = 50$~kHz). {\bf R} is the rubbing direction of the PVA film. Double arrows show the polarizer and analyzer directions. } \label{fig2} \end{figure}
\begin{figure} \begin{center} \includegraphics[width=6.5cm]{fig3a.png} \includegraphics[width=6.5cm]{fig3b.png} \end{center} \caption{ Transmittance spectra $T(\lambda)$ (red dash-and-dot lines) and reflectance spectra $R(\lambda)$ (blue solid lines) of the Cr-PS/LC cell corresponding to the \textit{re}-modes at zero voltage ($U = 0$~V): (a) experimental data and (b) results of the numerical simulation. } \label{fig3} \end{figure}
The numerical simulation of the spectra of the investigated structure (Fig.~\ref{fig3}b) was carried out by the \ins{Berreman $4\times4$-matrix technique} \cite{17}. It took into account the dispersion properties of the Cr-PS/LC cell materials \cite{18,19,20,21,22} and the parameters of the nematic mixture MLC-2048. In particular, the splay and bend elastic constants K$_{11} = (15\pm1)$~pN and K$_{33}=(20\pm1)$~pN obtained by Frederiks transition measurements on pure MLC-2048 at $25^\circ$C \cite{23} were used. For a hybrid cell with nonsymmetric surface angles, a director angle $\theta$ varying from $0^\circ$ to $90^\circ$, and a ratio of K$_{33}/\text{K}_{11}\sim1.3$ between the elastic constants of the nematic, the distribution of the director \textbf{n} over the cell thickness is almost linear \cite{24}. Therefore, the variable refractive index $n_\perp \leqslant n_e(z)\leqslant n_\parallel $ (Eq.~\ref{eq1}) was calculated using a linear angular distribution $\theta(z)$ between the wave vector \textbf{k} and the local director \textbf{n}. It can be seen in Fig.~\ref{fig3} that the experimental and calculated spectral positions of the modes of the investigated structure are in good agreement with each other. \ins{It turned out that the observed discrepancy between the amplitude distributions of the experimental and calculated reflection peaks cannot be eliminated by taking into account the losses caused by the imperfect multilayer structure of the mirrors and the presence of the ITO and alignment layers.
Probably, the presence of hybrid states of the broadband TPP and microcavity modes leads to the appearance of some loss factors which are difficult to reveal.} The Cr-PS/LC cell is, in fact, an asymmetric microcavity with an ultrathin metallic film on the input mirror; therefore, the resonance condition known from the Fabry–P\'{e}rot theory
\begin{equation} \lambda_e=\frac{2\langle n_e\rangle d} {m_e} \label{eq2} \end{equation}
\begin{figure} \begin{center} \includegraphics[width=6.5cm]{fig4a.png} \includegraphics[width=6.5cm]{fig4b.png} \end{center} \caption{ Distribution of the light field intensity at a plasmon-polariton wavelength of 529.4~nm (shown by the arrow) normalized to the input intensity (red lines) and spatial distribution of the refractive index of the structure layers (black lines) under illumination of the Cr-PS/LC cell from (a) the metallic film side and (b) from the Bragg mirror side. Insets: corresponding calculated reflectance profiles $R(\lambda)$. } \label{fig4} \end{figure}
is applicable to the investigated structure. Here, $\langle n_e\rangle=(1/d)\int_0^d n_e (z)dz$ is the effective refractive index of the LC medium (angle brackets indicate averaging over the layer thickness); the integers $m_e$ are the numbers of the defect modes, equal to the number of standing-wave antinodes in the cavity. For the sake of simplicity, the phase change upon reflection from the mirrors is ignored here \cite{25}. Thus, the profile of the transmittance spectrum of the Cr-PS/LC cell differs little from that of the all-dielectric Fabry–P\'{e}rot cavity. On the contrary, the reflectance spectrum of the Cr-PS/LC cell has distinctive features: instead of a wide reflection band, a rejection band is formed, in which narrow reflection peaks appear at the resonant frequencies. \ins{The rejection band} originates from the \ins{formation} of a broadband TPP at the metal/Bragg mirror interface when the structure is illuminated from the metal side \cite{16}. \ins{Figure~\ref{fig4}a shows the light field intensity distribution at a plasmon-polariton wavelength of 529.4~nm under illumination of the Cr-PS/LC cell from the metallic film side.} The plasmon-polariton wavelength is determined from the phase matching condition $|r_m|e^{i\phi_{m}}|r_{bm}|e^{i\phi_{bm}}=1$ \cite{14}, where $\phi_m$, $r_m$, and $\phi_{bm}$, $r_{bm}$ are the phases and amplitude coefficients of the waves reflected from the metallic film and the Bragg mirror, respectively. \ins{As can be seen in Fig.~\ref{fig4}a, the TPP wavelength is located} within the free spectral range in the vicinity of the rejection band center. The calculated distribution of the light field intensity at this wavelength shown in Fig.~\ref{fig4}a \ins{demonstrates} the field localization at the metal/Bragg mirror interface ($z\approx0~\mu$m), \ins{which evidences TPP excitation.}
\begin{figure} \begin{center} \includegraphics[width=6.5cm]{fig5a.png} \includegraphics[width=6.5cm]{fig5b.png} \end{center} \caption{Distribution of the light field intensity for arbitrarily selected modes of the photonic structure (a) with and (b) without the chromium film, and spatial distribution of the refractive index of the structure layers (black lines).} \label{fig5} \end{figure}
The strong absorption induced by the metal provides a fairly high rejection level in reflection of the off-resonant radiation in a wide spectral range, which almost coincides with the PBG.
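It is instructive to estimate the expected positions of the $re$-modes from Eqs.~(\ref{eq1}) and (\ref{eq2}) numerically. The sketch below is not our simulation code: it assumes a strictly linear $\theta(z)$ profile and dispersionless placeholder indices $n_\perp = 1.50$ and $n_\parallel = 1.72$ instead of the measured MLC-2048 constants, so the numbers are only indicative.
\begin{verbatim}
import numpy as np

n_par, n_perp = 1.72, 1.50   # placeholder indices, not measured data
d = 7000.0                   # defect layer thickness, nm
lo, hi = 420.0, 615.0        # photonic band gap, nm

def n_e(theta):
    # Eq. (1): local index of the extraordinary wave
    return n_par * n_perp / np.sqrt(n_par**2 * np.cos(theta)**2 +
                                    n_perp**2 * np.sin(theta)**2)

def modes(theta_in, theta_out):
    # <n_e>: average of Eq. (1) over a linear theta(z) profile
    theta = np.linspace(theta_in, theta_out, 2001)
    n_avg = n_e(theta).mean()
    # Eq. (2): lambda_e = 2 <n_e> d / m_e, kept inside the band gap
    m = np.arange(int(np.ceil(2 * n_avg * d / hi)),
                  int(2 * n_avg * d / lo) + 1)
    return n_avg, 2 * n_avg * d / m

# theta = angle between k || z and the director:
# 90 deg on the planar side, 0 deg on the homeotropic side
for label, (t0, t1) in [("hybrid", (np.pi / 2, 0.0)),
                        ("quasi-homeotropic", (0.0, 0.0)),
                        ("quasi-planar", (np.pi / 2, np.pi / 2))]:
    n_avg, lam = modes(t0, t1)
    print(label, round(n_avg, 3), np.round(lam[:3], 1))
\end{verbatim}
With these placeholder values, the sketch gives a mode spacing of roughly $\lambda^2/(2\langle n_e\rangle d) \approx 12$~nm near the band gap center, of the same order as the free spectral range discussed below, and it reproduces the expected blue shift of the modes for the quasi-homeotropic state ($\langle n_e\rangle \to n_\perp$) and red shift for the quasi-planar state ($\langle n_e\rangle \to n_\parallel$).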
\ins{For comparison, Fig.~\ref{fig4}b shows the light field intensity distribution (at the same wavelength $\lambda =529.4$~nm) calculated under illumination of the structure from the Bragg mirror side. In this case, despite the presence of a metallic film, the TPP is not excited and the reflectance spectrum of the structure becomes complementary to the transmittance spectrum: instead of peaks, there are dips against the background of a wide reflection band.} On the other hand, the appearance of reflection peaks in the rejection band at the resonance frequencies is \ins{caused by the hybridization of the TPP and microcavity modes, which leads to a specific spatial distribution of the wave fields (Fig.~\ref{fig5}).} \ins{ For such a distribution, the nodes of the standing waves of the presented modes are localized on the metal layer (Fig.~\ref{fig5}a). In this case, the absorption factor of the reflected radiation is negligible \cite{6}. The hybridization of the TPP and microcavity modes leads to a drastic change in the reflection spectrum of the photonic structure containing the metal layer. In particular, reflection peaks are observed instead of dips in the rejection band at the resonance frequencies. Obviously, this situation will be realized for any mode in the band gap. Such a specific wave field distribution at the resonance frequencies is apparently retained even upon variation in the refractive index of the LC medium, which makes it possible, as we show below, to implement the synchronous tuning of the reflection and transmission peaks during the structural transformations in the nematic. In the absence of the metal film, the localization of the standing wave nodes at $z\approx0~\mu$m disappears (Fig.~\ref{fig5}b). }
\begin{figure} \begin{center} \includegraphics[width=4.2cm]{fig6a.png} \includegraphics[width=4.2cm]{fig6b.png} \includegraphics[width=4.2cm]{fig6c.png} \includegraphics[width=4.2cm]{fig6d.png} \end{center} \caption{ 3D-patterns of the Cr-PS/LC cell consisting of a set of (a, b) transmittance spectra $T(\lambda)$ and (c, d) reflectance spectra $R(\lambda)$. $T$- and $R$-spectra were recorded simultaneously at (a, c) the low-frequency voltage $U_\text{L}$ or (b, d) the high-frequency voltage $U_\text{H}$ in the range of ($0 \div 7$)~V with a step of 0.25~V. } \label{fig6} \end{figure}
Figure~\ref{fig6} shows 3D patterns of the transformation of the experimental transmittance $T(\lambda)$ and reflectance $R(\lambda)$ spectra of the Cr-PS/LC cell with a dual-frequency nematic mixture under the low-frequency voltage $U_\text{L}$ (Fig.~\ref{fig6}a, c) and the high-frequency voltage $U_\text{H}$ (Fig.~\ref{fig6}b, d). Each pair of the $T$- and $R$-spectra was recorded simultaneously, first at the low-frequency and then at the high-frequency voltage in the range of ($0 \div 7$)~V with a step of 0.25~V. Depending on the frequency of the applied voltage, a smooth reorientation of the director \textbf{n} toward the homeotropic state ($U = U_\text{L}$) or toward the planar state ($U = U_\text{H}$) occurs in the ($xz$) plane in the defect LC layer. In the first case, the effective refractive index of the LC medium decreases, $\langle n_e\rangle \to n_\perp$, and, in the second case, it increases, $\langle n_e\rangle \to n_\parallel$. Then, according to resonance condition Eq.~\ref{eq2}, the transmission and reflection peaks shift to the short-wavelength (at $U_\text{L}$) or the long-wavelength (at $U_\text{H}$) spectral range with increasing voltage.
Since each pair of the $T$- and $R$-peaks coinciding in wavelength corresponds to one mode with a certain number $m_e$, the peaks shift synchronously. It can be seen in Fig.~\ref{fig6} that the transformation of the $T$- and $R$-spectra fully reflects the above-mentioned features of both the resonance properties of the entire PS and the electrically induced structural transformations in the defect LC layer. When the voltage is switched off, the nematic layer returns to its initial state corresponding to the hybrid configuration, and the $T$- and $R$-peaks return to their initial spectral positions.
\begin{figure} \centerline{\includegraphics[width=6cm]{fig7.png}} \caption{ Spectral positions of the maxima of the coinciding $T$- and $R$-peaks for three \textit{re}-modes at wavelengths of 505.7~nm (open and closed squares), 515.3~nm (open and closed diamonds), and 525.5~nm (open and closed circles) as a function of the applied low-frequency (blue symbols) and high-frequency (red symbols) voltage. } \label{fig7} \end{figure}
Figure~\ref{fig7} shows the field dependencies of the spectral positions of the maxima of three \textit{re}-modes of the Cr-PS/LC cell at the PBG center in the voltage range of ($0 \div 7$)~V. Here, each point corresponds to two experimental wavelengths, for the transmission and reflection peaks (Fig.~\ref{fig6}). The good agreement between these values at each step of the applied voltage is indicative of the synchronous tuning of the $T$- and $R$-peaks. The form of these dependencies reflects the thresholdless character of the structural transformations “hybrid configuration $\to$ quasi-homeotropic state” and “hybrid configuration $\to$ quasi-planar state” in the nematic defect layer of the PS. In the considered voltage range, each pair of peaks shifts from its initial position to the red (upper curves) or blue (lower curves) spectral regions, depending on the frequency. The shift value varies from zero to $\sim 26$~nm, which corresponds to approximately two and a half free spectral ranges. It can be seen that the $\lambda(U)$ curves symmetrically diverge with increasing voltage. As a result, the interval between the positions of the same mode at a fixed voltage of low or high frequency is doubled. For example, the central mode 515.3~nm ($U = 0$) occupies the 490~nm position at the low frequency ($U_\text{L} = 7$~V) or the 542~nm position at the high frequency ($U_\text{H} = 7$~V). Thus, the monotonic behavior of the $\lambda(U)$ dependencies allows one to smoothly adjust the range of synchronous switching of the transmission and reflection peaks corresponding to any mode by changing the value of the voltage $U$ applied to the sample, and to implement the switching itself by changing the operating frequency (1~kHz~$\leftrightarrow$~50~kHz).
\section{Conclusion} The electro-optical properties of the Fabry–P\'{e}rot cavity-type multilayer photonic structure based on distributed Bragg mirrors with an ultrathin chromium film were studied. The dual-frequency nematic mixture MLC-2048 with the hybrid configuration of the director in the initial state was used as a defect in the ZrO$_2$/SiO$_2$ periodic structure. In contrast to the transmission, the reflectivity of the structure substantially depends on the incident radiation direction. Under illumination from the side of the second Bragg mirror, the profile of the reflection spectrum $R(\lambda)$ is typical of the all-dielectric Fabry–P\'{e}rot cavity. In particular, a broad reflection band with narrow dips at the defect mode frequencies within the PBG is observed.
Under illumination from the metallic film side, the reflectance spectrum $R(\lambda)$ is reversed. Specifically, instead of the reflection band, a rejection band with the resonance $R$-peaks appears. Thus, the profile of the reflectance spectrum of the structure becomes similar to the profile of the $T(\lambda)$ spectrum. In this case, the spectral positions of the transmission and reflection peaks corresponding to the same mode coincide. The calculation of the wave field distribution for both cases showed that the reversal of the reflection spectrum in the second case is caused by the excitation of a broadband Tamm plasmon-polariton at the metal/Bragg mirror interface. Based on the electric field-induced structural transformations “hybrid~$\to$~quasi-homeotropic state” and “hybrid~$\to$~quasi-planar state” in the nematic defect layer, a synchronous tuning of the transmission and reflection peaks of the photonic structure was implemented. The ratio between the amplitudes of the coinciding $T$- and $R$-peaks can be tuned by changing the number of periods in the (LH)$^m$/(HL)$^n$ structure \cite{6}. Depending on the frequency of the voltage applied to the sample, the peaks shift to either the blue or the red spectral region. The monotonicity of the $\lambda(U)$ dependencies reflects the process of the structural transformations in the nematic layer. The shift value is controlled by the value of the applied voltage. At a maximum operating voltage of 7~V, the shift of the peaks in both directions for the used defect layer thickness is about two and a half free spectral ranges. In addition, switching the frequency (1~kHz~$\leftrightarrow$~50~kHz) at any fixed voltage makes it possible to implement synchronous switching of the spectral positions of the photonic structure modes in the transmittance and reflectance spectra. In this case, the width of the interval for switching modes from one extreme position to another depends on the value of the applied voltage.
\section{Introduction} Mathematical Music Theory is the study of Music from a mathematical point of view. Many connections have been discovered, some of which have a long tradition, but they seem to be still offering new problems and ideas to researchers, whether they be music composers or computer scientists. The first attempt to produce music through a computational model dates back to $1957$, when Hiller and Isaacson composed the Illiac Suite, a string quartet, by means of random number generators and Markov chains \cite{hiller1957musical}. Since then, a plethora of other works have explored how computer science and music can interact: to compose music \cite{wiggins1989representing,zimmermann2001modelling}, to analyse existing compositions and melodies \cite{chemillier2001two,courtot1990constraint,ebciouglu1988expert}, or even to represent the human gestures of the music performer \cite{radicioni2007constraint}. In particular, Constraint Programming has been used to model harmony, counterpoint and other aspects of music (e.g., see \cite{anders2011constraint}), to compose music of various genres as described in the book \cite{anders2018compositions}, or to impose musical harmonization constraints in \cite{pachet2001musical}. In this paper, we deal with Tiling Rhythmic Canons, which are purely rhythmic contrapuntal compositions. For a fixed period $n$, a tiling rhythmic canon is a pair of sets $A,B\subset\{0,1,2,\dots,n-1\}$ such that at every instant there is exactly one voice playing; $A$ defines the sequence of beats played by every voice, $B$ the instants at which the voices start to play. If one of the sets, say $A$, is given, it is well known that the problem of finding a \emph{complement} $B$ has in general no unique solution. It is very easy to find tiling canons in which at least one of the sets is \emph{periodic}, i.e. it is built by repeating a shorter rhythm. From a mathematical point of view, the most interesting canons are therefore those in which both sets are \emph{aperiodic} (the problem can be equivalently rephrased as a search for tessellations of a special kind). Enumerating all aperiodic tiling canons faces two main hurdles: on the one hand, the problem lacks the structure of other algebraic ones, such as those studied in ring or group theory; on the other hand, the combinatorial size of the domain becomes enormous very quickly. Starting from the first works in the 1940s, research has gradually shed some light on parts of the problem from a theoretical point of view, and several heuristics and algorithms that allow one to compute tiling complements have been introduced, but a complete solution appears to still be out of reach. \paragraph{Contributions.} The main contributions of this paper are the Integer Linear Programming (ILP) model and the SAT Encoding for solving the Aperiodic Tiling Complements Problem, presented in Section 3. Using a modern SAT solver, we are able to compute the complete list of aperiodic tiling complements of a class of Vuza rhythms for periods $n \in \{ 180, 420, 900\}$. \paragraph{Outline.} The outline of the paper is as follows. Section \ref{sec:basic_notions} reviews the main notions on Tiling Rhythmic Canons and formally defines the problem we tackle. In Section \ref{sec:our_contribution}, we introduce an ILP model and a SAT Encoding of the Aperiodic Tiling Complements Problem, expressing the tiling and the aperiodicity constraints in terms of Boolean variables.
Finally, in Section \ref{sec:final}, we present our computational results, comparing the efficiency of the aforementioned ILP model and SAT Encoding with the current state-of-the-art algorithms.
\section{The Aperiodic Tiling Complements Problem} \label{sec:basic_notions} We begin by fixing some notation and giving the main definitions. In the following, we conventionally denote the cyclic group of remainder classes modulo $n$ by $\mathbb{Z}_n$ and its elements by the integers $\{0, 1, \dots, n - 1 \}$, i.e. identifying each class with its least non-negative member. \begin{definition}\label{directsum} Let $A, B \subset \mathbb{Z}_n$. Let us define the map \[\sigma:A \times B \rightarrow \mathbb{Z}_n, (a, b) \mapsto a + b.\] We set $A + B: = \mbox{Im}(\sigma)$; if $\sigma$ is bijective we say that $A$ and $B$ {\bf are in direct sum}, and we write \[A \oplus B: = \mbox{Im}(\sigma).\] If $\mathbb{Z}_n = A\oplus B$, we call $(A, B)$ a {\bf tiling rhythmic canon} of {\bf period $n$}; $A$ is called the {\bf inner voice} and $B$ the {\bf outer voice} of the canon. \end{definition} % \begin{remark} It is easy to see that the tiling property is invariant under translations, i.e. if $A$ is a tiling complement of some set $B$, then any translate $A + z$ of $A$ is also a tiling complement of $B$ (and any translate of $B$ is a tiling complement of $A$). In fact, suppose that $A \oplus B = \mathbb{Z}_n$; for every $k, z \in \mathbb{Z}_n$, by definition there exists one and only one pair $(a,b) \in A\times B$ such that $k - z = a + b$. Consequently, there exists one and only one pair $(a + z, b) \in (A + z)\times B$ such that $k = (a + z) + b$, that is, $(A + z)\oplus B =\mathbb{Z}_n$. In view of this, without loss of generality, we shall limit our investigation to rhythms containing 0 and consider equivalence classes under translation. \end{remark} \input{inner-outer-image} \begin{example} We consider a period $n=9$ and the two rhythms $A=\{0,1,5\} \subset \mathbb{Z}_9$ and $B=\{0,3,6\} \subset \mathbb{Z}_9$ in Figure \ref{fig:A} and Figure \ref{fig:B}. They provide the canon $A \oplus B = \mathbb{Z}_9$, since $\{0,1,5\} \oplus \{0,3,6\} = \{0, 3, 6, 1, 4, 7, 5, 8, 2\}$, where the last number is obtained as $(5+6) \mod 9 = 2$. \end{example} \begin{definition}\label{def:period} A rhythm $A \subset\mathbb{Z}_n$ is {\bf periodic (of period $z$)} if and only if there exists an element $z \in \mathbb{Z}_n$, $z\neq 0$, such that $z + A = A$. In this case, $A$ is also called periodic modulo $z\in\mathbb{Z}_n$. A rhythm $A\subset\mathbb{Z}_n$ is {\bf aperiodic} if and only if it is not periodic. \end{definition} Coming back to Example 1, it is easy to note the periodicity $z = 3$ in the rhythm $B=\{0, 3, 6\}$: indeed, $3 + B = B$. Notice that if $A$ is periodic of period $z$, then $z$ must be a strict divisor of the period $n$ of the canon. Tiling rhythmic canons can be characterised using polynomials, as follows. \begin{lemma} \label{lm:pol_equivalence} Let $A$ be a rhythm in $\mathbb{Z}_n$ and let $p_A(x)$ be the {\bf characteristic polynomial} of $A$, that is, $p_A(x)=\sum_{k\in A}x^{k}$. Given $B\subset\mathbb{Z}_n$ and its characteristic polynomial $p_B(x)$, we have that \begin{equation}\label{eq:pol_form} p_A (x)\cdot p_B (x)\equiv \sum_{k=0}^{n-1} x^k,\quad\quad\mod (x^{n} - 1) \end{equation} if and only if $p_A (x), p_B (x)$ are polynomials with coefficients in $\{0,1\}$ and $A\oplus B = \mathbb{Z}_n$.
\end{lemma} \begin{definition} A tiling rhythmic canon $(A,B)$ in $\mathbb{Z}_{n}$ is a {\bf Vuza canon} if both $A$ and $B$ are aperiodic. \end{definition} \begin{remark} \label{aperiodic} Note that a set $A$ is periodic modulo $z$ if and only if it is periodic modulo all the non-trivial multiples of $z$ dividing $n$. % For this reason, when it comes to checking whether $A$ is periodic or not, it suffices to check if $A$ is periodic modulo $m$ for every $m$ in the set of maximal divisors of $n$. % We denote this set by $\mathcal{D}_n$: \begin{equation*} \mathcal{D}_n:=\big\{n/ p \mid p \mbox{ is a prime factor of } n\big\}. \end{equation*} We also denote by $k_n$ the cardinality of $\mathcal{D}_n$, so that $n=p_1^{\alpha_1}p_2^{\alpha_2}\dots p_{k_n}^{\alpha_{k_n}}$ is the unique prime factorization of $n$, where $\alpha_1,\dots,\alpha_{k_n}\in\mathbb{N^+}$. \end{remark} For a complete and exhaustive discussion of tiling problems, we refer the reader to \cite{amiot2011structures}. In this paper, we are interested in the following tiling problem. \begin{definition} Given a period $n\in\mathbb{N}$ and a rhythm $A \subset \mathbb{Z}_n$, the {\bf Aperiodic Tiling Complements Problem} consists in finding all aperiodic complements $B$, i.e., aperiodic subsets $B$ of $\mathbb{Z}_n$ such that $A \oplus B = \mathbb{Z}_n$. \end{definition} Some problems very similar to the tiling decision problem (e.g., the problem DIFF in \cite{Matolcsi}) have been shown to be NP-complete; a strong lower bound for the computational complexity of the tiling decision problem is to be expected, too.
\section{A SAT Encoding } \label{sec:our_contribution} In this section, we present in parallel an ILP model and a new SAT Encoding for the Aperiodic Tiling Complements Problem, both of which are used to enumerate all aperiodic complements of $A$. We define two sets of constraints: (i) the {\it tiling constraints}, which impose the condition $A \oplus B = \mathbb{Z}_n$, and (ii) the {\it aperiodicity constraints}, which impose that the rhythm $B$ is aperiodic. \paragraph{Tiling constraints.} Given the period $n$ and the rhythm $A$, let $\bm a=[a_0,\dots,a_{n-1}]^\intercal$ be its characteristic (column) vector, that is, $a_i=1$ if and only if $i \in A$. Using the vector $\bm a$ we define the circulant matrix $T \in \{0,1\}^{n \times n}$ of the rhythm $A$: each column of $T$ is a circular shift of the first column, which corresponds to the vector $\bm a$. Thus, the matrix $T$ is equal to \[ T= \begin{bmatrix} a_{0} & a_{n-1} & a_{n-2} & \dots & a_{1} \\ a_{1} & a_{0} & a_{n-1} & \dots & a_{2} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{n-1} & a_{n-2} & a_{n-3} & \dots & a_{0} \end{bmatrix}. \] We can use the circulant matrix $T$ to impose the tiling conditions as follows. Let us introduce a literal $x_i$ for $i=0,\dots,n-1$ that represents the characteristic vector of the tiling rhythm $B$, that is, $x_i = 1$ if and only if $i \in B$. Note that a literal is equivalent to a 0--1 variable in ILP terminology. Then, the tiling condition can be written as the following linear constraints: \begin{equation}\label{complementary} \sum_{i \in \{0, \dots, n-1\}} T_{ij} x_i = 1, \quad \forall j = 0, \dots, n-1. \end{equation} Notice that, for each instant $i = 0, \dots, n-1$, the linear constraints \eqref{complementary} impose that exactly one variable (literal) in the set $\{x_{{n+i-j \mod n}}\}_{j\in A}$ is equal to one. Hence, we encode this condition as an {\tt Exactly-one} constraint, that is, exactly one literal can take the value one.
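As a quick illustration, the tiling condition \eqref{complementary} can be checked directly on the example canon of Section~\ref{sec:basic_notions}. The following throwaway Python sketch (variable names are ours, not part of the solver implementation) builds $T$ and verifies that every instant is covered exactly once:
\begin{verbatim}
import numpy as np

n, A, B = 9, [0, 1, 5], [0, 3, 6]    # the example canon of Section 2

a = np.zeros(n, dtype=int)
a[A] = 1                             # characteristic vector of A
# column j of T is the j-step circular shift of a
T = np.column_stack([np.roll(a, j) for j in range(n)])

x = np.zeros(n, dtype=int)
x[B] = 1                             # candidate complement B
# (T x)_i counts the pairs (a, b) with a + b = i (mod n);
# an all-ones vector is equivalent to A (+) B = Z_n in direct sum
print(T @ x)                         # [1 1 1 1 1 1 1 1 1]
\end{verbatim}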
The {\tt Exactly-one} constraint can be expressed as the conjunction of the two constraints {\tt At-least-one} and {\tt At-most-one}, for which standard SAT encodings exist (e.g., see \cite{bailleux2003efficient,philipp2015pblib}). Hence, the tiling constraints \eqref{complementary} are encoded with the following set of clauses depending on $i=0, \dots, n-1$: \begin{equation}\label{sat:compl} \bigvee_{j \in A}\left(x_{n-(j-i) \mod n}\right) \bigwedge_{k,l \in A,k \neq l}\left(\lnot x_{{n-(k-i) \mod n}} \lor \lnot x_{{n-(l-i) \mod n}}\right). \end{equation} \paragraph{Aperiodicity constraints.} In view of Definition \ref{def:period}, if there exists a $b \in B$ such that $(d + b) \mod n \notin B$, then the canon $B$ is not periodic modulo $d$. Notice that, by Remark \ref{aperiodic}, we need to check this condition only for the values of $d \in \mathcal{D}_n$. We formulate the aperiodicity constraints by introducing auxiliary variables $y_{d,i},z_{d,i},u_{d,i} \in \{0,1\}$ for every maximal divisor $d \in \mathcal{D}_n$ and for every integer $i = 0,\dots,d-1$. We set \begin{equation} \label{implications} u_{d,i} = 1 \; \Leftrightarrow \; \left(\sum_{k=0}^{n/d-1} x_{i+kd} = \frac{n}{d}\right) \vee \left(\sum_{k=0}^{n/d-1} x_{i+kd} = 0\right), \end{equation} for all $d \in \mathcal{D}_n$, $i=0,\dots,d-1$, with the condition \begin{equation} \label{sumdivisor} \sum_{i=0}^{d-1} u_{d,i} \leq d-1, \quad \forall d \in \mathcal{D}_n. \end{equation} Similarly to \cite{auricchio2021integer}, the constraints \eqref{implications} can be linearized using standard reformulation techniques as follows: \begin{align} \label{y:1} & 0 \leq \sum_{k=0}^{n/d-1} x_{i+kd} - \frac{n}{d}y_{d,i}\leq \frac{n}{d} - 1 & \forall d \in \mathcal{D}_n,\;\; i=0,\dots,d-1, \\ \label{z:-1} & 0 \leq \sum_{k=0}^{n/d-1} (1-x_{i+kd}) - \frac{n}{d}z_{d,i} \leq \frac{n}{d} - 1 & \forall d \in \mathcal{D}_n, \; \; i=0,\dots,d-1,\\ \label{U} & y_{d,i} + z_{d,i} = u_{d,i} & \forall d \in \mathcal{D}_n, \;\; i=0,\dots,d-1. \end{align} \noindent Notice that when $u_{d,i}=1$ exactly one of the two incompatible alternatives on the right-hand side of \eqref{implications} is true, while whenever $u_{d,i}=0$ both alternatives are false. Correspondingly, the constraint \eqref{U} imposes that the variables $y_{d,i}$ and $z_{d,i}$ cannot be equal to $1$ at the same time. On the other hand, constraint \eqref{sumdivisor} imposes that at least one of the auxiliary variables $u_{d,i}$ be equal to zero. Next, we encode the previous conditions as a SAT formula. To encode the if-and-only-if condition, we make use of the logical equivalence between $C_1 \Leftrightarrow C_2$ and $(\lnot C_1 \lor C_2) \land (C_1 \lor \lnot C_2)$. The clause $C_1$ is given directly by the literal $u_{d,i}$. The clause $C_2$, expressing the right-hand side of \eqref{implications}, i.e. the constraint that the variables must be either all true or all false, can be written as \[ C_2 = \left(\bigwedge_{k=0}^{n/d-1} x_{i+kd}\right) \vee \left(\bigwedge_{k=0}^{n/d-1} \bar{x}_{i+kd}\right), \quad \forall d \in \mathcal{D}_n. \] Then, the linear constraint \eqref{sumdivisor} can be stated as the SAT formula: \[ \lnot \left(u_{d,0} \land u_{d,1} \land \dots \land u_{d,(d-1)}\right) = \bigvee_{l=0}^{d-1} \bar{u}_{d,l}, \quad \forall d \in \mathcal{D}_n.
\] Finally, we express the aperiodicity constraints using \begin{equation}\label{sat:apreriodic} \bigwedge\limits_{i = 0}^{d-1} \left[\left( \lnot C_2 \lor u_{d,i} \right)\land \left( C_2 \lor \bar{u}_{d,i} \right) \right] \land \bigvee_{l=0}^{d-1} \bar{u}_{d,l},\, \forall d \in \mathcal{D}_n. \end{equation} Note that joining \eqref{complementary} and \eqref{y:1}--\eqref{U} with a constant objective function gives a complete ILP model, which can be solved with a modern ILP solver such as Gurobi to enumerate all possible solutions. At the same time, joining \eqref{sat:compl} and \eqref{sat:apreriodic} into a unique CNF formula, we get our complete SAT Encoding of the Aperiodic Tiling Complements Problem (see Section 4 for computational results).
\subsection{Existing solution approaches} For the computation of all the aperiodic tiling complements of a given rhythm, the two most successful approaches known so far are the \emph{Fill-Out Procedure} \cite{kolountzakis2009algorithms} and the {\it Cutting Sequential Algorithm} \cite{auricchio2021integer}. \paragraph{The Fill-Out Procedure.} The \emph{Fill-Out Procedure} is the heuristic algorithm introduced in \cite{kolountzakis2009algorithms}. The key idea behind this algorithm is the following: given a rhythm $A\subset\mathbb{Z}_n$ such that $0\in A$, the algorithm sets $P=\{0\}$ and starts the search for possible expansions of the set $P$. The expansion is accomplished by adding an element $\alpha\in\mathbb{Z}_n$ to $P$ according to the reverse order induced by a ranking function $r(x, P)$, which counts all the possible ways in which $x$ can be covered through a translation of $A$. This defines a new set, $\Tilde{P}\supset P$, which is again expanded until either it can no longer be expanded or it becomes a tiling complement. The search ends when all the possibilities have been explored. The algorithm also finds periodic solutions, which must be removed in post-processing, as well as multiple translations of the same rhythm. \paragraph{The Cutting Sequential Algorithm (CSA).} In \cite{auricchio2021integer}, the authors formulate the Aperiodic Tiling Complements Problem using an Integer Linear Programming (ILP) model that is based on the polynomial characterization of tiling canons. The ILP model uses auxiliary 0--1 variables to encode the product $p_A (x) \cdot p_B(x)$, which characterizes tiling canons. The aperiodicity constraint is formulated analogously to what is done above. The objective function is equal to a constant and does not impact the solutions found by the model. The ILP model is used within a sequential cutting algorithm that adds a no-good constraint every time a new canon $B$ is found, to prevent finding solutions twice. In addition, the sequential algorithm adds a new no-good constraint for every translation of $B$; hence, in contrast to the \emph{Fill-Out Procedure}, the \emph{CSA Algorithm} does not need any post-processing.
\section{Computational Results} \label{sec:final} First, we compare the results obtained using our ILP model and SAT Encoding with the runtimes of the \emph{Fill-Out Procedure} and of the \emph{CSA Algorithm}. We use the canons with periods 72, 108, 120, 144, and 168 that have been completely enumerated by Vuza \cite{Vuza}, Fripertinger \cite{fripertinger2005remarks}, Amiot \cite{amiot2009new}, and Kolountzakis and Matolcsi \cite{kolountzakis2009algorithms}. Table \ref{tab1} clearly shows that the two new approaches outperform the state-of-the-art and, in particular, that SAT provides the best solution approach.
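For concreteness, the tiling clauses are easy to assemble with PySat, the library used for our implementation (see the implementation details below). The following minimal sketch is a simplification, not our full solver: it encodes only the {\tt Exactly-one} constraints via the pairwise cardinality encoding and, instead of adding the clauses \eqref{sat:apreriodic}, filters periodic solutions in post-processing; translated solutions are also not removed.
\begin{verbatim}
from pysat.card import CardEnc, EncType
from pysat.formula import CNF
from pysat.solvers import Solver

def aperiodic_complements(n, A):
    cnf = CNF()                      # literal i + 1 means "i is in B"
    for i in range(n):               # Exactly-one per instant i
        lits = [((i - a) % n) + 1 for a in A]
        cnf.extend(CardEnc.equals(lits=lits, bound=1,
                                  encoding=EncType.pairwise).clauses)
    primes = [p for p in range(2, n + 1)
              if n % p == 0 and all(p % q for q in range(2, p))]
    out = []
    with Solver(name='maplesat', bootstrap_with=cnf) as s:
        for model in s.enum_models():
            B = {i for i in range(n) if model[i] > 0}
            # periodicity needs to be tested only modulo d = n/p
            periodic = any({(b + n // p) % n for b in B} == B
                           for p in primes)
            if not periodic:
                out.append(sorted(B))
    return out

# For n = 9 and A = {0, 1, 5} every tiling complement is periodic,
# so the printed list is empty (Vuza canons require n >= 72)
print(aperiodic_complements(9, [0, 1, 5]))
\end{verbatim}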
We then choose some periods $n$ with more complex prime factorizations, such as $n = p^2q^2r=180$, $n = p^2qrs=420$, and $n = p^2q^2r^2=900$. To find aperiodic rhythms $A$, we apply Vuza's construction \cite{Vuza} with different choices of the parameters $p_1$, $p_2$, $n_1$, $n_2$, $n_3$. Thus, having $n$ and $A$ as inputs, we search for all the possible aperiodic complements and then filter the solutions modulo translation. Since the post-processing is based on sorting canons, it requires a comparatively small amount of time. We report the results in Table \ref{tab2}: the solution approach based on the SAT Encoding is the clear winner. From a Music theory perspective, it is also noteworthy that this is the first time that all the tiling complements of the studied rhythms (their number is reported in the last column of the two tables) have been computed. \paragraph{Implementation Details.} We have implemented the ILP model in Python and the SAT Encoding in PySat \cite{imms-sat18}, as discussed in Section 3. We use Gurobi 9.1.1 as the ILP solver and Maplesat \cite{phdthesis} as the SAT solver. The experiments are run on a Dell Workstation with an Intel Xeon W-2155 CPU with 10 physical cores at 3.3GHz and 32 GB of RAM. In case of acceptance, we will release the source code and the instances on GitHub. \subsubsection{Conclusions and Future Work.} It is conceivable to devise an algorithm that, for a given $n$, finds all the pairs $(A, B)$ that give rise to a Vuza canon of period $n$. This could provide in-depth information on the structure of Vuza canons. \subsubsection{Acknowledgements.} This research was partially supported by: Italian Ministry of Education, University and Research (MIUR), Dipartimenti di Eccellenza Program (2018--2022) - Dept. of Mathematics ``F. Casorati'', University of Pavia; Dept. of Mathematics and its Applications, University of Milano-Bicocca; National Institute of High Mathematics (INdAM) ``F. Severi''; Institute for Advanced Mathematical Research (IRMA), University of Strasbourg.
\begin{table} \caption{Aperiodic tiling complements for periods $n\in\{72,108,120,144,168\}$.} \label{tab1} \vspace{.2cm} \centering \begin{adjustbox}{width=0.9\textwidth} \begin{tabular}{|c|c|c|c|c|c|c|r|r|r|r|r|} \hline \multirow{2}{*}{$n$} &\multirow{2}{*}{$\mathcal{D}_n$}&\multirow{2}{*}{$p_1$}&\multirow{2}{*}{$n_1$}&\multirow{2}{*}{$p_2$}&\multirow{2}{*}{$n_2$}&\multirow{2}{*}{$n_3$}&\multicolumn{4}{c|}{runtimes (s)}& \multirow{2}{*}{$\# B$}\\ \cline{8-11} &&&&&&& \emph{FOP}& \emph{CSA}& \emph{SAT} & \emph{ILP} & \\ \hline \hline 72&$\{24,36\}$&2&2&3&3&2& 1.59 &0.10 &$ < 0.01$ & 0.03 &6\\ \hline \hline 108&$\{36,54\}$&2&2&3&3&3& 896.06 &7.84 &0.09 & 0.19 & 252\\ \hline \hline \multirow{2}{*}{120}&\multirow{2}{*}{$\{24,40,60\}$}&2&2&5&3&2&24.16&0.27&0.02& 0.04 &18\\ &&2&2&3&5&2&10.92&0.14&0.01& 0.04 &20\\ \hline \hline \multirow{4}{*}{144}&\multirow{4}{*}{$\{48,72\}$}&4&2&3&3&2&82.53&2.93&0.02& 0.11 &36\\ &&2&2&3&3&4&$> 10800.00$ &$> 10800.00$&11.04& 46.96 &8640\\ &&2&2&3&3&4&7.13&0.10&$< 0.01$& 0.05 &6\\ &&2&4&3&3&2&80.04&0.94&0.02& 0.08 &60\\ \hline \hline \multirow{2}{*}{168}&\multirow{2}{*}{$\{24,56,84\}$}&2&2&7&3&2&461.53&17.61&0.04& 0.20 &54\\ &&2&2&3&7&2&46.11&0.91&0.02& 0.07 &42\\ \hline \end{tabular} \end{adjustbox} \vspace{1cm} \end{table} \begin{table} \caption{Aperiodic tiling complements for periods $n\in\{180,420,900\}$.} \label{tab2} \vspace{.2cm} \centering \begin{adjustbox}{width=0.82\textwidth} \begin{tabular}{|c@{ }|c@{ }|c@{ }|c@{ }|c@{ }|c@{ }|c@{ }|r@{ }|r@{ }|r@{ }|} \hline \multirow{2}{*}{$n$} &\multirow{2}{*}{$\mathcal{D}_n$}&\multirow{2}{*}{$p_1$}&\multirow{2}{*}{$n_1$}&\multirow{2}{*}{$p_2$}&\multirow{2}{*}{$n_2$}&\multirow{2}{*}{$n_3$}&\multicolumn{2}{c|}{runtimes (s)}& \multirow{2}{*}{$\# B$}\\ \cline{8-9} &&&&&&& \emph{SAT} & \emph{ILP} & \\ \hline \hline \multirow{5}{*}{180}&\multirow{5}{*}{$\{36,60,90\}$}&2&2&5&3&3& 2.57 & 5.62 &2052\\ \cline{3-10} &&3&3&5&2&2 &0.07 & 0.14 &96\\ \cline{3-10} &&2&2&3&5&3 & 1.25 & 2.23 &1800\\ \cline{3-10} &&2&5&3&3&2 & 0.05& 0.16 & 120\\ \cline{3-10} &&2&2&3&3&5& 8079.07 & $> 10800.00$ & 281232\\ \hline \hline \multirow{12}{*}{420} &\multirow{12}{*}{$\{60,84,140,210\}$}&7&5&3&2&2 &2.13 & 3.57 &720 \\ \cline{3-10} &&5&7&3&2&2 & 1.52 & 4.08 &672 \\ \cline{3-10} &&7&5&2&3&2& 7.73 & 16.11 & 3120 \\ \cline{3-10} &&5&7&2&3&2 & 1.63 & 4.18 & 1008 \\ \cline{3-10} &&7&3&5&2&2 & 4.76 & 7.45 & 864 \\ \cline{3-10} &&3&7&5&2&2 & 12.78 & 32.19 & 6720 \\ \cline{3-10} &&7&3&2&5&2& 107.83 & 1186.21 & 33480 \\ \cline{3-10} &&3&7&2&5&2&0.73 & 2.36 & 840\\ \cline{3-10} & &7&2&5&3&2 & 11.14 & 21.19 & 1872 \\ \cline{3-10} &&2&7&5&3&2& 17.31& 52.90 & 10080 \\ \cline{3-10} &&7&2&3&5&2& 89.97 & 691.56 & 22320 \\ \cline{3-10} &&2&7&3&5&2 & 1.17 & 4.13 & 1120 \\ \hline \hline \multirow{5}{*}{900}&\multirow{5}{*}{$\{180,300,450\}$}&2&25&3&3&2 & 43.60 & 110.65 & 15600 \\ \cline{3-10} &&5&10&3&3&2 & 107.36 & 741.79 & 15840 \\ \cline{3-10} & &2&9&5&5&2 & 958.58 & $> 10800.00$ & 118080 \\ \cline{3-10} & & 6&3&5&5&2 &5559.76 &$> 10800.00$ &123840 \\ \cline{3-10} &&3 & 6&5&5&2&486.39 & 8290.35 & 62160\\ \hline \end{tabular} \end{adjustbox} \end{table} \pagebreak \bibliographystyle{splncs04}
\section{Introduction}

General Relativity (GR) \cite{Misner:1974qy,Wald:1984rg,DeSabbata:1986sv} governs the motion of particles near massive compact macroscopic objects by deforming the spacetime around them. Moreover, GR is also responsible for describing the causal structure of spacetime. In the weak field limit, particles on this geometry travel at low velocities and we can describe their motion by Newton's theory of gravity, a non-relativistic weak field limit of GR. The covariant version of this regime is typically known as Newton-Cartan or Galilei gravity, see for instance \cite{Cartan:1923zea,Cartan:1924yea,Trautman:1963aaa,Havas:1964zza,Trautman:1965aaa,Kunzle:1972aaa,Dixon:1975fy,Banerjee:2016laq,Bergshoeff:2017dqq,Hansen:2020pqs,Guerrieri:2020vhp}. In contrast, it is also possible to take the ultra-relativistic (UR) limit of gravity and describe the motion of particles with extremely high energies traveling very close to the speed of light, in a strong field regime. Such a regime is known as Carroll gravity, see \cite{Hartong:2015xda, Bergshoeff:2017btm,Ciambelli:2018xat, Bergshoeff:2014jla, Bergshoeff:2020xhv}. One efficient and systematic way to attain such limits in a gravity theory is to look at some inherent symmetries of gravity. Perhaps the most interesting symmetries to exploit are the local spacetime isometries, because they are directly related to the equivalence principle. Moreover, the local isometries ensure a gauge theoretical character for gravity. In fact, gravity can be thought of as a gauge theory of the Poincar\'e group\footnote{More precisely, only the Lorentz sector is gauged, since the Poincar\'e group is not a semi-simple Lie group, due to the Abelian sector of translations. Nevertheless, the translational sector is still present, since it is associated with a fundamental representation of the Lorentz group.}, describing all local inertial frames \cite{Utiyama:1956sy,Kibble:1961ba,Sciama:1964wt,Mardones:1990qc,Zanelli:2005sa}. Such a description of gravity is also known as the first order formalism of gravity, since the field equations are of first order in the derivatives of the fundamental geometrical fields, \emph{i.e.}, the vierbein and the Lorentz connection. In this scenario, Galilei and Carroll gravities are obtained from suitable contractions of the Poincar\'e group known as In\"on\"u-Wigner contractions \cite{Inonu:1953sp}. Galilei gravity is obtained by considering the limit where the relative velocities between the local inertial frames are small in comparison with the speed of light \cite{Cartan:1923zea,Cartan:1924yea,Trautman:1963aaa,Havas:1964zza,Trautman:1965aaa,Kunzle:1972aaa,Dixon:1975fy,Banerjee:2016laq,Bergshoeff:2017dqq,Hansen:2020pqs,Guerrieri:2020vhp}. The result is a gauge theory for the Galilei group. Under suitable assumptions \cite{Christensen:2013rfa,Banerjee:2014nja,Afshar:2015aku,Abedini:2019voz}, Galilei gravity is equivalent to Newton's theory of gravity. Hence, the Galilei limit assumes the full opening of the light cones. On the other hand, Carroll gravity arises by taking the relative velocities between frames to approach the speed of light, while the speed of light itself tends to zero \cite{Hartong:2015xda, Bergshoeff:2017btm,Ciambelli:2018xat, Bergshoeff:2014jla, Bergshoeff:2020xhv}. Therefore, the Carroll limit assumes the full closing of the light cones. The result is a gauge theory for the Carroll group \cite{Leblond, Bacry1968, Duval:2014lpa}.
Interestingly, Carrollian spacetimes describe the geometry of null hypersurfaces in Lorentzian spacetimes defined with an extra dimension \cite{Ciambelli:2019lap}. It was noticed in \cite{Barducci:2018wuj} that, in Carroll gravity, no interactions between spatially separated events occur. However, when isolated, these objects, initially viewed as immobile, show evolution in time (this effect is known as Carroll causality). For this reason, the Carroll limit is also known as the ultra-local approximation of gravity. Recently, it was discovered that UR gravity has properties associated with the strong force \cite{Fokas:2019zvp}, and the Carroll algebra plays an important role in flat space holography \cite{Ciambelli:2018wre, Duval:2014uva} and in the Bondi-van der Burg-Metzner-Sachs symmetry \cite{Bondi:1962px,Sachs:1962zza,Duval:2014uva,Grumiller:2017sjh}.

In this work we proceed in two parts. In the first part, we study the Carroll limit of the Einstein-Hilbert (EH) action in the first order formalism and study the field equations at formal level. Thence, we consider the four-dimensional Mardones-Zanelli (MZ) action \cite{Mardones:1990qc} describing Lovelock-Cartan (LC) gravity, which is just Lovelock gravity improved with torsional terms. LC gravity is also a first order theory. In fact, in any spacetime dimension, MZ actions are polynomially local in the fields and their derivatives, locally Lorentz invariant, and explicitly metric independent. Thence, the Carroll limit of the MZ action and the corresponding field equations are explored at formal level. In the case of the UR limit of the EH action, we were able to find some novel results. First of all, we identify the emergence of a global scale symmetry similar to the accidental scale symmetry of Galilei gravity \cite{Guerrieri:2020vhp}. This symmetry imposes a restricted form for the matter action coupled to Carroll gravity, which facilitates the analysis. Hence, we are able to find a quite general solution for the curvatures and torsions in the presence of matter. Later, we find the constraints on the matter content in order for the theory to accept Riemannian-like geometries (vanishing torsions and non-vanishing curvatures). The same analysis is performed for Weitzenböck-like manifolds (vanishing curvatures and non-vanishing torsions). We also develop a non-trivial solution where space curvature and time torsion are the only non-trivial field strengths. In that particular example, we were able to compute the lapse and the proper time as functions of the coordinate time. Finally, we confirm the validity of Birkhoff's theorem in the Carroll limit of the EH action. The case of the UR limit of the MZ action is then considered in the second part of the paper. A generalized Carroll action, called here Carroll-Cartan gravity, is obtained and the corresponding field equations are derived. The first property of Carroll-Cartan gravity we find is that the scale symmetry is no longer present, due to the torsional terms. The symmetry, however, can be restored if we extend it to include extra transformations for the torsional coupling parameters. The matter action is assumed to be of the same form as in the Carroll case. In vacuum, Birkhoff's theorem is shown to remain valid. In the presence of matter, a quite general solution is obtained, generalizing the one found in the Carroll case. Finally, we show that Riemannian-like and Weitzenböck-like spacetimes are acceptable in Carroll-Cartan gravity.
The paper is organized as follows: In Section \ref{LC} we construct the LC theory of gravity, restricting ourselves to four dimensions. In Section \ref{IW} we obtain the Carroll group from an IW contraction of the Poincaré group and implement the contraction effects on the fields. In Section \ref{EH}, Carroll gravity is explored at formal level. Then, in Section \ref{GC}, Carroll-Cartan gravity is studied. Finally, our conclusions are displayed in Section \ref{FINAL}.

\section{Lovelock-Cartan gravity}\label{LC}

Our starting point is the MZ action \cite{Mardones:1990qc,Zanelli:2005sa}, describing a gravity theory over a four-dimensional Riemann-Cartan manifold (the spacetime) $M$,
\begin{eqnarray}
S_{MZ}&=&\kappa\int\;\epsilon_{ABCD}\left(R^{AB}e^Ce^D+\frac{\Lambda}{2}e^Ae^Be^Ce^D\right)+\int\left(z_1R^{AB}e_Ae_B+z_2T^AT_A\right)+\nonumber\\
&+&\int\left(z_3\epsilon_{ABCD}R^{AB}R^{CD}+z_4R^{AB}R_{AB}\right)+S_m\;.\label{mz1}
\end{eqnarray}
In the action \eqref{mz1}, $\kappa=1/8\pi G$, with $G$ being the Newton constant, and $\Lambda$ is recognized as the cosmological constant. The constants $z_1$, $z_2$, $z_3$, and $z_4$ are free parameters with no correspondence in GR. The matter content coupled to gravity is denoted by $S_m$. The Lorentz indices (frame indices), denoted by Latin capital letters, run through $A,B,C,\ldots\in\{\underline{0},\underline{1},\underline{2},\underline{3}\}$ (underlined numbers denote frame indices) and can be raised and lowered with the help of the local Minkowski metric $\eta_{AB}=\eta^{AB}\equiv\mathrm{diag}(-,+,+,+)$. The totally antisymmetric object $\epsilon_{ABCD}$ stands for the Levi-Civita symbol in four dimensions. The fields $e^A$ and $e_A$ stand for the vierbein 1-form and its inverse. The 2-form fields $R^{AB}$ and $T^A$ are the curvature and the torsion, respectively given by
\begin{eqnarray}
R^{AB}&=&d\omega^{AB}+\omega^A_{\phantom{A}C}\omega^{CB}\;,\nonumber\\
T^A&=&\nabla e^A\;\;=\;\;de^A+\omega^A_{\phantom{A}B}e^B\;,\label{2forms0}
\end{eqnarray}
with $\omega^{AB}=-\omega^{BA}$ being the Lorentz connection, while $\nabla$ is the Lorentz covariant derivative. The Bianchi identities are easily derived,
\begin{eqnarray}
T^A&=&\nabla e^A\;,\nonumber\\
\nabla T^A&=&R^A_{\phantom{A}B} e^B\;,\nonumber\\
\nabla R^A_{\phantom{A}B}&=&0 \;.\label{hier0}
\end{eqnarray}
The vierbein and its inverse obey the following relations
\begin{eqnarray}
e^A_\mu e^\mu_B&=&\delta^A_B\;,\nonumber\\
e^A_\mu e_A^\nu&=&\delta_\mu^\nu\;,\label{inv0}
\end{eqnarray}
with lower case Greek indices (world indices) running through $\alpha,\beta,\gamma\ldots\in\{0,1,2,3\}$. The vierbein naturally induces a metric $g_{\mu\nu}$ (and its inverse $g^{\mu\nu}$) in $M$ through the relations:
\begin{eqnarray}
g_{\mu\nu}&=&e^A_\mu e^B_\nu\eta_{AB}\;,\nonumber\\
g^{\mu\nu}&=&e_A^\mu e_B^\nu\eta^{AB}\;,\nonumber\\
\eta^{AB}&=&e^A_\mu e^B_\nu g^{\mu\nu}\;,\nonumber\\
\eta_{AB}&=&e_A^\mu e_B^\nu g_{\mu\nu}\;.\label{metrics0}
\end{eqnarray}
The action \eqref{mz1} is actually the most general gravity action in four dimensions which is polynomially local, explicitly metric independent, depends only on first order derivatives, and is gauge invariant under infinitesimal $SO(1,3)$ local Lorentz gauge transformations of the form\footnote{The action \eqref{mz1} is also invariant under finite gauge transformations.
Nevertheless, we will not use such transformations in the present study.}
\begin{eqnarray}
\delta\omega^{AB}&=&\nabla\alpha^{AB}\;,\nonumber\\
\delta e^A&=&\alpha^A_{\phantom{A}B}e^B\;,\label{gt0}
\end{eqnarray}
with $\alpha^{AB}=-\alpha^{BA}$ being an infinitesimal local parameter. Finally, we identify each term in the action \eqref{mz1}: the first two terms are, clearly, the EH action and the cosmological constant term. The terms in $z_1$ and $z_2$ are essentially equivalent, up to a surface term. Moreover, $z_1$ and $z_2$ have mass squared dimension. The last two terms are of topological nature. In fact, the term in $z_3$ is the Gauss-Bonnet action and the term in $z_4$ is recognized as the Pontryagin term \cite{Mardones:1990qc,Zanelli:2005sa,Kobayashi,Nakahara:1990th}. Thence, these last two terms do not contribute to the field equations, and the parameters $z_3$ and $z_4$ are dimensionless topological parameters. Finally, it is interesting to notice that, in the particular case $z_2=-z_1$, the related terms combine into the Nieh-Yan topological term \cite{Mardones:1990qc,Zanelli:2005sa,Nieh:1981ww,Nieh:2007zz,Nieh:2018rlg}. It is worth mentioning that the MZ action generalizes the four-dimensional Lovelock gravity \cite{Lovelock:1971yv} by including torsional terms in the action. The field equations can be easily derived for the fundamental fields $e^A$ and $\omega^{AB}$, providing,
\begin{eqnarray}
\kappa\epsilon_{ABCD}\left(R^{BC}e^D+\Lambda e^Be^Ce^D\right)+(z_1+z_2)R_{AB}e^B&=&-\frac{1}{2}\frac{\delta S_m}{\delta e^A}\;,\nonumber\\
\kappa\epsilon_{ABCD}T^Ce^D+\frac{(z_1+z_2)}{2}\left(T_Ae_B-e_AT_B\right)&=&-\frac{1}{2}\frac{\delta S_m}{\delta\omega^{AB}}\;.\label{feq0}
\end{eqnarray}
It can be verified by simple calculations that the typical asymptotic vacuum solution of these equations is a torsionless maximally symmetric spacetime \cite{Guerrieri:2020vhp}
\begin{eqnarray}
R^{AB}_0&=&-\Lambda e^Ae^B\;,\nonumber\\
T_0^A&=&0\;.\label{dS0}
\end{eqnarray}
Indeed, substituting \eqref{dS0} into \eqref{feq0} in vacuum, the curvature and cosmological terms cancel in the first equation, the $(z_1+z_2)$ term vanishes since $e_Be^B=\eta_{BC}e^Ce^B=0$, and vanishing torsion trivially satisfies the second equation. Clearly, such a solution leads to de Sitter or anti-de Sitter spacetimes, depending on the sign of the cosmological constant. An important property of the LC gravity \eqref{mz1} is the validity of Birkhoff's theorem \cite{Wald:1984rg}. To show that, one sets vanishing torsion and imposes a spherically symmetric form for the line element. Looking at the field equations \eqref{feq0} in vacuum, vanishing torsion automatically satisfies the second equation. Moreover, due to the second Bianchi identity in \eqref{hier0}, the term proportional to $(z_1+z_2)$ in the first equation also vanishes. Hence, at the level of the field equations, vanishing torsion implies the reduction of LC gravity to EH gravity with a cosmological constant. Birkhoff's theorem is thus valid, providing a static Schwarzschild-de Sitter geometry. See also \cite{Obukhov:2020hlp} and references therein.

\section{Poincar\'e and Carroll algebras}\label{IW}

In this section, we discuss the UR limit of the Poincar\'e group in order to obtain the Carroll group. In fact, the gauge symmetry \eqref{gt0} of the MZ action \eqref{mz1} can be described by the Poincar\'e group $ISO(1,3)=SO(1,3)\times\mathbb{R}^{1,3}$, instead of the smaller Lorentz group $SO(1,3)$. Such a description is allowed because the translational generators $\Pi_A$ of the sector $\mathbb{R}^{1,3}$ are also generators of the Lorentz group in the fundamental representation. The Lorentz sector itself has generators denoted by $L_{AB}$, with $L_{AB}=-L_{BA}$.
The Poincar\'e algebra is then given by
\begin{eqnarray}
\left[L_{AB},L_{CD}\right]&=&\frac{1}{2}\left(\eta_{AD}L_{BC}-\eta_{AC}L_{BD}+\eta_{BC}L_{AD}-\eta_{BD}L_{AC}\right)\;,\nonumber\\
\left[L_{AB},\Pi_C\right]&=&\frac{1}{2}\left(\eta_{BC}\Pi_A-\eta_{AC}\Pi_B\right)\;,\nonumber\\
\left[\Pi_A,\Pi_B\right]&=&0\;.\label{poincalg1}
\end{eqnarray}
The first step towards the UR limit of the algebra \eqref{poincalg1} is to decompose the Poincar\'e group into space and time sectors, namely $ISO(1,3)=SO(3)\times L(3)\times\mathbb{R}_s^3\times\mathbb{R}_t$. Obviously, $\mathbb{R}_s^3$ stands for spatial translations and $\mathbb{R}_t$ for time translations. Thence,
\begin{eqnarray}
L_{AB}&\equiv&\left(L_{ab},L_{a\underline{0}}\right)\;\;=\;\;\left(L_{ab},L_a\right)\;,\nonumber\\
\Pi_A&\equiv&\left(\Pi_a,\Pi_{\underline{0}}\right)\;\;=\;\;\left(\Pi_a,\Pi\right)\;,\label{poincalg2}
\end{eqnarray}
where lowercase Latin indices run through $a,b,c\dots h\in\{\underline{1},\underline{2},\underline{3}\}$. Hence, the Poincar\'e algebra \eqref{poincalg1} decomposes as
\begin{eqnarray}
\left[L_{ab},L_{cd}\right]&=&\frac{1}{2}\left(\delta_{ad}L_{bc}-\delta_{ac}L_{bd}+\delta_{bc}L_{ad}-\delta_{bd}L_{ac}\right)\;,\nonumber\\
\left[L_{ab},L_c\right]&=&\frac{1}{2}\left(\delta_{bc}L_a-\delta_{ac}L_b\right)\;,\nonumber\\
\left[L_a,L_b\right]&=&-\frac{1}{2}L_{ab}\;,\nonumber\\
\left[L_{ab},\Pi_c\right]&=&\frac{1}{2}\left(\delta_{bc}\Pi_a-\delta_{ac}\Pi_b\right)\;,\nonumber\\
\left[L_a,\Pi_b\right]&=&-\frac{1}{2}\delta_{ab}\Pi\;,\nonumber\\
\left[L_a,\Pi\right]&=&\frac{1}{2}\Pi_a\;,\label{poincalg3}
\end{eqnarray}
and zero for all other commutators. Consequently, the algebra-valued forms follow the same decomposition, namely
\begin{eqnarray}
e^A\Pi_A&=&e^a\Pi_a+q\Pi\;,\nonumber\\
\omega^{AB}L_{AB}&=&\omega^{ab}L_{ab}+\theta^aL_a\;,\nonumber\\
T^A\Pi_A&=&T^a\Pi_a+\mathcal{Q}\Pi\;,\nonumber\\
R^{AB}L_{AB}&=&\Omega^{ab}L_{ab}+S^aL_a\;,\label{fdecomp1}
\end{eqnarray}
with
\begin{eqnarray}
T^a&=&De^a-\frac{1}{2}q\theta^a\;,\nonumber\\
\mathcal{Q}&=&dq-\frac{1}{2}\theta_ae^a\;,\nonumber\\
\Omega^a_{\phantom{a}b}&=&R^a_{\phantom{a}b}-\frac{1}{4}\theta^a\theta_b\;,\nonumber\\
S^a&=&D\theta^a\;,\label{fdecomp2}
\end{eqnarray}
where the covariant derivative $D$ is taken with respect to the $SO(3)$ sector by means of $D\cdot^a= d\cdot^a+\omega^a_{\phantom{a}b}\cdot^b$, and we have defined $R^{ab}=d\omega^a_{\phantom{a}b}+\omega^a_{\phantom{a}c}\omega^c_{\phantom{c}b}$. The nomenclature of all fields is now in order: $q$ - \emph{time vierbein}; $e$ - \emph{space vierbein}; $\theta$ - \emph{boost connection}; $\omega$ - \emph{spin connection}; $\mathcal{Q}$ - \emph{time torsion}; $T$ - \emph{space torsion}; $S$ - \emph{boost curvature}; $\Omega$ - \emph{space curvature}. To achieve the UR limit of the Poincar\'e group, we rescale the group generators and fields as (see, for instance, \cite{Bergshoeff:2017btm})
\begin{eqnarray}
L_a&\longmapsto&\chi L_a\;,\nonumber\\
\Pi&\longmapsto&\chi\Pi\;,\nonumber\\
\theta^a&\longmapsto&\chi^{-1}\theta^a\;,\nonumber\\
q&\longmapsto&\chi^{-1} q\;.\label{res0}
\end{eqnarray}
These rescalings keep the first two expressions (the space and time decompositions of the fundamental fields) in \eqref{fdecomp1} unchanged. The UR limit is then achieved by taking $\chi\longrightarrow\infty$ in the Poincar\'e algebra \eqref{poincalg3}, at leading order.
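To make the contraction mechanism explicit, consider two representative commutators of \eqref{poincalg3} under the rescalings \eqref{res0}:
\begin{eqnarray}
\chi^2\left[L_a,L_b\right]&=&-\frac{1}{2}L_{ab}\;\;\Longrightarrow\;\;\left[L_a,L_b\right]\;=\;-\frac{1}{2\chi^2}L_{ab}\;\longrightarrow\;0\;,\nonumber\\
\chi\left[L_a,\Pi_b\right]&=&-\frac{1}{2}\delta_{ab}\,\chi\Pi\;\;\Longrightarrow\;\;\left[L_a,\Pi_b\right]\;=\;-\frac{1}{2}\delta_{ab}\Pi\;,
\end{eqnarray}
so that, as $\chi\longrightarrow\infty$, the boosts become Abelian, while the mixed boost-translation commutator survives and reproduces the last relation in \eqref{carr1} below.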
The result is the so-called Carroll algebra,
\begin{eqnarray}
\left[L_{ab},L_{cd}\right]&=&\frac{1}{2}\left(\delta_{ad}L_{bc}-\delta_{ac}L_{bd}+\delta_{bc}L_{ad}-\delta_{bd}L_{ac}\right)\;,\nonumber\\
\left[L_{ab},\Pi_c\right]&=&\frac{1}{2}\left(\delta_{bc}\Pi_a-\delta_{ac}\Pi_b\right)\;,\nonumber\\
\left[L_{ab},L_c\right]&=&\frac{1}{2}\left(\delta_{bc}L_a-\delta_{ac}L_b\right)\;,\nonumber\\
\left[\Pi_a,L_b\right]&=&\frac{1}{2}\delta_{ab}\Pi\;,\label{carr1}
\end{eqnarray}
and zero otherwise. The contraction of the Poincaré algebra \eqref{poincalg3} to the Carroll one \eqref{carr1} is an example of the general procedure known as In\"on\"u-Wigner contraction \cite{Inonu:1953sp}. In the present case, $ISO(1,3)\longrightarrow C(1,3)=SO(3)\times C(3)\times \mathbb{R}_s^3\times \mathbb{R}_t$, where $C(3)$ represents the Carrollian boosts \cite{Bergshoeff:2017btm}. The algebra \eqref{carr1} is clearly not semi-simple. As a consequence, Carrollian metrics are degenerate \cite{Ciambelli:2018ojf,Ciambelli:2019lap}. Moreover, relations \eqref{inv0} and \eqref{metrics0} imply
\begin{eqnarray}
e^a_\mu e_b^\mu&=&\delta^a_b\;,\nonumber\\
q^\mu q_\mu&=&1\;,\nonumber\\
q^\mu e^a_\mu&=&q_\mu e_a^\mu\;=\;0\;,\nonumber\\
e^a_\mu e_a^\nu&=&\delta_\mu^\nu-q_\mu q^\nu\;.\label{inv1}
\end{eqnarray}
While $e^A\Pi_A$ and $\omega^{AB}L_{AB}$ are kept unchanged under the rescalings \eqref{res0}, the same is not true for the 2-form fields in \eqref{fdecomp1}. In fact, the 2-form fields will reduce to
\begin{eqnarray}
T^a&=&De^a\;,\nonumber\\
\mathcal{Q}&=&dq-\frac{1}{2}\theta_a e^a\;,\nonumber\\
\Omega^a_{\phantom{a}b}&=&R^a_{\phantom{a}b}\;,\nonumber\\
S^a&=&D\theta^a\;.\label{fdecomp3}
\end{eqnarray}
For the gauge transformations \eqref{gt0} one also needs to consider $\alpha^{AB}L_{AB}=\alpha^{ab}L_{ab}+\alpha^aL_a$, with $\alpha^a\longrightarrow\chi^{-1}\alpha^a$. Thus, the Carroll gauge transformations attain the form
\begin{eqnarray}
\delta\omega^a_{\phantom{a}b}&=&D\alpha^a_{\phantom{a}b}\;,\nonumber\\
\delta\theta^a&=&D\alpha^a-\alpha^a_{\phantom{a}b}\theta^b\;,\nonumber\\
\delta e^a&=&-\alpha^a_{\phantom{a}b}e^b\;,\nonumber\\
\delta q&=&\frac{1}{2}e^b \alpha_b\;.\label{gt1}
\end{eqnarray}
Finally, one can easily check that the UR limit of the Bianchi identities \eqref{hier0} is given by
\begin{eqnarray}
d\mathcal{Q}&=&\frac{1}{2}\left(\theta_aT^a-e_aS^a\right)\;,\nonumber\\
DR^a_{\phantom{a}b}&=&0\;,\nonumber\\
DT^a&=&R^a_{\phantom{a}b}e^b\;,\nonumber\\
DS^a&=&R^a_{\phantom{a}b}\theta^b\;.\label{hier1}
\end{eqnarray}
In the next sections we investigate some formal consequences of the UR limit of Lovelock-Cartan gravity, starting with the particular case of the EH action.

\section{Carroll gravity}\label{EH}

We start by reviewing the usual UR limit of the EH action. The resulting gravity theory is typically known as \emph{Carroll gravity} \cite{Bergshoeff:2017btm,Ciambelli:2018ojf,Ciambelli:2019lap}.
\subsection{Action and field equations}
By imposing $\Lambda=z_1=z_2=z_3=z_4=0$ in the MZ action \eqref{mz1}, the EH action is obtained,
\begin{equation}
S_{EH}=\kappa\int\epsilon_{ABCD}R^{AB}e^Ce^D+S_m\;.\label{eh0}
\end{equation}
The EH action \eqref{eh0}, in terms of the decompositions \eqref{fdecomp1} and \eqref{fdecomp2}, reads \cite{Bergshoeff:2017btm}
\begin{equation}
S_{EH}=\kappa\int\epsilon_{abc}\left(2qR^{ab}e^c-\frac{1}{2}q\theta^a\theta^be^c-S^ae^be^c\right)+S_m\;,\label{eh1}
\end{equation}
with $\epsilon_{abc}=\epsilon_{\underline{0}abc}$.
Rescaling the fields according to \eqref{res0}, together with $\kappa\longrightarrow\chi\kappa$, one gets
\begin{equation}
S_{EH}=\kappa\int\epsilon_{abc}\left(2qR^{ab}e^c-\frac{1}{2}\chi^{-2}q\theta^a\theta^be^c-S^ae^be^c\right)+S_m\;,\label{ehaction2}
\end{equation}
and the UR limit is attained by taking $\chi\longrightarrow\infty$. The result is the Carroll gravity action\footnote{Obviously, the corresponding limit of the matter action $S_m$ must be consistent as well.} \cite{Bergshoeff:2017btm,Bergshoeff:2019ctr},
\begin{equation}
S_C=\kappa\int\epsilon_{abc}\left(2qR^{ab}e^c-S^ae^be^c\right)+S_m\;.\label{carrollaction1}
\end{equation}
The field equations can be easily derived by varying the action \eqref{carrollaction1} with respect to $q$, $e$, $\omega$, and $\theta$ (in this order):
\begin{eqnarray}
\epsilon_{abc}R^{ab}e^c&=&-\frac{1}{2\kappa}\tau\;,\nonumber\\
qR^{ab}-\frac{1}{2}\left(S^ae^b-S^be^a\right)&=&\frac{1}{4\kappa}\epsilon^{abc}\tau_c\;,\nonumber\\
\mathcal{Q}e^a-qT^a&=&-\frac{1}{4\kappa}\epsilon^{abc}\sigma_{bc}\;,\nonumber\\
T^ae^b-T^be^a&=&\frac{1}{2\kappa}\epsilon^{abc}\sigma_c\;,\label{ehfeq1}
\end{eqnarray}
where we have defined
\begin{eqnarray}
\tau&=&\frac{\delta S_m}{\delta q}\;,\nonumber\\
\tau_a&=&\frac{\delta S_m}{\delta e^a}\;,\nonumber\\
\sigma_{ab}&=&\frac{\delta S_m}{\delta\omega^{ab}}\;,\nonumber\\
\sigma_a&=&\frac{\delta S_m}{\delta\theta^a}\;.\label{ehsour1}
\end{eqnarray}
The sources $\tau$ and $\tau_a$ are related to the relativistic energy-momentum tensor, while $\sigma_{ab}$ and $\sigma_a$ are related to the spin density. All sources defined in \eqref{ehsour1} are 3-form fields. One interesting feature of Carroll gravity is that it carries a Weyl symmetry which is not present in EH gravity. In fact, considering the pure Carroll gravity action $S_{pC}=S_C-S_m$, one can easily check that it is invariant under a global scale transformation of the form
\begin{eqnarray}
e^a&\longmapsto&\mathrm{exp}(\zeta)e^a\;,\nonumber\\
q&\longmapsto&\mathrm{exp}(-\zeta)q\;,\nonumber\\
\theta^a&\longmapsto&\mathrm{exp}(-2\zeta)\theta^a\;,\label{weyl1}
\end{eqnarray}
with $\zeta$ being a global parameter. In functional form, the symmetry \eqref{weyl1} reads
\begin{equation}
\int\left(e^a\frac{\delta S_C}{\delta e^a}-q\frac{\delta S_C}{\delta q}-2\theta^a\frac{\delta S_C}{\delta\theta^a}\right)=0\;.\label{weyl2}
\end{equation}
This symmetry equips the fields $e$, $q$, $\theta$, and $\omega$ with a Weyl charge of $+1$, $-1$, $-2$, and $0$, respectively. A local and simpler version of this symmetry is also present in the non-relativistic limit of gravity, but with different charges for the fields, see for instance \cite{Guerrieri:2020vhp,Devecioglu:2018apj}. Another difference is that, in the present case, the set of field equations is complete, since the boost connection appears in the action \eqref{carrollaction1} and thus has its own field equation in \eqref{ehfeq1}. By applying the Weyl symmetry \eqref{weyl2} to the full action \eqref{carrollaction1}, it turns out that the matter action also needs to respect the Weyl symmetry. Hence, it is not difficult to infer that $S_m$ must assume the more explicit general form
\begin{equation}
S_m=\int\left[M_aqe^a+\Sigma\epsilon_{abc}\theta^ae^be^c+\frac{1}{2}\left(\pi_a\theta_b-\pi_b\theta_a\right)e^a e^b+\rho_{ab}\omega^{ab}\right]\;,\label{sm0}
\end{equation}
where each term carries vanishing total Weyl charge, provided the densities are Weyl neutral (for instance, $qe^a$ has charge $-1+1=0$ and $\theta^ae^be^c$ has charge $-2+1+1=0$). The densities $M_a$, $\Sigma$, $\pi_a$, and $\rho_{ab}$ are allowed to depend only on $\omega^{ab}$.
Moreover, $\Sigma$ and $\pi_a$ are 1-forms, $M_a$ is a 2-form, and $\rho_{ab}$ is a 3-form. Therefore, the quantities defined in \eqref{ehsour1} are now given by
\begin{eqnarray}
\tau&=&M_ae^a\;,\nonumber\\
\tau_a&=&-qM_a-2\Sigma\epsilon_{abc}\theta^be^c+\left(\pi_a\theta_b-\pi_b\theta_a\right)e^b\;,\nonumber\\
\sigma_{ab}&=&\rho_{ab}+\frac{\delta M_c}{\delta\omega^{ab}}qe^c+\frac{\delta\Sigma}{\delta\omega^{ab}}\epsilon_{cde}\theta^ce^de^e+\frac{\delta \pi_c}{\delta\omega^{ab}}\theta_d e^c e^d\;,\nonumber\\
\sigma_a&=&-\Sigma\epsilon_{abc}e^be^c-\pi_be^be_a\;.\label{ehsour2}
\end{eqnarray}
It is possible to extract some formal solutions from equations \eqref{ehfeq1}, given \eqref{ehsour2}. For example, in vacuum, the trivial solution $R=S=T=\mathcal{Q}=0$ is accepted. Considering matter, in the form \eqref{ehsour2}, the first two equations in \eqref{ehfeq1} can be solved for $R^{ab}$ and $S^a$, giving
\begin{eqnarray}
R^{ab}&=&-\frac{1}{4\kappa}\epsilon^{abc}M_c\;,\nonumber\\
S^a&=&\frac{1}{\kappa}\left(\Sigma\delta^a_c+\frac{1}{2}\epsilon^{ab}_{\phantom{ab}c}\pi_b\right)\theta^c\;.\label{RS1}
\end{eqnarray}
The second equation in \eqref{RS1} can be seen as an eigenvalue equation for the covariant derivative, with $\theta^a$ being the eigenvectors and the 1-form $\frac{1}{\kappa}\left(\Sigma\delta^a_c+\frac{1}{2}\epsilon^{ab}_{\phantom{ab}c}\pi_b\right)$ the eigenvalues. The space torsion $T^a$ can be obtained from the fourth equation in \eqref{ehfeq1},
\begin{equation}
T^a=-\frac{1}{2\kappa}\left(\Sigma\delta^a_c+\epsilon^{ab}_{\phantom{ab}c}\pi_b\right)e^c\;.\label{T1}
\end{equation}
Just like the boost curvature, this equation can also be seen as an eigenvalue equation for the covariant derivative, with $e^a$ as eigenvectors but with eigenvalues given by the 1-form $-\frac{1}{2\kappa}\left(\Sigma\delta^a_c+\epsilon^{ab}_{\phantom{ab}c}\pi_b\right)$. For the third equation in \eqref{ehfeq1} it is convenient to set\footnote{Without such an imposition, $\mathcal{Q}$ cannot be easily isolated.} $\rho_{ab}=0$. Thence, one can isolate $\mathcal{Q}$,
\begin{equation}
\mathcal{Q}=-\frac{1}{12\kappa}\left[q\left(6\Sigma-\epsilon^{abc}\frac{\delta M_c}{\delta\omega^{ab}}\right)+2\frac{\delta\Sigma}{\delta\omega^{ab}}\theta^ae^b+\frac{1}{2}\epsilon^{abc}\left(\frac{\delta \pi_c}{\delta\omega^{ab}}\theta_d-\frac{\delta \pi_d}{\delta\omega^{ab}}\theta_c\right)e^d\right]\;.\label{Q1}
\end{equation}
To obtain \eqref{Q1}, the strategy is to isolate a space vierbein common to all terms. The symmetric part of the remaining terms implies \eqref{Q1}, because it contains $\mathcal{Q}$. The antisymmetric part does not contain $\mathcal{Q}$ and gives the following relations
\begin{eqnarray}
\pi_a&=&-\frac{1}{4}\frac{\delta M^b}{\delta\omega^{ab}}\;,\nonumber\\
\frac{\delta\pi^b}{\delta\omega^{ab}}&=&0\;,\nonumber\\
\frac{\delta\Sigma}{\delta\omega^{ab}}&=&\frac{1}{4}\epsilon_{acd}\frac{\delta\pi^c}{\delta\omega^{db}}\;.\label{mconst1}
\end{eqnarray}
These relations suggest (but do not imply) that $\Sigma$ and $\pi_a$ should not depend on the spin connection.
If so, the time torsion \eqref{Q1} simplifies to
\begin{equation}
\mathcal{Q}=-\frac{1}{12\kappa}q\left(6\Sigma-\epsilon^{abc}\frac{\delta M_c}{\delta\omega^{ab}}\right)\;.\label{Q1a}
\end{equation}
\subsection{Carroll-Riemann and Carroll-Weitzenb\"ock manifolds}
Two special geometries can be studied at formal level, namely the Carroll-Riemann and the Carroll-Weitzenb\"ock geometries, as solutions of the field equations \eqref{ehfeq1} for generic sources in the form \eqref{ehsour2}. The first one is defined by non-trivial curvatures and vanishing torsions. The second one is defined by vanishing curvatures and non-trivial torsions.
\subsubsection{Carroll-Riemann manifolds}
It is easy to infer from \eqref{ehfeq1} the conditions on the matter distributions in order to obtain a Carroll-Riemann geometry, by setting $T^a=\mathcal{Q}=0$. The matter densities must then satisfy
\begin{eqnarray}
\Sigma\delta^a_c+\epsilon^{ab}_{\phantom{ab}c}\pi_b&=&0\;,\nonumber\\
\frac{\delta\Sigma}{\delta\omega^{ab}}+\frac{1}{4}\epsilon^{cde}\left(\frac{\delta \pi_e}{\delta\omega^{cd}}\delta_{ab}-\frac{\delta \pi_b}{\delta\omega^{cd}}\delta_{ae}\right)&=&0\;,\nonumber\\
6\Sigma-\epsilon^{abc}\frac{\delta M_c}{\delta\omega^{ab}}&=&0\;,\nonumber\\
\rho_{ab}&=&0\;.\label{CR1}
\end{eqnarray}
The trace of the first condition in \eqref{CR1} implies the vanishing of $\Sigma$, and the remaining condition $\epsilon^{ab}_{\phantom{ab}c}\pi_b=0$ then enforces the vanishing of $\pi_a$. Thence, the conditions \eqref{CR1} reduce to
\begin{eqnarray}
\Sigma&=&0\;,\nonumber\\
\pi_a&=&0\;,\nonumber\\
\frac{\delta M_c}{\delta\omega^{ab}}&=&0\;,\nonumber\\
\rho_{ab}&=&0\;.\label{CR2}
\end{eqnarray}
We point out that such conditions are generic, in the sense that no assumption about the fundamental gravitational fields is made. The corresponding curvatures read
\begin{eqnarray}
R^{ab}&=&-\frac{1}{4\kappa}\epsilon^{abc}M_c\;,\nonumber\\
S^a&=&0\;.\label{RS2}
\end{eqnarray}
Thus, the only non-trivial object is the space curvature $R^{ab}$. Therefore, if $S_m$ depends only on $M_a$, and $M_a$ does not depend on any gravitational field, the resulting spacetime is a Carroll-Riemann manifold. The solutions \eqref{RS2} must be consistent with the Bianchi identities. In fact, from $DS^a=R^a_{\phantom{a}b}\theta^b$, one attains the possible solution $\theta^a=0$. From $DT^a=R^a_{\phantom{a}b}e^b$, and from the fact that the space vierbein and the space curvature are non-vanishing quantities, we gain a constraint,
\begin{equation}
\epsilon^{abc}e_bM_c=0\;.\label{constr00}
\end{equation}
Moreover, $DR^{ab}=0$ implies
\begin{equation}
DM_a=0\;.\label{constr01}
\end{equation}
The fact that $\theta^a=0$ (see \eqref{fdecomp3}) implies a solution for the time vierbein according to $dq=0\Rightarrow q=d\mathbf{t}$, with $\mathbf{t}$ being an arbitrary scalar function. In Galilean gravity, such a function can be identified with the absolute Newtonian time, because $\oint d\mathbf{t}=0$. Hence, any clock would measure the same time interval, independently of the path the observers take. Moreover, $q$ is a gauge independent quantity, ensuring the observational character of $\mathbf{t}$. In Carroll-Riemann geometry, however, $q$ is not gauge invariant (see \eqref{gt1}). This property spoils the tempting interpretation of $\mathbf{t}$ as a kind of UR absolute time coordinate, at least before any gauge fixing. In fact, one can achieve the same conclusion (the absence of a gauge invariant absolute time coordinate) by setting $\mathcal{Q}=\theta^a=0$ to solve the field equations \eqref{feqmz1} in a more general approach.
\subsubsection{Carroll-Weitzenb\"ock manifolds} For the Carroll-Weitzenb\"ock solutions, vanishing curvatures ($R^{ab}=S^a=0$) imply on the generic conditions (See \eqref{RS1}), \begin{equation} M_c=\Sigma=\pi_b=0\;.\label{M1} \end{equation} The corresponding torsions obtained from equations \eqref{ehfeq1} read \begin{eqnarray} T^a&=&0\;,\nonumber\\ \mathcal{Q}e^a&=&-\frac{1}{4\kappa}\epsilon^{abc}\rho_{bc}\;.\label{TQ1} \end{eqnarray} In this case, the only non-trivial object is time torsion. Moreover, the matter action $S_m$ must depend only on $\rho_{ab}$ which depends only on the spin connection. Due to the non-triviality of $\mathcal{Q}$, a UR absolute time definition is also out of question in Carroll-Weitzeinb\"ock manifolds. Similarly to the Carroll-Riemann case, due to vanishing boost curvature, we can set $\theta^a=0$. Hence, all Bianchi identities are satisfied if the spin-density $\rho_{ab}$ obeys the constraint \begin{equation} D\rho_{ab}=0\;.\label{constr02} \end{equation} Moreover, we can choose a Weitzenb\"ock-type connection $\omega^{ab}=0$. Thence, we end up with the set of equations \begin{eqnarray} d\rho_{ab}&=&0\;,\nonumber\\ de^a&=&0\;,\nonumber\\ \mathcal{Q}&=&dq\;,\label{W1a} \end{eqnarray} to be solved together with the second of \eqref{TQ1}. The first equation in \eqref{W1a} says that the spin density can be written as an exact quantity, $\rho_{ab}=dX_{ab}$, with $X_{ab}$ being a 2-form. The second equation in \eqref{W1a} states that the space vierbein is also an exact form, \begin{equation} e^a=dn^a\;,\label{e1} \end{equation} with $n^a$ being a 0-form. Thus, the second equation \eqref{TQ1} becomes \begin{equation} d\left(dqn^a\right)=-\frac{1}{4\kappa}\epsilon^{abc}dX_{bc}\;\Rightarrow\;dqn^a=-\frac{1}{4\kappa}\epsilon^{abc}X_{bc}\;, \end{equation} Defining $n^an_a=n^2$, we get \begin{equation} q=-\frac{1}{4\kappa}\int\epsilon^{abc}\frac{n_aX_{bc}}{n^2}\;.\label{q1} \end{equation} Therefore, depending on the form of $X_{ab}$ and $n^a$, the final Weitzenb\"ock solution is given by the vierbeins \eqref{e1} and \eqref{q1} and vanishing connections. \subsection{A non-trivial example} A particularly interesting example appear if we set $\rho_{ab}=\Sigma=\pi_a=0$ in the field equations \eqref{ehfeq1} for the sources in the form \eqref{ehsour2}. Thence, $S^a=T^a=0$. Space curvature and time torsion remain non-trivial and read \begin{eqnarray} R^{ab}&=&-\frac{1}{4\kappa}\epsilon^{abc}M_c\;,\nonumber\\ \mathcal{Q}&=&\frac{1}{12\kappa}q\epsilon^{abc}\frac{\delta M_c}{\delta\omega^{ab}}\;.\label{RQ1} \end{eqnarray} Again, the fact that boost curvature vanishes allows to set $\theta^a=0$. Therefore, \begin{equation} \mathcal{Q}=dq\;,\label{Q00} \end{equation} and thus, \begin{equation} dq=qf\;,\label{dq1} \end{equation} with $f$ being the 1-form \begin{equation} f=\frac{1}{12\kappa}\epsilon^{abc}\frac{\delta M_c}{\delta\omega^{ab}}\;.\label{f1} \end{equation} For consistency, Bianchi identities \eqref{hier1} must be satisfied. Consequently, we need to impose \eqref{constr00}, \eqref{constr01}, and \begin{equation} d(qf)=0\;.\label{constr03} \end{equation} Equation \eqref{dq1} can be easily solved for $q$ if we consider $f=\mathrm{constant}$, resulting in \begin{equation} q=h\exp{(-f_\mu x^\mu)}+jf\;,\label{dq2} \end{equation} with $h$ being a constant 1-form and $j$ a constant 0-form. This is the same type of the solution found in \cite{Guerrieri:2020vhp} in Galilei gravity for the time torsion. 
Thus, following \cite{Guerrieri:2020vhp}, one can choose to work in the ADM formalism in the temporal gauge, so that $q=Ndt$, with $N$ being the lapse function. Moreover, without loss of generality, we can set $f=\mathbf{f}dt$, $h=\mathbf{h}dt$, and $j=\mathbf{f}^{-1}$. Thence,
\begin{equation}
N(t)=\mathbf{h}\exp{(-\mathbf{f}t)}+1\;.\label{N1}
\end{equation}
Note that, in this case, $qf=0\Rightarrow\mathcal{Q}=dq=0$, and a consistent foliation can be defined in such a way that causality is ensured. The lapse function characterizes the rate between the proper time $T$ and the coordinate time $t$, namely $N=dT/dt$. Consequently, the proper time $T(t)$ is given by
\begin{equation}
T(t)=\frac{\mathbf{h}}{\mathbf{f}}\left[1-\exp{(-\mathbf{f}t)}\right]+t\;,\label{N2}
\end{equation}
where we have set $T(0)=0$. As the system evolves in time, we have $T(t)|_{t\rightarrow\infty}=t+\mathbf{h}/\mathbf{f}$, since $N(t)|_{t\rightarrow\infty}=1$. In this limit, the proper time coincides with the coordinate time, up to a gap associated with time dilation. See Figure \ref{fig1}.
\begin{figure}[ht]
\centering
\includegraphics[width=0.8\textwidth]{Carroll1.png}
\caption{Illustration of the proper time $T(t)$ (red dashed line) and the lapse function $N(t)$ (blue solid line).}\label{fig1}
\end{figure}
The conclusion here is that we have a spacetime whose only non-trivial geometrical property is the space curvature, together with a foliation of space-like surfaces evolving in time. In \cite{Guerrieri:2020vhp}, a similar solution was found for $\mathcal{Q}$, but with non-trivial space torsion and vanishing curvatures.
\subsection{Spherically symmetric solution}\label{4.1}
Birkhoff's theorem \cite{Wald:1984rg,Ryder:2009zz} in GR establishes that any spherically symmetric vacuum solution of Einstein's equations is also static. We now proceed to check whether this property survives the UR limit of the EH theory. We have to solve the field equations \eqref{ehfeq1} under the imposition of spherical symmetry. For that, we consider the torsion free case, $T=\mathcal{Q}=0$, in vacuum, $\tau=\tau_a=\sigma_{ab}=\sigma_a=0$. Hence, the last two equations of \eqref{ehfeq1} are immediately satisfied. Consequently, $\omega^{ab}=\omega^{ab}(e)$ and $\theta^a=\theta^a(q,e)$. Therefore, the line element can be parameterized in the usual way \cite{Ryder:2009zz}
\begin{equation}
ds^2 = - e^{2\alpha(t,r)}dt^2 + e^{2\beta(t,r)}dr^2 + r^2 d\Omega^2\;,\label{ds}
\end{equation}
where $d\Omega^2= d\theta^2 + \sin^2\theta\, d\phi^2$ is the solid angle element. Considering the labels $u,v\in\{2,3\}$, the corresponding curvatures read
\begin{eqnarray}
S^1 &=& 2\left(e^{-2\beta}\;[(\alpha'-\beta')\alpha' + \alpha''] + e^{-2\alpha}[(\dot{\alpha}-\dot{\beta})\dot{\beta} - \ddot{\beta}]\right) q e^1\;,\nonumber\\
S^v &=& \frac{2}{r}e^{-\beta}\left[e^{-\beta}\alpha' q + e^{-\alpha}\dot{\beta}e^1\right]e^v\;, \nonumber\\
R^{1v}&=& \frac{1}{r}e^{-\beta}d\beta \;e^v\;,\nonumber\\
R^{uv}&=& \frac{1}{r^2}(1 - e^{-2\beta})e^u e^v\;. \label{csscs}
\end{eqnarray}
The first two equations in \eqref{ehfeq1} decompose as
\begin{eqnarray}
\dot{\beta}&=&0\;, \nonumber\\
\frac{1}{2r^2}\left(1-e^{-2\beta}\right) + \frac{\beta'}{r}e^{-2\beta}&=&0\;, \nonumber\\
\frac{1}{2r^2}\left(1-e^{-2\beta}\right) - \frac{\alpha'}{r}e^{-2\beta}&=&0\;, \nonumber\\
\alpha'' + [\alpha' - \beta']\left(\alpha' + \frac{1}{r}\right)&=&0 \;. \label{eqcrrl}
\end{eqnarray}
The first equation in \eqref{eqcrrl} ensures that $\beta$ must not depend on time, $\beta=\beta(r)$.
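For completeness, the second equation in \eqref{eqcrrl} can be integrated in closed form. Setting $u(r)=e^{-2\beta}$, so that $u'=-2\beta'e^{-2\beta}$, it becomes
\begin{equation}
\frac{1}{2r^2}\left(1-u\right)-\frac{u'}{2r}\;=\;0\;\;\Longrightarrow\;\;\left(ru\right)'\;=\;1\;\;\Longrightarrow\;\;u\;=\;1-\frac{2M}{r}\;,
\end{equation}
where the integration constant was named $-2M$. Moreover, subtracting the third equation from the second gives $\alpha'=-\beta'$.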
One is then led to the usual solution,
\begin{eqnarray}
\alpha&=&-\beta\;,\nonumber\\
e^{2\alpha} &=& 1 - \frac{2M}{r}\;. \label{eb}
\end{eqnarray}
Therefore,
\begin{equation}
ds^2 = - \left(1 - \frac{2M}{r}\right)dt^2 + \left(1 - \frac{2M}{r}\right)^{-1}dr^2 + r^2 d\Omega^2\;, \label{schw}
\end{equation}
which coincides with the Schwarzschild solution of GR. The final form of the curvatures can be written as
\begin{eqnarray}
S^1 &=& \frac{4M}{r^3}\;q e^1\;,\nonumber\\
S^v &=& -\frac{2M}{r^3}\;qe^v\;, \nonumber\\
R^{1v}&=& \frac{M}{r^3}\;e^{1}e^v\;,\nonumber\\
R^{uv}&=& -\frac{2M}{r^3}\;e^u e^v\;.
\end{eqnarray}
Therefore, we have verified that Birkhoff's theorem holds for the UR limit of gravity described by Carroll geometry. Although expected, this result is not evident, because the Carroll action \eqref{carrollaction1} lacks a piece of the original EH action \eqref{eh1}.

\section{Carroll-Cartan gravity}\label{GC}

In this section we study the UR limit of the MZ action \eqref{mz1}. We derive the corresponding field equations and provide some formal solutions.
\subsection{Action and field equations}
The MZ action \eqref{mz1}, in terms of the decompositions \eqref{fdecomp1} and \eqref{fdecomp2}, reads \cite{Guerrieri:2020vhp}
\begin{eqnarray}
S_{MZ}&=&\kappa\int\left[\epsilon_{abc}\left(2qR^{ab}e^c-\frac{1}{2}q\theta^a\theta^be^c-S^ae^be^c\right)+2\Lambda\epsilon_{abc}qe^ae^be^c\right]+\nonumber\\
&+&\int\left[z_1\left(R^{ab}e_ae_b-qe_aS^a+\frac{1}{4}e_a\theta^ae_b\theta^b\right)+z_2\left(\mathcal{Q}^2+T^aT_a\right)\right]+\nonumber\\
&-&\int\left[2z_3\epsilon_{abc}S^a\left(R^{bc}-\frac{1}{4}\theta^b\theta^c\right)-z_4\left(R^{ab}R_{ab}+\frac{1}{2}S^aS_a-\frac{1}{2}R^{ab}\theta_a\theta_b\right)\right]+S_m\;.\label{mz2}
\end{eqnarray}
Rescaling the fields according to \eqref{res0}, the action \eqref{mz2} takes the form
\begin{eqnarray}
S_{MZ}&=&\kappa\int\left[\epsilon_{abc}\left(2\chi^{-1} qR^{ab}e^c-\frac{1}{2}\chi^{-3}q\theta^a\theta^be^c-\chi^{-1}S^ae^be^c\right)+2\chi^{-1}\Lambda\epsilon_{abc}qe^ae^be^c\right]+\nonumber\\
&+&\int\left\{z_1\left(R^{ab}e_ae_b-\chi^{-2}qe_aS^a+\frac{1}{4}\chi^{-2}e_a\theta^ae_b\theta^b\right)+\right.\nonumber\\
&+&\left.z_2\left[\chi^{-2}\left(dq\right)^2+De^aDe_a-\chi^{-2}q\theta^aDe_a+\chi^{-2}e_a\theta^adq+\frac{1}{4}\chi^{-2}e_a\theta^ae_b\theta^b\right]\right\}+\nonumber\\
&-&\int\left[2z_3\epsilon_{abc}\left(\chi^{-1}S^aR^{bc}-\frac{1}{4}\chi^{-3}S^a\theta^b\theta^c\right)-z_4\left(R^{ab}R_{ab}+\frac{1}{2}\chi^{-2}S^aS_a-\frac{1}{2}\chi^{-2}R^{ab}\theta_a\theta_b\right)\right]+\nonumber\\
&+&S_m\;.\label{mz3}
\end{eqnarray}
For the coupling parameters, we consider the following rescalings
\begin{eqnarray}
\kappa&\longmapsto&\chi\kappa\;,\nonumber\\
\Lambda&\longmapsto&\chi^{-1}\Lambda\;,\nonumber\\
z_1&\longmapsto&z_1\;,\nonumber\\
z_2&\longmapsto&z_2\;,\nonumber\\
z_3&\longmapsto&z_3\;,\nonumber\\
z_4&\longmapsto&z_4\;.\label{res2}
\end{eqnarray}
Therefore, the UR limit of the MZ action \eqref{mz3} is achieved by taking $\chi\longrightarrow\infty$, at leading order. Thence,
\begin{equation}
S_{CMZ}=\kappa\int\epsilon_{abc}\left(2qR^{ab}e^c-S^ae^be^c\right)+\int\left(z_1\;R^{ab}e_ae_b+z_2T^aT_a+z_4R^{ab}R_{ab}\right)+S_m\;,\label{cmz1}
\end{equation}
with $T^a$ given in \eqref{fdecomp3}. The resulting gravity theory will be called \emph{Carroll-Cartan gravity}.
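The selection of the surviving terms can be read off from a simple power counting in $\chi$, combining the field rescalings \eqref{res0} with \eqref{res2}: terms of total weight $\chi^0$ survive, while those of negative weight drop out. Schematically,
\begin{eqnarray}
\kappa\,qR^{ab}e^c\,,\;\;\kappa\,S^ae^be^c&:&\chi\cdot\chi^{-1}\;=\;\chi^{0}\;,\nonumber\\
\kappa\Lambda\,qe^ae^be^c&:&\chi\cdot\chi^{-1}\cdot\chi^{-1}\;=\;\chi^{-1}\;,\nonumber\\
z_1R^{ab}e_ae_b\,,\;\;z_2De^aDe_a\,,\;\;z_4R^{ab}R_{ab}&:&\chi^{0}\;,\nonumber\\
z_3\epsilon_{abc}S^aR^{bc}&:&\chi^{-1}\;,
\end{eqnarray}
which is precisely the content of \eqref{cmz1}.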
The action \eqref{cmz1} is easily interpreted: the first two terms are identical to the UR limit of the EH action, see \eqref{carrollaction1}; the terms in $z_1$ and $z_2$ are torsional and become topological if $z_2=-z_1$ (in that case, we have a UR version of the Nieh-Yan topological term); the term in $z_4$ is topological (the UR version of the Pontryagin term) and does not contribute to the field equations. Moreover, the UR limit of the matter action is assumed to be well behaved as well. By direct inspection, one easily finds that the Weyl symmetry \eqref{weyl2} is lost. Nevertheless, it can be restored by considering the possibility of rescaling the parameters $z_1$ and $z_2$ by means of
\begin{eqnarray}
z_1&\longmapsto&\exp{(-2\zeta)}z_1\;,\nonumber\\
z_2&\longmapsto&\exp{(-2\zeta)}z_2\;.\label{zs1}
\end{eqnarray}
Hence, since the matter action should not depend on such parameters, the form \eqref{sm0} of the matter action is still valid. The field equations generated by the action \eqref{cmz1} are (see \eqref{ehsour1} and \eqref{ehsour2})
\begin{eqnarray}
\epsilon_{abc}R^{ab}e^c&=&-\frac{1}{2\kappa}\tau\;,\nonumber\\
qR^{ab}-\frac{1}{2}\left(S^ae^b-S^be^a\right) -\frac{1}{2\kappa}\left(z_1 + z_2 \right)\epsilon^{abc}R_{cd}e^d &=&\frac{1}{4\kappa}\epsilon^{abc}\tau_c\;,\nonumber\\
\mathcal{Q}e^a-qT^a+\frac{\left(z_1 + z_2\right)}{4\kappa}\epsilon^a_{\phantom{a}bc}D(e^be^c)&=&-\frac{1}{4\kappa}\epsilon^{abc}\sigma_{bc}\;,\nonumber\\
D(e^ae^b)&=&\frac{1}{2\kappa}\epsilon^{abc}\sigma_c\;.\label{feqmz1}
\end{eqnarray}
These equations differ from the EH case \eqref{ehfeq1} only by the terms in $(z_1+z_2)$. Moreover, the trivial vacuum solution $R=S=T=\mathcal{Q}=0$ is accepted as well. Similarly to the Carroll case, we can solve the field equations \eqref{feqmz1} outside a spherically symmetric object. Considering again vanishing torsions, $T=\mathcal{Q}=0$, in vacuum, $\tau=\tau_a=\sigma_{ab}=\sigma_a=0$, the last two equations in \eqref{feqmz1} are automatically satisfied. Considering again a spherically symmetric line element as in \eqref{ds} and curvature components as in \eqref{csscs}, we find that the first two equations in \eqref{feqmz1} can be rewritten exactly as \eqref{eqcrrl}. Hence, it naturally follows that Birkhoff's theorem also holds for Carroll-Cartan gravity.
\subsection{Solutions in the presence of matter}
We now proceed to find general formal solutions of equations \eqref{feqmz1}.
\subsubsection{Almost general solution}
In the presence of matter, we consider a matter action in the form \eqref{sm0}. This choice is consistent with the fact that the usual matter distributions that couple to the EH action can also be coupled to any gravity theory. Moreover, the extended Weyl symmetry \eqref{weyl1} and \eqref{zs1} is at our disposal. Thence, the matter densities appearing in equations \eqref{feqmz1} are given in \eqref{ehsour2}. The first and fourth equations in \eqref{feqmz1} are exactly the same as in the Carroll case \eqref{ehfeq1}. As a consequence, the space curvature and the space torsion do not change with respect to the Carroll case.
Therefore,
\begin{eqnarray}
R^{ab}&=&-\frac{1}{4\kappa}\epsilon^{abc}M_c\;,\nonumber\\
T^a&=&-\frac{1}{2\kappa}\left(\Sigma\delta^a_c+\epsilon^{ab}_{\phantom{ab}c}\pi_b\right)e^c\;.\label{RT1}
\end{eqnarray}
The second equation in \eqref{feqmz1}, given $R^{ab}$ in \eqref{RT1}, yields the boost curvature,
\begin{equation}
S^a=\frac{1}{\kappa}\left(\Sigma\delta^a_c+\frac{1}{2}\epsilon^{ab}_{\phantom{ab}c}\pi_b\right)\theta^c-\frac{(z_1+z_2)}{4\kappa^2}M^a\;.\label{S1}
\end{equation}
The boost curvature \eqref{S1} differs from the Carroll case only by the term in $(z_1+z_2)$, as expected. Finally, the time torsion can be obtained from the third equation in \eqref{feqmz1}, given $T^a$ in \eqref{RT1}. Nevertheless, for simplicity, we set $\rho_{ab}=0$. Thence,
\begin{equation}
\mathcal{Q}=-\frac{1}{12\kappa}\left[q\left(6\Sigma-\epsilon^{abc}\frac{\delta M_c}{\delta\omega^{ab}}\right)+2\frac{\delta\Sigma}{\delta\omega^{ab}}\theta^ae^b+\frac{1}{2}\epsilon^{abc}\left(\frac{\delta \pi_c}{\delta\omega^{ab}}\theta_d-\frac{\delta \pi_d}{\delta\omega^{ab}}\theta_c\right)e^d\right]-\frac{(z_1+z_2)}{12\kappa^2}\pi_ae^a\;,\label{Q2}
\end{equation}
which differs from the Carroll case \eqref{Q1} only by the term in $(z_1+z_2)$ -- also an expected result. Just like in the Carroll case, the validity of \eqref{Q2} is subject to some relations that must be satisfied, namely,
\begin{eqnarray}
\pi_a&=&-\frac{1}{4}\frac{\delta M^b}{\delta\omega^{ab}}\;,\nonumber\\
\pi_a&=&\frac{\kappa}{\left(z_1+z_2\right)}\left(2\frac{\delta\Sigma}{\delta\omega^{ab}}\theta^b+\epsilon^{abc}\frac{\delta\pi_c}{\delta\omega^{db}}\theta^d\right)\;,\nonumber\\
\Sigma&=&\frac{\kappa}{6\left(z_1+z_2\right)}\left(\frac{\delta\pi_a}{\delta\omega^{ab}}\theta^b+2\epsilon^{abc}\frac{\delta\Sigma}{\delta\omega^{ab}}\theta_c\right)\;.\label{rel2a}
\end{eqnarray}
\subsubsection{Carroll-Riemann and Carroll-Weitzenb\"ock manifolds}
The Carroll-Riemann geometry can be defined as the particular case of null torsions, $T=\mathcal{Q}=0$, and non-trivial curvatures. To see whether such geometries are accepted by the field equations \eqref{feqmz1} (for a generic $S_m$), one can set $T=\mathcal{Q}=0$ directly in the field equations. These conditions imply the vanishing of the spin densities, $\sigma_{ab}=\sigma_a=0$, in such a way that the third and fourth equations in \eqref{feqmz1} are identically satisfied. They also imply the constraints \eqref{CR2} again. Hence, the non-trivial field strengths are given by
\begin{eqnarray}
R^{ab}&=&-\frac{1}{4\kappa}\epsilon^{abc}M_c\;,\nonumber\\
S^a&=&-\frac{(z_1+z_2)}{4\kappa^2}M^a\;.\label{RS3}
\end{eqnarray}
However, such a solution is inconsistent with the Bianchi identities \eqref{hier1}, unless the conditions \eqref{constr00}, \eqref{constr01}, and
\begin{equation}
\epsilon^{abc}\theta_bM_c=0\;,\label{constr04}
\end{equation}
are satisfied. The Carroll-Weitzenb\"ock geometry would be obtained by setting null curvatures, $R=S=0$, and considering non-trivial torsions. From \eqref{RT1} and \eqref{S1}, null curvatures lead to $M_a=\Sigma=\pi_a=0$. Hence, the space torsion also vanishes. The only non-trivial field strength is then the time torsion, which should be determined from the second equation in \eqref{TQ1}. In fact, the analysis leading to the solutions \eqref{e1} and \eqref{q1}, with vanishing connections, holds in the present case as well.

\section{Conclusions}\label{FINAL}

In this work we have generalized the Carroll theory of gravity by allowing the existence of torsional terms.
With that purpose, we considered the Mardones-Zanelli action in four dimensions and its UR limit. The resulting theory of gravity, called Carroll-Cartan gravity, generalizes the UR limit of EH gravity with additional torsional terms. For the sake of completeness, we first studied the UR limit of the EH action. The main results obtained are listed below:
\begin{itemize}
\item Carroll gravity (the UR limit of EH gravity) in the first order formalism was obtained. The action and the corresponding field equations are displayed in expressions \eqref{carrollaction1} and \eqref{ehfeq1}.
\item We found that Carroll gravity enjoys a global scale symmetry, given by \eqref{weyl1}. Such a useful symmetry was employed to determine a quite general form for the matter action, see \eqref{sm0}.
\item A quite general formal solution in the presence of matter was found in \eqref{RS1}, \eqref{T1}, and \eqref{Q1}. The validity of such solutions is subject to certain conditions on the matter content, namely $\rho_{ab}=0$ and the relations \eqref{mconst1}.
\item By defining a Carroll-Riemann manifold as a solution of Carroll gravity with vanishing torsions and non-trivial curvatures, we were able to find the solution \eqref{RS1} subject to the conditions \eqref{CR2}. Moreover, the Bianchi identities require that \eqref{constr00} and \eqref{constr01} hold. The fact that the boost curvature vanishes allows the choice $\theta^a=0$. It implies that $\mathcal{Q}=dq=0\;\Rightarrow q=d\mathbf{t}\;$. In Newton-Cartan gravity \cite{Guerrieri:2020vhp}, this condition permits the definition of a Newtonian absolute time $\mathbf{t}$. Here, we argued that no absolute time arises, because $q$ is not a gauge invariant quantity. Hence, before any gauge fixing, $\mathbf{t}$ cannot be associated with an observational quantity. In fact, this argument holds in any case where $\mathcal{Q}=\theta^a=0$.
\item By defining a Carroll-Weitzenb\"ock manifold as a solution of Carroll gravity with vanishing curvatures and non-trivial torsions, we were able to find the solution \eqref{TQ1} subject to the conditions \eqref{M1}. The Bianchi identities enforce \eqref{constr02}. The choice of vanishing connections is consistently allowed. The set of remaining equations could be exactly solved for the vierbeins. The solutions are displayed in \eqref{e1} and \eqref{q1}, for an arbitrary 0-form $n^a$ and the specific spin density $\rho_{ab}$ given in \eqref{W1a}.
\item A solution with non-trivial curvatures and torsions was developed. The boost curvature and the space torsion were set to zero, and the non-trivial space curvature and time torsion are given in \eqref{RQ1}. For a specific condition, we were able to obtain the lapse function \eqref{N1} and the proper time \eqref{N2} as functions of the coordinate time $t$ (see Figure \ref{fig1}). Thence, time dilation is an explicit effect found for this solution. Moreover, for the particular solution we chose, the time torsion also vanishes.
\item Finally, we confirmed that Birkhoff's theorem remains valid in Carroll gravity. This is an expected result, but it is not trivial, since the UR limit of the EH action lacks a piece of the original EH action.
\end{itemize}
After this systematic study of the UR limit of the EH action, we proceeded with the UR limit of the MZ action \eqref{mz1}. Our results are listed as follows:
\begin{itemize}
\item Carroll-Cartan gravity \eqref{cmz1} was obtained from the UR limit of the MZ action \eqref{mz1}.
The corresponding field equations were displayed in \eqref{feqmz1}.
\item The global Weyl symmetry \eqref{weyl1} is no longer present. However, it can be restored by extending the scale transformations to the parameters $z_1$ and $z_2$ by means of \eqref{zs1}.
\item Since we expect the matter content to couple to gravity in the same way in both theories (EH and LC), we consider that the matter action remains in the form \eqref{sm0}.
\item Birkhoff's theorem is valid and was straightforwardly verified.
\item An almost general solution in the presence of matter was developed, see \eqref{RT1}, \eqref{S1}, and \eqref{Q2}. This solution generalizes the Carroll case obtained in \eqref{RS1}, \eqref{T1}, and \eqref{Q1}. In fact, the solutions for the space curvature and the space torsion are the same. The time torsion and the boost curvature, however, receive contributions from the extra terms in the action \eqref{cmz1}.
\item The existence of Carroll-Riemann manifolds (vanishing torsions and non-trivial curvatures) in the presence of matter was verified. The corresponding curvatures are proportional to $M^a$, see \eqref{RS3}. The Bianchi identities imply the constraints \eqref{constr00}, \eqref{constr01}, and \eqref{constr04}.
\item For Carroll-Weitzenb\"ock manifolds (vanishing curvatures and non-trivial torsions), we found that the space torsion also vanishes and the only non-trivial field strength is the time torsion. In fact, the solution is exactly the same as the one found in Carroll gravity, \emph{i.e.}, vanishing connections and vierbeins given in \eqref{e1} and \eqref{q1}.
\end{itemize}
\section*{Acknowledgements}
This study was financed by The Coordena\c c\~ao de Aperfei\c coamento de Pessoal de N\'ivel Superior - Brasil (CAPES) - Finance Code 001.
\bibliographystyle{utphys2}
\newcommand{\head}[1]{\subsubsection*{#1}} \pagestyle{headings} \markright{Reference sheet: \texttt{natbib}} \usepackage{shortvrb} \MakeShortVerb{\|} \begin{document} \thispagestyle{plain} \newcommand{\BibTeX}{\textsc{Bib}\TeX} \begin{center}{\bfseries\Large Reference sheet for \texttt{natbib}\ usage}\\ \large(Describing version \fileversion\ from \filedate) \end{center} \begin{quote}\slshape For a more detailed description of the \texttt{natbib}\ package, \LaTeX\ the source file \texttt{natbib.dtx}. \end{quote} \head{Overview} The \texttt{natbib}\ package is a reimplementation of the \LaTeX\ |\cite| command, to work with both author--year and numerical citations. It is compatible with the standard bibliographic style files, such as \texttt{plain.bst}, as well as with those for \texttt{harvard}, \texttt{apalike}, \texttt{chicago}, \texttt{astron}, \texttt{authordate}, and of course \texttt{natbib}. \head{Loading} Load with |\usepackage[|\emph{options}|]{natbib}|. See the list of \emph{options} at the end. \head{Replacement bibliography styles} I provide three new \texttt{.bst} files to replace the standard \LaTeX\ numerical ones: \begin{quote}\ttfamily plainnat.bst \qquad abbrvnat.bst \qquad unsrtnat.bst \end{quote} \head{Basic commands} The \texttt{natbib}\ package has two basic citation commands, |\citet| and |\citep| for \emph{textual} and \emph{parenthetical} citations, respectively. There also exist the starred versions |\citet*| and |\citep*| that print the full author list, and not just the abbreviated one. All of these may take one or two optional arguments to add some text before and after the citation. \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citet{jon90}| & Jones et al. (1990)\\ |\citet[chap.~2]{jon90}| & Jones et al. (1990, chap.~2)\\[0.5ex] |\citep{jon90}| & (Jones et al., 1990)\\ |\citep[chap.~2]{jon90}| & (Jones et al., 1990, chap.~2)\\ |\citep[see][]{jon90}| & (see Jones et al., 1990)\\ |\citep[see][chap.~2]{jon90}| & (see Jones et al., 1990, chap.~2)\\[0.5ex] |\citet*{jon90}| & Jones, Baker, and Williams (1990)\\ |\citep*{jon90}| & (Jones, Baker, and Williams, 1990) \end{tabular} \end{quote} \head{Multiple citations} Multiple citations may be made by including more than one citation key in the |\cite| command argument. \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citet{jon90,jam91}| & Jones et al. (1990); James et al. (1991)\\ |\citep{jon90,jam91}| & (Jones et al., 1990; James et al. 1991)\\ |\citep{jon90,jon91}| & (Jones et al., 1990, 1991)\\ |\citep{jon90a,jon90b}| & (Jones et al., 1990a,b) \end{tabular} \end{quote} \head{Numerical mode} These examples are for author--year citation mode. In numerical mode, the results are different. \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citet{jon90}| & Jones et al. [21]\\ |\citet[chap.~2]{jon90}| & Jones et al.
[21, chap.~2]\\[0.5ex] |\citep{jon90}| & [21]\\ |\citep[chap.~2]{jon90}| & [21, chap.~2]\\ |\citep[see][]{jon90}| & [see 21]\\ |\citep[see][chap.~2]{jon90}| & [see 21, chap.~2]\\[0.5ex] |\citep{jon90a,jon90b}| & [21, 32] \end{tabular} \end{quote} \head{Suppressed parentheses} As an alternative form of citation, |\citealt| is the same as |\citet| but \emph{without parentheses}. Similarly, |\citealp| is |\citep| without parentheses. Multiple references, notes, and the starred variants also exist. \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citealt{jon90}| & Jones et al.\ 1990\\ |\citealt*{jon90}| & Jones, Baker, and Williams 1990\\ |\citealp{jon90}| & Jones et al., 1990\\ |\citealp*{jon90}| & Jones, Baker, and Williams, 1990\\ |\citealp{jon90,jam91}| & Jones et al., 1990; James et al., 1991\\ |\citealp[pg.~32]{jon90}| & Jones et al., 1990, pg.~32\\ |\citetext{priv.\ comm.}| & (priv.\ comm.) \end{tabular} \end{quote} The |\citetext| command allows arbitrary text to be placed in the current citation parentheses. This may be used in combination with |\citealp|. \head{Partial citations} In author--year schemes, it is sometimes desirable to be able to refer to the authors without the year, or vice versa. This is provided with the extra commands \begin{quote} \begin{tabular}{l@{\quad$\Rightarrow$\quad}l} |\citeauthor{jon90}| & Jones et al.\\ |\citeauthor*{jon90}| & Jones, Baker, and Williams\\ |\citeyear{jon90}| & 1990\\ |\citeyearpar{jon90}| & (1990) \end{tabular} \end{quote} \head{Forcing upper cased names} If the first author's name contains a \textsl{von} part, such as ``della Robbia'', then |\citet{dRob98}| produces ``della Robbia (1998)'', even at the beginning of a sentence. One can force the first letter to be in upper case with the command |\Citet| instead. Other upper case commands also exist. \begin{quote} \begin{tabular}{rl@{\quad$\Rightarrow$\quad}l} when & |\citet{dRob98}| & della Robbia (1998) \\ then & |\Citet{dRob98}| & Della Robbia (1998) \\ & |\Citep{dRob98}| & (Della Robbia, 1998) \\ & |\Citealt{dRob98}| & Della Robbia 1998 \\ & |\Citealp{dRob98}| & Della Robbia, 1998 \\ & |\Citeauthor{dRob98}| & Della Robbia \end{tabular} \end{quote} These commands also exist in starred versions for full author names. \head{Citation aliasing} Sometimes one wants to refer to a reference with a special designation, rather than by the authors, i.e. as Paper~I, Paper~II. Such aliases can be defined and used, textual and/or parenthetical with: \begin{quote} \begin{tabular}{lcl} |\defcitealias{jon90}{Paper~I}|\\ |\citetalias{jon90}| & $\Rightarrow$ & Paper~I\\ |\citepalias{jon90}| & $\Rightarrow$ & (Paper~I) \end{tabular} \end{quote} These citation commands function much like |\citet| and |\citep|: they may take multiple keys in the argument, may contain notes, and are marked as hyperlinks. 
\head{Selecting citation style and punctuation} Use the command |\bibpunct| with one optional and 6 mandatory arguments: \begin{enumerate} \item the opening bracket symbol, default = ( \item the closing bracket symbol, default = ) \item the punctuation between multiple citations, default = ; \item the letter `n' for numerical style, or `s' for numerical superscript style, any other letter for author--year, default = author--year; \item the punctuation that comes between the author names and the year \item the punctuation that comes between years or numbers when common author lists are suppressed (default = ,); \end{enumerate} The optional argument is the character preceding a post-note, default is a comma plus space. In redefining this character, one must include a space if one is wanted. Example~1, |\bibpunct{[}{]}{,}{a}{}{;}| changes the output of \begin{quote} |\citep{jon90,jon91,jam92}| \end{quote} into [Jones et al. 1990; 1991, James et al. 1992]. Example~2, |\bibpunct[; ]{(}{)}{,}{a}{}{;}| changes the output of \begin{quote} |\citep[and references therein]{jon90}| \end{quote} into (Jones et al. 1990; and references therein). \head{Other formatting options} Redefine |\bibsection| to the desired sectioning command for introducing the list of references. This is normally |\section*| or |\chapter*|. Define |\bibpreamble| to be any text that is to be printed after the heading but before the actual list of references. Define |\bibfont| to be a font declaration, e.g.\ |\small| to apply to the list of references. Define |\citenumfont| to be a font declaration or command like |\itshape| or |\textit|. Redefine |\bibnumfmt| as a command with an argument to format the numbers in the list of references. The default definition is |[#1]|. The indentation after the first line of each reference is given by |\bibhang|; change this with the |\setlength| command. The vertical spacing between references is set by |\bibsep|; change this with the |\setlength| command. \head{Automatic indexing of citations} If one wishes to have the citations entered in the \texttt{.idx} indexing file, it is only necessary to issue |\citeindextrue| at any point in the document. All following |\cite| commands, of all variations, then insert the corresponding entry to that file. With |\citeindexfalse|, these entries will no longer be made. \head{Use with \texttt{chapterbib} package} The \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}\ package is compatible with the \texttt{chapterbib} package which makes it possible to have several bibliographies in one document. The package makes use of the |\include| command, and each |\include|d file has its own bibliography. The order in which the \texttt{chapterbib} and \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}\ packages are loaded is unimportant. The \texttt{chapterbib} package provides an option \texttt{sectionbib} that puts the bibliography in a |\section*| instead of |\chapter*|, something that makes sense if there is a bibliography in each chapter. This option will not work when \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}\ is also loaded; instead, add the option to \texttt{#1}\def\filedate{#2}\def\fileversion{#3}}. Every |\include|d file must contain its own |\bibliography| command where the bibliography is to appear. The database files listed as arguments to this command can be different in each file, of course. However, what is not so obvious, is that each file must also contain a |\bibliographystyle| command, \emph{preferably with the same style argument}. 
\section{Introduction}
\label{sec:intro}

Over the last couple of decades, the successful completion of galaxy redshift surveys (e.g., the Two Degree Field Galaxy Redshift Survey, 2dFGRS, \citealt{2003astro.ph..6581C}; the Sloan Digital Sky Survey, SDSS, \citealt{2000AJ....120.1579Y}; the Baryon Oscillation Spectroscopic Survey, BOSS, \citealt{2011AJ....142...72E}; the VIMOS Public Extragalactic Redshift Survey, VIPERS, \citealt{2012PASP..124.1232G}) has enabled significant progress in our understanding of galaxy formation and evolution \citep{2003MNRAS.344..847M,2006ApJS..167....1B,2011MNRAS.413..101G,2018ApJ...858...30G, 2021MNRAS.505.5117Z}, the galaxy--halo connection \citep{1998ApJ...494....1J, 2003MNRAS.339.1057Y, 2008ApJ...676..248Y,2012ApJ...752...41Y,2005ApJ...633..791Z,2009ApJ...707..554Z, 2004MNRAS.353..189V, 2021MNRAS.504.4667A,2018ARA&A..56..435W,2019MNRAS.488.3143B}, and the nature of gravity and dark energy (\citealt{2001Natur.410..169P,2013PhR...530...87W,2013MNRAS.429.1514S, 2021PhRvD.103h3533A} and references therein).
In the upcoming years, next-generation surveys, such as the Dark Energy Spectroscopic Instrument (DESI; \citealt{2013arXiv1308.0847L, 2016arXiv161100036D,2016arXiv161100037D}), the Legacy Survey of Space and Time (LSST; \citealt{2012arXiv1211.0310L}), the space mission Euclid \citep{2013LRR....16....6A} and CSST \citep{2018MNRAS.480.2178C,2019ApJ...883..203G}, will map the 3D galaxy distribution over an unprecedented volume, yielding about an order of magnitude more extragalactic spectroscopic redshifts than SDSS, BOSS, and eBOSS have achieved \citep{2021arXiv210613120Z,2022arXiv220212911Y,2022arXiv220808518M,2022arXiv220903585S}. Massive amounts of data from deeper in the sky will provide new insights into the physics of galaxy formation, as well as the nature of dark matter and dark energy \citep{2022arXiv220808512H}. Galaxy two-point statistics, being among the most fundamental tools, will continue to play a crucial role in future data analysis \citep{2022arXiv220307491V,2022arXiv220307946A}, as they have in the past \citep{2011ApJ...736...59Z,2013MNRAS.432..743N,2014ApJ...784..128S, 2014MNRAS.439.3504S,2014MNRAS.441.2398G,2016A&A...594A..13P,2018ApJ...861..137S}.

Owing to various systematics, it remains difficult to reliably measure small-scale property-dependent galaxy clustering. These systematics include redshift-dependent completeness, galaxies missing from observations \citep{2016MNRAS.455.1553R,2017MNRAS.472.1106B, 2020MNRAS.495.1511B}, and the incorrect estimation of the radial selection model \citep{2012MNRAS.424..564R,2020RAA....20...54Y}, among others \citep{2021A&A...646A..40B,2021MNRAS.507.3187F,2021MNRAS.506.2503M}. Fortunately, the coming flood of data will considerably reduce random errors in clustering measurements, but to reach the high accuracy required by the next-generation surveys, we must also eliminate systematic errors in the measurements \citep{2014MNRAS.443.1065B,2016MNRAS.455.1553R,2021MNRAS.503.3510G, 2021MNRAS.506.4667D}. In this study, the systematic bias produced by the radial selection model is investigated in greater detail.

To measure the galaxy two-point correlation function (hereafter 2PCF), we must build a random catalog with the same angular and radial selection functions as the observed sample, but with a random distribution in the observed space \citep{1983ApJ...267..465D,1993ApJ...417...19H}. The angular selection function is easy to obtain from observation, but the radial selection function is difficult to estimate accurately. Because a volume-limited sample has a fixed number density, the redshift distribution of its random catalog is straightforward to construct \citep{2004PhRvD..69j3501T}, and previous works therefore often used volume-limited samples for clustering analysis \citep{2002MNRAS.332..827N,2002ApJ...571..172Z, 2005ApJ...630....1Z,2011ApJ...736...59Z,2011ApJ...726...13M,2016ApJ...833..241S, 2018A&A...610A..59M}. However, because a substantial number of galaxies must be excluded, the statistical precision of the clustering measurement is reduced \citep{2005ApJ...630....1Z,2016MNRAS.460.3647X}. Alternatively, a flux-limited sample makes optimal use of the observed galaxies, but since its radial selection function $\phi(z)$ changes with redshift, it is not easy to assign redshifts to random galaxies for a flux-limited sample unless we know the galaxy luminosity function (hereafter LF) $\Phi(M_{\rm r})$ \citep{2015MNRAS.451.1540L,2021arXiv210906136K}.
The radial selection function of a flux-limited sample has been recovered in a number of ways. For instance, the smooth spline fit approach fits a `spline' model to the redshift distribution of a galaxy sample \citep{2010MNRAS.404...60R,2017MNRAS.472.2869W}. The ${V_{\rm max}}$\ method populates random galaxies within the maximum viewable volume of each real galaxy, which depends on the galaxy's observational limits. The redshift `shuffled' technique is a commonly employed alternative \citep{2013ApJ...767..122G,2015MNRAS.454.1161Z,2021SCPMA..6489811W}; it draws redshifts at random from the real galaxy sample and assigns them to the random galaxy catalog. Through clustering analysis of the VIPERS data, \cite{2013A&A...557A..54D} showed that the spline fit approach underestimates the predicted 2PCF in comparison to the ${V_{\rm max}}$\ method, particularly on scales larger than $3~{h^{-1}\rm Mpc}$. In the BOSS systematics investigation, \cite{2012MNRAS.424..564R} revealed that the shuffled technique has a minor bias in BAO measurement compared to the spline fit method \citep{2015MNRAS.449..835R}. However, \cite{2019JCAP...08..036D} demonstrated that the shuffled approach suffers from the `integral constraint' effect when measuring the power spectrum. Using mocks from a high-resolution simulation, \citet{2020RAA....20...54Y} (hereafter \citetalias{2020RAA....20...54Y}) found that the redshift shuffled technique and the ${V_{\rm max}}$\ method underestimate galaxy clustering by 30\% and 20\%, respectively, on scales $\gtrsim 10{h^{-1}\rm Mpc}$ for flux-limited samples. Consequently, as long as we continue to use the aforementioned radial selection methods to construct the redshifts of random catalogs for a flux-limited sample, our clustering measurements will contain an unavoidable systematic deviation from the true galaxy clustering.

\cite{2011MNRAS.416..739C} proposed a density-corrected ${V_{\rm max}}$\ technique for concurrently estimating the LF and generating a random catalog for a flux-limited sample. Unlike the conventional ${V_{\rm max}}$\ method, this technique can successfully eliminate density fluctuations. \cite{2011MNRAS.416..739C} showed that the radial distribution of the resulting random galaxies is in excellent agreement with that of the input galaxy sample. This method has been employed to measure property-dependent galaxy clustering \citep{2015MNRAS.454.2120F} and in other clustering analyses \citep{2017A&A...608A..44D,2017A&A...604A..33P, 2018MNRAS.474.3435L,2021A&A...646A.147J}. However, its performance in clustering measurement has not been assessed. The purpose of this study is to test the \cite{2011MNRAS.416..739C} technique for clustering measurements using mock data. In addition, some modifications are made to the original approach in order to improve its measurement accuracy.

This paper is structured as follows. In Section~\ref{sec:method}, we review the \cite{2011MNRAS.416..739C} method and introduce the smoothed density-corrected ${V_{\rm max}}$\ method. The construction of the mock galaxy catalogs is detailed in Section~\ref{sec:mocks}. We present the test results for the correlation functions in Section~\ref{sec:tests}. In Section~\ref{sec:disc}, we assess the smoothed density-corrected ${V_{\rm max}}$\ method and discuss the potential sources of uncertainty in the estimates. We conclude the paper in Section~\ref{sec:concls}.
\section{The smoothed density-corrected ${V_{\rm max}}$\ method}
\label{sec:method}

To address the difficulty of recovering the radial selection function of a property-dependent galaxy sample, \cite{2011MNRAS.416..739C} developed a density-corrected ${V_{\rm max}}$\ approach for galaxy clustering estimation. This section starts with a brief overview of the \cite{2011MNRAS.416..739C} technique. Following that, we detail our improvements to the original \cite{2011MNRAS.416..739C} methodology, which we call the smoothed density-corrected ${V_{\rm max}}$\ method.

\subsection{The \cite{2011MNRAS.416..739C} method}
\label{sec:colemethod}

Building on the standard ${V_{\rm max}}$\ approach, \cite{2011MNRAS.416..739C} presented a weighted ${V_{\rm max}}$\ method based on a joint stepwise maximum likelihood method, which effectively eliminates the influence of density fluctuations. In this method, a density-weighted maximum volume ${V^{\rm DC}_{\rm max}}$\ \footnote{See equations (11) and (16) in \cite{2011MNRAS.416..739C}.} is defined, which is the normal ${V_{\rm max}}$\ weighted by the estimated galaxy overdensities $\Delta(z)$ and the LF density evolution $P(z)$. They further define a weight
\begin{equation}
w_{\alpha}\equiv \frac{V_{\alpha, \rm max}}{V^{\rm DC}_{\alpha,\rm max} + \mu V_{\alpha,\rm max}},
\end{equation}
where $V_{\alpha, \rm max}$ and $V^{\rm DC}_{\alpha, \rm max}$ are the normal ${V_{\rm max}}$\ and the density-corrected ${V_{\rm max}}$\ for the $\alpha$th galaxy in the observed sample, and $\mu$ is a Lagrange multiplier enforcing the constraint $\langle \frac{V_{\alpha, \rm max}}{V^{\rm DC}_{\alpha,\rm max}+\mu V_{\alpha,\rm max}}\rangle =1$ when estimating the LF of the galaxy sample. Lastly, a random catalog is created by replicating each galaxy $n_{\alpha}=nw_{\alpha}$ times and distributing the clones at random across the $V_{\alpha, \rm max}$ volume. Note that, unlike in the standard ${V_{\rm max}}$\ approach, $n_{\alpha}$ is no longer the same for all galaxies: the cloning rate is modulated by the weight $w_{\alpha}$. Galaxies of a given brightness may be over- or under-represented in the observed sample as a result of density variations within the ${V_{\rm max}}$\ volume, and this effect is appropriately compensated by the weight $w_{\alpha}$. By comparing the output redshift distribution to that of the input galaxy sample, \cite{2011MNRAS.416..739C} proved that the random catalog created by this density-weighted ${V_{\rm max}}$\ technique can recover the genuine galaxy selection function. However, this approach has not yet been tested on galaxy clustering measurements, so it remains to be validated with mock galaxy catalogs.

\subsection{The smoothed density-corrected ${V_{\rm max}}$\ method}
\label{sec:sdcVmethod}

Before testing the \cite{2011MNRAS.416..739C} method, we make three modifications to the original public code~\footnote{\url{http://astro.dur.ac.uk/~cole/random_cats/}}. The original algorithm is only applicable to galaxy samples with a single faint flux cut; by adding a $z_{\rm min}$ estimate, our first update makes the code applicable to a generic double flux-cut sample \footnote{This modification primarily changes the step function $S$ from $S(L^{\rm min} |L)$ to $S(L^{\rm min}, L^{\rm max} | L)$ in equation (5) and the lower limit of the ${V_{\rm max}}$\ integration in equations (11) and (39) of \cite{2011MNRAS.416..739C}.}.
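Before turning to the remaining modifications, the cloning step described above can be illustrated with a minimal Python sketch. This is not the \cite{2011MNRAS.416..739C} code itself: the array names are hypothetical, the quantities $V_{\alpha,\rm max}$, $V^{\rm DC}_{\alpha,\rm max}$, and $\mu$ are assumed to be precomputed, and a full-sky geometry with $z_{\rm min}=0$ is assumed for simplicity.
\begin{verbatim}
import numpy as np

def clone_randoms(v_max, v_dcmax, mu, n_mean, rng=None):
    """Sketch of the density-corrected cloning step (equation 1).

    v_max, v_dcmax : per-galaxy V_max and V^DC_max (hypothetical, precomputed)
    mu             : Lagrange multiplier from the LF estimate
    n_mean         : mean number of clones per galaxy, n
    Returns comoving distances of the cloned random galaxies.
    """
    rng = rng or np.random.default_rng()
    w = v_max / (v_dcmax + mu * v_max)        # weight w_alpha of equation (1)
    n_alpha = rng.poisson(n_mean * w)         # n_alpha = n * w_alpha clones
    # Full-sky V_max implies a maximum comoving distance d_max per galaxy.
    d_max = (3.0 * v_max / (4.0 * np.pi)) ** (1.0 / 3.0)
    d_rand = []
    for dm, n_i in zip(d_max, n_alpha):
        u = rng.uniform(size=n_i)
        d_rand.append(dm * u ** (1.0 / 3.0))  # uniform in volume: p(d) ~ d^2
    return np.concatenate(d_rand)
\end{verbatim}
In the actual method, the clone distances are further restricted to the range set by the double flux cuts, and redshifts are obtained by inverting the distance--redshift relation.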
The maximum (minimum) redshifts $z_{\rm max (min)}$ in our updated code are the same as in \citetalias{2020RAA....20...54Y} and are determined as follows:
\begin{eqnarray}\label{eq:zz1}
z_{\rm max}&=&\mathtt{min}[z_{\rm mag, max},~z_{\rm sample, max}],\\
z_{\rm min}&=&\mathtt{max}[z_{\rm mag, min},~z_{\rm sample, min}],
\label{eq:zz2}
\end{eqnarray}
where $z_{\rm sample, max(min)}$ are the redshift limits of the galaxy sample, and $z_{\rm mag, max(min)}$ is derived from
\begin{eqnarray}\label{eq:mm1}
m_{\rm faint}&=&M+DM(z_{\rm mag, max})+k(z)-E(z),\\
m_{\rm bright}&=&M+DM(z_{\rm mag, min})+k(z)-E(z),
\label{eq:mm2}
\end{eqnarray}
where the flux limits are set by the apparent magnitudes $m_{\rm bright(faint)}$, $M$ is the absolute magnitude, the distance modulus is $DM=5{\rm log}_{10}(d_{\rm L})+25-5{\rm log}_{10}h$, $k(z)$ is the $k$-correction, and $E(z)$ is the luminosity evolution correction ($e$-correction).

Our second improvement concerns the $k$-correction. In the original code, the $k$-correction is applied to all galaxies using a single input function $k(z)$, which prevents the method from being applied to a real galaxy sample whose $k$-correction depends not just on redshift but also on galaxy properties (e.g., color). We modify the code to take a $k(z,{\rm color})$ model as input, allowing the $k$-correction to be computed for individual galaxies based on their redshifts and colors. This makes the technique more applicable to observed galaxy samples.

Following the aforementioned modifications, the cloned random catalog output by the updated algorithm is basically consistent with the genuine radial distribution of the galaxy number density $n_{\rm true}(z)$. However, there are small fluctuations in the output radial distribution that have a considerable influence on the final clustering estimate. Our final modification to the algorithm is therefore to smooth the radial distribution of the output cloned random galaxies. In the smoothing procedure, we begin by generating a histogram of comoving distance $d$ for the random galaxies, with a bin size of $\Delta d=5 {h^{-1}\rm Mpc}$; $N_{\rm hist}(d)$ denotes the number of random galaxies in each bin. Secondly, we smooth the histogram with a boxcar convolution, $N^{\rm s}_{\rm hist}=[N_{\rm hist}\ast \Delta_{\rm smooth}]$, where $\Delta_{\rm smooth}=5$ is the 1D smoothing box size in bins and $N^{\rm s}_{\rm hist}$ is the smoothed radial distribution of the random galaxies. Final redshifts for the random galaxies are then drawn from the smoothed profile $N^{\rm s}_{\rm hist}$. In Section~\ref{sec:comparison}, we will show that these modifications enhance the clustering measurement accuracy significantly.

\cite{2015MNRAS.454.2120F} recently applied the \cite{2011MNRAS.416..739C} technique to quantify the property-dependent galaxy clustering of GAMA II data \citep{2011MNRAS.413..971D,2015MNRAS.452.2087L}. They found that the \cite{2011MNRAS.416..739C} technique yields a redshift distribution that is too broad for the cloned random galaxies, which may be the result of luminosity evolution. To mitigate this unanticipated effect, \cite{2015MNRAS.454.2120F} introduced a Gaussian window function to restrict the redshift distribution of the cloned galaxies. We do not adopt such a window function here, for two reasons. First, the mock galaxy catalogs that we construct in this study resemble the low-redshift SDSS data, whereas the GAMA data encompass a relatively broad redshift range of 0$\sim$0.5; in our mock galaxies, luminosity evolution is expected to have negligible effects.
Second, our adjustment to the $z_{\rm min}$ calculation already narrows the distribution of the cloned random galaxies. Our test results in Section~\ref{sec:comparison} will demonstrate that the smoothed density-corrected ${V_{\rm max}}$\ approach is adequate for obtaining accurate galaxy clustering measurements.

\section{The mock galaxy catalogs}
\label{sec:mocks}

In this section, we describe the construction of the mock galaxy catalogs used for a robust test of the smoothed density-corrected ${V_{\rm max}}$\ approach to clustering estimation. We build two sets of mock samples, one with simple $k+e$-corrections and the other with complex $k+e$-corrections for the galaxies.

The first group of mock galaxy catalogs is created in a manner similar to \citetalias{2020RAA....20...54Y}. For the halo catalog, we adopt the $\rm WMAP\_3072\_600$ cosmological $N$-body simulation from the CosmicGrowth simulation suite \citep{2019SCPMA..6219511J}. This simulation starts at redshift 144 with $3072^3$ particles evolving in a $600~{h^{-1}\rm Mpc}$ cubic box. The simulation assumes a standard flat $\Lambda \rm CDM$ cosmology with $\{\Omega_m=0.268,~\Omega_b=0.045, ~\sigma_8=0.83,~n_s=0.968\}$ and $h=H_0/(100\,{\rm km}\,\mathrm{s}^{-1}\,\mathrm{Mpc}^{-1})=0.71$, compatible with the Nine-Year Wilkinson Microwave Anisotropy Probe (WMAP 9) observations \citep{2013ApJS..208...19H,2013ApJS..208...20B}. The mass resolution is $5.54 \times 10^8~h^{-1}\rm M_{\sun}$. To identify halos in each output snapshot, the friends-of-friends technique is used with a linking length of 0.2 in units of the mean particle separation \citep{1985ApJ...292..371D}, and the Hierarchical Bound-Tracing technique is used to find subhalos and their merger histories. In this study, the snapshot at $z=0$ is used to build the halo catalog, and each halo contains at least 50 particles. The ``orphan'' halos are also retained in the catalog \footnote{During the evolution, some subhalos fall below the resolution limit due to tidal stripping. We keep subhalos whose time since infall is shorter than the merger timescale; such subhalos have not yet merged into the core of the host halo and host the ``orphan'' galaxies.} \citep{2019ApJ...872...26Y}.

We use the subhalo abundance matching (SHAM) method to establish the connection between galaxies and subhalos. Based on the galaxy absolute magnitude $M^{0.1}_{\rm r}$ and the peak mass $M_{\rm peak}$ of the subhalos, a monotonic relationship is constructed by matching the cumulative number densities $n(<M^{0.1}_{\rm r})=n(>M_{\rm peak})$ \citep{2006ApJ...647..201C,2014MNRAS.444..729H, 2018ARA&A..56..435W,2021MNRAS.508..175C}. We employ the luminosity function of the SDSS DR7 ${\tt full\_1}$\ sample of the New York University Value-Added Galaxy Catalog (NYU-VAGC)\footnote{$\tt lfvmax-q2.00a-1.00.dr72full1.fits$.} \citep{2001AJ....121.2358B,2003ApJ...592..819B,2005AJ....129.2562B}, for which the $r$-band absolute magnitude $M^{0.1}_{\rm r}$ of the galaxies has been $k$- and $e$-corrected to $z=0.1$. $M_{\rm peak}$ is the maximum mass ever attained by a subhalo over its entire evolutionary history. Once a subhalo has been matched to a galaxy, its position and velocity are assigned to the galaxy. By periodically rotating and stacking the mock box, we generate 60 mock galaxy catalogs from the parent catalog. A random position is assigned to the observer in each mock, and the observed redshift $z_{\rm obs}$ is determined by the galaxy's position and velocity relative to the observer.
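The abundance matching step above can be made concrete with a short sketch. The following is a minimal rank-order SHAM without scatter, under the assumptions stated in the comments; the input arrays, including the tabulated cumulative LF, are hypothetical placeholders.
\begin{verbatim}
import numpy as np

def sham_assign_magnitudes(m_peak, box_volume, mag_grid, n_lf_cum):
    """Minimal rank-order SHAM: match n(<M_r) = n(>M_peak), no scatter.

    m_peak     : subhalo peak masses (hypothetical input)
    box_volume : simulation volume in (h^-1 Mpc)^3
    mag_grid   : absolute magnitudes M_r, ordered bright to faint
    n_lf_cum   : cumulative LF n(<M_r) on mag_grid, increasing along
                 the grid (e.g., tabulated from the SDSS DR7 LF)
    """
    ranks = np.empty(len(m_peak), dtype=int)
    ranks[np.argsort(m_peak)[::-1]] = np.arange(len(m_peak))  # 0 = most massive
    n_cum = (ranks + 1) / box_volume   # n(>M_peak) for each subhalo
    # Invert the cumulative LF: the magnitude with matching number density.
    return np.interp(n_cum, n_lf_cum, mag_grid)
\end{verbatim}
The brightest galaxies are thereby assigned to the subhalos with the largest $M_{\rm peak}$; in practice, scatter in the luminosity--$M_{\rm peak}$ relation can be introduced before rank ordering.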
To obtain the apparent magnitude $m_{\rm r}$, the $k$-correction and $e$-correction in equations~\eqref{eq:mm1} and \eqref{eq:mm2} must be provided. For real data, these values are determined by fitting the observed galaxy fluxes to a library of synthetic spectral models, which is generally inapplicable to mock galaxies and is also beyond the scope of this work. For simplicity, we consider two simple $k$- and $e$-correction cases here. In the first case, no $k+e$ corrections are applied to the mock galaxies. In the second case, we assume that all galaxies follow a simple $k$- and $e$-correction model. For the $k$-correction, we take the model of \cite{2017MNRAS.470.4646S}:
\begin{equation}\label{eq: kcor}
k^{0.1}(z)=\sum^{4}_{i=0} A_i(z-0.1)^{4-i}.
\end{equation}
\cite{2017MNRAS.470.4646S} fit this fourth-order polynomial to individual GAMA galaxies, where the $A_i$ are the fitting coefficients \citep{2014MNRAS.445.2125M}. There are seven color-dependent $k(z)$ models (see below), and we adopt the $(g-r)^{0.1}_{\rm med} =0.603$ model with the fitting coefficients $A_0=-3.428$, $A_1=9.478$, $A_2=-2.703$, and $A_3=0.7646$. For the $e$-correction, we use the SDSS model \citep{2006ApJ...648..268B}:
\begin{equation}\label{eq:ecor}
E(z)=q_0[1+q_1(z-z_0)](z-z_0),
\end{equation}
where $z_0=0.1$ is the zero-point redshift of the evolution correction, $q_0=2$ is the magnitude evolution per unit redshift, and $q_1=-1$ is the nonlinear parameter of the redshift evolution.

After applying the $k$- and $e$-corrections to the mock galaxies, our final samples are constructed as follows. For each mock catalog in each $k+e$ correction case, we first generate a flux-limited sample with flux cuts at $m_{\rm r} = [15, 17]$ and a sky coverage of $\sim 5950~\rm deg^2$. The flux-limited catalog is then divided into two luminosity-dependent samples, named LC1 with $M^{0.1}_{\rm r} = [-19, -22]$ and LC2 with $M^{0.1}_{\rm r} = [-20, -23]$. With these selection criteria, the number density of a galaxy sample changes as a function of redshift. Figure~\ref{fig:nd_mocks} in Appendix~\ref{sec:append_nd} displays the average number density $\overline{n}(z)$ of the 60 samples for the two luminosity cuts in each $k+e$ correction case. This redshift-dependent number density typically prevents us from obtaining an accurate measurement of galaxy clustering, particularly at scales $\le 30{h^{-1}\rm Mpc}$, for flux-limited samples \citep{2022arXiv221102068Y}. In the following, the above mock samples generated from the simulation of \cite{2019SCPMA..6219511J} are referred to as LC samples.

The second group of mock galaxy catalogs is built from the lightcone catalog of \cite{2017MNRAS.470.4646S}\footnote{\url{http://icc.dur.ac.uk/data/}}. It is essential to assess the radial selection model using a catalog of galaxies that closely resembles the observed galaxies. The \cite{2017MNRAS.470.4646S} catalog is constructed from the MXXL simulation \citep{2012MNRAS.426.2046A}, which assumes a $\Lambda$CDM cosmology with WMAP1 parameters $\{\Omega_m=0.25, ~\sigma_8=0.9,~n_s=0.968, h=0.73\}$ in a $3h^{-1}\rm Gpc$ box, with a particle mass of $6.17\times 10^9 h^{-1}\rm M_{\sun}$. \cite{2017MNRAS.470.4646S} created the lightcone catalog by applying the halo occupation distribution method to link galaxies to subhalos. To assign colors to the galaxies, they utilize an enhanced redshift-dependent \cite{2009MNRAS.392.1080S} model.
The galaxy $k+e$ corrections in their lightcone catalog are more complicated than the ones we use for the LC samples. For the $k$-corrections, they employ color-dependent $k$-corrections obtained from the GAMA survey. In brief, they estimate the $k$-corrections of individual galaxies in the GAMA data by fitting equation~\eqref{eq: kcor}, and they determine the median $k$-correction in seven evenly spaced color bins to construct seven $k$-correction models. These models correspond to $(g-r)^{0.1}_{\rm med} =0.131,0.298,0.443,0.603,0.785,0.933,1.067$, each with different polynomial coefficients. The $k(z,{\rm color})$ of the lightcone catalog is then interpolated among the seven median-color $(g-r)^{0.1}_{\rm med}$ models based on each galaxy's color and redshift \footnote{For details see Section~4.3 in \cite{2017MNRAS.470.4646S}.}. For the LF evolution, they employ the evolving Schechter function derived from the GAMA data. In the low-redshift region $z \lesssim 0.13$, the LF of their catalog coincides with the LF of \cite{2003ApJ...592..819B}, which we employ for the LC samples, and at intermediate redshifts the LF evolves toward the GAMA LF. The luminosity- and color-dependent galaxy clustering in the \cite{2017MNRAS.470.4646S} catalog is generally consistent with the SDSS DR7 results measured by \cite{2011ApJ...736...59Z} at low redshift, as well as with the GAMA results measured by \cite{2015MNRAS.454.2120F} at intermediate redshift. This catalog is therefore suitable for testing different radial selection models for property-dependent clustering measurements.

We construct ten flux-limited samples from the full-sky lightcone catalog by rotating the sky, using the same galaxy selection criteria ($m_{\rm r} = [15, 17]$) and sky coverage ($\sim 5950~\rm deg^2$). Two luminosity-dependent samples, LS1 ($M^{0.1}_{\rm r} = [-19, -22]$) and LS2 ($M^{0.1}_{\rm r} = [-20, -23]$), are generated from each flux-limited sample, just as for the LC samples. As our sample selection resembles the SDSS DR7 data, we further divide each luminosity-dependent sample into a blue subsample and a red subsample using the color cut $(g-r)^{0.1}_{\rm cut}=0.21-0.03M^{0.1}_{\rm r}$ of \cite{2011ApJ...736...59Z}. In the rest of this study, we refer to the mock galaxy samples built from the \cite{2017MNRAS.470.4646S} catalog as LS samples.

In summary, we generate two sets of mock samples from two simulations using the same galaxy selection criteria. For the LC samples, flux-limited samples are constructed from sixty mocks with two absolute magnitude cuts, and two $k+e$ correction cases are considered: (1) no $k+e$ corrections; (2) all galaxies follow a simple $k+e$ correction model. Ten LS samples are created in the same manner as the LC samples, but from a public lightcone catalog; the LS samples, however, feature a color-dependent $k$-correction and a complex $e$-correction that are not known to us in closed form. To examine color-dependent clustering, the luminosity-dependent LS samples are split into blue and red subsamples. We emphasize that neither the LC samples nor the LS samples are subjected to any deliberate observational effect (e.g., fiber collisions), so as to avoid introducing additional unknown systematic uncertainties into our later tests. In addition, when calculating comoving distances from redshifts, we employ the cosmological model of the simulation from which each set of samples is constructed.
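As a concrete illustration of the simple $k+e$ model of equations~\eqref{eq: kcor} and \eqref{eq:ecor}, a minimal sketch follows. The constant term $A_4$ of the polynomial is not quoted above, so it is set to an explicitly labeled placeholder; the function names are ours and do not belong to any public code.
\begin{verbatim}
import numpy as np

# Coefficients of the (g-r)^0.1_med = 0.603 model quoted in the text.
# A4, the constant term, is not quoted above; 0.0 is a placeholder assumption.
A = [-3.428, 9.478, -2.703, 0.7646, 0.0]

def k_correction(z, coeffs=A, z0=0.1):
    """Fourth-order k-correction polynomial: sum_i A_i (z - z0)^(4-i)."""
    x = np.asarray(z) - z0
    return sum(a * x ** (4 - i) for i, a in enumerate(coeffs))

def e_correction(z, q0=2.0, q1=-1.0, z0=0.1):
    """Evolution correction E(z) = q0 [1 + q1 (z - z0)] (z - z0)."""
    x = np.asarray(z) - z0
    return q0 * (1.0 + q1 * x) * x

# The apparent magnitude then follows m_r = M + DM(z) + k(z) - E(z),
# with the distance modulus DM(z) supplied by the assumed cosmology.
\end{verbatim}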
\section{Testing the smoothed density-corrected ${V_{\rm max}}$\ method with the 2PCFs}
\label{sec:tests}

In this section, we describe the construction of the random galaxy catalogs, focusing on the radial distribution of random galaxies derived from various radial selection models. We then compare the correlation functions computed with the random catalogs of these models.

\subsection{Construction of the random catalogs}
\label{sec:randoms}

The random catalogs are constructed as follows. For the angular distribution, we first generate a large number of random points uniformly distributed on the surface of a unit sphere. For each mock sample and subsample, we extract a collection of points with the same sky coverage as the corresponding sample or subsample. The positions of these points give the angular coordinates ($\tt ra, dec$) of the random galaxies; no angular selection effects or survey masks are imposed. For the redshifts of the random galaxies, the following radial selection models are used in our tests:
\begin{enumerate}
\item $\mathbf{n_{\rm \textbf{true}}}$ method, which generates the redshift distribution of the random galaxies from the true galaxy number density $n_{\rm true}(z)$ derived from the LF of the parent mock catalog.
\item $\mathbf{V^{\rm \textbf{SDC}}_{\rm \textbf{max}}}$ method, in which the redshifts of the random catalog are generated with the smoothed density-corrected ${V_{\rm max}}$\ method.
\item $\mathbf{V^{\rm \textbf{DC}}_{\rm \textbf{max}}}$ method, in which the density-corrected ${V_{\rm max}}$\ method of \cite{2011MNRAS.416..739C} is utilized, but without the smoothing procedure.
\item $\mathbf{V_{\rm \textbf{max}}}$ method, where the normal ${V_{\rm max}}$\ method is adopted.
\item \textbf{Shuffled} method, which applies the redshift shuffled method: galaxy redshifts of the sample are randomly assigned to the random galaxies.
\end{enumerate}

For the LC samples, it is simple to incorporate the $k+e$ corrections into the redshift generation process, enabling us to validate the capacity of the different radial selection models to recover the true radial selection function $n_{\rm true}(z)$. Figure~\ref{fig:histLC1} shows a comparison between the radial distributions of a single LC sample and of the random catalogs generated by the aforementioned radial selection methods in the case of no $k+e$ corrections. The left and right panels present the comparisons for the LC1 and LC2 samples, respectively. The second row of panels displays the deviation of the random galaxy number from the galaxy number in each comoving distance bin, defined as $\Delta_{\rm g} = (n_{\rm r}-n_{\rm g})/n_{\rm g}$. The third row of panels displays the deviation of the random galaxy number of the other four techniques from that of the $n_{\rm true}$ approach, defined as $\Delta_{\rm n_{\rm true}}=(n_{\rm r}-n_{\rm r, true})/n_{\rm r, true}$. The black histograms in the top row of panels denote the distribution of galaxies in the flux-limited samples. The radial distribution of the random catalogs created by the $n_{\rm true}$ method is represented by the green lines, which indicate the distribution arising from the genuine selection function. The purple dashed line is the distribution produced by the ${V^{\rm DC}_{\rm max}}$\ approach. We see small fluctuations in this radial distribution, which are notably clear in the bottom row of panels.
These noisy fluctuations are reduced by the smoothing process in the ${V^{\rm SDC}_{\rm max}}$\ approach; as indicated by the blue solid lines, the smoothed radial distribution is in excellent agreement with the distribution predicted by the $n_{\rm true}$ method. The radial distributions from the ${V_{\rm max}}$\ method and the shuffled method are represented by red and yellow lines, respectively. As shown in the bottom panels, $\Delta_{\rm n_{\rm true}}$ of the ${V_{\rm max}}$\ approach exhibits a systematic bias in both luminosity-dependent LC samples as a result of the influence of large-scale structures on the galaxy radial distribution. The ${V_{\rm max}}$\ approach creates an excess of random galaxies near these structures; hence, the number of random galaxies in the high-redshift tail is decreased. Figure~\ref{fig:histLC2} shows the same comparison as Figure~\ref{fig:histLC1} for the LC samples with the simple $k+e$ corrections. The deviations of the different approaches from the $n_{\rm true}$ method shown in the bottom panels are similar to those in Figure~\ref{fig:histLC1}.

Figure~\ref{fig:histLS1} shows the comparison for the LS samples, employing the same color-coded lines as Figure~\ref{fig:histLC1}. The left panels compare an LS1 sample, whereas the middle and right panels compare its blue and red subsamples, respectively. For the $n_{\rm true}$ method, the radial selection function derived from the LF of the lightcone catalog is applied. The $k+e$ corrections are appropriately incorporated into the redshift generation process for the $n_{\rm true}$ and ${V_{\rm max}}$\ methods. For the ${V^{\rm SDC}_{\rm max}}$\ and ${V^{\rm DC}_{\rm max}}$\ methods, the same $k$-correction models that \cite{2017MNRAS.470.4646S} applied to their lightcone catalog are employed, interpolating the $k$-correction among the seven models based on the color and redshift of individual galaxies. The $e$-correction is also properly applied to the LS samples and their color-dependent subsamples by using the evolutionary properties of the lightcone catalog. The results of the comparison are generally consistent with those of the LC samples. The redshifts generated by the ${V_{\rm max}}$\ technique are substantially influenced by the structures in the sample; the bias in $\Delta_{\rm n_{\rm true}}$ is greater than for the LC samples, reaching 20\% in the high-redshift tail (red solid lines). The redshifts from the ${V^{\rm SDC}_{\rm max}}$\ approach successfully mitigate this effect, resulting in a relatively small deviation in $\Delta_{\rm n_{\rm true}}$ (blue solid lines). For both the LC and LS samples, the redshifts of the random catalogs obtained by the shuffled approach replicate the radial distribution of the galaxies (yellow solid lines); hence, the structures are also cloned. In the following section, we examine how galaxy clustering measurements are affected by the deviations of these radial distributions from the expected distribution produced by the $n_{\rm true}$ model.

\begin{figure*}
\begin{center}
\centering
\epsscale{.8}
\plotone{hist_LC_noke.png}
\caption{A comparison of the radial distributions of one LC sample and its corresponding random catalogs in the case of no $k+e$ corrections. The bin size is $\Delta d=5~{h^{-1}\rm Mpc}$. The LC samples have a flux cut at $m_{\rm r}=[15,17]$ and two luminosity cuts at $M^{0.1}_{\rm r}=[-19,-22]$ (left panels) and $M^{0.1}_{\rm r}=[-20,-23]$ (right panels). The black histogram denotes the galaxy distribution.
Random catalogs generated by the $n_{\rm true}(z)$ method, the ${V^{\rm SDC}_{\rm max}}$\ method, the ${V^{\rm DC}_{\rm max}}$\ method, the ${V_{\rm max}}$\ method, and the shuffled method are represented by the green line, the blue line, the purple dashed line, the red line, and the yellow line, respectively. The second row of panels displays the number bias $\Delta_{\rm g}$ of the random catalogs relative to the galaxies in each bin, calculated as $\Delta_{\rm g} = (n_{\rm r}-n_{\rm g})/n_{\rm g}$. The third row of panels displays the number bias of the random catalogs relative to $n_{\rm true}(z)$, defined as $\Delta_{\rm n_{\rm true}}=(n_{\rm r}-n_{\rm r, true})/n_{\rm r, true}$. }
\label{fig:histLC1}
\end{center}
\end{figure*}

\begin{figure*}
\begin{center}
\centering
\epsscale{.8}
\plotone{hist_LC_kpluse.png}
\caption{The same as Figure~\ref{fig:histLC1} but for the simple $k+e$ correction case of the LC samples. }
\label{fig:histLC2}
\end{center}
\end{figure*}

\begin{figure*}
\begin{center}
\centering
\epsscale{1.1}
\plotone{hist_LS2.png}
\caption{The same as Figure~\ref{fig:histLC1} but for the LS1 samples. }
\label{fig:histLS1}
\end{center}
\end{figure*}

\subsection{Comparison of the correlation functions}
\label{sec:comparison}

This section introduces the 2PCF estimator that we employ to measure galaxy clustering. We then compare the projected 2PCFs and the redshift-space 2PCFs determined from the random catalogs generated by the aforementioned radial selection methods.

\subsubsection{Estimator}
\label{sec:estimator}

We measure the 2PCF in the same way as \citetalias{2020RAA....20...54Y}. First, we define the redshift separation vector $\bm{s}$ and the line-of-sight vector $\bm{l}$ as $\bm{s} \equiv \bm{\upsilon}_1-\bm{\upsilon}_2$ and $\bm{l}\equiv (\bm{\upsilon}_1+\bm{\upsilon}_2)/2$, where $\bm{\upsilon}_1$ and $\bm{\upsilon}_2$ are the redshift-space position vectors of a pair of galaxies \citep{1992ApJ...385L...5H, 1994MNRAS.266...50F}. The separations parallel ($\pi$) and perpendicular ($r_p$) to the line-of-sight direction are derived as
\begin{equation}
\pi \equiv \frac{\bm{s}\cdot \bm{l}}{|\bm{l}|}, ~~~~~~r^2_p \equiv \bm{s}\cdot \bm{s}-\pi^2.
\end{equation}
We construct a grid of $\pi$ and $r_p$ with a linear bin size of $1~{h^{-1}\rm Mpc}$ for $\pi$ from 0 up to $\pi_{\rm max}=40~{h^{-1}\rm Mpc}$, and a logarithmic bin size of $0.2$ dex for $r_p$ in the range $[0.01, 40]~{h^{-1}\rm Mpc}$. The estimator of \cite{1993ApJ...412...64L} is used to calculate the 2D correlation function,
\begin{equation}
\xi(r_p,\pi) = \frac{DD-2DR+RR}{RR},
\end{equation}
where $DD$, $DR$, and $RR$ are the normalized numbers of data--data, data--random, and random--random pairs. Given $s^2=|\bm{s}|^2=r^2_p +\pi^2$, we also derive the redshift-space correlation function $\xi(s)$. By integrating $\xi(r_p,\pi)$ along the line-of-sight direction, we estimate the projected 2PCF \citep{1983ApJ...267..465D} as
\begin{equation}
w_p(r_p)\equiv 2\int^{\infty}_0 \xi(r_p,\pi)~d\pi \approx 2\int^{\pi_{\rm max}=40}_0 \xi(r_p,\pi)~d\pi.
\end{equation}
We employ the public code $\tt{CORRFUNC}$ \citep{10.1007/978-981-13-7729-7_1} for pair counting in this work. To reduce the shot noise in small-scale clustering, our random catalogs contain 50 times as many points as the corresponding galaxy samples.
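For reference, the following minimal sketch (plain NumPy rather than $\tt{CORRFUNC}$, with hypothetical array names) shows how $\xi(r_p,\pi)$ and $w_p$ follow from the binned pair counts under the estimator above; it assumes the raw counts include every ordered pair.
\begin{verbatim}
import numpy as np

def landy_szalay_wp(DD, DR, RR, n_d, n_r, dpi=1.0):
    """xi(rp, pi) = (DD - 2 DR + RR) / RR with normalized pair counts,
    then w_p(rp) = 2 * sum over pi bins of xi * dpi, up to pi_max.

    DD, DR, RR : raw pair counts on an (n_rp, n_pi) grid (hypothetical)
    n_d, n_r   : numbers of data and random points
    dpi        : pi bin width in h^-1 Mpc (1.0 in the text)
    """
    dd = DD / (n_d * (n_d - 1.0))   # normalize by total ordered pairs
    dr = DR / (n_d * n_r)
    rr = RR / (n_r * (n_r - 1.0))
    xi = np.zeros_like(rr, dtype=float)
    mask = rr > 0
    xi[mask] = (dd[mask] - 2.0 * dr[mask] + rr[mask]) / rr[mask]
    wp = 2.0 * np.sum(xi, axis=1) * dpi
    return xi, wp
\end{verbatim}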
\subsubsection{Comparison of projected 2PCFs}
\label{sec:compwps}

\begin{figure*}
\begin{center}
\centering
\epsscale{1.}
\plotone{wp_noke.png}
\caption{Top panels: the average projected correlation functions $\overline{w}_p$ for the LC1 (left panel) and LC2 (right panel) samples in the case of no $k+e$ corrections. The LC1 samples have a flux cut at $m_r=[15,17]$ and a luminosity cut at $M^{0.1}_r=[-19, -22]$; the LC2 samples have the same flux cut but a brighter luminosity cut at $M^{0.1}_r=[-20, -23]$. The solid black points with error bars represent $\overline{w}_{p,true}$ and its $1\sigma$ dispersion across the 60 LC samples, using random catalogs generated by the $n_{\rm true}$ approach. The $\overline{w}_p$ of the ${V^{\rm SDC}_{\rm max}}$\ method, the ${V^{\rm DC}_{\rm max}}$\ method, the ${V_{\rm max}}$\ method, and the shuffled technique are shown by the blue dashed lines, the green dotted lines, the red long-dashed lines, and the orange lines, respectively. Middle panels: the average offsets from $w_{p,true}$ for the various techniques of assigning redshifts to random catalogs, as determined from the $w_p$ of the 60 LC samples. The blue open circles with error bars represent the mean offset and $1\sigma$ deviation of $w_p$ for the ${V^{\rm SDC}_{\rm max}}$\ technique. The results of the ${V_{\rm max}}$\ technique are displayed as open red squares with error bars. The mean offsets computed from $w_p$ for the ${V^{\rm DC}_{\rm max}}$\ and shuffled methods are shown by green dashed lines and yellow open diamonds, respectively. The gray lines represent the $1\sigma$ dispersion of $w_{p,true}$ among the 60 LC samples. The horizontal dashed black lines indicate zero offset. Bottom panels: the average bias of $w_p$ relative to $w_{p,true}$ for the four radial selection models, defined as $\overline{[(w_p-w_{p,true})/w_{p,true}]}$. The color-coded lines and symbols are identical to those in the middle panels.}
\label{fig:wp_noke}
\end{center}
\end{figure*}

\begin{figure*}
\begin{center}
\centering
\epsscale{1.}
\plotone{wp_kpluse.png}
\caption{The same as Figure~\ref{fig:wp_noke} but for the LC samples with simple $k+e$ corrections.}
\label{fig:wp_kpluse}
\end{center}
\end{figure*}

\begin{figure*}
\begin{center}
\centering
\epsscale{1.}
\plotone{wp_smith_L1.png}
\caption{Similar to Figure~\ref{fig:wp_noke}: comparison of $w_p$ for the LS1 samples (left panels) and their blue (middle panels) and red (right panels) subsamples. The color-coded lines and symbols are identical to those in Figure~\ref{fig:wp_noke}, excluding the result of the ${V^{\rm DC}_{\rm max}}$\ technique.}
\label{fig:wp_LS1}
\end{center}
\end{figure*}

\begin{figure*}
\begin{center}
\centering
\epsscale{1.}
\plotone{wp_smith_L2.png}
\caption{The same as Figure~\ref{fig:wp_LS1} but for the LS2 samples (left panels) and their blue (middle panels) and red (right panels) subsamples.}
\label{fig:wp_LS2}
\end{center}
\end{figure*}

The projected 2PCFs for the LC samples without and with simple $k+e$ corrections are compared in Figures~\ref{fig:wp_noke} and \ref{fig:wp_kpluse}, respectively. We compare the average projected 2PCFs estimated using random catalogs produced by the radial selection models outlined in Section~\ref{sec:randoms}. The estimated mean $\overline{w}_p$ of the 60 mock samples is displayed in the left and right panels for the LC1 and LC2 samples, respectively.
In the top panels, $\overline{w}_{p,true}$ computed using random catalogs from the $n_{\rm true}$ model is represented by solid black points, with errors representing the $1\sigma$ dispersion across the individual $w_{p,true}$s of the samples. The blue dashed lines, green dotted lines, red long-dashed lines, and orange lines represent the $\overline{w}_{p}$s estimated from random catalogs of the ${V^{\rm SDC}_{\rm max}}$\ technique, the ${V^{\rm DC}_{\rm max}}$\ method, the ${V_{\rm max}}$\ method, and the shuffled method, respectively. The average offsets from $w_{p,true}$ for the models are shown in the middle row of panels and are defined as $\overline{[w_p-w_{p,true}]}=\frac{1}{60}\sum^{60}_{i=1}\left(w^i_p-w^i_{p,true}\right)$, where $w^i_p$ is the projected 2PCF measured for the $i$th LC sample. The offsets increase as the scale drops below $1{h^{-1}\rm Mpc}$ for both the ${V^{\rm DC}_{\rm max}}$\ method (green dotted lines) and the shuffled method (orange diamonds). When the random catalogs of the ${V^{\rm SDC}_{\rm max}}$\ technique are used to measure $w_p$, the small positive offsets of the blue open circles with error bars indicate a slight overestimation on scales $r_p \lesssim 0.4{h^{-1}\rm Mpc}$. On small scales, there are apparent offsets for the ${V_{\rm max}}$\ approach for the LC1 samples in both $k+e$ correction cases, as seen from the open red squares with error bars. For the LC2 samples, there are very small systematic offsets for the ${V_{\rm max}}$\ technique across all of the scales tested, smaller than those for the ${V^{\rm SDC}_{\rm max}}$\ method. Compared to the $1\sigma_{true}$ dispersion (gray solid lines) among the 60 $w_{p,true}$s, the offsets of the ${V^{\rm SDC}_{\rm max}}$\ and ${V_{\rm max}}$\ methods are essentially insignificant.

In the bottom panels of Figures~\ref{fig:wp_noke} and \ref{fig:wp_kpluse}, we display the average deviation from $w_{p,true}$ for each model, using the same color-coded symbols and lines as the middle panels. The mean deviation $\overline{[(w_p-w_{p,true})/w_{p,true}]}$ is calculated from the 60 mock samples in the same manner as $\overline{[w_p-w_{p,true}]}$. Clearly, the $w_p$s derived using random catalogs from the ${V^{\rm SDC}_{\rm max}}$\ approach provide a mostly unbiased estimate of the genuine projected 2PCFs for both the LC1 and LC2 samples in both the no $k+e$ correction case (Figure~\ref{fig:wp_noke}) and the simple $k+e$ correction case (Figure~\ref{fig:wp_kpluse}). The $1\sigma$ deviations among the 60 samples for the ${V^{\rm SDC}_{\rm max}}$\ approach (blue error bars) are significantly smaller than those for the ${V_{\rm max}}$\ method (red error bars). For the LC1 samples in both $k+e$ correction cases, the ${V_{\rm max}}$\ approach underestimates $w_p$ by less than 1\% on small scales, and this bias worsens as the scale grows: at $r_p\sim 30{h^{-1}\rm Mpc}$, it reaches 13\% with a substantial variance \footnote{This bias is marginally smaller than the 20\% bias found for the ${V_{\rm max}}$\ approach by \citetalias{2020RAA....20...54Y}. This may be owing to the larger number of galaxies in the samples, as the LC samples cover twice as much sky as the flux-limited samples in \citetalias{2020RAA....20...54Y}.}. For the LC2 samples, the measurement accuracies of the ${V^{\rm SDC}_{\rm max}}$\ and ${V_{\rm max}}$\ methods are equivalent on scales $r_p\lesssim 4{h^{-1}\rm Mpc}$. On larger scales, the deviation of the ${V_{\rm max}}$\ method grows to 4\%, but remains within the margin of error.
These discrepancies between $w_p$ and $w_{p,true}$ for the ${V_{\rm max}}$\ model are mostly attributable to density fluctuations in the galaxy samples. The $w_p$s measured using random catalogs from the ${V^{\rm DC}_{\rm max}}$\ approach are overestimated on scales $r_p \lesssim 2{h^{-1}\rm Mpc}$ and underestimated on larger scales for both the LC1 and LC2 samples, as shown in the bottom panels (green dashed lines) of Figures~\ref{fig:wp_noke} and \ref{fig:wp_kpluse}. As seen in Figures~\ref{fig:histLC1} and \ref{fig:histLC2}, this tendency is the result of the small fluctuations in the radial distribution of the random catalogs generated by the ${V^{\rm DC}_{\rm max}}$\ model. In essence, the fluctuations increase the number of RR pairs at the fluctuation scale, resulting in an underestimation of $w_p$ there; owing to the integral constraint effect, a small-scale overestimation of $w_p$ is then unavoidable. After smoothing out the fluctuations, the ${V^{\rm SDC}_{\rm max}}$\ approach yields nearly unbiased estimates of $w_{p,true}$. The results of the shuffled technique are consistent with \citetalias{2020RAA....20...54Y}, showing an underestimation of $w_p$ that grows as the scale increases.

Because of the severe deviations of the $w_p$s for the ${V^{\rm DC}_{\rm max}}$\ model in the tests on LC samples, the following comparison for the LS samples focuses on the ${V^{\rm SDC}_{\rm max}}$\ method, the ${V_{\rm max}}$\ method, and the shuffled method. Figures~\ref{fig:wp_LS1} and \ref{fig:wp_LS2} display the comparison results for the LS samples with the two luminosity cuts, respectively. The left, middle, and right panels present the $w_p$ comparisons for the luminosity-dependent samples and their blue and red subsamples, respectively. From the 10 mock galaxy samples, the mean $\overline{w}_p$, $\overline{[w_p-w_{p,true}]}$, and $\overline{[(w_p-w_{p,true})/w_{p,true}]}$ are calculated (from top to bottom panels). The $n_{\rm true}$ method, the ${V^{\rm SDC}_{\rm max}}$\ method, the ${V_{\rm max}}$\ method, and the shuffled method all use the same color-coded lines and symbols as in the figures for the LC samples. For the LS1 samples in Figure~\ref{fig:wp_LS1}, the ${V^{\rm SDC}_{\rm max}}$\ model produces tiny $w_p$ offsets from $w_{p,true}$, consistent with the findings for the LC samples. Significant offsets are seen for the ${V_{\rm max}}$\ and shuffled methods, notably for the LS1 samples and their blue subsamples, where the offsets exceed the $1\sigma$ dispersion of $w_{p,true}$ on scales $r_p \lesssim 3{h^{-1}\rm Mpc}$. The average deviations displayed in the bottom panels clearly demonstrate the superiority of the ${V^{\rm SDC}_{\rm max}}$\ approach over the ${V_{\rm max}}$\ method and the shuffled method when measuring projected 2PCFs. Deviations of $\sim 0.5\%$ are detected for both the LS1 samples and their color-dependent subsamples, essentially within the $1\sigma$ error margin. For the ${V_{\rm max}}$\ approach, $\overline{[(w_p-w_{p,true})/w_{p,true}]}$ deviates by 6\%, 5\%, and 9\% for the LS1 samples, blue subsamples, and red subsamples, respectively, considerably larger than the $1\sigma$ errors. At $r_p \lesssim 10 {h^{-1}\rm Mpc}$, the mean deviations for the shuffled approach are marginally better than those for the ${V_{\rm max}}$\ method, but worsen as the scale increases, consistent with the test results for the LC samples. Figure~\ref{fig:wp_LS2} presents the comparison of $w_p$s for the LS2 samples.
The offsets from $w_{p,true}$ for the ${V^{\rm SDC}_{\rm max}}$\ technique are roughly comparable to the LS1 sample results. The $w_p$s measured using random catalogs from the ${V_{\rm max}}$\ approach exhibit large offsets from $w_{p,true}$ that are worse than those for the shuffled method on small scales, particularly for the LS2 samples (left middle panel) and red subsamples (right middle panel). In the bottom panels of Figure~\ref{fig:wp_LS2}, the measurement accuracy of the three models is shown clearly. On scales $r_p < 1{h^{-1}\rm Mpc}$, there is a $\sim 0.5\%$ underestimate for the LS2 samples with the ${V^{\rm SDC}_{\rm max}}$\ method (bottom left panel). On larger scales, this deviation becomes an overestimation, reaching 2\% at $r_p \sim 30 {h^{-1}\rm Mpc}$ while remaining within the margin of error. The mean deviations for the blue and red subsamples are well constrained within 1\%. The results of the ${V_{\rm max}}$\ approach exhibit larger mean deviations than for the LS1 samples, even worse than those of the shuffled method: the deviations for the LS2 samples, blue subsamples, and red subsamples are roughly 9\%, 8\%, and 10\%, respectively. The $w_p$s determined for the red subsamples exhibit more severe departures from $w_{p,true}$ for the ${V_{\rm max}}$\ technique in both the LS1 and LS2 samples, demonstrating that density fluctuations have a greater impact on the clustering determination of red galaxies.

To better quantify the measurement accuracy of the projected 2PCF for the various radial selection models, we calculate the $\chi^2$ between $w_p$ and $w_{p,true}$ for the ${V^{\rm SDC}_{\rm max}}$\ technique, the ${V_{\rm max}}$\ method, and the shuffled method, as listed in Table~\ref{tab:chi2_wp}. $\chi^2$ is computed as
\begin{equation}\label{eq:chi2}
\chi^2=\sum^{N}_{i=1} \frac{(w^i_p-w_{p,true})^2}{\sigma^2_{true}},
\end{equation}
where the number of mock samples $N$ is 60 for the LC samples and 10 for the LS samples. For the LC samples, with the exception of the LC2 samples with simple $k+e$ corrections, for which the $\chi^2$s of the ${V^{\rm SDC}_{\rm max}}$\ and ${V_{\rm max}}$\ methods are essentially equal, the $w_p$s of the ${V^{\rm SDC}_{\rm max}}$\ method exhibit the smallest $\chi^2$ with respect to $w_{p,true}$ of the three models. For all LS samples and their blue and red subsamples, the ${V^{\rm SDC}_{\rm max}}$\ approach also yields the smallest $\chi^2$ among the three methods. The $\chi^2$ values for the LS samples are greater than those for the LC samples for all three models, probably because the LS samples, built from a lightcone catalog, contain more complicated $k+e$ corrections than the LC samples. On the basis of the preceding figures and $\chi^2$ tests, we demonstrate that the $w_p$s measured using the random catalogs generated by the ${V^{\rm SDC}_{\rm max}}$\ approach deviate least from $w_{p,true}$ for both the flux-limited samples and their color-dependent subsamples. In Section~\ref{sec:disc}, we provide more discussion of the performance of the radial selection models for the LC and LS samples.

\begin{table}[h!]
\caption{$\chi^2$ of the projected 2PCFs for the mock samples}\label{tab:chi2_wp}
\centering
\begin{tabular}{l c c c}
\hline
\hline
Samples & \multicolumn{3}{c}{$\chi^2$} \\ [1ex]
& ${V^{\rm SDC}_{\rm max}}$\ & ${V_{\rm max}}$\ & Shuffled \\ [1ex]
\hline
LC1 (no $k+e$) & 1.364 & 6.264 & 107.225 \\
LC2 (no $k+e$) & 1.460 & 4.254 & 62.329 \\
LC1 (simple $k+e$) & 3.531 & 6.351 & 108.770 \\
LC2 (simple $k+e$) & 2.757 & 2.667 & 106.466 \\
\hline
LS1 & 1.893 & 1618.495 & 977.362 \\
LS1 (blue) & 33.013 & 161.187 & 124.543 \\
LS1 (red) & 19.525 & 2769.991 & 1988.678 \\
LS2 & 45.168 & 3416.047 & 857.843 \\
LS2 (blue) & 63.572 & 925.400 & 240.416 \\
LS2 (red) & 71.431 & 5054.464 & 1562.508 \\
\hline
\end{tabular}
\end{table}

\subsubsection{Comparison of the redshift-space 2PCFs}
\label{sec:compRSD}

\begin{figure*}
\begin{center}
\centering
\epsscale{.8}
\plotone{xi0_kpluse.png}
\caption{Similar to Figure~\ref{fig:wp_noke}: a comparison of the $\xi_0$s of the redshift-space 2PCFs of the LC1 samples (left panels) and LC2 samples (right panels) with simple $k+e$ corrections.}
\label{fig:xi0_kpluse}
\end{center}
\end{figure*}

\begin{figure*}
\begin{center}
\centering
\epsscale{1.}
\plotone{xi0_smith_L1.png}
\caption{Similar to Figure~\ref{fig:wp_LS1}: a comparison of the $\xi_0$s of the redshift-space 2PCFs of the LS1 samples (left panels) and their blue (middle panels) and red (right panels) subsamples.}
\label{fig:xi0_LS1}
\end{center}
\end{figure*}

\begin{figure*}
\begin{center}
\centering
\epsscale{1.1}
\plotone{cf2d_LS.png}
\caption{Comparison of the average 2D correlation functions $\overline{\xi}(r_p,\pi)$ for the luminosity-dependent flux-limited samples, where $\overline{\xi}(r_p,\pi)$ is the $\xi(r_p,\pi)$ averaged over the 10 mock samples. The LS1 samples are shown in the top row and the LS2 samples in the bottom row; the left, middle, and right panels of each row show the full samples and their blue and red subsamples, respectively. The true $\overline{\xi}(r_p,\pi)$, measured using the random catalogs from the $n_{\rm true}(z)$ method, is shown by the black contours, and the gray shaded regions with dotted lines mark its $1\sigma$ scatter among the mock samples. The yellow, red, and blue dashed contours denote the $\overline{\xi}(r_p,\pi)$ of the shuffled method, the ${V_{\rm max}}$\ method, and the ${V^{\rm SDC}_{\rm max}}$\ method, respectively. The contour levels from the outside in correspond to $\overline{\xi}(r_p,\pi)=[0.1, 0.2, 0.3, 0.5, 1.0, 2.0, 5.0]$.}
\label{fig:2dcf_LS}
\end{center}
\end{figure*}

The redshift-space correlation functions are compared in the same manner as $w_p$ for both the LC and LS samples, and the results for the different radial selection models are generally consistent with the comparisons of $w_p$ in the previous section. The mean $\overline{\xi}_0$, $\overline{[\xi_0-\xi_{0,true}]}$, and $\overline{[(\xi_0-\xi_{0,true})/\xi_{0,true}]}$ for the LC samples with simple $k+e$ corrections are shown in Figure~\ref{fig:xi0_kpluse}, from top to bottom. Estimates of $\xi_0$ derived from random catalogs created by the ${V^{\rm SDC}_{\rm max}}$\ approach display the smallest offsets and deviations from $\xi_{0,true}$ for both the LC1 (left panels) and LC2 (right panels) samples. For the ${V^{\rm DC}_{\rm max}}$\ technique, the $\xi_0$s on scales $s < 1 {h^{-1}\rm Mpc}$ exhibit large offsets and deviations compared to the findings for $w_p$.
For the ${V_{\rm max}}$\ method, the $\xi_0$ deviations are marginally attenuated compared to the $w_p$ results, indicating that the impact of density fluctuations on clustering is less significant in redshift space. The $\xi_0$s for the shuffled approach exhibit the same offsets and deviations from $\xi_{0,true}$ as the $w_p$s. As the results for the LC samples without $k+e$ corrections are similar to Figure~\ref{fig:xi0_kpluse}, they are omitted here.

Figure~\ref{fig:xi0_LS1} illustrates a comparison of $\xi_0$ for the LS1 samples (left panels) and their blue (middle panels) and red (right panels) subsamples, respectively. Compared to the ${V_{\rm max}}$\ and shuffled methods, the ${V^{\rm SDC}_{\rm max}}$\ approach produces the smallest offsets and deviations from $\xi_{0,true}$ for the LS1 samples and red subsamples. For the blue subsamples, the mean offset of the ${V^{\rm SDC}_{\rm max}}$\ method at $s \sim 0.07{h^{-1}\rm Mpc}$ is slightly larger than that of the ${V_{\rm max}}$\ method, and both approaches have comparable deviations at that scale. This is not a concern, because the uncertainty at this scale is also high due to shot noise. In general, for the $\xi_0$ measurements, the ${V^{\rm SDC}_{\rm max}}$\ technique continues to outperform the other two radial selection models. Since the findings for the LS2 samples are basically consistent with Figure~\ref{fig:xi0_LS1}, they are also omitted here.

In Figure~\ref{fig:2dcf_LS}, the average 2D correlation functions $\overline{\xi}(r_p,\pi)$ for the LS samples are presented. The $\overline{\xi}(r_p,\pi)$s for the LS1 samples (left panel), blue subsamples (middle panel), and red subsamples (right panel) are displayed in the upper panels. The $\overline{\xi}(r_p,\pi)$s for the $n(z)_{\rm true}$ method, the ${V^{\rm SDC}_{\rm max}}$\ method, the ${V_{\rm max}}$\ method, and the shuffled method are represented by black solid lines, blue dashed lines, red dashed lines, and yellow dashed lines, respectively. The $1\sigma_{true}$ dispersion of $\xi_{true}(r_p,\pi)$ among the 10 mock samples is denoted by dotted gray lines within the shaded regions. The $\overline{\xi}(r_p,\pi)$s of the ${V^{\rm SDC}_{\rm max}}$\ model provide the best agreement with $\overline{\xi}_{true}(r_p,\pi)$ for the LS1 samples and the color-dependent subsamples. For the $\overline{\xi}(r_p,\pi)$ of the ${V_{\rm max}}$\ method and the shuffled method, there are offsets of varying degrees; although the offsets stay within the $1\sigma_{true}$ error margins, the contour shapes are altered. In the lower panels, displaying the $\overline{\xi}(r_p,\pi)$s for the LS2 samples, the majority of the contours for the ${V^{\rm SDC}_{\rm max}}$\ model are consistent with $\overline{\xi}_{true}(r_p,\pi)$. The $1\%\sim 2\%$ deviations seen in $\overline{w}_p$ (bottom left panel in Figure~\ref{fig:wp_LS2}) for both the LS2 samples and blue subsamples are also observed in the contours at large scales. For the ${V_{\rm max}}$\ technique and the shuffled method, the offsets in the $\overline{\xi}(r_p,\pi)$ contours are close to the $1\sigma_{true}$ error margins, and the contour shapes are altered as well. Since the comparisons for the LC samples are substantially identical to those in Figure~\ref{fig:2dcf_LS}, they are excluded here.
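To close this comparison, we note that the $\chi^2$ statistic of Eq.~(\ref{eq:chi2}), used for all entries in Table~\ref{tab:chi2_wp}, reduces to a few lines of code. The following is a minimal sketch and not our actual analysis pipeline; the array names \texttt{wp\_mocks}, \texttt{wp\_true}, and \texttt{sigma\_true} are placeholders for the per-mock measurements, the reference measurement from the $n(z)_{\rm true}$ random catalog, and its $1\sigma$ scatter, and the sum over separation bins, left implicit in Eq.~(\ref{eq:chi2}), is made explicit here.
\begin{verbatim}
import numpy as np

def chi2_wp(wp_mocks, wp_true, sigma_true):
    # Eq. (eq:chi2): sum over the N mocks (and, implicitly,
    # over the r_p bins) of (w_p^i - w_p,true)^2 / sigma_true^2
    #   wp_mocks   : (N, n_rp) array, w_p of the N mock samples
    #   wp_true    : (n_rp,)  array, w_p from the n(z)_true randoms
    #   sigma_true : (n_rp,)  array, 1-sigma scatter of w_p,true
    dev = (wp_mocks - wp_true) / sigma_true
    return np.sum(dev ** 2)
\end{verbatim}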
\section{Discussion}
\label{sec:disc}
Our tests demonstrate that, for flux-limited samples with a redshift-dependent number density $n(z)$, utilizing the random catalog generated by the ${V^{\rm SDC}_{\rm max}}$\ technique to measure galaxy clustering produces the smallest deviation from the true clustering when compared to the other radial selection methods. Some aspects of the performance of the ${V^{\rm SDC}_{\rm max}}$\ technique remain to be clarified and discussed, as detailed below.

\subsection{The impact of smoothness parameters on clustering estimation}
\label{sec:dis_smooth}

\begin{figure*}
\begin{center}
\centering
\epsscale{1.}
\plotone{wp_smooths_bins.png}
\caption{The average deviations of $w_p$ from $w_{p,true}$ for the ${V^{\rm SDC}_{\rm max}}$\ method, in which alternative histogram bin sizes and smooth box sizes are adopted in the smoothing process in order to assess the impact of these choices on the clustering estimation. The fiducial bin size and smooth box size used in Section~\ref{sec:comparison} are $\Delta d=5 {h^{-1}\rm Mpc}$ and $\Delta_{\rm smooth}=5$, respectively, as indicated by the open blue circles with error bars. The alternative histogram bin sizes are $\Delta d=2.5 {h^{-1}\rm Mpc}$ and $\Delta d=10 {h^{-1}\rm Mpc}$, with the same smooth box size as the fiducial one, as indicated by the green dashed lines and the light blue lines, respectively. The alternative smooth box sizes are $\Delta_{\rm smooth}=3$ and $\Delta_{\rm smooth}=7$, with the same histogram bin size as the fiducial one, as shown by the yellow short-dashed and orange long-dashed lines, respectively. Zero deviation is shown by the horizontal black dashed lines. Upper panels: tests for the LC1 samples (left panel) and LC2 samples (right panel) in the no $k+e$ correction case. Lower panels: similar tests for the LC1 and LC2 samples, but in the simple $k+e$ correction case.}
\label{fig:comp_smooth}
\end{center}
\end{figure*}

For the ${V^{\rm SDC}_{\rm max}}$\ approach, we add a smoothing step to eliminate the unanticipated small fluctuations in the redshift distribution of the cloned random galaxies generated by the ${V^{\rm DC}_{\rm max}}$\ method. The previous comparison of the 2PCFs for the ${V^{\rm SDC}_{\rm max}}$\ and ${V^{\rm DC}_{\rm max}}$\ methods demonstrates the necessity of a smoothing procedure for the random catalog in order to produce a nearly unbiased clustering measurement for a flux-limited sample. Smoothing requires a choice of the histogram bin size $\Delta d$ and the smooth box size $\Delta_{\rm smooth}$. To determine the effect of varying $\Delta d$ and $\Delta_{\rm smooth}$ on the final galaxy clustering determination, we vary these two smoothness parameters and regenerate the random catalogs to redo the estimate. First, we set $\Delta d=5 {h^{-1}\rm Mpc}$ and $\Delta_{\rm smooth}=5$ as the fiducial case, which we have used for the ${V^{\rm SDC}_{\rm max}}$\ technique in the previous tests in Section~\ref{sec:comparison}. Second, we choose $\Delta d=2.5 {h^{-1}\rm Mpc}$ and $10 {h^{-1}\rm Mpc}$ for the histogram bin size, with the smooth box size fixed at $\Delta_{\rm smooth}=5$. Third, we select $\Delta_{\rm smooth}=3$ and $7$, with the histogram bin size fixed at $\Delta d=5 {h^{-1}\rm Mpc}$. Figure~\ref{fig:comp_smooth} displays the average deviations of $w_p$ from $w_{p,true}$ for random catalogs created by the ${V^{\rm SDC}_{\rm max}}$\ technique with the various $\Delta d$ and $\Delta_{\rm smooth}$ values. To simplify the assessment, we only test the projected 2PCFs of the LC samples here.
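Before presenting the results, we sketch the smoothing stage whose parameters are varied here. This is a schematic reading of the procedure, assuming the comoving distances of the cloned random galaxies are collected in an array and the smoothing is a simple boxcar (moving-average) filter of width $\Delta_{\rm smooth}$ bins; the function and variable names are illustrative only.
\begin{verbatim}
import numpy as np

def smoothed_radial_distribution(dist, d_min, d_max,
                                 bin_size=5.0, box_size=5):
    # Histogram the cloned random distances with bin width
    # Delta_d = bin_size (in h^-1 Mpc), then smooth the counts
    # with a boxcar filter of Delta_smooth = box_size bins.
    edges = np.arange(d_min, d_max + bin_size, bin_size)
    counts, _ = np.histogram(dist, bins=edges)
    kernel = np.ones(box_size) / box_size
    smoothed = np.convolve(counts.astype(float), kernel,
                           mode="same")
    return edges, smoothed

# fiducial case: bin_size=5.0, box_size=5; the alternatives
# tested below are bin_size in {2.5, 10.0} and box_size in {3, 7}
\end{verbatim}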
In the absence of $k+e$ corrections, the upper panels of Figure~\ref{fig:comp_smooth} depict the mean deviations of $w_p$ for the LC1 (left panel) and LC2 (right panel) samples, respectively. We see that the finer values $\Delta d=2.5 {h^{-1}\rm Mpc}$ (green dashed lines) and $\Delta_{\rm smooth}=3$ (light blue lines) lead to a systematic drop of $\overline{[(w_p-w_{p,true})/w_{p,true}]}$ on all tested scales, resulting in reduced deviations at $r_p \lesssim 2{h^{-1}\rm Mpc}$ and an underestimate at larger scales, especially for the LC1 samples. In contrast, a coarser size of $\Delta_{\rm smooth}=7$ (orange long-dashed lines) results in an overall increase relative to the mean deviation in the fiducial case (open blue circles with error bars), producing an overestimate at scales $r_p \lesssim 20 {h^{-1}\rm Mpc}$. A coarser size of $\Delta d = 10 {h^{-1}\rm Mpc}$ (yellow short-dashed lines) leads to a $\sim 1\%$ increase in the mean deviation of $w_p$ relative to the deviation in the fiducial case; this is the only mean deviation that exceeds the $1\sigma$ errors, but it is still only around $\sim 1\%$. In the lower panels, the test results for the LC samples with simple $k+e$ corrections are displayed, which are essentially identical to the findings in the upper panels, suggesting that the smoothing process is insensitive to the galaxy samples when different $k+e$ corrections are applied. Our tests indicate that the variation of $\Delta d$ and $\Delta_{\rm smooth}$ in the smoothing process of the ${V^{\rm SDC}_{\rm max}}$\ technique affects the accuracy of the clustering measurement; however, the effect on the deviations is much less than 1\%. The advantage of the ${V^{\rm SDC}_{\rm max}}$\ technique over the other radial selection models still stands.

\subsection{Difference in clustering uncertainty}
\label{sec:dis_samples}
In the prior tests, the uncertainties in the clustering deviations among the 60 LC samples are significantly larger than the uncertainties among the 10 LS samples, which is counterintuitive. In addition, the deviation uncertainties for the ${V^{\rm SDC}_{\rm max}}$\ approach are approximately a quarter of those for the ${V_{\rm max}}$\ method in the LC samples. As seen in Figure~\ref{fig:hist_bias}, we further investigate the radial distributions of the LC and LS samples in order to determine the probable drivers of these discrepancies. Here, we consider the LC samples without $k+e$ corrections and the LS1 samples, which are sufficient to explain the difference in uncertainty. First, we compute the normalized radial distribution for the galaxy samples and the random catalogs created using the $n(z)_{\rm true}$ method, the ${V^{\rm SDC}_{\rm max}}$\ method, and the ${V_{\rm max}}$\ method, respectively. To quantify the density fluctuations relative to the true smooth distribution created by the $n(z)_{\rm true}$ method, we estimate the average deviations $\overline{\Delta}$ and the $1\sigma$ variances of these distributions from the true normalized distribution for the sixty LC samples and the ten LS1 samples separately, as shown in Figure~\ref{fig:hist_bias} from top to bottom. The $\overline{\Delta}$ and $1\sigma$ variance for the galaxy samples are shown by the thick gray line and the thin light gray line. For both the LC1 (upper panel) and LC2 (middle panel) samples, the fluctuations differ greatly across the sixty individual samples, as indicated by the $1\sigma$ variance, whereas $\overline{\Delta}$ exhibits a relatively small deviation from the true normalized distribution.
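For reference, the deviation statistic shown in Figure~\ref{fig:hist_bias} is just the per-bin mean and scatter of the fractional deviation from the true normalized distribution; a minimal sketch, with placeholder array names, is
\begin{verbatim}
import numpy as np

def mean_radial_deviation(n_samples, n_true):
    # Delta_bar and 1-sigma scatter of (n^i - n^i_true)/n^i_true
    #   n_samples : (N, n_bins) normalized radial distributions
    #   n_true    : (n_bins,)  true normalized distribution
    frac = (n_samples - n_true) / n_true
    return frac.mean(axis=0), frac.std(axis=0)
\end{verbatim}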
The light yellow and light orange regions denote the locations in which 90\% and 60\% of the expected random galaxies are likely to be distributed, and we anticipate that the bulk of the pairs used to estimate the clustering come from the 90\% region. $\overline{\Delta}$ (thick red lines) and $\sigma$ (thin light red lines) of the ${V_{\rm max}}$\ technique reveal that this approach corrects the fluctuations in the galaxy samples; nonetheless, the imprints of large-scale structures are still discernible. For instance, $\overline{\Delta}$ for the LC1 samples shows a small but observable deviation at $100\sim 450 {h^{-1}\rm Mpc}$, where 90\% of the galaxies are located. This explains the consistent bias noticed in $w_p$ and $\xi$ in the previous tests. For the LC2 samples, the systematic bias is almost imperceptible, with just a tiny overestimate at $d\gtrsim 500{h^{-1}\rm Mpc}$, consistent with the small clustering bias detected in the prior tests. For the ${V^{\rm SDC}_{\rm max}}$\ approach, there are noisy fluctuations in $\overline{\Delta}$ (thick blue lines) for both the LC1 and LC2 samples, indicating that the smoothing does not eliminate all the noisy fluctuations in the radial distribution and that there is still room to improve it. Fortunately, these fluctuations are complementary to a certain degree, yielding a substantially unbiased measurement of the galaxy clustering. We observe that the $1\sigma$ errors (thin light blue lines) for the ${V^{\rm SDC}_{\rm max}}$\ approach are smaller than those for the ${V_{\rm max}}$\ method, especially for the LC1 samples in the 60\% region. This is essentially the reason for the substantial difference in uncertainty found between the two techniques for $w_p$ and $\xi$, demonstrating once again that the ${V^{\rm SDC}_{\rm max}}$\ method can more successfully rectify the effect of density fluctuations on individual samples, so that the clustering estimates converge to the true galaxy clustering.

As demonstrated in the bottom panel, $\overline{\Delta}$ for the LS1 samples deviates more significantly from the true distribution than for the LC samples. Only 10 LS1 samples are created, by rotating the sky, from a single lightcone catalog. These samples have a significantly smaller $1\sigma$ variance than the LC samples, particularly in the $60\%$ region. For the LS1 samples, the advantage of the density correction in the ${V^{\rm SDC}_{\rm max}}$\ approach is exhibited even more clearly compared to the ${V_{\rm max}}$\ method. Both approaches have comparable errors, but the $\overline{\Delta}$ of the ${V^{\rm SDC}_{\rm max}}$\ method deviates less from the true distribution, resulting in a more accurate clustering measurement. In contrast, the ${V_{\rm max}}$\ technique predicts too many random galaxies at $d \lesssim 400{h^{-1}\rm Mpc}$ and too few galaxies at high $d$ due to the strong fluctuations in the galaxy samples, hence exhibiting a greater deviation in $\overline{\Delta}$ in comparison with the $\overline{\Delta}$ of the LC samples. This also explains the strong systematic bias in $w_p$ observed for the ${V_{\rm max}}$\ approach on all tested scales in the earlier tests. Last but not least, the LC samples and LS samples are derived from distinct parent mock catalogs utilizing two simulations with different resolutions and galaxy-halo connection models.
Both the LC and LS samples are complete at $M^{0.1}_{\rm r} \leq -18$; however, the simulation of \cite{2019SCPMA..6219511J} used to generate the LC samples has a mass resolution that is an order of magnitude higher than that of the MXXL simulation \citep{2012MNRAS.426.2046A}, implying that more halo and galaxy structures are resolved in the LC samples. Moreover, although the LC samples are constructed using a simple galaxy-halo connection model with simple $k+e$ corrections, the benefit is that all model parameters are clear and straightforward; hence, the potential sources of deviation and error are comprehensible. For the LS samples, with a more sophisticated galaxy evolution and $k$-correction, the lightcone catalog of \cite{2017MNRAS.470.4646S} is theoretically closer to actual observational data; the main drawback is the restricted number of samples. The test results for these two sample groups demonstrate that, whether the mock catalogs are based on simple $k+e$ corrections or on more complex and realistic ones, the ${V_{\rm max}}$\ technique may produce an inaccurate measurement of galaxy clustering, whereas the ${V^{\rm SDC}_{\rm max}}$\ method can always produce an accurate and precise estimate of the clustering.

\begin{figure}
\begin{center}
\centering
\epsscale{1.2}
\plotone{hist_bias_2sim.png}
\caption{Top panel: the average deviations $\overline{\Delta}$ and the $1\sigma$ errors relative to the radial distribution of the random catalog obtained by the $n(z)_{\rm true}$ method. The mean deviation is computed using the equation $\overline{\Delta} = \overline{(n^i-n^i_{true})/n^i_{true}}$, where $n^i$ is the normalized radial distribution of the $i$th LC1 sample and of the random catalogs produced using the ${V^{\rm SDC}_{\rm max}}$\ and ${V_{\rm max}}$\ methods. The $\overline{\Delta}$ of the LC1 samples is shown by the thick gray lines, while the $1\sigma$ errors over the 60 samples are represented by the thin gray lines. The thick blue lines and thin light blue lines represent $\overline{\Delta}$ and the errors for the random catalogs generated by the ${V^{\rm SDC}_{\rm max}}$\ technique. The thick red lines and thin light red lines represent the same quantities for the ${V_{\rm max}}$\ method. The light yellow and light orange regions indicate the locations of 90\% and 60\% of the galaxies, respectively. Middle panel: similar to the top panel, but presenting the average deviations and errors for the LC2 samples and their corresponding random catalogs. Bottom panel: similar to the top panel, but displaying the average deviations and errors for the LS1 samples and their corresponding random catalogs.}
\label{fig:hist_bias}
\end{center}
\end{figure}

\subsection{The effect of $k+e$ corrections on galaxy clustering}
\label{sec:dis_ke}

\section{Conclusions}
\label{sec:concls}
In this paper, we present a radial selection model, the ${V^{\rm SDC}_{\rm max}}$\ approach, for generating the redshifts of the random catalogs used in galaxy two-point statistics, which allows for a high level of accuracy and precision in the estimation. This method is an improvement on the density-corrected ${V_{\rm max}}$\ method proposed by \cite{2011MNRAS.416..739C}, and it consists mainly of three modifications: (1) adding an estimate of $z_{\rm min}$ and extending the code's applicability to a general flux-limited sample; (2) supporting a redshift- and color-dependent $k$-correction model applicable to individual galaxies; and (3) adding a smoothing step to the output cloned radial distribution of the random galaxies.
These modifications are crucial for obtaining a smooth radial distribution for the random catalog that is unaffected by galaxy density fluctuations, which is the key to a clustering measurement with high precision and accuracy. We measure the 2PCFs using two groups of flux-limited samples, designated LC and LS, to validate the ${V^{\rm SDC}_{\rm max}}$\ approach. The flux-limited LC samples are constructed from sixty mock catalogs with two luminosity cuts and two simple $k+e$ correction cases. Using the same sample selection criteria and luminosity thresholds as for the LC samples, ten LS samples are generated using the lightcone catalog of \cite{2017MNRAS.470.4646S}. To test the property-dependent clustering, the LS samples are subdivided into blue and red subsamples. We compare the projected and redshift-space 2PCFs using the random catalogs created from the $n(z)_{\rm true}$ method, the ${V^{\rm SDC}_{\rm max}}$\ method, the ${V^{\rm DC}_{\rm max}}$\ method, the ${V_{\rm max}}$\ method, and the redshift shuffled method. Our test results demonstrate that the ${V^{\rm SDC}_{\rm max}}$\ approach is the only reliable radial selection model capable of achieving sub-percent accuracy for the $w_p$ measurement on scales ranging from $0.07{h^{-1}\rm Mpc}$ to $\sim 40{h^{-1}\rm Mpc}$. A $2\%$ deviation arises at large scales for the LS2 sample; however, it is still smaller than the deviations of the other radial selection models. In general, the ${V^{\rm SDC}_{\rm max}}$\ technique can constrain the measurement accuracy of $w_p$ to within $1\%$ for color-dependent galaxy clustering, validating its superiority over the ${V_{\rm max}}$\ method and the redshift shuffled method.

The next generation of spectroscopic surveys, specifically the DESI experiment, will obtain the spectra of around 40 million galaxies and quasars over 14,000 $\rm deg^2$, which is almost an order of magnitude more than the previously observed galaxies \citep{2022arXiv220808518M}. These extragalactic objects include a bright galaxy sample of 13 million galaxies (2 magnitudes deeper than the SDSS main sample) \citep{2022arXiv220808516L}, 8 million luminous red galaxies (LRGs), 16 million emission line galaxies (ELGs), and 3 million quasars \citep{2013arXiv1308.0847L,2016arXiv161100036D, 2016arXiv161100037D,2022arXiv220808513R}. On the one hand, the two-point statistics of these upcoming galaxy samples will surely afford us an unprecedented opportunity to comprehend the physics of galaxy formation and evolution, improve the galaxy-halo connection, and shed light on the role of the halo environment in determining a galaxy's physical properties \citep{2022ApJ...938L...2F}. On the other hand, how to fully exploit these galaxies, particularly with the assistance of galaxy 2PCFs, remains a challenge. Using volume-limited catalogs to conduct the 2PCF analysis will not only result in the rejection of a considerable number of galaxies, but may also lead to the loss of crucial information imprinted in the clustering. The density-corrected ${V_{\rm max}}$\ approach proposed by \cite{2011MNRAS.416..739C} solves this problem, and our improvements and tests confirm that the ${V^{\rm SDC}_{\rm max}}$\ method is a viable technique for accurately measuring the clustering of flux-limited and color-dependent samples, hence maximizing the use of the galaxies. Our present tests are preliminary, concentrating mostly on low-redshift galaxies.
In the future, we will continue to improve this approach and conduct more tests on various properties of galaxies (e.g., stellar mass, star formation rate, and so forth), as well as tests employing relatively high-redshift galaxies (e.g., the CMASS, BOSS, and eBOSS samples) and mocks.

\begin{acknowledgments}
We appreciate the referee's insightful comments and suggestions, which substantially improved this article. We would like to thank Yipeng Jing for carefully reading the manuscript, providing valuable comments, and generously providing the simulation data. Lei Yang expresses gratitude to Chun Xia for assisting with the use of the Yunnan University Astronomy Supercomputer. This work is sponsored by grants from Yunnan University's Launching Research Fund for Postdoctoral Fellows (C176220200) and the China Postdoctoral Science Foundation (2020M683387). The majority of the calculations were performed on the Yunnan University Astronomy Supercomputer.
\end{acknowledgments}
\newpage
\section{Introduction}
The flux of the gas kinetic scheme (GKS) is based on the time-dependent evolution solution of a kinetic equation, such as the Bhatnagar-Gross-Krook (BGK) model \cite{BGK}. It targets the Euler and Navier-Stokes (NS) solutions \cite{originalGKS,implicitGKS}. In comparison with traditional Riemann solver based CFD methods, the distinguishing features of GKS include the following. Firstly, the time-evolving gas distribution function provides multiple-scale flow physics from the kinetic particle transport to the hydrodynamic wave propagation \cite{xu-liu}. The particle transport supplies numerical dissipation in discontinuous regions, and the wave-propagating solution provides an accurate solution in smooth regions. The multiple-scale nature of the flux function bridges the evolution from the upwind flux vector splitting to the central-difference Lax-Wendroff type discretization, where both the inviscid and viscous fluxes are obtained from the moments of a single gas distribution function \cite{originalGKS,implicitGKS,multi-tem-GKS}. Secondly, the GKS is intrinsically a multi-dimensional scheme, where both the normal and tangential derivatives of the flow variables around a cell interface participate in the time evolution of the gas distribution function \cite{implicitGKS}. The hyperbolic kinetic equation with local relaxation provides a compact physical domain for the design of numerical schemes in comparison with direct solvers of the Navier-Stokes equations. The numerical solutions of GKS are not sensitive to the mesh distribution, due to the absence of a direct evaluation of the dissipative terms. Thirdly, the time-dependent flux function achieves higher-order time accuracy than the Riemann flux. A one-step 3rd-order scheme can be constructed directly \cite{3rdGKS-Li}. Fourthly, a unified GKS (UGKS) can be developed for multi-scale gas dynamics \cite{UGKS,xu-liu}.

The second-order schemes were mostly developed in the 1980s, and they remain the main numerical schemes used in engineering applications. Great effort has been devoted to the development of high-order (3rd-order or higher) methods in the past decades, which are expected to provide more accurate solutions with less computational cost than second-order methods \cite{high-order-review}. It is still too early to single out the most appropriate approaches for high-order schemes, especially for high-speed compressible flow with discontinuities. There have been many attempts at the construction of high-order schemes, such as the k-exact \cite{k-exact}, weighted essentially non-oscillatory (WENO) \cite{weno,wenoz,wenoz+}, multi-moment constrained (MCV) \cite{mcv}, discontinuous Galerkin (DG) \cite{DG}, and flux reconstruction \cite{CPR} methods, among many others. Most approaches focus on high-order initial reconstruction, which achieves high-order accuracy in smooth regions and avoids oscillations in discontinuous regions. In terms of the evolution model, as in the second-order schemes, the Riemann solver is commonly used for the flux evaluation \cite{rm-book}. In order to improve the time accuracy, the traditional Runge-Kutta (RK) time-stepping method is used \cite{RK-Jameson}. RK methods separate the spatial and temporal discretizations, which can improve the stability for hyperbolic problems in comparison with single-stage methods or the Adams family of methods under the same reconstruction \cite{RK-advantage1,RK-advantage2}. The RK method can be used in GKS as well under the FV or DG frameworks \cite{HGKS-Magnet,RK-DG-Ren1,RK-DG-Ren2}.
However, $n$th-order accuracy in an RK method requires no fewer than $n$ stages. The classical 4th-order RK method needs 4 stages, while a 5th-order RK method usually needs 6 stages \cite{fifth-RK}. For a high-order scheme, the initial reconstructions for the middle stages may take most of the computational time. All of this is related to the use of the Riemann solver, which may be the real barrier to the development of efficient high-order schemes.

On the other hand, the GKS is based on a high-order gas evolution model, which provides a time-dependent flux function. Second- and third-order schemes can be developed through a single updating step without middle stages \cite{3rdGKS-Li}. With the 5th-order WENO reconstruction, a 3rd-order GKS has been developed for both 2D and 3D inviscid and viscous flow computations \cite{3rdGKS-Luo,3rdGKS-3D-Pan}. A compact 3rd-order scheme on structured and unstructured meshes has been developed as well \cite{structured-compact-gks,unstructured-compact-gks}. Although the 3rd-order GKS flux function takes approximately $4$ to $6$ times the computational time of a 2nd-order GKS flux, the one-step method still shows high efficiency against a 3-stage RK method, since the spatial reconstructions take a significant amount of CPU time. The 3rd-order GKS flux function depends on the initial reconstruction of the derivatives, such as the 1st-, 2nd-, and 3rd-order ones, which become more and more unreliable numerically, especially close to discontinuous regions. The one-step 3rd-order GKS thus becomes less robust than the second-order one. Instead of continuing to construct high-order one-step GKS methods, it may be a better choice to combine the time accuracy of the GKS flux function with the RK technique for robustness.

Starting from the 1940s, methods with multiple stages and multiple derivatives (MSMD) have been used for the numerical solution of ODEs \cite{MMMD1}. This group of methods was reviewed and defined by Hairer and Wanner \cite{MMMD2}, where MSMD methods including 2nd-order derivatives were studied up to 7th-order accuracy and compared with RK methods. However, this technique has hardly been applied to CFD methods, since most schemes are based on 1st-order Riemann solvers. After realizing the benefits of MSMD, a DG method with MSMD has been proposed recently \cite{multi-derivative}. Almost at the same time, a 4th-order 2-stage scheme based on the generalized Riemann problem (GRP) for the Euler equations was proposed \cite{4th2stage-Li}. Similarly, a 4th-order 2-stage GKS has been designed for the Navier-Stokes solutions \cite{4th2stage-Pan}. Benefiting from the 2nd-order GKS flux function and the two-stage strategy, the 4th-order 2-stage GKS shows great accuracy and outstanding robustness, thanks to the reduced number of reconstruction stages, and the efficiency of the scheme is also superior in comparison with other Riemann solver based 4th-order schemes using a 4-stage RK technique.

In this paper, two kinds of 5th-order GKS will be proposed by using the MSMD technique and taking advantage of the high-order time-accurate GKS flux function. One of the 5th-order schemes has 3 stages and uses a 2nd-order GKS flux function. The other 5th-order scheme has only 2 stages, where a 3rd-order flux function is used. To further improve the efficiency of the 5th-order 2-stage GKS, a simplified 3rd-order GKS flux function is adopted \cite{3rdGKS-simplified}.
Meanwhile, in order to present a complete picture of the high-order gas kinetic schemes and to compare among the GKS methods, the 3rd-order 1-stage multi-dimensional GKS \cite{3rdGKS-Luo} and the 4th-order 2-stage GKS \cite{4th2stage-Pan} will be summarized in this paper as well. Thus a family of high-order GKS methods, with variations, from 3rd order up to 5th order will be presented. For the spatial reconstruction, the WENO technique has achieved great success, especially on structured meshes. Since we are focusing on the improvement of the time accuracy of the schemes, the same 5th-order WENOZ reconstruction \cite{wenoz} based on characteristic variables will be used for all schemes, including the 5th-order Godunov method with the RK technique.

This paper is organized as follows. Section 2 introduces the multi-stage time integrating techniques. Section 3 gives a brief review of the GKS flux solvers and introduces the numerical algorithm for MSMD GKS. Section 4 presents the numerical results from the different schemes and their comparison with other standard high-order methods with the exact Riemann solver in terms of accuracy, efficiency, and robustness. Finally, we end with some concluding remarks.

\section{Multi-stage Multi-derivative Methods}
The conservation laws
\begin{equation}\label{ms-1}
\begin{split}
\textbf{w}_t+ \nabla \cdot \textbf{F}(\textbf{w})=0,\quad \textbf{w}(0,\textbf{x})=\textbf{w}_0(\textbf{x}),\quad \textbf{x}\in \Omega \subseteq \mathbb{R}^d,
\end{split}
\end{equation}
for the conserved mass, momentum, and energy $\textbf{w}$ can be written as
$$\textbf{w}_t=-\nabla \cdot {\textbf{F}}(\textbf{w}) .$$
With a spatial discretization $\textbf{w}^h$ and an appropriate evaluation of $-\nabla \cdot {\textbf{F}}(\textbf{w})$, the original PDEs become a system of ordinary differential equations (ODEs)
\begin{equation}\label{ms-2}
\begin{split}
\textbf{w}^h_t=L(\textbf{w}^h),\quad t=t_n.
\end{split}
\end{equation}
Well-established numerical schemes for ODEs can be used to solve this initial value problem. For a smooth function $L$, the solution $\textbf{w}(\Delta t)$ around $t=t_n=0$ becomes
\begin{equation}\label{ms-3}
\begin{split}
\textbf{w}(\Delta t)&=\textbf{w}(0)+\Delta t \textbf{w}^{(1)}(0)+\frac{{\Delta t}^2}{2} \textbf{w}^{(2)}(0)+\frac{{\Delta t}^3}{6} \textbf{w}^{(3)}(0) \\&+\frac{{\Delta t}^4}{24} \textbf{w}^{(4)}(0)+\frac{{\Delta t}^5}{120} \textbf{w}^{(5)}(0) +\cdots+\frac{{\Delta t}^n}{n!} \textbf{w}^{(n)}(0)+\mathcal{O}(\Delta t^{n+1}),
\end{split}
\end{equation}
where $\textbf{w}^{(n)}(t)$ $(n=1,2,3,...)$ refers to
\begin{equation}\label{ms-4}
\begin{split}
\textbf{w}^{(n)}(t)=\frac{d^n\textbf{w}(t)}{dt^n}=\frac{d^{n-1}L(\textbf{w}(t))}{dt^{n-1}}.
\end{split}
\end{equation}
For simplicity of presentation, we define $L = L(w(t))$ and $L^{(n)} = \frac{d^nL(t)}{dt^n}$. An $n$th-order time marching scheme can be constructed straightforwardly if the time derivatives of $L$ up to $(n-1)$th order are provided. However, under most circumstances we can only easily evaluate the low-order derivatives, such as $L$ for the approximate Riemann solver, $L^{(1)}$ for the generalized Riemann problem (GRP) and the 2nd-order GKS flux function, and $L^{(2)}$ for the 3rd-order GKS flux function. A continued construction of higher-order derivatives becomes prohibitive, as exemplified by the extremely complicated 4th-order GKS flux function \cite{liu-tang}. Another approach, similar to the RK method, is to introduce middle stages. The update at $t^{n+1}$ then becomes a linear combination of $L$ and its derivatives at the multiple stages.
If only $L$ is used, the traditional RK method is recovered. But with the inclusion of $L^{(n)}$, multi-stage multi-derivative (MSMD) methods can be constructed.

\subsection{Multi-stage Multi-derivative High Order Methods}
\textbf{Definition 1} According to \cite{multi-derivative}, given a collection of real numbers $\{a_{ij}^{(1)},a_{ij}^{(2)},a_{ij}^{(3)},b_{i}^{(1)},b_{i}^{(2)},b_{i}^{(3)}\}$, a multi-derivative (up to three derivatives), $s$-stage method can be defined as follows:
\begin{equation}\label{multiequation1}
\begin{split}
\textbf{w}_{n+1}=\textbf{w}_{n}+\Delta{t}\sum_{i=1}^sb_i^{(1)}L(\textbf{w}^{i})+{\Delta{t}}^2\sum_{i=1}^sb_{i}^{(2)}L^{(1)}(\textbf{w}^{i}) +{\Delta{t}}^3\sum_{i=1}^sb_{i}^{(3)}L^{(2)}(\textbf{w}^{i}),
\end{split}
\end{equation}
where the intermediate stage values are given by
\begin{equation}\label{multiequation2}
\begin{split}
\textbf{w}^{i}=\textbf{w}_{n}+\Delta{t}\sum_{j=1}^{i-1}a_{ij}^{(1)}L(\textbf{w}^{j})+{\Delta{t}}^2\sum_{j=1}^{i-1}a_{ij}^{(2)}L^{(1)} (\textbf{w}^{j})+{\Delta{t}}^3\sum_{j=1}^{i-1}a_{ij}^{(3)}L^{(2)}(\textbf{w}^{j}).
\end{split}
\end{equation}
Since an MSMD method is explicit, at every intermediate stage the state only depends on the states and derivatives of the previous ones. The Butcher tableau, which is widely used to list all the coefficients of Runge-Kutta and MSMD methods \cite{multi-derivative}, is shown in Table~\ref{buther_tableau}, where $c_i=\sum_{j=1}^s a_{ij}$. Note that for an explicit method all the coefficients $a_{ij}=0$ if $i\leq j$.

\begin{table}[!h]
\begin{center}
\begin{tabular}{c|ccc|ccc|ccc}
$c_1$&$a_{11}^{(1)}$&$\cdots$&$a_{1s}^{(1)}$&$a_{11}^{(2)}$&$\cdots$&$a_{1s}^{(2)}$&$a_{11}^{(3)}$&$\cdots$&$a_{1s}^{(3)}$\\
$\vdots$&$\vdots$&$\ddots$&$\vdots$&$\vdots$&$\ddots$&$\vdots$&$\vdots$&$\ddots$&$\vdots$\\
$c_s$&$a_{s1}^{(1)}$&$\cdots$&$a_{ss}^{(1)}$&$a_{s1}^{(2)}$&$\cdots$&$a_{ss}^{(2)}$&$a_{s1}^{(3)}$&$\cdots$&$a_{ss}^{(3)}$\\
\hline
~&$b_{1}^{(1)}$&$\cdots$&$b_{s}^{(1)}$&$b_{1}^{(2)}$&$\cdots$&$b_{s}^{(2)}$&$b_{1}^{(3)}$&$\cdots$&$b_{s}^{(3)}$\\
\end{tabular}
\vspace{-1mm}
\caption{\label{buther_tableau} Butcher tableau for a multi-derivative (up to 3) multi-stage method.}
\end{center}
\end{table}

In the following, a few cases related to high-order CFD methods are presented.

\subsection{Traditional Runge-Kutta Methods: RK4 and RK5}
The Butcher tableaus for the classical 4th-order, four-stage Runge-Kutta (RK4) method \cite{weno} and the 5th-order, six-stage Runge-Kutta (RK5) method \cite{RK56} are given in Table~\ref{4th4stage} and Table~\ref{5th6stage}. The computational time and robustness of the above RK5 with Riemann solvers for the flux evaluation will be compared with those of our newly proposed 5th-order MSMD GKS methods. Since most high-order schemes with Riemann solvers use 3rd-order or 4th-order time-accurate RK methods, in this paper many comparisons will be made with the 4th-order RK (RK4) scheme.
\begin{table}[!h]
\begin{center}
\begin{tabular}{c|cccc}
0&0&0&0&0\\
1/2&1/2&0&0&0\\
1/2&0&1/2&0&0\\
1&0&0&1&0\\
\hline
~&1/6&1/3&1/3&1/6\\
\end{tabular}
\vspace{-1mm}
\caption{\label{4th4stage} Butcher tableau for RK4.}
\end{center}
\end{table}

\begin{table}[!h]
\begin{center}
\begin{tabular}{c|cccccc}
0&0&0&0&0&0&0\\
1/4&1/4&0&0&0&0&0\\
3/8&3/32&9/32&0&0&0&0\\
12/13&1932/2197&-7200/2197&7296/2197&0&0&0\\
1&439/216&-8&3680/513&-845/4104&0&0\\
1/2&-8/27&2&-3544/2565&1859/4104&-11/40&0\\
\hline
~&16/135&0&6656/12825&28561/56430&-9/50&2/55\\
\end{tabular}
\vspace{-1mm}
\caption{\label{5th6stage} Butcher tableau for RK5.}
\end{center}
\end{table}

\subsection{A 3rd-Order 1-Stage Method: S1O3}
A 3rd-order accurate GKS flux function provides time derivatives up to $\textbf{w}^{(3)}=L^{(2)}$. A direct Taylor expansion method can be used to update the solution,
$$\textbf{w}_{n+1}=\textbf{w}_{n}+\Delta{t}L+\frac{1}{2}\Delta{t}^2L^{(1)}+\frac{1}{6}\Delta{t}^3L^{(2)}.$$
This is the 3rd-order one-stage time-accurate scheme \cite{3rdGKS-Li,3rdGKS-Luo,3rdGKS-simplified}. The corresponding Butcher tableau is given in Table~\ref{3rd1stage}.

\begin{table}[!h]
\begin{center}
\begin{tabular}{c|c|c|c}
0&0&0&0\\
\hline
~&1&1/2&1/6\\
\end{tabular}
\vspace{-1mm}
\caption{\label{3rd1stage} Butcher tableau for the one-stage 3rd-order method (S1O3).}
\end{center}
\end{table}

\subsection{A 4th-Order 2-Stage Method: S2O4}
The 2-stage (S2) 4th-order (O4) method is unique \cite{MMMD3}. This method has been used in CFD applications \cite{multi-derivative,4th2stage-Li,4th2stage-Pan}, showing good accuracy, high efficiency, and robustness. It can be written as
\begin{equation}\label{4th2stage1}
\begin{split}
\textbf{w}^{1}&=\textbf{w}_{n}+\frac{1}{2}\Delta{t}L(\textbf{w}_n)+\frac{1}{8}\Delta{t}^2L^{(1)}(\textbf{w}_n),\\
\textbf{w}_{n+1}&=\textbf{w}_{n}+\Delta{t}L(\textbf{w}_n)+\frac{1}{2}\Delta{t}^2[\frac{1}{3}L^{(1)}(\textbf{w}_n)+\frac{2}{3}L^{(1)}(\textbf{w}^{1})].
\end{split}
\end{equation}
The Butcher tableau for the two-stage fourth-order (S2O4) method is given in Table~\ref{4th}. The region of A-stability is plotted in Fig.~\ref{A-stability}.
\begin{table}[!h]
\begin{center}
\begin{tabular}{c|cc|cc}
0&0&0&0&0\\
1/2&1/2&0&1/8&0\\
\hline
~&1&0&1/6&1/3\\
\end{tabular}
\vspace{-1mm}
\caption{\label{4th} Butcher tableau for S2O4.}
\end{center}
\end{table}

\subsection{5th-Order 3-Stage Methods: S3O5}
For the 5th-order MSMD methods, the coefficients are not uniquely defined. With the constraints $a^{(1)}_{ij}=0, j\neq 1$, several choices are given in \cite{MMMD3}. The choices of the coefficients have been studied in \cite{ssp}. However, the real performance of the different schemes for the Euler and N-S equations has not been reported yet. Here we will construct two kinds of 5th-order 3-stage (S3O5) methods. The first choice is given by
\begin{equation}\label{s3o51}
\begin{split}
\textbf{w}^{1}&=\textbf{w}_{n}+\frac{2}{5}\Delta{t}L(\textbf{w}_n)+\frac{2}{25}\Delta{t}^2L^{(1)}(\textbf{w}_n),\\
\textbf{w}^{2}&=\textbf{w}_{n}+\Delta{t}L(\textbf{w}_n)+\frac{1}{2}\Delta{t}^2[-\frac{1}{2}L^{(1)}(\textbf{w}_n)+\frac{3}{2}L^{(1)}(\textbf{w}^{1})],\\
\textbf{w}_{n+1}&=\textbf{w}_{n}+\Delta{t}L(\textbf{w}_n)+\frac{1}{2}\Delta{t}^2[\frac{1}{4}L^{(1)}(\textbf{w}_n)+\frac{25}{36}L^{(1)}(\textbf{w}^{1})+\frac{1}{18}L^{(1)}(\textbf{w}^{2})],
\end{split}
\end{equation}
which is denoted by S3O5; its Butcher tableau is given in Table~\ref{s3o511}. Note that $a^{(2)}_{31}=-1/4 <0$.

\begin{table}[!h]
\begin{center}
\begin{tabular}{c|ccc|ccc}
0&0&0&0&0&0&0\\
2/5&2/5&0&0&2/25&0&0\\
1&1&0&0&-1/4&3/4&0\\
\hline
~&1&0&0&1/8&25/72&1/36\\
\end{tabular}
\vspace{-1mm}
\caption{\label{s3o511} Butcher tableau for S3O5.}
\end{center}
\end{table}

The second choice is given by
\begin{equation}\label{s3o52}
\begin{split}
\textbf{w}^{1}&=\textbf{w}_{n}+\frac{3}{10}\Delta{t}L(\textbf{w}_n)+\frac{9}{200}\Delta{t}^2L^{(1)}(\textbf{w}_n),\\
\textbf{w}^{2}&=\textbf{w}_{n}+\frac{3}{4}\Delta{t}L(\textbf{w}_n)+\frac{9}{32}\Delta{t}^2L^{(1)}(\textbf{w}^{1}),\\
\textbf{w}_{n+1}&=\textbf{w}_{n}+\Delta{t}L(\textbf{w}_n)+\frac{1}{2}\Delta{t}^2[\frac{5}{27}L^{(1)}(\textbf{w}_n)+\frac{50}{81}L^{(1)}(\textbf{w}^{1})+\frac{16}{81}L^{(1)}(\textbf{w}^{2})],
\end{split}
\end{equation}
which is named S3O5+; its Butcher tableau is given in Table~\ref{s3o522}.

\begin{table}[!h]
\begin{center}
\begin{tabular}{c|ccc|ccc}
0&0&0&0&0&0&0\\
3/10&3/10&0&0&9/200&0&0\\
3/4&3/4&0&0&0&9/32&0\\
\hline
~&1&0&0&5/54&25/81&8/81\\
\end{tabular}
\vspace{-1mm}
\caption{\label{s3o522} Butcher tableau for S3O5+.}
\end{center}
\end{table}

In comparison with S3O5, S3O5+ keeps all coefficients positive, which may give a better stability property in numerical simulations. The numerical performance will be examined in this paper, and S3O5+ does show better robustness.

\subsection{A 5th-Order 2-Stage Method: S2O5}
To achieve fifth-order time accuracy, we may construct a scheme with up to 3rd-order derivatives and two stages \cite{4th2stage-Pan}. The scheme is given by
\begin{equation}\label{s2o51}
\begin{split}
\textbf{w}^{1}&=\textbf{w}_{n}+\frac{2}{5}\Delta{t}L(\textbf{w}_n)+\frac{2}{25}\Delta{t}^2L^{(1)}(\textbf{w}_n),\\
\textbf{w}_{n+1}&=\textbf{w}_{n}+\Delta{t}L(\textbf{w}_n)+\frac{1}{2}\Delta{t}^2L^{(1)}(\textbf{w}_n)+\frac{1}{6}\Delta{t}^3[\frac{3}{8}L^{(2)}(\textbf{w}_n)+\frac{5}{8}L^{(2)}(\textbf{w}^{1})].
\end{split}
\end{equation}
The Butcher tableau for the 5th-order two-stage (S2O5) scheme with up to 3rd-order derivatives is given in Table~\ref{5th2stage}. The region of stability is plotted in Fig.~\ref{A-stability}.
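To make these updates concrete, the following is a minimal sketch (not the flow solver itself) that applies the S2O4 update of Eq.~(\ref{4th2stage1}) and the S2O5 update of Eq.~(\ref{s2o51}) to the scalar test problem $w_t=\lambda w$, for which the required derivatives $L^{(1)}=\lambda^2 w$ and $L^{(2)}=\lambda^3 w$ are available in closed form; a crude empirical order check is included.
\begin{verbatim}
import numpy as np

lam = -1.0                  # test problem: w' = L(w) = lam * w
L   = lambda w: lam * w
L1  = lambda w: lam**2 * w  # L^(1) = dL/dt
L2  = lambda w: lam**3 * w  # L^(2) = d^2L/dt^2

def s2o4_step(w, dt):       # Eq. (4th2stage1)
    w1 = w + 0.5*dt*L(w) + dt**2/8.0*L1(w)
    return w + dt*L(w) + 0.5*dt**2*(L1(w)/3.0 + 2.0*L1(w1)/3.0)

def s2o5_step(w, dt):       # Eq. (s2o51)
    w1 = w + 0.4*dt*L(w) + 0.08*dt**2*L1(w)
    return (w + dt*L(w) + 0.5*dt**2*L1(w)
            + dt**3/6.0*(0.375*L2(w) + 0.625*L2(w1)))

# crude order check against the exact solution w(1) = exp(lam)
for step, name in [(s2o4_step, "S2O4"), (s2o5_step, "S2O5")]:
    errs = []
    for n in (10, 20):
        w, dt = 1.0, 1.0/n
        for _ in range(n):
            w = step(w, dt)
        errs.append(abs(w - np.exp(lam)))
    print(name, "observed order:", np.log2(errs[0]/errs[1]))
\end{verbatim}
On this linear problem the two updates reproduce the Taylor expansion of $e^{\lambda\Delta t}$ up to $\mathcal{O}(\Delta t^4)$ and $\mathcal{O}(\Delta t^5)$, respectively, consistent with the tableaus above.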
\begin{table}[!h]
\begin{center}
\begin{tabular}{c|cc|cc|cc}
0&0&0&0&0&0&0\\
2/5&2/5&0&2/25&0&0&0\\
\hline
1&1&0&1/2&0&1/16&5/48\\
\end{tabular}
\vspace{-1mm}
\caption{\label{5th2stage} Butcher tableau for S2O5.}
\end{center}
\end{table}

For an RK-type method, the range covered along the imaginary axis indicates the stability region. As shown in Fig.~\ref{A-stability}, the S2O4 method has the largest coverage of the imaginary axis by its stability contour among all the schemes. This is confirmed by the numerical results, which show that S2O4 has the best robustness. On the other hand, the S2O5 method in Table~\ref{5th2stage} shows the weakest A-stability. However, for the S2O5 method the coefficient $a^{(3)}_{21}$ is a free parameter, which can be chosen to increase the stability without losing accuracy. How to optimize this coefficient is still not clear. Nevertheless, with the modified coefficients in Table~\ref{5th2stage_modified}, named S2O5+, the scheme improves its A-stability, as shown in Fig.~\ref{A-stability}. The S2O5+ method contains the largest A-stable area in the negative half of the complex plane in Fig.~\ref{A-stability}. The improvement in robustness from S2O5+ will be shown in the numerical tests.

For RK methods, a lot of effort has been devoted to minimizing the dissipation and dispersion errors of the schemes \cite{RK-Acoustic}, which is important in turbulence and acoustic computations. For the different schemes studied in this paper, the dissipation rate and phase error defined in \cite{RK-Acoustic} are plotted in Fig.~\ref{Dissipation}, where $c$ is the wave speed for the linear advection equation and $k$ is the wave number. It shows that, with the same time step, both S3O5+ and S2O5+ have smaller dissipation and dispersion errors than RK5. This indicates that high-order MSMD methods have the potential to give more accurate solutions than those with the traditional RK technique.

\begin{table}[!h]
\begin{center}
\begin{tabular}{c|cc|cc|cc}
0&0&0&0&0&0&0\\
2/5&2/5&0&2/25&0&4/375&0\\
\hline
1&1&0&1/2&0&1/16&5/48\\
\end{tabular}
\vspace{-1mm}
\caption{\label{5th2stage_modified} Butcher tableau for S2O5+.}
\end{center}
\end{table}

\begin{figure}[!h]
\centering
\includegraphics[width=0.444\textwidth]{a-stability}\includegraphics[width=0.444\textwidth]{a-stability-local}
\caption{\label{A-stability} Left figure: regions of A-stability for the RK5, S2O4, S2O5, S2O5+, S3O5, and S3O5+ schemes. Right figure: enlarged local region of the left figure.}
\end{figure}

\begin{figure}[!h]
\centering
\includegraphics[width=0.5\textwidth]{Dissipation-new}\includegraphics[width=0.5\textwidth]{phase-new}
\caption{\label{Dissipation} The dissipation (left) and dispersion (right) properties of the different schemes. The dissipation rate $||r|-1|<0.001$ and the phase error $|\delta|<0.001$ are set as the accuracy limits in \cite{RK-Acoustic}.}
\end{figure}

\section{High order MSMD Gas Kinetic Schemes}
\subsection{General finite volume framework and kinetic model equation}
In a 2D rectangular mesh, Eq.(\ref{ms-2}) can be written in the semi-discrete form
\begin{equation}\label{finite_volume1}
\begin{split}
\frac{d\bar{W}_{ij}^n}{dt}=L_{ij}(W):=-\frac{1}{\Delta x_{ij}}({F}_{i+1/2,j}^n-{F}_{i-1/2,j}^n)-\frac{1}{\Delta y_{ij}}({G}_{i,j+1/2}^n-{G}_{i,j-1/2}^n),
\end{split}
\end{equation}
where $W=(\rho ,\rho U,\rho V,\rho E)$ are the conservative flow variables and $\bar{W}$ are the cell-averaged values.
Here $F(W(t))=(F_{\rho},F_{\rho U},F_{\rho V},F_{\rho E})$ are the corresponding fluxes across the cell interface in the x-direction, and similarly for $G(W(t))$ in the y-direction. The key point for constructing an MSMD method is to obtain the time-dependent flux functions $F(t)$ and $G(t)$. Most Godunov-type schemes solve the Riemann problem with a time-independent flux, but the GKS solver and the generalized Riemann problem (GRP) provide time-dependent fluxes. The gas kinetic scheme solves the kinetic equation
\begin{equation}\label{finite_volume2}
\begin{split}
f_t+\textbf{u}\cdot\nabla f=\frac{g-f}{\tau},
\end{split}
\end{equation}
where $\textbf{u}$ is the particle velocity. Here $f$ is the gas distribution function, defined in physical and velocity space, i.e., $f(x,y,t,u,v,\xi)$ is a function of the particle velocity $(u,v)$, the space and time coordinates $(x,y,t)$, and the internal variable $\xi$. Here $g$ is the corresponding equilibrium state of $f$, and $\tau$ is the relaxation time from $f$ to $g$. The collision term satisfies the compatibility condition
\begin{equation}\label{finite_volume3}
\begin{split}
\int \frac{g-f}{\tau}\psi d\Xi=0,
\end{split}
\end{equation}
where $\psi =(\psi_1, \psi_2, \psi_3, \psi_4)^T = (1,u,v,\frac{1}{2}(u^2+v^2+\xi ^2))^T$, $d\Xi=dudvd\xi$, $d\xi = d\xi_1d\xi_2...d\xi_K$, $K$ is the number of internal degrees of freedom of $\xi$, with $K=(4-2\gamma)/(\gamma-1)$ for a 2D problem, and $\gamma$ is the specific heat ratio. The connections between the macroscopic mass $\rho$, momentum $(\rho U, \rho V)$, and energy $\rho E$ and the distribution function $f$ are
\begin{equation}\label{finite_volume4}
\left(
\begin{array}{c}
\rho\\
\rho U\\
\rho V\\
\rho E\\
\end{array}
\right)
=\int \psi fd\Xi.
\end{equation}
The time-dependent numerical fluxes across a cell interface, for example in the x-direction, can be evaluated by
\begin{equation}\label{finite_volume5}
F(t)=\int_{-\frac{1}{2}\Delta y}^{\frac{1}{2}\Delta y}\int u\psi f(0,y,t,u,v,\xi)d\Xi dy,
\end{equation}
where the construction of the time-dependent cell interface distribution function $f$ is the core of the gas kinetic scheme.

\subsection{GKS flux solver}
The integral solution of the kinetic model equation is
\begin{equation}\label{flux_solver1}
\begin{split}
f(0,y,t,u,v,\xi)=&\frac{1}{\tau} \int_0^t g(-u(t-t'),y-v(t-t'),t',u,v,\xi)e^{-(t-t')/\tau}dt' \\&+e^{-t/\tau}f_0(-ut,y-vt,u,v,\xi).
\end{split}
\end{equation}
The initial term $f_0$ in the above integral solution is defined as
\begin{equation}\label{flux_solver2}
\begin{split}
f=f_0^l(x,y,u,v,\xi)H(x)+f_0^r(x,y,u,v,\xi)(1-H(x)),
\end{split}
\end{equation}
where $H(x)$ is the Heaviside function, and $f_0^l$ and $f_0^r$ are the initial gas distribution functions on the two sides of the cell interface. To keep third-order accuracy, the gas distribution function in space around $(x,y)=(0,0)$ can be expanded as
\begin{equation}\label{3rd-left-right}
\begin{split}
f_0^{l,r}(x,y)=f_0^{l,r}(0,0)+\frac{\partial{f_0^{l,r}}}{\partial{x}}x+\frac{\partial{f_0^{l,r}}}{\partial{y}}y +\frac{1}{2}\frac{\partial^2{f_0^{l,r}}}{\partial{x^2}}x^2 +\frac{\partial^2{f_0^{l,r}}}{\partial{xy}}xy +\frac{1}{2}\frac{\partial^2{f_0^{l,r}}}{\partial{y^2}}y^2.
\end{split}
\end{equation}
According to the Chapman-Enskog theory, for the Euler equations $f_0^{l,r}(0,0)$ are the equilibrium states
\begin{equation}\label{flux_solver4}
\begin{split}
&f_0^{l,r}(0,0)=g_0^{l,r}.
\end{split}
\end{equation}
For the Navier-Stokes equations, they are given by
\begin{equation}\label{flux_solver5}
\begin{split}
&f_0^{l,r}(0,0)=g_0^{l,r}-\tau(\frac{\partial{g_0^{l,r}}}{\partial{x}}u+\frac{\partial{g_0^{l,r}}}{\partial{y}}v+\frac{\partial{g_0^{l,r}}}{\partial{t}}),
\end{split}
\end{equation}
where the Maxwellian distribution functions $g_0^{l,r}$ are written as
\begin{equation}\label{flux_solver6}
\begin{split}
g_0=\rho(\frac{\lambda}{\pi})^{\frac{K+2}{2}}e^{-\lambda((u-U)^2+(v-V)^2+\xi^2)},
\end{split}
\end{equation}
where $\lambda =m/2kT$, and $m, k, T$ represent the molecular mass, the Boltzmann constant, and the temperature. $g_0^{l,r}$ are the equilibrium states corresponding to the macroscopic flow variables $W_l$ and $W_r$ on the left and right hand sides of a cell interface. After determining the non-equilibrium part $f_0$, the equilibrium part $g$ in the integral solution can be expanded in space and time as follows
\begin{equation}\label{3rd-equ}
\begin{split}
g&=\bar{g} +\frac{\partial{\bar{g}}}{\partial{x}}x+\frac{\partial{\bar{g}}}{\partial{y}}y+\frac{\partial{\bar{g}}}{\partial{t}}t \\&+\frac{1}{2}\frac{\partial^2{\bar{g}}}{\partial{x^2}}x^2 +\frac{\partial^2{\bar{g}}}{\partial{xy}}xy +\frac{1}{2}\frac{\partial^2{\bar{g}}}{\partial{y^2}}y^2 +\frac{1}{2}\frac{\partial^2{\bar{g}}}{\partial{t^2}}t^2 +\frac{\partial^2{\bar{g}}}{\partial{xt}}xt +\frac{\partial^2{\bar{g}}}{\partial{yt}}yt,
\end{split}
\end{equation}
where $\bar{g}$ can be obtained using the compatibility condition in Eq.(\ref{finite_volume3}),
\begin{equation}\label{g0-collision}
\begin{split}
\int \psi \bar{g}d\Xi=\bar{W}=\int_{u>0} \psi g_ld\Xi+\int_{u<0} \psi g_rd\Xi.
\end{split}
\end{equation}
Before calculating all the derivatives, let us introduce the following notation
\begin{equation}\label{coe-relation}
\begin{split}
&a_1=g_x/g,a_2=g_y/g,A=g_t/g,d_{11}=\frac{\partial{a_1}}{\partial{x}}, \\&d_{12}=\frac{\partial{a_1}}{\partial{y}}=\frac{\partial{a_2}}{\partial{x}}, d_{22}=\frac{\partial{a_2}}{\partial{y}}, b_1=\frac{\partial{a_1}}{\partial{t}}=\frac{\partial{A}}{\partial{x}}, b_2=\frac{\partial{a_2}}{\partial{t}}=\frac{\partial{A}}{\partial{y}}, B=\frac{\partial{A}}{\partial{t}}.
\end{split}
\end{equation}
All the coefficients $a_1,a_2,A,...$ are determined by the conservative flow variables and their gradients. Each coefficient can be written as $\Lambda=\Lambda_1 \psi_1+\Lambda_2 \psi_2+\Lambda_3 \psi_3+\Lambda_4 \psi_4$ and is determined as follows.\\
First-order derivatives:
\begin{equation}\label{coe-determine-1st}
\begin{split}
&\left\langle a_1\right\rangle=\frac{\partial{W}}{\partial{x}}, \left\langle a_2\right\rangle=\frac{\partial{W}}{\partial{y}}, \left\langle A+a_1u+a_2v\right\rangle=0.
\end{split}
\end{equation}
Second-order derivatives:
\begin{equation}\label{coe-determine-2nd}
\begin{split}
&\left\langle a_1^2+d_{11}\right\rangle=\frac{\partial^2{W}}{\partial{x^2}}, \left\langle a_2^2+d_{22}\right\rangle=\frac{\partial^2{W}}{\partial{y^2}}, \left\langle a_1a_2+d_{12}\right\rangle=\frac{\partial^2{W}}{\partial{xy}}, \\
&\left\langle (a_1^2+d_{11})u+(a_1a_2+d_{12})v+(Aa_1+b_1)\right\rangle=0, \\
&\left\langle (a_1a_2+d_{12})u+(a_2^2+d_{22})v+(Aa_2+b_2)\right\rangle=0, \\
&\left\langle (Aa_1+b_1)u+(Aa_2+b_2)v+(A^2+B)\right\rangle=0,
\end{split}
\end{equation}
where $\left\langle ... \right\rangle$ denotes the moments of a gas distribution function defined by
\begin{equation}\label{flux_solver10}
\begin{split}
\left\langle ...\right\rangle=\int \psi g(...)d\Xi.
\end{split}
\end{equation}
In the following subsections, the final expressions for the 3rd-order and 2nd-order GKS flux functions are listed. The detailed considerations for the GKS flux construction are given in \cite{originalGKS} for the 2nd-order flux and in \cite{3rdGKS-Luo} for the 3rd-order one.

\subsubsection{Full 3rd-order GKS flux in 2D}
With the definition of a physical particle collision time $\tau$ and a numerical one $\tau_n$, which provides additional dissipation in unresolved regions, the integral solution becomes
\begin{equation}\label{tau-n}
\begin{split}
f(0,y,t,u,v,\xi)=&\frac{1}{\tau_n} \int_0^t g(-u(t-t'),y-v(t-t'),t',u,v,\xi)e^{-(t-t')/\tau_n}dt' \\&+e^{-t/\tau_n}f_0(-ut,y-vt,u,v,\xi).
\end{split}
\end{equation}
Substituting Eqs.(\ref{3rd-left-right}) and (\ref{3rd-equ}), with the coefficients in Eq.(\ref{coe-relation}), into the above equation, we get
\begin{equation}\label{3rdsolver_equ1}
\begin{split}
&\frac{1}{\tau_n} \int_0^t g(-u(t-t'),y-v(t-t'),t',u,v,\xi)e^{-(t-t')/\tau_n}dt' \\&=C_1 \bar{g}+C_2 \bar{g}\bar{a_1}u+C_1 \bar{g}\bar{a_2}y+C_2 \bar{g}\bar{a_2}v+C_3 \bar{g}\bar{A}+\frac{1}{2}C_4 \bar{g}(\bar{a_1}^2+\bar{d_{11}})u^2 \\+&\frac{1}{2}C_1 \bar{g}(\bar{a_2}^2+\bar{d_{22}})y^2+C_2 \bar{g}(\bar{a_2}^2+\bar{d_{22}})vy+\frac{1}{2}C_4 \bar{g}(\bar{a_2}^2+\bar{d_{22}})v^2 \\+&C_2 \bar{g}(\bar{a_1}\bar{a_2}+\bar{d_{12}})uy+C_4 \bar{g}(\bar{a_1}\bar{a_2}+\bar{d_{12}})uv+\frac{1}{2}C_5 \bar{g}(\bar{A}^2+\bar{B}) \\+&C_6 \bar{g}(\bar{A}\bar{a_1}+\bar{b_{1}})u+C_3 \bar{g}(\bar{A}\bar{a_2}+\bar{b_{2}})y+C_6 \bar{g}(\bar{A}\bar{a_2}+\bar{b_{2}})v,
\end{split}
\end{equation}
and
\begin{equation}\label{3rdsolver_equ2}
\begin{split}
e^{-t/\tau_n}f_0(-ut,y-vt,u,v,\xi)=
\begin{cases}
e^{-t/\tau_n}f_0^l(-ut,y-vt,u,v,\xi),&u>0,\\
e^{-t/\tau_n}f_0^r(-ut,y-vt,u,v,\xi),&u<0,
\end{cases}
\end{split}
\end{equation}
where
\begin{equation}\label{3rdsolver_equ3}
\begin{split}
&e^{-t/\tau_n}f_0^{l,r}(-ut,y-vt,u,v,\xi) \\&=C_7 g_0^{l,r}[1-\tau(a_1^{l,r}u+a_2^{l,r}v+A^{l,r})] \\&+C_8 g_0^{l,r}[a_1^{l,r}u-\tau (((a_1^{l,r})^2+d_{11}^{l,r})u^2+(a_1^{l,r}a_2^{l,r}+d_{12}^{l,r})uv+(A^{l,r}a_1^{l,r}+b_1^{l,r})u)] \\&+C_7g_0^{l,r}[a_2^{l,r}-\tau ((a_1^{l,r}a_2^{l,r}+d_{12}^{l,r})u+((a_2^{l,r})^2+d_{22}^{l,r})v+A^{l,r}a_2^{l,r}+b_2^{l,r})]y \\&+C_8 g_0^{l,r}[a_2^{l,r}v-\tau (((a_1^{l,r}a_2^{l,r}+d_{12}^{l,r})uv+((a_2^{l,r})^2+d_{22}^{l,r})v^2+(A^{l,r}a_2^{l,r}+b_2^{l,r})v)] \\&+\frac{1}{2}C_9 g_0^{l,r}((a_1^{l,r})^2+d_{11}^{l,r})u^2+\frac{1}{2}C_7g_0^{l,r}((a_2^{l,r})^2+d_{22}^{l,r})y^2 \\&+C_8 g_0^{l,r}((a_2^{l,r})^2+d_{22}^{l,r})vy+\frac{1}{2}C_9 g_0^{l,r}((a_2^{l,r})^2+d_{22}^{l,r})v^2 \\&+C_8 g_0^{l,r}(a_1^{l,r}a_2^{l,r}+d_{12}^{l,r})uy+C_9 g_0^{l,r}(a_1^{l,r}a_2^{l,r}+d_{12}^{l,r})uv.
\end{split}
\end{equation}
The time-integral coefficients are given by
\begin{equation}\label{3rdsolver_equ4}
\begin{split}
&C_1=1-e^{-t/ \tau _n},C_2=(t+\tau)e^{-t/ \tau _n}- \tau,C_3=t-\tau+\tau e^{-t/ \tau _n},C_4=(-t^2-2\tau t)e^{-t/\tau _n}, \\&C_5=t^2-2\tau t,C_6=-\tau t(1+e^{-t/ \tau _n}),C_7=e^{-t/ \tau _n},C_8=-te^{-t/ \tau _n},C_9=t^2e^{-t/ \tau _n}.
\end{split}
\end{equation}

\subsubsection{Simplified 3rd-order GKS flux in 2D}
The full 3rd-order flux function is very complicated. A simplified version has been proposed by Zhou et al. \cite{3rdGKS-simplified}. The new set of coefficients is introduced as
\begin{equation}\label{flux_solver11}
\begin{split}
&a_x=a_1=g_x/g,a_y=a_2=g_y/g,a_t=A=g_t/g, \\
&a_{xx}=g_{xx}/g,a_{xy}=g_{xy}/g,a_{yy}=g_{yy}/g, \\
&a_{xt}=g_{xt}/g,a_{yt}=g_{yt}/g,a_{tt}=g_{tt}/g.
\end{split}
\end{equation}
Eqs.(\ref{coe-determine-1st}) and (\ref{coe-determine-2nd}) are then replaced by
\begin{equation}\label{flux_solver12}
\begin{split}
&\left\langle a_x\right\rangle=\frac{\partial{W}}{\partial{x}}, \left\langle a_y\right\rangle=\frac{\partial{W}}{\partial{y}}, \left\langle a_t+a_xu+a_yv\right\rangle=0, \\
&\left\langle a_{xx}\right\rangle=\frac{\partial^2{W}}{\partial{x^2}}, \left\langle a_{xy}\right\rangle=\frac{\partial^2{W}}{\partial{xy}}, \left\langle a_{yy}\right\rangle=\frac{\partial^2{W}}{\partial{y^2}}, \\
&\left\langle a_{xx}u+a_{xy}v+a_{xt}\right\rangle=0, \\
&\left\langle a_{xy}u+a_{yy}v+a_{yt}\right\rangle=0, \\
&\left\langle a_{xt}u+a_{yt}v+a_{tt}\right\rangle=0.
\end{split}
\end{equation}
The final distribution function becomes
\begin{equation}\label{3rd-simplify-flux}
\begin{split}
f(0,y,t,u,v,\xi)=&\bar{g}+\frac{1}{2}\bar{g}_{yy}y^2+\bar{g}_tt+\frac{1}{2}\bar{g}_{tt}t^2-\tau[(\bar{g}_t+u\bar{g}_x+v\bar{g}_y)+(\bar{g}_{tt}+u\bar{g}_{xt}+v\bar{g}_{yt})t] \\&-e^{-t/\tau_n}[\bar{g}-(u\bar{g}_x+v\bar{g}_y)t] \\&+e^{-t/\tau_n}[g^l-(ug^l_x+vg_y^l)t]H(u)+e^{-t/\tau_n}[g^r-(ug^r_x+vg^r_y)t](1-H(u)).
\end{split}
\end{equation}
Both the full and simplified 3rd-order GKS fluxes achieve the theoretical accuracy. The reason may come from the insensitivity of the macroscopic flux function to the details of the microscopic particle distribution function once conservation is fully imposed in the evolution process. Since the simplified 3rd-order flux offers about a 4-times speed-up in comparison with the complete 3rd-order one in the 2D case, the simplified flux function will be used in all the test cases in this paper.

\subsubsection{2nd-order GKS flux in 2D}
If we drop all the second-order derivatives in Eq.(\ref{3rdsolver_equ1}) and Eq.(\ref{3rdsolver_equ3}), the traditional 2nd-order GKS flux solver is recovered,
\begin{equation}\label{2ndgks-1}
\begin{split}
f(0,y,t,u,v,\xi)&=(1-e^{-t/\tau_n})\bar{g}+((t+\tau)e^{-t/ \tau_n}-\tau)(u\bar{a}_1+v\bar{a}_2)\bar{g}+(t-\tau +\tau e^{-t/ \tau_n})\bar{A}\bar{g} \\&+e^{-t/\tau_n}[g^l-(ug^l_x+vg_y^l)(\tau+t)-\tau A^lg^l]H(u) \\&+e^{-t/\tau_n}[g^r-(ug^r_x+vg^r_y)(\tau+t)-\tau A^rg^r](1-H(u)).
\end{split}
\end{equation}

\subsection{Numerical Algorithm for MSMD GKS}
The numerical flux of the GKS is a complicated function of time in non-smooth regions. In order to construct an MSMD GKS, the 1st-order and 2nd-order time derivatives of the flux function have to be properly evaluated. Since the main contribution of a flux function in a numerical scheme is the total transport between cells within a time step, the time derivatives of the flux function are evaluated in an averaged sense over the time step. Denote by
\begin{align*}
\mathbb{F}_{i+1/2}(W^n,\delta) =\int_{t_n}^{t_n+\delta}F_{i+1/2}(W^n,t)dt&=\int_{t_n}^{t_n+\delta}\int u \psi f(x_{i+1/2},t,u, v,\xi)d\Xi dt
\end{align*}
the total transport of the flux at the cell interface $i+1/2$ within a time interval $\delta$. For a 2nd-order GKS flux, the flux can be approximated by a linear function within a time step,
\begin{align}\label{na1}
F_{i+1/2}(W^n,t)=F_{i+1/2}^n+ \partial_t F_{i+1/2}^nt.
\end{align} The coefficients $F_{i+1/2}^n$ and $\partial_tF_{i+1/2}^n$ can be determined as follows \begin{align*} F_{i+1/2}^n\Delta t&+\frac{1}{2}\partial_t F_{i+1/2}^n\Delta t^2 =\mathbb{F}_{i+1/2}(W^n,\Delta t),\\ \frac{1}{2}F_{i+1/2}^n\Delta t&+\frac{1}{8}\partial_t F_{i+1/2}^n\Delta t^2 =\mathbb{F}_{i+1/2}(W^n,\Delta t/2). \end{align*} By solving the linear system, we have \begin{align}\label{na2} F_{i+1/2}^n&=(4\mathbb{F}_{i+1/2}(W^n,\Delta t/2)-\mathbb{F}_{i+1/2}(W^n,\Delta t))/\Delta t,\nonumber\\ \partial_t F_{i+1/2}^n&=4(\mathbb{F}_{i+1/2}(W^n,\Delta t)-2\mathbb{F}_{i+1/2}(W^n,\Delta t/2))/\Delta t^2. \end{align} A similar formulation can be obtained for the flux in the $y$-direction. For the 3rd-order GKS flux, $F(t)$ is approximated by a quadratic function of time, which includes the second-order time derivative, \begin{align}\label{na3} F_{i+1/2}(W^n,t)=F_{i+1/2}^n+ \partial_t F_{i+1/2}^nt+\frac{1}{2}\partial_{tt}F_{i+1/2}^nt^2. \end{align} Three conditions \begin{align*} F_{i+1/2}^n\Delta t+\frac{1}{2}\partial_t F_{i+1/2}^n\Delta t^2+\frac{1}{6}\partial_{tt} F_{i+1/2}^n\Delta t^3&=\mathbb{F}_{i+1/2}(W^n,\Delta t),\\ \frac{2}{3}F_{i+1/2}^n\Delta t+\frac{2}{9}\partial_t F_{i+1/2}^n\Delta t^2+\frac{4}{81}\partial_{tt} F_{i+1/2}^n\Delta t^3&=\mathbb{F}_{i+1/2}(W^n,2\Delta t/3),\\ \frac{1}{3}F_{i+1/2}^n\Delta t+\frac{1}{18}\partial_t F_{i+1/2}^n\Delta t^2+\frac{1}{162}\partial_{tt} F_{i+1/2}^n\Delta t^3&=\mathbb{F}_{i+1/2}(W^n,\Delta t/3) \end{align*} can be used to determine these coefficients \begin{align}\label{na4} F_{i+1/2}^n&=\frac{1}{\Delta t}(\mathbb{F}_{i+1/2}(W^n,\Delta t)-\frac{9}{2}\mathbb{F}_{i+1/2}(W^n,2\Delta t/3)+9\mathbb{F}_{i+1/2}(W^n,\Delta t/3)) , \nonumber\\ \partial_t F_{i+1/2}^n&=-\frac{9}{\Delta t^2}(\mathbb{F}_{i+1/2}(W^n,\Delta t)-4\mathbb{F}_{i+1/2}(W^n,2\Delta t/3)+5\mathbb{F}_{i+1/2}(W^n,\Delta t/3)) , \nonumber\\ \partial_{tt} F_{i+1/2}^n&=\frac{9}{\Delta t^3}(3\mathbb{F}_{i+1/2}(W^n,\Delta t)-9\mathbb{F}_{i+1/2}(W^n,2\Delta t/3)+9\mathbb{F}_{i+1/2}(W^n,\Delta t/3)) . \end{align} \subsection{Remarks on spatial reconstructions} In the 1-D case, the standard WENO5-Z reconstruction \cite{wenoz} based on characteristic variables is applied to obtain the cell interface values $W^{l,r}$. In the 2-D case, the reconstruction is conducted direction by direction, and the multi-dimensional effect is included through the flux evaluation at Gaussian quadrature points on each cell interface. For example, at the cell interface $(i+1/2,j)$, the 1-D WENO5-Z reconstruction is first applied to get the interface averaged values $\widetilde{W}^{l,r}_{i+1/2,j}$ by using the averaged values within the neighboring cells $W_{i-2,j},\ldots,W_{i+3,j}$. Then, the tangential reconstruction based on $\widetilde{W}^{l,r}_{i+1/2,j-2},\ldots,\widetilde{W}^{l,r}_{i+1/2,j+2}$ is conducted by using the 1-D WENO5-Z again in the y-direction to obtain the values at the Gaussian points. The flux transport in the x-direction through the three Gaussian points is evaluated by quadrature \begin{align} \frac{1}{\Delta y}\int_{y_{j-1/2}}^{y_{j+1/2}}F(W(x_{i+1/2},y,t))dy=\sum_{l=1}^3\omega_lF(W(x_{i+1/2},y_l,t)), \end{align} where $y_l=0,\pm\frac{1}{2}\sqrt{\frac{3}{5}}\Delta y$ and $\omega_l=\frac{4}{9},\frac{5}{18},\frac{5}{18}$ accordingly. This guarantees fifth-order accuracy for the flux calculation in the tangential direction. This reconstruction procedure is exactly the same as the Class B method defined in \cite{accuracy-FVM}.
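Stepping back to the temporal algorithm for a moment, the coefficient recovery in Eq.(\ref{na2}) and Eq.(\ref{na4}) is just a pair of small linear solves and can be sketched in a few lines of Python. The sketch below is illustrative only; \texttt{Fint} stands for a hypothetical user-supplied routine returning the time-integrated flux $\mathbb{F}_{i+1/2}(W^n,\delta)$ at a given interface.
\begin{verbatim}
def msmd_coefficients_2nd(Fint, dt):
    # Recover F^n and dF/dt from two time-integrated fluxes, Eq. (na2).
    # Fint(delta): total transport over [t_n, t_n + delta] (user-supplied).
    F_full = Fint(dt)
    F_half = Fint(dt / 2)
    F0 = (4.0 * F_half - F_full) / dt
    Ft = 4.0 * (F_full - 2.0 * F_half) / dt**2
    return F0, Ft

def msmd_coefficients_3rd(Fint, dt):
    # Recover F^n, dF/dt and d2F/dt2 from three integrals, Eq. (na4).
    F1, F2, F3 = Fint(dt), Fint(2.0 * dt / 3.0), Fint(dt / 3.0)
    F0  = (F1 - 4.5 * F2 + 9.0 * F3) / dt
    Ft  = -9.0 * (F1 - 4.0 * F2 + 5.0 * F3) / dt**2
    Ftt = 9.0 * (3.0 * F1 - 9.0 * F2 + 9.0 * F3) / dt**3
    return F0, Ft, Ftt
\end{verbatim}
The same three integrated fluxes feed all three coefficients in Eq.(\ref{na4}), so the dominant cost remains the evaluation of $\mathbb{F}_{i+1/2}$ itself.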
For GKS, besides the point-wise values at a cell interface, the slopes on both sides of the interface are also needed in the flux evaluation. For the initially discontinuous non-equilibrium part $g_0^{l,r}|_{x_{i+1/2},y_l}$, there is a one-to-one correspondence to $W^{l,r}_{i+1/2,j_l}$ at each Gaussian point, which is reconstructed in the same way as for the Riemann solvers. For the equilibrium part $\bar{g}$, the interface averaged value $\widetilde{\bar{W}}_{i+1/2,j}$ is obtained by Eq.(\ref{g0-collision}). Since $\bar{g}$ represents a continuous equilibrium flow, a 4th-order polynomial can be uniquely determined by using $\widetilde{\bar{W}}_{i+1/2,j-2},\ldots,\widetilde{\bar{W}}_{i+1/2,j+2}$, from which the point-wise values $\widetilde{\bar{W}}_{i+1/2,j_l}$ are obtained. For the non-equilibrium state within each cell $i$ in the 1-D case, based on the reconstructed cell interface values $(W_{i-1/2}^r, W_{i+1/2}^l)$ and the cell average $W_i$, a 2nd-order polynomial can be constructed within the cell. For the equilibrium state reconstruction at the cell interface $i+1/2$, the stencil consists of the four cell averaged values $W_{i-1},\ldots,W_{i+2}$ and the interface value $\bar{W}_{i+1/2}$ itself. Then, a 4th-order polynomial can be obtained without using any limiter. This equilibrium construction has 5th-order accuracy in smooth regions. The detailed construction can be found in \cite{3rdGKS-Luo}. In the 2-D case, at the cell interface $(i+1/2,j)$, after obtaining the interface averaged derivatives $\partial_x \widetilde{W}^{l,r}, \partial_{xx} \widetilde{W}^{l,r}$, the derivatives at each Gaussian point are constructed by the same WENO reconstruction in the tangential direction as for the reconstruction of $W^{l,r}$. On the other hand, after obtaining $W^{l,r}$ at the three Gaussian points, $\partial_y W^{l,r}$ and $\partial_{yy} W^{l,r}$ can be determined by a 2nd-order polynomial which passes through these three points. $\partial_{xy} W^{l,r}$ can be calculated in the same way from the data $\partial_x W^{l,r}$ at the Gaussian points. For the equilibrium part, rather than the WENO reconstruction, a 4th-order polynomial is used to determine all derivatives along the tangential direction, and 5th-order accuracy can be achieved in smooth regions. Since the Gaussian points are used to evaluate the flux transport along the cell interface, the $y$ and $y^2$ terms in Eq.(\ref{3rdsolver_equ1}), (\ref{3rdsolver_equ3}), and (\ref{3rd-simplify-flux}) can be ignored in the flux evaluation. \section{Numerical Tests} The schemes tested in this section include several MSMD GKS methods. The schemes are named SnOr, where n is the number of stages and r the order of temporal accuracy, such as S1O2 (single stage, 2nd-order accuracy), S1O3 (single stage, 3rd-order accuracy), etc. S3O5 (three stages, 5th-order) takes the time marching strategy in Table \ref{s3o511}, while S3O5+ takes the strategy in Table \ref{s3o522}. For the different kinds of S2O5 schemes, the suffixes "c" and "s" indicate the use of the complete or the simplified 3rd-order flux, respectively, and the suffix "+" refers to the coefficients in Table \ref{5th2stage_modified}, while its absence refers to those in Table \ref{5th2stage}. Unless stated otherwise, the numerical results in comparison, especially with the Godunov-type schemes, are based on the same spatial reconstruction.
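Before turning to the test cases, the interface quadrature and tangential derivative recovery described in the reconstruction remarks above can also be made concrete. The following Python sketch is illustrative only; \texttt{W\_gauss} and \texttt{F\_gauss} denote values of one variable at the three Gaussian points of a single interface, and the offsets and weights are exactly those of the quadrature formula above.
\begin{verbatim}
import numpy as np

# Gauss-Legendre data for the three interface points:
# y_l = 0, +/- (1/2) sqrt(3/5) dy, with weights 4/9, 5/18, 5/18.
GAUSS_OFFSETS = np.array([0.0, 0.5 * np.sqrt(0.6), -0.5 * np.sqrt(0.6)])
GAUSS_WEIGHTS = np.array([4.0 / 9.0, 5.0 / 18.0, 5.0 / 18.0])

def interface_average(F_gauss):
    # Interface-averaged x-flux: (1/dy) * integral F dy ~ sum_l w_l F(y_l).
    return GAUSS_WEIGHTS @ np.asarray(F_gauss)

def tangential_derivatives(W_gauss, dy):
    # Fit W(y) = c0 + c1*y + c2*y^2 through the three Gaussian points of
    # one variable and return dW/dy and d2W/dy2 at those same points.
    y = GAUSS_OFFSETS * dy
    V = np.vander(y, 3, increasing=True)   # columns: 1, y, y^2
    c0, c1, c2 = np.linalg.solve(V, np.asarray(W_gauss))
    return c1 + 2.0 * c2 * y, np.full(3, 2.0 * c2)
\end{verbatim}
Applying the same quadratic fit to $\partial_x W^{l,r}$ at the Gaussian points yields $\partial_{xy}W^{l,r}$, as described above.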
For inviscid flow computations, the physical collision time is $\tau = \mu /p = 0$, where $\mu$ is the dynamic viscosity coefficient and $p$ is the pressure, and the numerical collision time is \begin{align*} \tau_{n}=C_1 \Delta t+C_2\displaystyle|\frac{p_l-p_r}{p_l+p_r}|\Delta t, \end{align*} where $C_1,C_2$ are two constants. Generally, $C_1\ll 1$ and $C_2\sim \mathcal{O}(1)$ under the current WENO5-Z reconstruction. For viscous flow computations, the physical collision time is defined as $\tau=\mu/{p}$ and the numerical collision time is $$\tau_{n}= \frac{\mu}{p} + C_2 \displaystyle|\frac{p_l-p_r}{p_l+p_r}|\Delta t .$$ \subsection{Accuracy tests} For the Euler equations, the smooth density propagation is used for the accuracy evaluation. In these cases, both the physical viscosity and the collision time related to the numerical dissipation are set to zero. The initial condition for the 1-D density advection is given by \begin{align*} \rho(x)=1+0.2\sin(\pi x), U(x)=1, p(x)=1, x\in[0,2]. \end{align*} The exact solution under periodic boundary conditions is \begin{align*} \rho(x,t)=1+0.2\sin(\pi(x-t)), U(x,t)=1, p(x,t)=1. \end{align*} The numerical solutions from different schemes after one period of propagation at time $t=2$ are obtained and compared with the exact solution. Since the same fifth-order spatial reconstruction is used for all schemes, the leading truncation error of an $r$th-order GKS is expected to be $\mathcal{O}(\Delta x^5 +\Delta t^r)$. Assuming $\Delta t = c \Delta x$ under the CFL condition, the truncation error is proportional to $\mathcal{O}(\Delta x^5 + c^r (\Delta x)^r)$. If $c\ll 1$, the leading error of $\mathcal{O}(\Delta x^5)$ due to the spatial discretization becomes dominant, and it is hard to evaluate the temporal accuracy. So a rather large time step $\Delta t = 0.25 \Delta x$, which corresponds to $CFL \approx 0.5$, is used to test the spatial and temporal accuracy together. Based on the $L^1$ error, the orders of different schemes at $t=2$ are presented in Table \ref{1d_accuracy2}. All schemes from 2nd-order up to 5th-order achieve their theoretical accuracy. Among all schemes, the two-stage 5th-order method with the complete 3rd-order GKS flux function has the smallest absolute error, while the same scheme with the simplified 3rd-order GKS flux function has a slightly larger absolute error; the order of accuracy of the scheme with the simplified flux nevertheless keeps the theoretical value. The test is extended to the 2-D case, where the density perturbation propagates in the diagonal direction, \begin{align*} &\rho(x,y)=1+0.2\sin(\pi x)\sin(\pi y), \\& U(x,y)=1,V(x,y)=1, p(x,y)=1, \end{align*} with the exact solution \begin{align*} &\rho(x,y,t)=1+0.2\sin(\pi (x-t))\sin(\pi (y-t)), \\&U(x,y,t)=1,V(x,y,t)=1, p(x,y,t)=1. \end{align*} The computational domain is $[-1,1]\times[-1,1]$ and $N{\times}N$ uniform mesh points are used with periodic boundary conditions. The results are shown in Tables \ref{2d_accuracy2} and \ref{2d_accuracy3}, which validate the theoretical accuracy.
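The convergence orders reported in the tables follow in the standard way from the $L^1$ errors on successively halved meshes. A minimal Python sketch, using the S1O2 column of Table \ref{1d_accuracy2} as sample data:
\begin{verbatim}
import math

def observed_orders(errors):
    # Observed order between successive refinements by a factor of 2:
    # order_k = log(e_k / e_{k+1}) / log(2).
    return [math.log(e0 / e1, 2) for e0, e1 in zip(errors, errors[1:])]

# S1O2 L^1 errors on meshes 160, 320, 640, 1280:
print(observed_orders([1.6449e-05, 4.11231e-06, 1.02808e-06, 2.57021e-07]))
# -> approximately [2.000, 2.000, 2.000]
\end{verbatim}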
\begin{table}[!h] \small \begin{center} \def\tablewidth{0.85\textwidth} {\rule{\tablewidth}{1pt}} \begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}c|cc|cc|cc} ~ & S1O2 &~ & S1O3 & ~& S2O4 & ~ \\ \hline mesh & $L^1$ error & Order & $L^1$ error & Order& $L^1$ error & Order \\ \hline 160&1.6449e-005&~ &6.465536e-008&~ &1.762567e-009&~\\ 320& 4.11231e-006 &2.000&7.934459e-009&3.027&5.558891e-011&4.954\\ 640& 1.02808e-006 &2.000&9.871997e-010&3.001&1.793678e-012&5.166\\ 1280& 2.57021e-007 &2.000 &1.232565e-010&3.001&6.391980e-014&4.811\\ \Xhline{1.2pt} ~ & S3O5 & & S3O5+ &~ & S2O5c \\ \hline mesh & $L^1$ error & Order & $L^1$ error & Order& $L^1$ error & Order \\ \hline 160&1.72333e-09&~&1.72327e-09&~&1.55396e-09&~\\ 320& 5.38507e-11&5.000&5.38492e-11&5.000&4.85575e-11 &5.000\\ 640& 1.68291e-12&5.000&1.68297e-12&5.000&1.51832e-12&5.000\\ 1280& 5.36963e-14&4.970&5.34834e-14&4.976&4.82038e-14&4.977\\ \Xhline{1.2pt} ~ & S2O5c+ & & S2O5s &~ & S2O5s+ \\ \hline mesh & $L^1$ error & Order & $L^1$ error & Order& $L^1$ error & Order \\ \hline 160&1.55413e-09&~&1.55396e-09&~&1.578850e-009&~\\ 320& 4.85622e-11&5.000&4.85568e-11&5.000&4.933414e-011&5.000\\ 640& 1.51805e-12&5.000&1.51805e-12&5.000&1.541736e-012&5.000\\ 1280& 4.83028e-14&4.974&4.90587e-14&4.952&4.924277e-014&4.969\\ \end{tabular*} {\rule{\tablewidth}{0.1pt}} \end{center} \vspace{-4mm} \caption{\label{1d_accuracy2} Accuracy test for the 1-D advection of density perturbation by GKS with different temporal accuracy under the same fifth-order reconstruction. $\Delta t = 0.25\Delta x$.} \end{table} \begin{table}[!h] \small \begin{center} \def\tablewidth{1\textwidth} {\rule{\tablewidth}{1pt}} \begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}c|cc|cc|cc} mesh & $L^1$ error & Order & $L^2$ error & Order& $L^{\infty}$ error & Order \\ \hline $20\times20$&3.67810e-06&~ & 4.46724e-06 &~ &8.88218e-06&~\\ $40\times40$&5.85160e-08&5.974 & 7.09236e-08 & 5.976&1.40828e-07&5.979\\ $80\times80$&9.17888e-10 &5.994 & 1.11208e-09&5.995 & 2.20522e-09&5.997\\ $160\times160$&1.43537e-11&5.999 & 1.73878e-11& 5.999& 3.44513e-11&6.000\\ $320\times320$&2.24360e-13 &5.999 &2.71745e-13 & 6.000&5.435787e-13&5.989\\ \end{tabular*} {\rule{\tablewidth}{0.1pt}} \end{center} \vspace{-4mm} \caption{\label{2d_accuracy2} Accuracy test for the 2-D advection of density perturbation for the S3O5+ scheme, $\Delta t = 0.1\Delta x$.} \end{table} \begin{table}[!h] \small \begin{center} \def\tablewidth{1\textwidth} {\rule{\tablewidth}{1pt}} \begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}c|cc|cc|cc} mesh & $L^1$ error & Order & $L^2$ error & Order& $L^{\infty}$ error & Order \\ \hline $20\times20$&3.63607e-06&~ & 4.41462e-06 &~ &8.77532e-06&~\\ $40\times40$&5.78414e-08&5.974 & 7.00846e-08 & 5.976&1.39129e-07&5.979\\ $80\times80$&9.07345e-10 &5.994 & 1.09893e-09&5.995 & 2.17865e-09&5.997\\ $160\times160$&1.41889e-11&5.999 & 1.71825e-11& 5.999& 3.40366e-11&6.000\\ $320\times320$&2.21805e-13 &5.999 &2.68546e-13 & 6.000&5.38458e-13&5.981\\ \end{tabular*} {\rule{\tablewidth}{0.1pt}} \end{center} \vspace{-4mm} \caption{\label{2d_accuracy3} Accuracy test for the 2-D advection of density perturbation for the S2O5s+ scheme, $\Delta t = 0.1\Delta x$.} \end{table} \subsection{One dimensional test cases} Three Riemann problems in 1-D are selected to validate the high-order GKS. At the same time, the results from the RK4 Godunov method with the exact Riemann solver are also included. All simulations are based on the same initial reconstruction. Since all three problems have piecewise-constant initial data, their set-up can be written compactly, as in the sketch below.
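The following Python helper is illustrative only; the function name and interface are ours, and the example states are those of the Sod problem given next.
\begin{verbatim}
import numpy as np

def riemann_1d(n, splits, states, length=1.0):
    # Piecewise-constant (rho, u, p) field on n uniform cells of [0, length].
    # `splits` lists the interior break points; `states` holds one
    # (rho, u, p) triple per sub-interval (len(states) == len(splits) + 1).
    x = (np.arange(n) + 0.5) * length / n            # cell centers
    W = np.empty((n, 3))
    idx = np.searchsorted(splits, x, side="right")   # sub-interval index
    for k, s in enumerate(states):
        W[idx == k] = s
    return x, W

# Sod problem below: break at x = 0.5, states (1,0,1) and (0.125,0,0.1)
x, W = riemann_1d(100, [0.5], [(1.0, 0.0, 1.0), (0.125, 0.0, 0.1)])
\end{verbatim}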
\bigskip \noindent{\sl{(a) Sod problem}} The initial condition for the Sod problem is given by \begin{equation*} (\rho,u,p)=\left\{\begin{aligned} &(1, 0, 1), 0<x<0.5,\\ &(0.125,0,0.1), 0.5 \leq x<1. \end{aligned} \right. \end{equation*} The simulation domain is covered by $100$ uniform mesh points. The solutions at $t=0.2$ are presented. The WENO5-Z reconstruction is based on the characteristic variables. The results from S2O4, S2O5s+, S3O5+, and the RK4 Godunov method with the exact Riemann solver are shown in Fig. \ref{sod}. \bigskip \noindent{\sl{(b) Blast wave problem}} The initial conditions for the blast wave problem are given as follows \begin{equation*} (\rho,u,p)=\left\{\begin{aligned} &(1, 0, 1000), 0\leq x<10,\\ &(1, 0, 0.01), 10\leq x<90,\\ &(1, 0, 100), 90\leq x\leq 100. \end{aligned} \right. \end{equation*} The computational domain is covered with $400$ uniform mesh points and reflecting boundary conditions are applied at both ends. The CFL number is $0.5$. The density and velocity distributions at $t=3.8$ are presented in Fig. \ref{blastwave}. All schemes give almost identical results. Based on the above observations, it seems that the Sod and blast wave cases may not be appropriate tests to distinguish different kinds of high-order schemes. \bigskip \noindent{\sl{(c) Titarev-Toro problem}} The Titarev-Toro problem \cite{ttoro} concerns the propagation of high-frequency sinusoidal waves interacting with a shock, which is a great challenge for the spatial reconstruction \cite{wenoz+,gks-benchmark} and the flux solver \cite{ttoro}. The initial conditions are given as follows \begin{equation*} (\rho,u,p)=\left\{\begin{aligned} &(1.515695, 0.523346, 1.805), 0\leq x<0.5,\\ &(1+0.1\sin(20 \pi x), 0, 1), 0.5\leq x<10. \end{aligned} \right. \end{equation*} A uniform mesh with $1000$ points is used in the computational domain and CFL $=0.5$ is used for all schemes. The result at $t=5$ is shown in Fig. \ref{ttoro}. Different from the previous test cases, there are clear differences between the GKS and Riemann-solver solutions in the middle region, after the shock interacts with the smooth acoustic waves. All multi-stage schemes present less dissipative results than the high-order RK4 Godunov method with the exact Riemann solver. With the same initial reconstruction, the differences must come from the different types of temporal discretization and flux functions. This test case indicates the usefulness of the high-order GKS for acoustic wave computations. \begin{figure}[!h] \centering \includegraphics[width=0.4\textwidth]{sodcompare345density} \includegraphics[width=0.4\textwidth]{sodcompare345u} \caption{\label{sod} The density and velocity distributions for the 1-D Sod problem at $t=0.2$ with $100$ cells. } \end{figure} \begin{figure}[!h] \centering \includegraphics[width=0.4\textwidth]{blastwavecomparewithexactdensity} \includegraphics[width=0.4\textwidth]{blastwavecomparewithexactu} \caption{\label{blastwave} The density and velocity distributions for the 1-D blast wave problem at $t=3.8$ with $400$ cells. } \end{figure} \begin{figure}[!h] \centering \includegraphics[width=0.44\textwidth]{ttoro-compactwithrms} \includegraphics[width=0.44\textwidth]{ttoro-compactwithrms-local1} \caption{\label{ttoro} The density distributions for the 1-D Titarev-Toro problem at $t=5.0$ with $1000$ cells. Left figure: solution in the whole domain.
Right figure: local enlargement.} \end{figure} \subsection{Two dimensional inviscid test cases} \bigskip \noindent{\sl{(a) Isentropic vortex propagation with $100$ periods}} The isentropic vortex propagation is tested in a 2-D domain for smooth inviscid flow. The initial condition is given by \begin{equation} \begin{split} (U,V)=(1,1)+\frac{\kappa}{2\pi}e^{0.5(1-r^2)}(-\overline{y},\overline{x}), \\ T=1-\frac{(\gamma-1){\kappa}^2}{8\gamma{\pi}^2}e^{1-r^2},\\ S=1, \end{split} \label{eqn_isen1} \end{equation} where the density $\rho$ and pressure $p$ are calculated from the temperature $T$ and the entropy $S$ by \begin{equation} T=\frac{p}{\rho},S=\frac{p}{\rho^{\gamma}}, \label{eqn_isen2} \end{equation} where $(\overline{x},\overline{y})=(x-5,y-5),~r^2=\overline{x}^2+\overline{y}^2$, and the vortex strength is $\kappa=5$. The computational domain is $[0,10]\times[0,10]$. Periodic boundary conditions are applied on all boundaries. To show the performance of the different time marching schemes, this case is run for $1$ period, $10$ periods, and $100$ periods, i.e., up to $t=10$, $100$, and $1000$, respectively. Again, the same spatial reconstruction is used for all schemes. For $t=10$ and $t=100$, the error in density is less than $10^{-4}$ for all schemes, which can hardly be used to distinguish the performance of different high-order schemes. However, at the output time $t=1000$ after $100$ periods of vortex propagation, the single-stage 3rd-order S1O3c scheme shows clear dispersion and dissipation errors, see Fig.\ref{isen_long1}, while the other higher-order schemes still keep the vortex center at the correct location in the computational domain. This shows that a higher-order time-accurate scheme is important for capturing long-time wave propagation. Another observation is the anti-diffusive effect in the S1O3 and the RK5 Godunov method with the exact Riemann solver. A more quantitative comparison of the density distributions along $y=0.0$ is shown in Fig.\ref{isen_long2}. It demonstrates that the close coupling of space and time evolution is important for capturing long-time wave propagation. \begin{figure}[!h] \centering \includegraphics[width=0.7\textwidth]{isendensityt1000compare-exact} \caption{\label{isen_long1} The density contours of isentropic vortex propagation after $100$ periods. CFL=0.4, mesh $80\times 80$. Top row, from left to right: exact solution, S1O3c, and S2O4. Bottom row, from left to right: S2O5s+, S3O5+, RK5-ExactRS. } \end{figure} \begin{figure}[!h] \centering \includegraphics[width=0.44\textwidth]{isencomparet100line-withexact} \includegraphics[width=0.44\textwidth]{isencomparet1000line-withexact} \caption{\label{isen_long2} Density distributions for isentropic vortex propagation along $y=0$. Left: $10$ periods. Right: $100$ periods.} \end{figure} \bigskip \noindent{\sl{(b) Two dimensional Riemann problems}} For high speed compressible flow, two distinguishable flow patterns are the shock-vortex interaction and the free shear layer \cite{shear-book}. Here, two-dimensional Riemann problems are used to study the complicated wave structures \cite{2dRM}. Although both test cases are for inviscid flow, the inherent numerical viscosity in the schemes can trigger shear instability. Another advantage of using 2-D Riemann problems is the rectangular domain and the simple boundary conditions. \noindent{\sl{(b1) Interaction of planar shocks}} Configuration 3 in \cite{2dRM} involves shock-shock interaction and shock-vortex interaction.
The initial condition in a square domain $[0,1]\times[0,1]$ is given by \begin{equation*} (\rho,u,v,p)=\left\{\begin{aligned} &(0.138, 1.206,1.206, 0.029),& x<0.7,y<0.7,\\ &(0.5323, 0,1.206, 0.3),& x\geq 0.7,y<0.7,\\ &(1.5, 0,0, 1.5),& x\geq 0.7,y\geq 0.7,\\ &(0.5323,1.206,0, 0.3),& x<0.7, y\geq 0.7. \end{aligned} \right. \end{equation*} At the output time $t=0.6$, the same $23$ density contours from the different schemes are plotted in Fig. \ref{rm2d-shock-500}. All schemes capture the shocks sharply. The main differences among the schemes lie in the strength of the shear layers, such as in the V1 and V2 regions of Fig. \ref{shock-s3o5}. Both the S3O5+ and S2O5s+ schemes resolve the vortex pairs in the V1 region better than the RK4 Riemann-solver-based schemes. At the same time, the gas kinetic schemes show stronger instabilities of the vortex sheets in the V2 region. \begin{figure}[!h] \centering \subfigure[S3O5+]{ \label{shock-s3o5} \includegraphics[width=0.48\textwidth]{shock-s3o5-mesh500-t06new}} \subfigure[S2O5s+]{ \label{shock-s2o5} \includegraphics[width=0.48\textwidth]{shock-s2o5-mesh500-t06new}} \\ \subfigure[RK4-HLLC]{ \label{shock-hllc} \includegraphics[width=0.48\textwidth]{shock-hllcRK4-mesh500-t06new}} \subfigure[RK4-ExactRS]{ \label{shock-exact} \includegraphics[width=0.48\textwidth]{shock-exactRK4-mesh500-t06new}} \\ \caption{\label{rm2d-shock-500} The density contours of the Configuration 3 case in \cite{2dRM}. $500\times 500$ mesh points are used in all calculations.} \end{figure} \noindent{\sl{(b2) Interaction of planar contact discontinuities}} Configuration 6 in \cite{2dRM} is tested. The initial condition in a square domain $[0,2]\times [0,2]$ is given by \begin{equation*} (\rho,u,v,p)=\left\{\begin{aligned} &(1, -0.75,0.5, 1),& x<1,y<1,\\ &(3, -0.75,-0.5, 1),& x\geq 1,y<1,\\ &(1, 0.75,-0.5, 1),& x\geq 1,y\geq 1,\\ &(2,0.75,0.5, 1),& x< 1,y\geq 1, \end{aligned} \right. \end{equation*} where the four zones have different densities and velocities but the same pressure. Four shear layers are formed by the interaction of these planar contact discontinuities. Similar to the isentropic vortex case, in order to reveal the influence of the different time marching schemes, the domain $[0,2]\times [0,2]$ covered by $800 \times 800$ mesh points is used with the output times $t=0.4$ and $t=1.6$. The CFL number is $0.5$ in all calculations. As shown in Fig. \ref{2drm-longshear-1}, at $t=0.4$ the results from all schemes seem identical. However, at time $t=1.6$, the flow structures become much more complicated. From the local enlargement of the central shear layer, the S2O4 and S3O5 schemes present more small vortices, while the S2O5 scheme shows slightly fewer but somewhat larger ones. In comparison with the Riemann-flux-based RK4 Godunov methods, as shown in Fig. \ref{2drm-longshear-2}, the traditional RK methods give more dissipative results than the MSMD GKS. Different from the GKS solutions, almost no instabilities of the Sh1 wave are triggered by the traditional RK4 methods. For the Sh2 wave, the RK4 Godunov methods produce fewer and smaller vortices than the MSMD GKS.
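Both configurations share the same four-quadrant set-up, which is straightforward to initialize on a uniform mesh. A minimal Python sketch (illustrative only; the function name and quadrant keys are ours), using the Configuration 3 states listed above:
\begin{verbatim}
import numpy as np

def quadrant_init(n, xc, yc, states, extent=1.0):
    # Four-quadrant 2-D Riemann initial data on an n x n uniform mesh.
    # `states` holds one (rho, u, v, p) tuple per quadrant, keyed by
    # lower-left, lower-right, upper-right, upper-left.
    s = (np.arange(n) + 0.5) * extent / n
    X, Y = np.meshgrid(s, s, indexing="ij")
    W = np.empty((4, n, n))
    masks = {"ll": (X < xc) & (Y < yc), "lr": (X >= xc) & (Y < yc),
             "ur": (X >= xc) & (Y >= yc), "ul": (X < xc) & (Y >= yc)}
    for key, m in masks.items():
        for c in range(4):
            W[c][m] = states[key][c]
    return W  # W[0..3] = rho, u, v, p

# Configuration 3, split at (0.7, 0.7), on a 500 x 500 mesh:
config3 = {"ll": (0.138, 1.206, 1.206, 0.029),
           "lr": (0.5323, 0.0, 1.206, 0.3),
           "ur": (1.5, 0.0, 0.0, 1.5),
           "ul": (0.5323, 1.206, 0.0, 0.3)}
W0 = quadrant_init(500, 0.7, 0.7, config3)
\end{verbatim}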
\begin{figure} \centering \subfigure[S2O4]{ \label{2drm-s2o4} \includegraphics[width=0.25\textwidth]{4th_t04new} \includegraphics[width=0.25\textwidth]{4th_t16new} \includegraphics[width=0.25\textwidth]{4th_t16localnew}} \\ \subfigure[S3O5+]{ \label{2drm-s3o5} \includegraphics[width=0.25\textwidth]{5th3stagessp_t04} \includegraphics[width=0.25\textwidth]{5th3stagessp_t16} \includegraphics[width=0.25\textwidth]{5th3stagessp_t16local}} \\ \subfigure[S2O5s+]{ \label{2drm-s2o5} \includegraphics[width=0.25\textwidth]{5th2stage_t04} \includegraphics[width=0.25\textwidth]{5th2stage_t16} \includegraphics[width=0.25\textwidth]{5th2stage_t16local}} \\ \caption{The density contours of the Configuration 6 case in \cite{2dRM} by using multi-stage GKS. Left: $t=0.4$. Middle: $t=1.6$. Right: local enlargement of the density contours of the middle figure.} \label{2drm-longshear-1} \end{figure} \begin{figure} \centering \subfigure[RK4 HLLC]{ \label{2drm-hllc} \includegraphics[width=0.25\textwidth]{hllcRK4t04} \includegraphics[width=0.25\textwidth]{shear-hllcRK4-t16.png} \includegraphics[width=0.25\textwidth]{shear-hllcRK4-t16local.png}} \\ \subfigure[RK4 ExactRS]{ \label{2drm-exact} \includegraphics[width=0.25\textwidth]{hllcRK4t04} \includegraphics[width=0.25\textwidth]{shear-exactRK4-t16.png} \includegraphics[width=0.25\textwidth]{shear-exactRK4-t16local.png}} \\ \caption{The density contours of the Configuration 6 case in \cite{2dRM} by using RK4 Godunov methods. Left: $t=0.4$. Middle: $t=1.6$. Right: local enlargement of the middle figure.} \label{2drm-longshear-2} \end{figure} \subsection{Computational Efficiency} Based on the above 2-D Riemann problem of interacting planar contact discontinuities, the computational efficiency of the different schemes is evaluated. For the S2O4, S3O5, and S2O5 methods, the fluxes at three Gaussian points along a cell interface are needed, the same as for the RK5 Godunov scheme with a Riemann flux. The 2-D Riemann problem (Configuration 6 in \cite{2dRM}) with different mesh sizes is tested for the comparison of computational cost, as shown in Table \ref{compT-1}. The CPU times are recorded after running $10$ time steps for each scheme on a single processor of an Intel Xeon E5 2670 $@$2.60GHz. Based on the table, the computational time of the S3O5 GKS is about $1.5$ times that of the S2O4 GKS due to the difference in the number of stages. Another observation is that the computational speed of S2O5 and S3O5 is almost the same. However, the S2O5 scheme stores only two stages of flow variables, which uses less memory than the S3O5 scheme, at the cost of more computational time spent on the 3rd-order flux function. The computational times for the traditional RK methods based on Riemann solvers are also given in Table \ref{compT-1}. The RK5 method with the HLLC flux is faster than the S2O4 scheme, due to the simple HLLC flux function. The RK5 method with the exact Riemann solver is almost twice as expensive as the 5th-order GKS. But, even for the current 2-D inviscid flow test case, the GKS always solves the Navier-Stokes (NS) equations. If the RK5 Godunov-type scheme were extended to solve the NS equations, the computational cost would at least double, which would be inefficient in comparison with MSMD GKS. So, for high-order schemes and for viscous flow computations, the design of a high-order time-accurate flux solver is worthwhile.
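The timing protocol is simple enough to state precisely. A minimal sketch of the measurement loop (illustrative only; \texttt{step} stands for a hypothetical routine performing one full multi-stage update of the flow field):
\begin{verbatim}
import time

def time_n_steps(step, W, n=10):
    # Wall-clock time of n full time steps, mirroring how the CPU times
    # in the table were recorded (fixed step count, single process).
    t0 = time.perf_counter()
    for _ in range(n):
        W = step(W)   # one complete multi-stage update (user-supplied)
    return time.perf_counter() - t0, W
\end{verbatim}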
\begin{table}[!h] \small \begin{center} \def\tablewidth{1\textwidth} {\rule{\tablewidth}{1pt}} \begin{tabular*}{\tablewidth}{@{\extracolsep{\fill}}c|c|c|c|c|c} \diagbox{Mesh size}{CPU time}{Schemes} & S2O4& S2O5s+ &S3O5+&RK5 HLLC&RK5 Exact-RS \\ \hline $100\times100$&7.640 & 8.057 &10.959&6.028&20.223\\ $200\times200$&26.309 & 45.551&46.801&24.022&80.880\\ $300\times300$&58.524 &94.102 &94.523&53.833&171.647\\ $400\times400$&102.985& 138.153&158.364&94.293&292.566\\ \end{tabular*} {\rule{\tablewidth}{0.1pt}} \end{center} \vspace{-4mm} \caption{\label{compT-1} Computational time (in seconds) of different schemes for the 2-D Riemann problem.} \end{table} \subsection{Two dimensional viscous test cases} \bigskip \noindent{\sl{(a) Viscous shock tube problem under low Reynolds numbers}} For N-S solvers, the viscous shock tube problem is a good test case to validate a scheme due to the complicated flow structure and the shock-boundary layer interaction \cite{daru}. The geometry is a two-dimensional unit box $[0,1]\times [0,1]$ with no-slip adiabatic walls. Two constant states are given on the two sides of $x=0.5$, \begin{equation*} (\rho,u,v,p)=\left\{\begin{aligned} &(120,0,0, 120/\gamma), 0<x<0.5,\\ &(1.2,0,0,1.2/\gamma), 0.5\leq x<1, \end{aligned} \right. \end{equation*} where $\gamma=1.4$ and the Prandtl number is $Pr=0.73$. The computational domain is chosen as $[0,1]\times [0,0.5]$ due to the symmetry of the problem. The upper boundary is set as a symmetry condition and the other boundaries are no-slip adiabatic walls. Two Reynolds numbers, $Re=200$ and $1000$, are selected for the simulations. The results at $t=1$ are presented. For such low Reynolds number flows, the time step is determined by $$ \Delta t = C_{CFL}\, \mbox{Min}\left( \frac{\Delta x}{\sqrt{U^2+V^2}+a}, \frac{(\Delta x)^2}{4\nu}\right),$$ where $C_{CFL}$ is the CFL number, $a$ is the sound speed, and $\nu = \mu /\rho$ is the kinematic viscosity coefficient. The results from the different GKS schemes are shown in Fig. \ref{vistube}. For both $Re=200$ and $1000$, almost identical results are obtained from the different schemes. The height of the primary vortex for $Re =200$ is compared in Table \ref{vistube_table} with reference results \cite{vistube}. \begin{table}[!h] \small \begin{center} \begin{tabular}{c|c|c|c} \Xhline{1.2pt} Schemes&AUSMPW+&M-AUSMPW+&S2O4\\ \hline Height&0.163&0.168&0.173\\ \Xhline{1.2pt} Schemes&S2O5s+&S3O5+&~\\ \hline Height&0.174&0.173&~\\ \Xhline{1.2pt} \end{tabular} \vspace{-1mm} \caption{\label{vistube_table} Heights of the primary vortex from different schemes for $Re =200$ and $\Delta x= \Delta y=1/500$.} \end{center} \end{table} \begin{figure}[!h] \centering \subfigure[S2O4]{ \label{vishocktube-s2o4} \includegraphics[width=0.44\textwidth]{vishocktube-s2o4-re200-mesh500.png} \includegraphics[width=0.44\textwidth]{vishocktube-s2o4-re1000-mesh1000.png}} \subfigure[S3O5+]{ \label{vishocktube-s3o5ssp} \includegraphics[width=0.44\textwidth]{vishocktube-s3o5ssp-re200-mesh500.png} \includegraphics[width=0.44\textwidth]{vishocktube-s3o5ssp-re1000-mesh1000.png}} \subfigure[S2O5s+]{ \label{vishocktube-s2o5} \includegraphics[width=0.44\textwidth]{vishocktube-s2o5-re200-mesh500.png} \includegraphics[width=0.44\textwidth]{vishocktube-s2o5-re1000-mesh1000.png}} \caption{\label{vistube} The density contours of the viscous shock tube problem at $t=1$ from different MSMD GKS. Left: $Re=200$, $500\times250$ mesh points. Right: $Re=1000$, $1000\times500$ mesh points.
} \end{figure} \bigskip \noindent{\sl{(b) A planar jet under high Reynolds number}} Free supersonic jet flow is widely studied. A simplified 2-D planar jet, which was proposed by Zhang et al. \cite{planar-jet}, is used here to test the high-order GKS. A Mach $1.4$ jet is injected through an entrance of width $L=0.01m$ into a square computational domain of size $10L \times 10L$. The sketch of the geometry is plotted in Fig. \ref{planarjet-initial}. The Reynolds number is set as $Re_{\infty}={U_{jet}L}/{\nu}=2.8 \times 10^5$, with the kinematic viscosity coefficient $\nu=1.73 \times 10^{-5}m^2 /s$. The same mesh size of $1200 \times 1200$ as in \cite{planar-jet} is used here. Fig. \ref{planer_jet2} shows the evolution of the jet at three output times $t=0.08ms$, $0.15ms$, and $0.32ms$. At $t=0.08ms$, the three schemes S2O4, S3O5+, and S2O5s+ give almost identical results. At $t=0.15ms$, the jet structures from the different schemes are similar, with small variations, such as in the vortex sheets along the main shear layers due to the K-H instability. The current results can be compared with the one in Fig. \ref{planarjet-2d-1}; the high-order MSMD GKS seems to present clearer flow structures, especially in the capturing of small vortices. As the jet develops further, up to $t=0.32ms$, significant differences in the shear layer and the main vortex pairs can be observed among the schemes. The S2O5s+ scheme shows a more distinct flow pattern in the center of the vortex pairs. Fig. \ref{planer_jet2} provides abundant flow structures, such as shear-layer instability, shock-vortex interaction, and a wide range of vortex strengths and sizes, which clearly demonstrates the power of high-order schemes. \begin{figure} \centering \subfigure[Schematic of the computational domain]{ \label{planarjet-initial} \includegraphics[width=0.44\textwidth]{planarjet-initial.jpg}} \subfigure[Case 3 result at t=0.15ms]{ \label{planarjet-2d-1} \includegraphics[width=0.44\textwidth]{planarjet-2d-1.jpg}} \caption{(a) The sketch of the planar jet case proposed in \cite{planar-jet}. (b) The 2-D result without entrance perturbation in \cite{planar-jet}, at t=0.15ms. Numerical schlieren-type images are plotted for visualization. } \label{planer_jet1} \end{figure} \begin{figure} \centering \subfigure[S2O4] { \label{planarjet-s2o4} \includegraphics[height=0.35\textwidth]{planarjet-4th-t08.png} \includegraphics[height=0.35\textwidth]{planarjet-4th-t15.png} \includegraphics[height=0.35\textwidth]{planarjet-4th-t32.png} } \subfigure[S3O5+] { \label{planarjet-s3o5} \includegraphics[height=0.35\textwidth]{planarjet-s3o5ssp-t08.png} \includegraphics[height=0.35\textwidth]{planarjet-s3o5ssp-t15.png} \includegraphics[height=0.35\textwidth]{planarjet-s3o5ssp-t32.png} } \subfigure[S2O5s+] { \label{planarjet-s2o5} \includegraphics[height=0.35\textwidth]{planarjet-s2o5-t08.png} \includegraphics[height=0.35\textwidth]{planarjet-s2o5-t15.png} \includegraphics[height=0.35\textwidth]{planarjet-s2o5-t32.png} } \caption{The 2-D simulation of the starting structures of a Mach $1.4$ planar jet with different MSMD GKS. Mesh size $1200\times1200$. The schlieren images at $t=0.08ms, 0.15ms, 0.32ms$ are given from left to right to present the jet evolution. } \label{planer_jet2} \end{figure} \section{Conclusion} In this paper, a family of high-order gas kinetic schemes with the multi-stage multi-derivative technique has been proposed, in particular the two-stage and three-stage fifth-order schemes S2O5 and S3O5.
Due to the use of time derivatives of the flux function, i.e., the 2nd-order and 3rd-order time-accurate GKS flux functions, the current schemes reduce the number of intermediate stages for the same temporal accuracy in comparison with the traditional Runge-Kutta methods, such as the RK5 method with a time-independent Riemann flux. Therefore, the current MSMD GKS becomes more efficient than the RK methods, especially for Navier-Stokes solutions. The high-order MSMD GKS provides accurate numerical solutions for compressible flows with the same robustness as second-order methods in flow simulations with strong shock interactions. The jet simulation presents state-of-the-art numerical results from high-order schemes. The current MSMD gas kinetic schemes use the WENO-type reconstruction for the initial condition and spatial accuracy, which has a large stencil. Even though the MSMD GKS can be easily extended to unstructured meshes for the NS solutions, it is still preferable to develop high-order compact gas kinetic schemes for engineering applications with complicated geometries. As a continuation of the compact third-order GKS \cite{unstructured-compact-gks}, the development of fourth-order and fifth-order compact schemes with the MSMD technique is under way. \section*{Acknowledgement} The current work is supported by Hong Kong Research Grant Council (16211014, 16207715), HKUST research fund (PROVOST13SC01, IRS16SC42, SBI14SC11), and National Science Foundation of China (91330203,91530319). \bibliographystyle{plain}%